Robot Vision Project: Visual Detection and Recognition + Robot Following (20)
1. Attempt to connect a Kinect v1 depth camera to the TX2 to capture RGB-D images, feed them into our pedestrian-detection code framework, and debug the depth-camera-based pedestrian detection and tracking algorithm.

First, install the Kinect driver stack for Ubuntu on the ARM processor:

libfreenect v2.0
OpenNI v2.2.0.33
NiTE v2.0.0

Install the build dependencies for libfreenect:

sudo apt-get install git g++ cmake libxi-dev libxmu-dev libusb-1.0-0-dev pkg-config freeglut3-dev build-essential

Build libfreenect with the OpenNI2 driver enabled:

git clone https://github.com/OpenKinect/libfreenect.git
cd libfreenect
mkdir build; cd build
cmake .. -DBUILD_OPENNI2_DRIVER=ON
make

Install OpenNI2, set up the udev rule, and copy the FreenectDriver into the OpenNI2 driver directories:

cd OpenNI-Linux-x86-2.2/
sudo ./install.sh
source OpenNIDevEnvironment
cp ~/libfreenect/platform/linux/udev/51-kinect.rules /etc/udev/rules.d/
cp ~/libfreenect/build/lib/OpenNI2-FreenectDriver/libFreenectDriver.so OpenNI-Linux-x86-2.2/Redist/OpenNI2/Drivers/
cp ~/libfreenect/build/lib/OpenNI2-FreenectDriver/libFreenectDriver.so OpenNI-Linux-x86-2.2/Tools/OpenNI2/Drivers/
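With the FreenectDriver in place, the driver can also be exercised from Python rather than only through NiViewer. The sketch below is an assumption on my part, not part of the original setup: it uses the `primesense` OpenNI2 Python bindings (`pip install primesense`) and needs a connected Kinect. The buffer-to-array helper is split out so it can be checked without hardware.

```python
import numpy as np

def depth_buffer_to_array(buf, width, height):
    """Convert a raw 16-bit OpenNI depth buffer into an (height, width) numpy array of millimetres."""
    return np.frombuffer(buf, dtype=np.uint16).reshape((height, width))

def read_one_depth_frame(width=640, height=480):
    """Grab a single depth frame from the first OpenNI2 device (requires a working Kinect + driver)."""
    from primesense import openni2  # imported here so the helper above stays hardware-free
    openni2.initialize()            # loads libOpenNI2.so; OPENNI2_REDIST may need to point at the Redist dir
    dev = openni2.Device.open_any()
    stream = dev.create_depth_stream()
    stream.start()
    frame = stream.read_frame()
    depth = depth_buffer_to_array(frame.get_buffer_as_uint16(), width, height)
    stream.stop()
    openni2.unload()
    return depth
```

Since NiViewer itself showed no images here, `read_one_depth_frame()` would most likely fail in the same way, but it narrows the problem down to the driver rather than the viewer.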
Check that the Kinect is detected over USB:

lsusb
Bus 002 Device 006: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
Bus 002 Device 004: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 002 Device 005: ID 045e:02ad Microsoft Corp. Xbox NUI Audio

All of the steps above ran normally on the TX2. The next step is the viewer test:

cd OpenNI-Linux-x86-2.2/Tools/
./NiViewer

NiViewer did not show the expected RGB and depth images. NiTE2 was supposed to be installed next; because of this problem I did not continue, but I am recording the rest of the tutorial here as well:

cd NiTE-2.0.0/
sudo ./install.sh
source NiTEDevEnvironment
cp ~/libfreenect/build/lib/OpenNI2-FreenectDriver/libFreenectDriver.so NiTE-2.0.0/Samples/Bin/OpenNI2/Drivers/

Copy the OpenNI library into the sample directory, since NiTE depends on OpenNI:

cp OpenNI-Linux-x86-2.2/Redist/libOpenNI2.so NiTE-2.0.0/Samples/Bin
cd NiTE-2.0.0/Samples/Bin/

Verify the tracking function:

./UserViewer

2. Since the Kinect driver could not be installed successfully on the TX2, I tried detection with the Xiaoqiang robot's built-in camera instead. An ordinary camera can be verified directly with an OpenCV camera test program; the code is included here:
# An earlier version that also recorded the stream to a file (kept for reference):
#
# import cv2
#
# cap = cv2.VideoCapture(0)
# fourcc = cv2.cv.CV_FOURCC(*'XVID')  # OpenCV 3: fourcc = cv2.VideoWriter_fourcc(*'XVID')
# out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))  # save the video
# while True:
#     ret, frame = cap.read()
#     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#     out.write(frame)                # write the frame to the video file
#     cv2.imshow('frame', frame)      # one window shows the original video
#     cv2.imshow('gray', gray)        # another window shows the processed video
#     if cv2.waitKey(1) & 0xFF == ord('q'):
#         break
# cap.release()
# out.release()
# cv2.destroyAllWindows()

import cv2

cap = cv2.VideoCapture(1)
while True:
    # get a frame
    ret, frame = cap.read()
    if not ret:
        break
    # show the frame
    cv2.imshow("capture", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
The code works, but calling the Xiaoqiang robot's camera failed and could not be forced to open, so an ordinary external camera was used for the test instead.
With the external camera, pedestrian recognition can be performed; the results look fairly good, although in a few cases the target is lost.