
Robot Vision Project: Visual Detection and Recognition + Robot Following (1)

An update on the robot visual detection and following project I worked on over the summer. Some of my notes are collected in this blog; feel free to reach out if you'd like to discuss~

The goal of the project is to build a visual detection system on a robot: Kinect + ROS + deep-learning object detection + pedestrian recognition (OpenCV's HOG+SVM pedestrian detector and Darknet's YOLOv3-tiny network) + target tracking (mainly filtering algorithms).

Updates start below...

Goal: run human detection and tracking algorithms on the Jetson TX2.

1. Flash the board (install JetPack)

2. Test

Test CUDA by running one of the prebuilt samples, e.g.:

/home/ubuntu/NVIDIA_CUDA-<version>_Samples/bin/armv7l/linux/release/oceanFFT

(Note: the TX2 is 64-bit, so the samples may land under bin/aarch64/linux/release rather than bin/armv7l.)

Test the Multimedia API

From the tegra_multimedia_api/samples/backend directory:

./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel --trt-forcefp32 0 --trt-proc-interval 1 -fps 10

It keeps complaining that the input arguments are wrong. BUG!!!

I'll come back to this test later if needed.

3. Build and install OpenCV 3.1

This takes quite a while to build; a quicker alternative is: sudo apt-get install libopencv-dev

4. Test
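To verify the build, and as a first pass at the pedestrian detection this project needs, here is a minimal sketch using OpenCV's built-in HOG+SVM people detector (the image path people.jpg is just a placeholder):

```python
# Minimal OpenCV sanity check: run the built-in HOG+SVM pedestrian
# detector on a test image. "people.jpg" is a placeholder path.
import cv2

print(cv2.__version__)  # should report the version just installed

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("people.jpg")
rects, weights = hog.detectMultiScale(img, winStride=(8, 8))
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("people_detected.jpg", img)
print("detected %d pedestrians" % len(rects))
```

If this runs and draws boxes, both the install and the baseline detector from the project description are working.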

5. Install MXNet

pip install mxnet-jetson-tx2 ???

6. Test

TODO
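Once it installs, a smoke test along these lines should confirm the GPU context works (a minimal sketch, assuming an MXNet build with CUDA support):

```python
# Minimal MXNet smoke test: allocate an array on the GPU and do one op.
import mxnet as mx

a = mx.nd.ones((2, 3), ctx=mx.gpu(0))
b = (a * 2) + 1
print(b.asnumpy())  # forces the computation; fails if CUDA is unusable
```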

7. Install TensorFlow

8. Test

TODO
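A minimal smoke test for a TF1-era build (this project predates TensorFlow 2.x, so the graph/session API applies):

```python
# Minimal TensorFlow 1.x smoke test: run one op and check GPU visibility.
import tensorflow as tf

print(tf.__version__)
print("GPU available:", tf.test.is_gpu_available())

with tf.Session() as sess:
    c = tf.constant(2) + tf.constant(3)
    print(sess.run(c))  # expect 5
```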

9. Install TensorRT

  • Download the *.tar.gz package from the NVIDIA website

  • tar -zxvf *.tar.gz

  • set PATH, LD_LIBRARY_PATH, TENSORRT_INC_DIR, TENSORRT_LIB_DIR

  • pip install tensorrt-**.whl in python directory

  • pip install uff-*.whl in uff directory

  • pip install numpy -U

Watch out for the following pitfalls (from the TensorRT release notes):

  • If you are installing TensorRT from a tar package (instead of using the .deb packages and apt-get), you will need to update the custom_plugins example to point to the location that the tar package was installed into. For example, in the <PYTHON_INSTALL_PATH>/tensorrt/examples/custom_layers/tensorrtplugins/setup.py file, change TENSORRT_INC_DIR to point to the <TAR_INSTALL_ROOT>/include directory and TENSORRT_LIB_DIR to point to the <TAR_INSTALL_ROOT>/lib64 directory.

  • The PyTorch based sample will not work with the CUDA 9 Toolkit. It will only work with the CUDA 8 Toolkit.

  • When using the TensorRT APIs from Python, import the tensorflow and uff modules before importing the tensorrt module. This is required to avoid a potential namespace conflict with the protobuf library as well as the cuDNN version. In a future update, the modules will be fixed to allow the loading of these Python modules to be in an arbitrary order. (See the sketch after this list.)

  • The TensorRT Python APIs are only supported on x86 based systems. Some installation packages for ARM based systems may contain Python .whl files. Do not install these on the ARM systems, as they will not function.

  • The TensorRT product version is incremented from 2.1 to 3.0.1 because we added major new functionality to the product. The libnvinfer package version number was incremented from 3.0.2 to 4.0 because we made non-backward compatible changes to the application programming interface.

  • The TensorRT debian package name was simplified in this release to tensorrt. In previous releases, the product version was used as a suffix, for example tensorrt-2.1.2.

  • If you have trouble installing the TensorRT Python modules on Ubuntu 14.04, refer to the steps on installing swig to resolve the issue. For installation instructions, see Unix Installation.

  • The Flatten layer can only be placed in front of the Fully Connected layer. This means that the Flatten layer can only be used if its output is directly fed to a Fully Connected layer.

  • The Squeeze layer only implements the binary squeeze (removing specific size 1 dimensions). The batch dimension cannot be removed.

  • If you see the Numpy.core.multiarray failed to import error message, upgrade your NumPy to version 1.13.0 or greater.

  • For Ubuntu 14.04, use pip version >= 9.0.1 to get all the dependencies installed.
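As a concrete illustration of the import-order pitfall above, a minimal sketch of how a Python session should begin on an x86 host (per the notes, the Python API is not supported on the ARM-based TX2 itself):

```python
# Import order matters with the TensorRT 3.x Python bindings: bring in
# tensorflow and uff before tensorrt to dodge the protobuf/cuDNN
# namespace conflict described in the release notes above.
import tensorflow as tf  # 1) tensorflow first
import uff               # 2) then the uff converter
import tensorrt as trt   # 3) tensorrt last

print("tensorflow", tf.__version__)
print("tensorrt imported OK:", trt.__name__)
```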

10. Test

Bundled test samples

I installed TensorRT on a GTX 1080 machine; testing sampleFasterRCNN requires downloading the model file, which needs a VPN to reach !-_-!

onnx test

TODO

nvcaffe test

TODO

The crux of the problem is the trade-off between the compute speed of an embedded device and the accuracy of the algorithm. The first step is to get a qualitative sense of how large a network the TX2 can actually run.
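One way to get that sense is to time the detector's forward pass directly on the device. A sketch using OpenCV's DNN module to benchmark YOLOv3-tiny (note: this needs OpenCV >= 3.4.2, newer than the 3.1 build above, and the .cfg/.weights paths are placeholders for the files from the Darknet project):

```python
# Benchmark sketch: average forward-pass time of YOLOv3-tiny via OpenCV DNN.
# Requires OpenCV >= 3.4.2; yolov3-tiny.cfg/.weights are placeholder paths.
import time
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
out_names = net.getUnconnectedOutLayersNames()

# A dummy 416x416 frame stands in for the Kinect image stream.
frame = np.zeros((416, 416, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

times = []
for _ in range(50):
    net.setInput(blob)
    t0 = time.perf_counter()
    net.forward(out_names)
    times.append(time.perf_counter() - t0)

print("mean forward pass: %.1f ms (%.1f FPS)"
      % (1000 * np.mean(times), 1.0 / np.mean(times)))
```

Comparing this number against the camera frame rate (e.g. 10-30 FPS from the Kinect) gives a quick qualitative answer to whether the network fits the device.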