
Deep Learning-Based Video Detection (3): Object Tracking

Setting up the environment

Object detection and tracking built on darkflow (YOLOv2) and sort/deep_sort.

Install dependencies

#sort
$sudo pip install numba
$sudo pip install matplotlib
$sudo apt-get install python-tk
$sudo pip install scikit-image
$sudo pip install filterpy
$sudo pip install scikit-learn

#deep_sort
$sudo pip install Cython
$sudo pip install scipy
$sudo pip install scikit-learn
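As a quick check that everything installed correctly, the short import sketch below (added for convenience, not part of the original setup) should run without errors:

# Sanity check: verify that the tracking dependencies import correctly.
# Verification sketch only; not required by the tutorial itself.
import numba
import matplotlib
import skimage
import filterpy
import sklearn
import scipy

for mod in (numba, matplotlib, skimage, filterpy, sklearn, scipy):
    print("%s %s" % (mod.__name__, getattr(mod, "__version__", "unknown")))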

Configure the environment

$git clone https://github.com/bendidi/Tracking-with-darkflow.git
$cd Tracking-with-darkflow/
$git submodule update --init --recursive
$cd darkflow/
$python setup.py build_ext --inplace

Create a bin directory under darkflow:

$mkdir bin
$cd bin
$wget https://pjreddie.com/media/files/yolo.weights # download the pretrained YOLOv2 weights used for detection

You can download the configuration files and pretrained weights from the YOLO official website.

$cd ../..
$cd darksort/

Download the resource archive and extract it into darksort/.

Configure parameters

from darkflow.darkflow.defaults import argHandler  # import the default arguments
import os
from darkflow.darkflow.net.build import TFNet


FLAGS = argHandler()
FLAGS.setDefaults()

FLAGS.demo = "camera"  # the video file to run detection on; defaults to your webcam ("camera")
FLAGS.model = "darkflow/cfg/yolo.cfg"  # tensorflow model
FLAGS.load = "darkflow/bin/yolo.weights"  # tensorflow weights
# FLAGS.pbLoad = "tiny-yolo-voc-traffic.pb"  # tensorflow model
# FLAGS.metaLoad = "tiny-yolo-voc-traffic.meta"  # tensorflow weights
FLAGS.threshold = 0.7  # detection confidence threshold (detect if confidence > threshold)
FLAGS.gpu = 0.8  # how much of the GPU to use (between 0 and 1); 0 means use the CPU
FLAGS.track = True  # True enables object tracking, False runs detection only
# FLAGS.trackObj = ['Bicyclist','Pedestrian','Skateboarder','Cart','Car','Bus']  # the objects to be tracked
FLAGS.trackObj = ["person"]
FLAGS.saveVideo = True  # whether to save the processed video to the current directory
FLAGS.BK_MOG = True  # activate background subtraction using cv2 MOG subtraction, to help in the worst case
                     # when YOLO cannot predict (it detects movement; not ideal, but better than nothing).
                     # Helps only when the number of detections < 3, as it is still better than no detection.
FLAGS.tracker = "deep_sort"  # which tracking algorithm to use, deep_sort/sort (NOTE: deep_sort is only trained for people detection)
FLAGS.skip = 0  # how many frames to skip between detections to speed up the network
FLAGS.csv = False  # whether to write a csv file (only when tracking is set to True)
FLAGS.display = True  # whether to display the tracking

tfnet = TFNet(FLAGS)

tfnet.camera()
exit('Demo stopped, exit.')
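FLAGS.tracker selects between sort and deep_sort. Both associate the current frame's YOLO detections with existing tracks before updating a Kalman filter, and deep_sort additionally uses an appearance embedding for people. As a rough illustration of that association idea only (a minimal sketch, not the code this repo actually runs), the following matches detections to tracks by IoU with the Hungarian algorithm:

# Minimal sketch of the frame-to-frame association step behind SORT-style trackers:
# match current detections to existing tracks by maximizing IoU overlap with the
# Hungarian algorithm. Illustration only; sort/deep_sort also run Kalman prediction
# (and deep_sort an appearance-distance metric) around this step.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Return (track_idx, det_idx) pairs whose IoU exceeds the threshold."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm on the cost matrix
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]

tracks = [[10, 10, 50, 80]]                           # boxes predicted from the previous frame
detections = [[12, 11, 52, 83], [200, 40, 240, 120]]  # current YOLO detections
print(associate(tracks, detections))                  # -> [(0, 0)]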

If you do not have a video file for testing, you can download the MOT dataset and use it locally, as shown in the snippet below.
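To run on a downloaded video instead of the webcam, point FLAGS.demo at the file before constructing TFNet. A minimal sketch, with a placeholder filename:

# Use a local test video instead of the webcam.
# "your_test_video.mp4" is a placeholder; substitute whatever file you downloaded.
FLAGS.demo = "your_test_video.mp4"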

Run

$python run.py


If no errors occur, you should be able to track the pedestrians in your own video file.

The next part covers darkflow, sort, and deep_sort.

Deep Learning-Based Video Detection (4)