
Webcam Face Detection and Facial Landmark Annotation with Python 3 and Dlib 19.7


0. Introduction

  Developed in Python, this project uses the Dlib library to capture faces from a webcam and annotate their facial landmarks in real time.

      Figure 1: demo of the project in action (GIF)

      Figure 2: demo of the project (static image)

  (The implementation is fairly simple and the code is short, so it is well suited for beginners or hobby learning.)

1. Development Environment

  python:  3.6.3

  dlib:    19.7

  OpenCV, NumPy

import dlib         # face detection / recognition library
import numpy as np  # numerical processing library
import cv2          # image processing library (OpenCV)
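
  As a quick sanity check of the environment, the installed versions can be printed (a minimal sketch; the exact patch versions on your machine may differ):

import dlib
import cv2
import numpy

# dlib should report 19.7.x for this tutorial
print("dlib:  ", dlib.__version__)
print("OpenCV:", cv2.__version__)
print("NumPy: ", numpy.__version__)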

2. Source Code Walkthrough

  The implementation is quite simple and consists of two main parts: camera capture and facial landmark annotation.

2.1 Camera Capture

  First, a quick look at how the camera is accessed with OpenCV.

  Create a capture object with cap = cv2.VideoCapture(0);

  (See the official documentation for details: https://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html)

# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie

"""
cv2.VideoCapture(): create a cv2 camera object / open the default camera

Python: cv2.VideoCapture() -> <VideoCapture object>

Python: cv2.VideoCapture(filename) -> <VideoCapture object>
    filename - name of the opened video file (e.g. video.avi) or image sequence
               (e.g. img_%02d.jpg, which will read samples like img_00.jpg, img_01.jpg, img_02.jpg, ...)

Python: cv2.VideoCapture(device) -> <VideoCapture object>
    device - id of the opened video capturing device (i.e. a camera index).
             If there is a single camera connected, just pass 0.
"""
cap = cv2.VideoCapture(0)


"""
cv2.VideoCapture.set(propId, value): set a capture property

propId:
    CV_CAP_PROP_POS_MSEC        Current position of the video file in milliseconds.
    CV_CAP_PROP_POS_FRAMES      0-based index of the frame to be decoded/captured next.
    CV_CAP_PROP_POS_AVI_RATIO   Relative position of the video file: 0 - start of the film, 1 - end of the film.
    CV_CAP_PROP_FRAME_WIDTH     Width of the frames in the video stream.
    CV_CAP_PROP_FRAME_HEIGHT    Height of the frames in the video stream.
    CV_CAP_PROP_FPS             Frame rate.
    CV_CAP_PROP_FOURCC          4-character code of codec.
    CV_CAP_PROP_FRAME_COUNT     Number of frames in the video file.
    CV_CAP_PROP_FORMAT          Format of the Mat objects returned by retrieve().
    CV_CAP_PROP_MODE            Backend-specific value indicating the current capture mode.
    CV_CAP_PROP_BRIGHTNESS      Brightness of the image (only for cameras).
    CV_CAP_PROP_CONTRAST        Contrast of the image (only for cameras).
    CV_CAP_PROP_SATURATION      Saturation of the image (only for cameras).
    CV_CAP_PROP_HUE             Hue of the image (only for cameras).
    CV_CAP_PROP_GAIN            Gain of the image (only for cameras).
    CV_CAP_PROP_EXPOSURE        Exposure (only for cameras).
    CV_CAP_PROP_CONVERT_RGB     Boolean flag indicating whether images should be converted to RGB.
    CV_CAP_PROP_WHITE_BALANCE_U The U value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently).
    CV_CAP_PROP_WHITE_BALANCE_V The V value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently).
    CV_CAP_PROP_RECTIFICATION   Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently).
    CV_CAP_PROP_ISO_SPEED       The ISO speed of the camera (note: only supported by DC1394 v 2.x backend currently).
    CV_CAP_PROP_BUFFERSIZE      Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently).

value: value of the property to set
"""
cap.set(3, 480)   # propId 3 = CV_CAP_PROP_FRAME_WIDTH

"""
cv2.VideoCapture.isOpened(): check whether the camera was initialized successfully
Returns True or False.
"""
cap.isOpened()

"""
cv2.VideoCapture.read([image]) -> retval, image: grabs, decodes and returns the next video frame
Returns two values:
    a boolean True/False indicating whether the frame was read successfully / whether the end of the video was reached
    the image object, a 3-D matrix of the frame
"""
flag, im_rd = cap.read()
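
  Putting the calls above together, a minimal capture-and-display loop might look like the following sketch (assuming camera index 0 is available; pressing q closes the preview):

import cv2

cap = cv2.VideoCapture(0)            # open the default camera
while cap.isOpened():
    flag, frame = cap.read()         # flag is False if no frame could be read
    if not flag:
        break
    cv2.imshow("camera", frame)      # show the current frame
    if cv2.waitKey(1) == ord('q'):   # wait 1 ms per frame; quit on q
        break
cap.release()
cv2.destroyAllWindows()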

2.2 Facial Landmark Annotation

  The predictor "shape_predictor_68_face_landmarks.dat" is used for the 68-point annotation. It is a model pre-trained by dlib that can be called directly to locate 68 facial landmarks on a face.

  For details, see my other blog post (http://www.cnblogs.com/AdaminXie/p/8137580.html).
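
  As a quick illustration before the full webcam script, here is a minimal sketch of 68-point annotation on a single still image (the file name face.jpg is a hypothetical example, and the .dat model file is assumed to sit next to the script):

import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                  # hypothetical example image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detector(image, upsample_num_times) returns the detected face rectangles
for rect in detector(gray, 0):
    shape = predictor(img, rect)              # full_object_detection with 68 parts
    for p in shape.parts():                   # each part is a point with .x and .y
        cv2.circle(img, (p.x, p.y), 2, (0, 255, 0), -1)

cv2.imwrite("face_landmarks.jpg", img)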

2.3 Full Source Code

  The approach is straightforward:

    Create a camera object with cv2.VideoCapture(0), then read the stream with flag, im_rd = cap.read(); im_rd is a single frame of the video at a time;

    Each frame im_rd is then handled just like a single still image: dlib detects the face and locates its landmarks, which are then drawn onto the frame;

    Press the s key to save a screenshot of the current frame, or press the q key to quit.

# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie
# github: https://github.com/coneypo/Dlib_face_detection_from_camera

import dlib                     # face detection / recognition library
import numpy as np              # numerical processing library
import cv2                      # image processing library (OpenCV)

# dlib face detector and landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# create the cv2 camera object
cap = cv2.VideoCapture(0)

# cap.set(propId, value)
# set a capture property: propId selects the property (3 = CV_CAP_PROP_FRAME_WIDTH), value is the new value
cap.set(3, 480)

# screenshot counter
cnt = 0

# cap.isOpened() returns True/False: was the camera initialized successfully?
while cap.isOpened():

    # cap.read() returns two values:
    #    a boolean True/False: was the frame read successfully / has the video ended?
    #    the image object, a 3-D matrix of the frame
    flag, im_rd = cap.read()
    if not flag:
        break

    # wait 1 ms per frame; a delay of 0 would block on a static frame
    k = cv2.waitKey(1)

    # convert to grayscale (OpenCV frames are BGR)
    img_gray = cv2.cvtColor(im_rd, cv2.COLOR_BGR2GRAY)

    # detected face rectangles
    rects = detector(img_gray, 0)

    # print(len(rects))

    # font used for the labels below
    font = cv2.FONT_HERSHEY_SIMPLEX

    # annotate the 68 landmarks
    if len(rects) != 0:
        # at least one face detected
        for i in range(len(rects)):
            landmarks = np.matrix([[p.x, p.y] for p in predictor(im_rd, rects[i]).parts()])

            for idx, point in enumerate(landmarks):
                # coordinates of this landmark
                pos = (point[0, 0], point[0, 1])

                # draw a small circle at each of the 68 landmarks with cv2.circle
                cv2.circle(im_rd, pos, 2, color=(0, 255, 0))

                # label the landmarks 1-68 with cv2.putText
                cv2.putText(im_rd, str(idx + 1), pos, font, 0.2, (0, 0, 255), 1, cv2.LINE_AA)
        cv2.putText(im_rd, "faces: " + str(len(rects)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
    else:
        # no face detected
        cv2.putText(im_rd, "no face", (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

    # on-screen instructions
    im_rd = cv2.putText(im_rd, "s: screenshot", (20, 400), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
    im_rd = cv2.putText(im_rd, "q: quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)

    # press s to save a screenshot
    if k == ord('s'):
        cnt += 1
        cv2.imwrite("screenshot" + str(cnt) + ".jpg", im_rd)

    # press q to quit
    if k == ord('q'):
        break

    # show the window
    cv2.imshow("camera", im_rd)

# release the camera
cap.release()

# close the created window
cv2.destroyAllWindows()

# Please respect the author's work: when reposting or using this source code, credit the original source: http://www.cnblogs.com/AdaminXie

# If this project helps you, feel free to star it on GitHub: https://github.com/coneypo/Dlib_face_detection_from_camera
