0. Introduction
Developed in Python, using the Dlib library to capture faces from the camera and annotate facial landmarks in real time;
Figure 1: Project demo (GIF)
Figure 2: Project demo (static image)
(The implementation is fairly simple and the code is short; suitable for getting started or hobby learning.)
1. Development environment
- Python: 3.6.3
- Dlib: 19.7
- OpenCV, NumPy

import dlib          # face detection / recognition library: dlib
import numpy as np   # numeric processing library: numpy
import cv2           # image processing library: OpenCV
2. Source code walkthrough
The implementation is quite simple and splits into two parts: camera access + facial landmark annotation.
2.1 Camera access
A quick introduction to accessing the camera in OpenCV;
Create a capture object with cap = cv2.VideoCapture(0);
(For details, see the official documentation: https://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html)
# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie

cv2.VideoCapture(): create a cv2 camera object / open the default camera
- Python: cv2.VideoCapture() -> <VideoCapture object>
- Python: cv2.VideoCapture(filename) -> <VideoCapture object>
  - filename: name of the opened video file (e.g. video.avi) or an image sequence (e.g. img_%02d.jpg, which reads frames like img_00.jpg, img_01.jpg, img_02.jpg, ...)
- Python: cv2.VideoCapture(device) -> <VideoCapture object>
  - device: id of the opened video capturing device (i.e. a camera index). If there is a single camera connected, just pass 0.

cap = cv2.VideoCapture(0)
cv2.VideoCapture.set(propId, value): set video capture properties;
- propId:
- CV_CAP_PROP_POS_MSEC Current position of the video file in milliseconds.
- CV_CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
- CV_CAP_PROP_POS_AVI_RATIO Relative position of the video file: 0 - start of the film, 1 - end of the film.
- CV_CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
- CV_CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
- CV_CAP_PROP_FPS Frame rate.
- CV_CAP_PROP_FOURCC 4-character code of codec.
- CV_CAP_PROP_FRAME_COUNT Number of frames in the video file.
- CV_CAP_PROP_FORMAT Format of the Mat objects returned by retrieve() .
- CV_CAP_PROP_MODE Backend-specific value indicating the current capture mode.
- CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
- CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras).
- CV_CAP_PROP_SATURATION Saturation of the image (only for cameras).
- CV_CAP_PROP_HUE Hue of the image (only for cameras).
- CV_CAP_PROP_GAIN Gain of the image (only for cameras).
- CV_CAP_PROP_EXPOSURE Exposure (only for cameras).
- CV_CAP_PROP_CONVERT_RGB Boolean flags indicating whether images should be converted to RGB.
- CV_CAP_PROP_WHITE_BALANCE_U The U value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently)
- CV_CAP_PROP_WHITE_BALANCE_V The V value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently)
- CV_CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
- CV_CAP_PROP_ISO_SPEED The ISO speed of the camera (note: only supported by DC1394 v 2.x backend currently)
- CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)
value: the value to set for the property / Value of the property

cap.set(3, 480)
cv2.VideoCapture.isOpened(): check whether camera initialization succeeded / check if we succeeded
Returns True or False

cap.isOpened()
cv2.VideoCapture.read([image]) -> retval, image: read the video / grabs, decodes and returns the next video frame
Returns two values:
a boolean True/False, indicating whether the read succeeded / whether the end of the video has been reached;
the image object, a three-dimensional image matrix.

flag, im_rd = cap.read()
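One pitfall with cap.read(): if you ignore the boolean and the camera disconnects (or a video file ends), the returned frame is None and any later processing crashes. A small helper, my own sketch (grab_frames and its max_frames parameter are not from the original post), that reads frames only while the flag stays True:

```python
def grab_frames(cap, max_frames=100):
    """Read frames from a capture-like object until the stream
    ends or max_frames is reached; return the frames read."""
    frames = []
    while len(frames) < max_frames:
        flag, frame = cap.read()
        if not flag:          # read failed: end of stream or camera gone
            break
        frames.append(frame)
    return frames

# usage with a real camera: grab_frames(cv2.VideoCapture(0))
```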
2.2 Facial landmark annotation
Call the predictor shape_predictor_68_face_landmarks.dat for 68-point annotation. This is a pre-trained Dlib model that can be called directly to locate the 68 facial landmarks;
See my other blog post for details (http://www.cnblogs.com/AdaminXie/p/8137580.html);
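For reference, the 68 points produced by this model follow the standard iBUG 300-W layout, grouped by facial region; knowing the index ranges is handy when you only want, say, the eyes. The grouping below is the commonly cited convention for this model, not something spelled out in the original post:

```python
# index ranges of the 68-point landmark layout used by
# shape_predictor_68_face_landmarks.dat (iBUG 300-W convention)
LANDMARK_GROUPS = {
    "jaw":           range(0, 17),   # 17 points along the jawline
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),  # bridge + nostrils
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),  # outer + inner lips
}

total = sum(len(r) for r in LANDMARK_GROUPS.values())
print(total)  # 68
```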
2.3 Source code
The approach is straightforward:
Create a camera object with cv2.VideoCapture(), then read the camera stream with flag, im_rd = cap.read(); im_rd is one frame of the video at a time;
Each frame im_rd is then handled like a single image: run Dlib landmark detection on it and draw the detected landmarks;
Press the 's' key to take a screenshot of the current frame, or the 'q' key to quit.
# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie
# github: https://github.com/coneypo/Dlib_face_detection_from_camera

import dlib            # face detection / recognition library: dlib
import numpy as np     # numeric processing library: numpy
import cv2             # image processing library: OpenCV

# Dlib face detector and landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# create the cv2 camera object
cap = cv2.VideoCapture(0)

# cap.set(propId, value)
# set a video property: propId is the property id, value is the value to set
cap.set(3, 480)

# screenshot counter
cnt = 0

# cap.isOpened() returns True/False: whether initialization succeeded
while cap.isOpened():
    # cap.read() returns two values:
    #   a boolean True/False: whether the read succeeded / end of video reached
    #   the image object, a three-dimensional matrix
    flag, im_rd = cap.read()
    if not flag:
        # read failed: camera unplugged or end of stream
        break

    # wait 1 ms per frame; a delay of 0 would show a static frame
    k = cv2.waitKey(1)

    # convert to grayscale (OpenCV frames are BGR, not RGB)
    img_gray = cv2.cvtColor(im_rd, cv2.COLOR_BGR2GRAY)

    # detected faces
    rects = detector(img_gray, 0)
    # print(len(rects))

    # font for the text drawn below
    font = cv2.FONT_HERSHEY_SIMPLEX

    # annotate the 68 landmarks
    if len(rects) != 0:
        # at least one face detected
        for i in range(len(rects)):
            landmarks = np.matrix([[p.x, p.y] for p in predictor(im_rd, rects[i]).parts()])
            for idx, point in enumerate(landmarks):
                # coordinates of each of the 68 points
                pos = (int(point[0, 0]), int(point[0, 1]))
                # draw a circle on each landmark with cv2.circle, 68 in total
                cv2.circle(im_rd, pos, 2, color=(0, 255, 0))
                # label the points 1-68 with cv2.putText
                cv2.putText(im_rd, str(idx + 1), pos, font, 0.2, (0, 0, 255), 1, cv2.LINE_AA)
        cv2.putText(im_rd, "faces: " + str(len(rects)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
    else:
        # no face detected
        cv2.putText(im_rd, "no face", (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

    # on-screen instructions
    im_rd = cv2.putText(im_rd, "s: screenshot", (20, 400), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
    im_rd = cv2.putText(im_rd, "q: quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)

    # press 's' to save a screenshot
    if k == ord('s'):
        cnt += 1
        cv2.imwrite("screenshot" + str(cnt) + ".jpg", im_rd)

    # press 'q' to quit
    if k == ord('q'):
        break

    # show the window
    cv2.imshow("camera", im_rd)

# release the camera
cap.release()

# destroy the created window
cv2.destroyAllWindows()

# Please respect others' work: credit http://www.cnblogs.com/AdaminXie when reposting or reusing this source
# If this helped you, a star on GitHub is welcome: https://github.com/coneypo/Dlib_face_detection_from_camera
Source: https://www.cnblogs.com/AdaminXie/p/8472743.html