Python: making an eye tracker that scans only the upper half of my face


My initial idea can be seen in the code below, but it still detects eyes on the lower half of my face. The goal is to have it scan only the upper half of my face, thereby rejecting false matches.

import cv2


face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

cap = cv2.VideoCapture(0)  # sets up webcam

while 1:  # capture frame, converts to greyscale, looks for faces
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:  # draws box around face
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        half_point = y
        print("half point: " + str(half_point))
        eyes = eye_cascade.detectMultiScale(roi_gray)  # looks for eyes
        for (ex, ey, ew, eh) in eyes:  # draws boxes around eyes
            check_point = ey
            print("check_point: " + str(check_point))
            if check_point > half_point:
                pass
            else:
                cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    cv2.imshow('img', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
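A side note on why the filter in this attempt never rejects anything: half_point is set to y, an absolute coordinate in the full frame, while ey comes from detectMultiScale run on roi_gray, so it is relative to the top-left corner of the face box. The comparison mixes two coordinate systems. A sketch with made-up detection values (the numbers are illustrative, not real cascade output):

```python
# Hypothetical values: the face cascade returns (x, y, w, h) in
# full-image coordinates; the eye cascade, run on roi_gray, returns
# (ex, ey, ew, eh) relative to the face ROI's top-left corner.
x, y, w, h = 100, 200, 180, 180   # face box in the full frame
ex, ey, ew, eh = 40, 120, 30, 30  # eye box near the mouth, ROI-relative

# Original check: ROI-relative ey vs. absolute y. For a face low in
# the frame, ey < y almost always, so every detection gets drawn.
print(ey > y)       # False -> box drawn even for this lower-half eye

# A consistent check keeps both sides ROI-relative: the vertical
# midpoint of the face box is simply h // 2.
print(ey > h // 2)  # True -> this detection is correctly skipped
```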

I only modified lines 15 and 16:

cv2.rectangle(img, (x, y), (x + w, y + int(h / 2)), (255, 0, 0), 2)
roi_gray = gray[y:y + int(h / 2), x:x + w]
Full code:

import cv2


face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

cap = cv2.VideoCapture(0)  # sets up webcam

while 1:  # capture frame, converts to greyscale, looks for faces
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:  # draws box around face
        cv2.rectangle(img, (x, y), (x + w, y + int(h / 2)), (255, 0, 0), 2) #Modified
        roi_gray = gray[y:y + int(h / 2), x:x + w] #Modified
        roi_color = img[y:y + h, x:x + w]
        half_point = y
        print("half point: " + str(half_point))
        eyes = eye_cascade.detectMultiScale(roi_gray)  # looks for eyes
        for (ex, ey, ew, eh) in eyes:  # draws boxes around eyes
            check_point = ey
            print("check_point: " + str(check_point))
            if check_point > half_point:
                pass
            else:
                cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    cv2.imshow('img', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
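An alternative that keeps the full-face ROI and instead filters the eye boxes by their ROI-relative centre; a sketch with made-up boxes (upper_half_eyes is a hypothetical helper, not part of OpenCV):

```python
def upper_half_eyes(eyes, face_h):
    """Keep only eye boxes whose vertical centre lies in the top half
    of a face ROI of height face_h. Boxes are (ex, ey, ew, eh) tuples
    in ROI-relative coordinates, as returned by detectMultiScale."""
    return [(ex, ey, ew, eh) for (ex, ey, ew, eh) in eyes
            if ey + eh // 2 < face_h // 2]

# Two fake detections on a 180-pixel-tall face ROI: one near the
# eyes, one near the mouth.
detections = [(40, 50, 30, 30), (60, 130, 30, 30)]
print(upper_half_eyes(detections, 180))  # [(40, 50, 30, 30)]
```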
However, I suggest using dlib instead. It is more reliable.

Here is my example:

import numpy as np
import cv2
import dlib


cap = cv2.VideoCapture(0)

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
detector = dlib.get_frontal_face_detector()

def draw_on_frame(eye):
    # Relies on the module-level 'frame' and 'landmarks' set in the loop below.
    coordinates = np.array([])
    for i in eye:
        x = landmarks.part(i).x
        y = landmarks.part(i).y
        cv2.circle(frame, (x, y), 3, (0, 0, 255), -1)  # mark each landmark
        coordinates = np.append(coordinates, [x, y])
    # Bounding box around the eye landmarks; boundingRect needs integer points.
    x1, y1, w1, h1 = cv2.boundingRect(coordinates.reshape(-1, 2).astype(int))
    cv2.rectangle(frame, (x1, y1), (x1 + w1, y1 + h1), (0, 255, 0), 1)
    return x1, y1, w1, h1


while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)

        for face in faces:
            landmarks = predictor(gray, face)

            left_eye = range(36, 42)   # indices 36-41: one eye in the 68-point model
            right_eye = range(42, 48)  # indices 42-47: the other eye

            left = draw_on_frame(left_eye)
            right = draw_on_frame(right_eye)

            roi_left = frame[left[1]:left[1]+left[3], left[0]:left[0]+left[2]]
            roi_right = frame[right[1]:right[1] + right[3], right[0]:right[0] + right[2]]

        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    else:
        break

cap.release()
cv2.destroyAllWindows()

Using the Haar-cascade face detector, crop the bounding box from the middle to the top, i.e. a small adjustment such as

roi_gray = gray[y + h // 2:y + h, x:x + w]

You may also want to make the same edit to roi_color. Note: I'm not sure whether y+ points down or up. If it points down, it should instead be

roi_gray = gray[y:y + h // 2, x:x + w]

I ran into the same problem in the past, and the most helpful change was to use

eye_rectangles, reject_levels, confidence = self.CLF_EYES.detectMultiScale3(gray[y:y + height, x:x + width], scaleFactor=1.1, minNeighbors=min_neighbors, minSize=(15, 15), outputRejectLevels=True)

Note that detectMultiScale3 provides confidence levels. Increasing minNeighbors will help ignore poor matches; I suggest a value around minNeighbors=7. Finally, setting minSize will reduce computation time.

Stack Overflow doesn't show line numbers - which ones are lines 15 and 16? You could show those lines before the full code.

I modified these two lines: cv2.rectangle(img, (x, y), (x + w, y + int(h / 2)), (255, 0, 0), 2) and roi_gray = gray[y:y + int(h / 2), x:x + w]

Please add that to the answer so everyone can see it - this answer can help other visitors too.