Python: expanding a convex hull region


Using dlib's facial landmark indexes, I found the index points for each eye on my face. After computing a convex hull for each of them, I masked them out of the video frame with cv2.fillConvexPoly. I would like to know whether these convex hull regions can be expanded, so that more of the image around my eyes is visible rather than just the inside of the eyes (as shown in figure 2). Basically, I want to enlarge the masked eye contour. Any help would be appreciated.

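One way to widen the masked region without touching the hull points themselves is to dilate the binary mask after filling it. A minimal sketch, assuming the mask variable built in the script below; the 31x31 elliptical kernel is an arbitrary choice that grows the contour by roughly 15 pixels in every direction:

    # grow the filled eye mask outward before applying it to the frame
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
    mask = cv2.dilate(mask, kernel, iterations=1)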

# import the necessary packages
from scipy.spatial import distance as dist
from scipy.spatial import ConvexHull, convex_hull_plot_2d
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import argparse
import imutils
import time
import dlib
import cv2

 
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to facial landmark predictor")
ap.add_argument("-v", "--video", type=str, default="",
    help="path to input video file")
args = vars(ap.parse_args())

# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

# grab the indexes of the facial landmarks for the left and
# right eye, respectively
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
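# for the standard 68-point model these resolve to left_eye = (42, 48)
# and right_eye = (36, 42), i.e. six landmarks per eye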


# start the video stream thread
print("[INFO] starting video stream thread...")

vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(1.0)

# loop over frames from the video stream
while True:
    # grab the frame from the threaded video stream, resize
    # it, and convert it to grayscale
    frame = vs.read()
    # imutils.resize preserves the aspect ratio, so only width is needed
    frame = imutils.resize(frame, width=800)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = np.zeros_like(gray)

    # detect faces in the grayscale frame
    rects = detector(gray, 0)

    # loop over the face detections
    for rect in rects:
        # determine the facial landmarks for the face region, then
        # convert the facial landmark (x, y)-coordinates to a NumPy
        # array
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # extract the left and right eye coordinates
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]

        # compute the convex hull for the left and right eye, then
        # visualize each of the eyes
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        
        cv2.fillConvexPoly(mask, leftEyeHull, 255)
        cv2.fillConvexPoly(mask, rightEyeHull, 255)

    # apply the mask outside the face loop so `eyes` is always
    # defined, even on frames where no face is detected
    eyes = cv2.bitwise_and(frame, frame, mask=mask)

    # show the frame
    cv2.imshow("eyes", eyes)
    cv2.imshow("mask", mask)
    key = cv2.waitKey(1) & 0xFF
 
    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
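As for enlarging the hulls themselves: each hull can be scaled away from its centroid before filling, which extends the masked contour while keeping the eye shape. A sketch under the same setup; the helper name scale_hull and the 1.5 factor are illustrative choices, not part of dlib or OpenCV:

def scale_hull(hull, scale=1.5):
    # move every hull vertex away from the centroid by `scale`,
    # enlarging the polygon while preserving its shape
    pts = hull.reshape(-1, 2).astype(np.float32)
    centroid = pts.mean(axis=0)
    scaled = (pts - centroid) * scale + centroid
    return scaled.astype(np.int32)

# then, inside the detection loop, fill the enlarged hulls instead:
# cv2.fillConvexPoly(mask, scale_hull(leftEyeHull), 255)
# cv2.fillConvexPoly(mask, scale_hull(rightEyeHull), 255)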