
Python face recognition script stops unexpectedly

Tags: python, numpy, opencv, face-recognition, dlib

This isn't actually my script, but after a lot of back and forth with its creator, he's as baffled as I am, so I'm posting on his behalf since he can't reproduce the problem to debug it.

The script is called faceripper9000, based on , and is supposed to take a directory containing MP4s, plus a photo of a target person, then output every frame of the clips in which that person appears.

You can see the script, but I'll also add it at the bottom of this post.

The dependencies are numpy, opencv, dlib, and face_recognition, all of which appear to be installed correctly, but when the rest of us run the script we get the following message:

File "demo.py", line 161, in
os.rename(i, too_small + "/too small-" + str(counter) + random_string(15) + ".jpg")
NameError: name 'random_string' is not defined
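For context, this is the standard error Python raises when a name is referenced before any definition of it has executed; a minimal reproduction, unrelated to the script itself:

```python
# Referencing a name before any definition of it has run raises
# NameError with exactly the message shown in the traceback above.
try:
    random_string(15)  # no "def random_string" has executed yet
    err = None
except NameError as exc:
    err = str(exc)
print(err)
```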
We got past that by replacing print("Target image loaded" + target_image) with print("Target image loaded" + str(target_image)), but when we ran it again, the script started loading the target image and then stopped a few lines after printing out an array of numbers like this:

[ INFO:0] Initialize OpenCL runtime...
Using OpenCL: True.
Output directory: /Users/example/Development/deep/pic/pic_output.
Scanned videos will be moved to: /Users/example/Development/deep/pic/pic_scanned_vids.
Target image loaded[[[199 196 191]
  [199 196 191]
  [199 196 191]
  ...
  [184 180 177]
  [184 180 177]
  [184 180 177]]

 [[199 196 191]
  [199 196 191]
  [199 196 191]
  ...
  [184 180 177]
  [184 180 177]
  [184 180 177]]

 [[199 196 191]
  [199 196 191]
  [200 197 192]
  ...
  [184 180 177]
  [184 180 177]
  [184 180 177]]

 ...

 [[166 166 166]
  [164 164 164]
  [165 165 167]
  ...
  [148 149 151]
  [146 148 145]
  [146 147 142]]

 [[166 166 166]
  [164 164 164]
  [165 165 167]
  ...
  [149 150 152]
  [146 148 145]
  [146 147 142]]

 [[166 166 166]
  [164 164 164]
  [165 165 167]
  ...
  [150 151 153]
  [146 148 145]
  [146 147 142]]]
I've tried it on a few different versions of Python, but nothing seems to change.

Here is the script:

import face_recognition
import numpy as np
import cv2
import glob
import random
import string
import os
import math
import argparse


os.system('cls' if os.name=='nt' else 'clear')

parser = argparse.ArgumentParser();
parser.add_argument('-i', type=str, help='Image of target face to scan for.', required=True)
parser.add_argument('-v', type=str, help='Video to process', required=True)
parser.add_argument('-t', type=float, help='Tolerance of face detection, lower is stricter. (0.1-1.0)', default=0.6)
parser.add_argument('-f', type=int, help='Amount of frames per second to extract.', default=25)
parser.add_argument('-n', type=int, help='Number of frames with target face to save from each vid.', default=1000)
parser.add_argument('-s', type=int, help='Minimum KB size of images to keep in the faceset.', default=32)
args = vars(parser.parse_args())

if args['t'] > 1.0:
    args['t'] = 1.0
elif args['t'] < 0.1:
    args['t'] = 0.1

min_KB = args['s']
tol = args['t']
xfps = args['f']
targfname = args['i']
vid_dir = args['v']
faces_from_each_video = args['n']

if faces_from_each_video < 1:
    faces_from_each_video = 1000

if min_KB < 1:
    min_KB = 32

print("Target filename: " + targfname + ".")
print("Video input directory: " + vid_dir + ".")
print("Tolerance: " + str(tol) + ".")
print("Number of confirmed faces saved from each video: " + str(faces_from_each_video) + ".")

if(cv2.ocl.haveOpenCL()):
    cv2.ocl.setUseOpenCL(True)
    print("Using OpenCL: " + str(cv2.ocl.useOpenCL()) + ".")

target_image = face_recognition.load_image_file(targfname)
outdir = str(str(os.path.splitext(targfname)[0]) + "_output");
scanned_vids = str(str(os.path.splitext(targfname)[0]) + "_scanned_vids");
too_small = str(str(os.path.splitext(targfname)[0]) + "_too_small");

#check if output directories already exists, and if not, create it
os.makedirs(outdir, exist_ok=True)
os.makedirs(scanned_vids, exist_ok=True)
os.makedirs(too_small, exist_ok=True)

print("Output directory: " + outdir + ".")
print("Scanned videos will be moved to: " + scanned_vids + ".")

try:
    print ("Target image loaded" + str(target_image))
    target_encoding = face_recognition.face_encodings(target_image)[0]
except IndexError:
    print("No face found in target image.")
    raise SystemExit(0)
vid = True
while(vid):
    try:
        vid = random.choice(glob.glob(vid_dir + '*.mp4'))
        print("Now looking at video: " + vid)
        input_video = cv2.VideoCapture(vid)

        framenum = 0
        vidheight = input_video.get(4)
        vidwidth = input_video.get(3)
        vidfps = input_video.get(cv2.CAP_PROP_FPS)
        totalframes = input_video.get(cv2.CAP_PROP_FRAME_COUNT)
        outputsize = 256, 256

        if xfps > vidfps:
            xfps = vidfps

        print("Frame Width: " + str(vidwidth) + ", Height: " + str(vidheight) + ".")

        known_faces = [
            target_encoding
        ]

        def random_string(length):
            return ''.join(random.choice(string.ascii_letters) for m in range(length))

        #switch to output directory
        os.chdir(str(os.path.splitext(targfname)[0]) + "_output")

        written = 1
        while(input_video.isOpened()):
            input_video.set(1, (framenum + (vidfps/xfps)))
            framenum += vidfps/xfps
            ret, frame = input_video.read()

            if not ret:
                break

            percentage = (framenum/totalframes)*100
            print("Checking frame " + str(int(framenum)) + "/" + str(int(totalframes)) + str(" (%.2f%%)" % percentage))

            rgb_frame = frame[:, :, ::-1]

            face_locations = face_recognition.face_locations(rgb_frame)
            face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)

            for fenc, floc in zip(face_encodings, face_locations):
                istarget = face_recognition.compare_faces(known_faces, fenc, tolerance=float(tol))

                #if the face found matches the target
                if istarget[0]:
                    top, right, bottom, left = floc
                    facefound = True
                    #squaring it up
                    if (bottom - top) > (right - left):
                        right = left + (bottom - top)
                    elif (right - left) > (bottom - top):
                        bottom = top + (right - left)
                    #calculating the diagonal of the cropped face for rotation purposes
                    #diagonal = math.sqrt(2*(bottom - top))
                    #padding = diagonal / 2
                    #alignment script causes images cropped "too closely" to get a bit fucky, so crop them less severely.
                    padding = (bottom - top)/2

                    if((top - padding >= 0) and (bottom + padding <= vidheight) and (left - padding >= 0) and (right + padding <= vidwidth)):
                        croppedframe = frame[int(top - padding):int(bottom + padding), int(left - padding):int(right + padding)]
                        #if the image is too small, resize it to outputsize
                        cheight, cwidth, cchannels = croppedframe.shape
                        if (cheight < 256) or (cwidth < 256):
                            croppedframe = cv2.resize(croppedframe, outputsize, interpolation=cv2.INTER_CUBIC)
                        print('Writing image ' + str(written) + '.')
                        cv2.imwrite(("vid_" + str(zz) + random_string(15) + ".jpg"), croppedframe, [int(cv2.IMWRITE_JPEG_QUALITY), 98])
                        written += 1
            if percentage > 99.9:
                os.rename(vid, scanned_vids + '/vid' + str(zz) + '_' + random_string(5) + '.mp4')
                break
            if written > faces_from_each_video:
                os.rename(vid, scanned_vids + '/vid' + str(zz) + '_' + random_string(5) + '.mp4')
                break
        input_video.release()
    except ValueError:
        print ("Scanning videos complete.")
        pass
    except IndexError:
        pass
#Removes images under 32KB
counter = 0
low_quat = min_KB * 1000
for xx in (os.listdir(os.getcwd())):
    if(os.path.getsize(xx)) < low_quat:
        os.rename(xx, too_small + "/too small-" + str(counter) + random_string(15) + ".jpg")
        print ("Moving " + str(xx) + " to the too small folder")
        counter += 1


#Remove images with more than one face
print ("Now double checking there is only one face in each photo")
for yy in (os.listdir(os.getcwd())):
    # Load the jpg file into a numpy array
    image = face_recognition.load_image_file(yy)

    # Find all the faces in the image using a pre-trained convolutional neural network.
    # This method is more accurate than the default HOG model, but it's slower
    # unless you have an nvidia GPU and dlib compiled with CUDA extensions. But if you do,
    # this will use GPU acceleration and perform well.
    # See also: find_faces_in_picture.py
    face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=0, model="cnn")

    print("I found {} face(s) in this photograph.".format(len(face_locations)))

    if not (len(face_locations)) == 1:
        os.remove(yy)
        print (str(yy) + ' was removed')
I don't even know where to begin trying to fix this, so any help would be greatly appreciated.

We can see that print("Target image loaded" + str(target_image)) is being executed, because "Target image loaded" is printed, and if that is printed then str(target_image) is printed too, so what follows must be exactly that: [[[199 196 191]... looks like a 2-D array of RGB triplets.
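For illustration, face_recognition.load_image_file returns a NumPy array of exactly that shape: height x width pixels, each an RGB triplet. A tiny synthetic example (pixel values copied from the output above):

```python
import numpy as np

# A 2x2 "image": the last axis holds one RGB triplet per pixel,
# matching the [[[199 196 191] ... ]] dump in the question.
img = np.array([[[199, 196, 191], [184, 180, 177]],
                [[166, 166, 166], [146, 147, 142]]], dtype=np.uint8)
print(img.shape)  # height, width, channels
```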


Perhaps the program was meant to print the target image's name rather than its contents.
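For example, printing the path passed via -i instead of the loaded array (targfname is the variable the posted script already uses for that path):

```python
targfname = "target.jpg"  # stand-in for the -i argument
# Print the file name rather than the full pixel array.
message = "Target image loaded: " + targfname
print(message)
```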

Why are you defining random_string(length) inside the while loop??? Move it outside the loop. By the way, your try/except blocks are far too broad. Download PyCharm or another IDE and start debugging line by line.
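A minimal sketch of that restructuring, using names from the posted script (vid_dir is assumed to hold the video directory path):

```python
import glob
import random
import string

# Define the helper once at module level, not inside the video loop,
# so it exists before any code path that calls it.
def random_string(length):
    return ''.join(random.choice(string.ascii_letters) for _ in range(length))

# Narrow the try/except to the one call that can actually fail,
# rather than wrapping the whole loop body and silently swallowing bugs.
def pick_video(vid_dir):
    try:
        return random.choice(glob.glob(vid_dir + '*.mp4'))
    except IndexError:  # random.choice raised on an empty list
        return None
```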