
Python: how to stop streaming OpenCV frames to the browser

Tags: python, opencv, video-streaming, face-recognition


I am trying to stream OpenCV frames to the browser. After some research, I came across Miguel Grinberg's tutorial on video streaming with Flask.

Let me break down what I want to achieve: on the home page I stream OpenCV frames live, and on another page I need to take a picture with the webcam.

The problem: streaming to the browser the way Miguel does starts an infinite thread, so the camera is never released when I want to take a picture on the other page. When I switch back to the home page, I get the following errors:

VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream: Device or resource busy
video stream started
OpenCV(3.4.1) Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/eli/cv/OpenCV-3.4.1/modules/imgproc/src/color.cpp, line 11115
Debugging middleware caught exception in streamed response at a point where response headers were already sent
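The cvtColor assertion is the key clue here: scn is the number of channels in the source image, and the check fails because vs.read() hands back None while the device is still held by the other page, and that None then reaches cv2.cvtColor. A tiny illustration (not from the post; the exact message varies by OpenCV version):

# illustrate the assertion: cvtColor rejects a source without 3 or 4 channels
import cv2

frame = None  # what VideoStream.read() returns while the camera is busy
try:
    cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
except cv2.error as err:
    # OpenCV 3.4 reports "Assertion failed (scn == 3 || scn == 4)";
    # newer builds report "!_src.empty()" instead
    print(err)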

Here is my code:

detect_face_video.py
This is where I do the face recognition:

# import the necessary packages
from imutils.video import VideoStream
import face_recognition
import argparse
import imutils
import pickle
import time
import cv2
from flask import Flask, render_template, Response
import sys
import numpy
from app.cv_func import draw_box
import redis
import datetime
from app.base_camera import BaseCamera
import os

# shared Redis connection used to publish recognized names
red = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)



class detect_face:

    def gen(self):
        # simple test generator: streams the numbers 1-9 as multipart parts
        i = 1
        while i < 10:
            yield (b'--frame\r\n'
                   b'Content-Type: text/plain\r\n\r\n' + str(i).encode() + b'\r\n')
            i += 1

    def get_frame(self):

        dir_path = os.path.dirname(os.path.realpath(__file__))

        # load the known faces and embeddings
        print("[INFO] loading encodings...")
        data = pickle.loads(open("%s/encode.pickle" % dir_path, "rb").read())

        # initialize the video stream and allow the camera sensor to warm up
        print("[INFO] starting video stream...")
        vs = VideoStream(src=1).start()
        print("video stream started")

        # loop over frames from the threaded video stream
        counter = 1
        while True:

            # grab the frame from the threaded video stream
            try:
                frame = vs.read()
            except Exception as ex:
                print("an error occurred here")
                print(ex)
                continue

            # vs.read() returns None while the camera is held by another
            # process; passing None to cvtColor raises the assertion above
            if frame is None:
                continue

            # convert the input frame from BGR to RGB, then resize it to a
            # width of 450px (to speed up processing) and keep the scaling
            # ratio for the box coordinates
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            rgb = imutils.resize(rgb, width=450)
            r = frame.shape[1] / float(rgb.shape[1])


            # detect the (x, y)-coordinates of the bounding boxes
            # corresponding to each face in the input frame, then compute
            # the facial embeddings for each face
            boxes = face_recognition.face_locations(rgb, model="hog")
            encodings = face_recognition.face_encodings(rgb, boxes)
            names = []

            # loop over the facial embeddings
            for encoding in encodings:
                # attempt to match each face in the input image to our
                # known encodings
                matches = face_recognition.compare_faces(data["encodings"],
                    encoding)
                name = "Unknown"

                # check to see if we have found a match
                if True in matches:
                    # find the indexes of all matched faces, then initialize
                    # a dictionary to count the total number of times each
                    # face was matched
                    matchedIdxs = [i for (i, b) in enumerate(matches) if b]
                    counts = {}

                    # loop over the matched indexes and maintain a count
                    # for each recognized face
                    for i in matchedIdxs:
                        name = data["names"][i]
                        counts[name] = counts.get(name, 0) + 1

                    # determine the recognized face with the largest number
                    # of votes (note: in the event of an unlikely tie Python
                    # will select the first entry in the dictionary)
                    name = max(counts, key=counts.get)

                # update the list of names and publish them to Redis
                names.append(name)
                red.set('currentName', name)

                key = 'StudentName%d' % counter
                if name != 'Unknown':
                    red.set(key, name)
                red.set('counter', counter)
                counter += 1

            # loop over the recognized faces
            for ((top, right, bottom, left), name) in zip(boxes, names):
                # rescale the face coordinates
                top = int(top * r)
                right = int(right * r)
                bottom = int(bottom * r)
                left = int(left * r)

                # draw the predicted face name on the image
                cv2.rectangle(frame, (left, top), (right, bottom),
                    (0, 255, 0), 2)
                y = top - 15 if top - 15 > 15 else top + 15
                cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
                    0.75, (0, 255, 0), 2)

            # encode the annotated frame as JPEG and yield it as one part
            # of the multipart response (image/jpeg, so the browser can
            # render the frames)
            imgencode = cv2.imencode('.jpg', frame)[1]
            stringData = imgencode.tobytes()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + stringData + b'\r\n')

        # never reached: the loop above runs forever, which is exactly why
        # the camera is not released when leaving the page
        vs.stop()
        cv2.destroyAllWindows()

And this is the Flask app that serves the stream to the browser:

from flask import Flask, render_template, request, Response, jsonify, make_response
from app.detect_face_video import detect_face

app = Flask(__name__)
detect = detect_face()


@app.route('/')
def index():
    return render_template('index.html')


def get_frame_():
    detect.gen()
    detect.get_frame()


@app.route('/calc')
def calc():
    # this function displays the video stream in the webpage
    # detect.vs.stop()
    return Response(detect.get_frame(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

How can I stop (or rather pause) the streaming whenever I leave that page (the home page)?
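One way to do this (a minimal sketch, not from the original post; the stop_event name and the /stop_stream route are illustrative): keep a module-level threading.Event, have the frame generator check it on every iteration instead of looping forever, and set it from a route before navigating to the photo page. Werkzeug also closes the generator when the client disconnects, so the finally block releases the camera in either case:

# a minimal sketch of a stoppable stream; assumes the Flask `app` above
import threading

import cv2
from imutils.video import VideoStream

stop_event = threading.Event()


def gen_frames():
    # yield MJPEG parts until stop_event is set, then release the camera
    stop_event.clear()
    vs = VideoStream(src=1).start()
    try:
        while not stop_event.is_set():  # instead of `while True`
            frame = vs.read()
            if frame is None:  # camera busy or still warming up
                continue
            ok, jpg = cv2.imencode('.jpg', frame)
            if ok:
                yield (b'--frame\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + jpg.tobytes() + b'\r\n')
    finally:
        vs.stop()  # frees the device for the photo page


@app.route('/stop_stream')
def stop_stream():
    stop_event.set()  # the generator exits on its next iteration
    return ('', 204)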

If you are looking for a faster, more robust, and simpler way to stream frames to the browser, you can use my Python library VidGear, which provides a powerful ASGI video-streaming API built on top of a lightweight ASGI async framework/toolkit.

Requirement: it works only with Python 3.6+.

First install VidGear (with asyncio support) and Uvicorn:

# install VidGear
python3 -m pip install vidgear[asyncio]
# additional dependency
python3 -m pip install uvicorn

Then you can use this complete Python example, which runs a video server at an address reachable by any browser on the network, in just a few lines of code:

# import libs
import uvicorn
from vidgear.gears import WebGear

# various performance tweaks
options = {"frame_size_reduction": 40, "frame_jpeg_quality": 80, "frame_jpeg_optimize": True, "frame_jpeg_progressive": False}

# initialize the WebGear app with a suitable video file (e.g. "foo.mp4")
web = WebGear(source="foo.mp4", logging=True, **options)

# run this app on an Uvicorn server at http://0.0.0.0:8000/
uvicorn.run(web(), host='0.0.0.0', port=8000)

# safely close the app
web.shutdown()

If you still get errors, open an issue in its GitHub repo.
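A side note on the example above (my assumption, not stated in the answer): since the question streams a webcam rather than a video file, the same source parameter should also take a camera index:

# assumption: a device index instead of a file name streams the default webcam
web = WebGear(source=0, logging=True, **options)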
