
Python OpenCV & YOLO object detection and tracking

Tags: python, opencv, deep-learning, yolo, dlib

I am stuck on this code. I am unable to construct a dlib rectangle object from the bounding box coordinates and then start the dlib correlation tracker. I am trying to use YOLO for object detection and then use those bounding box coordinates for object tracking over the next 30 frames, but I cannot get this code to work for me. Can someone please help me?

When I execute the command, I get these errors:

F:\Python\People-Counting-in-Real-Time-master\People-Counting-in-Real-Time-master>python Test.py --yolo Yolo --input videos/T1.mp4 --output videos/Test112.avi -s 2
[0x7FF99EEA7EA0] ANOMALY: meaningless REX prefix used
[INFO] loading YOLO from disk...
[INFO] Starting the video..
[0x7FF9A186F410] ANOMALY: meaningless REX prefix used
Traceback (most recent call last):
  File "Test.py", line 350, in <module>
    run()
  File "Test.py", line 166, in run
    tracker.start_track(rgb, rect)
RuntimeError:
Error detected at line 61.
Error detected in file C:\Users\270938\AppData\Local\Temp\pip-install-iv_v5my6\dlib\dlib\image_processing/correlation_tracker.h.
Error detected in function void __cdecl dlib::correlation_tracker::start_track<class dlib::numpy_image<struct dlib::rgb_pixel>>(const class dlib::numpy_image<struct dlib::rgb_pixel> &,const class dlib::drectangle &).

Failing expression was p.is_empty() == false.
         void correlation_tracker::start_track()
         You can't give an empty rectangle.

F:\Python\People-Counting-in-Real-Time-master\People-Counting-in-Real-Time-master>python Test.py --yolo Yolo --input videos/T1.mp4 --output videos/Test112.avi -s 10
[0x7FF99EEA7EA0] ANOMALY: meaningless REX prefix used
[INFO] loading YOLO from disk...
[INFO] Starting the video..
[0x7FF9A186F410] ANOMALY: meaningless REX prefix used
Traceback (most recent call last):
  File "Test.py", line 350, in <module>
    run()
  File "Test.py", line 293, in run
    export_data = zip_longest(*d, fillvalue = '')
TypeError: 'int' object is not iterable
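The first traceback shows dlib rejecting an empty rectangle in start_track. One likely reason, judging from the listing below: boxes holds entries of the form [x, y, w, h], but they are later unpacked as (startX, startY, endX, endY), so the width and height land where dlib expects the right and bottom corners. Here is a minimal, self-contained sketch (hypothetical frame and box values, not the original code) of how a correlation tracker is usually seeded from a YOLO-style box:

import dlib
import numpy as np

# stand-in for the RGB frame; dlib's tracker expects RGB, not OpenCV's BGR order
rgb = np.zeros((375, 500, 3), dtype=np.uint8)

# hypothetical YOLO-style box: top-left corner plus width and height, in pixels
x, y, w, h = 120, 80, 60, 140

# dlib.rectangle takes corner coordinates (left, top, right, bottom), so the
# width/height form has to be converted into a bottom-right corner first
rect = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))

# start_track() raises the "You can't give an empty rectangle" error for
# degenerate boxes, so guard before seeding the tracker
if w > 0 and h > 0:
    tracker = dlib.correlation_tracker()
    tracker.start_track(rgb, rect)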
The GitHub link uses almost the same code. I am using YOLO for detection instead of MobileNet SSD.
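If the reference code was written for MobileNet SSD, the box decoding is the part that most plausibly needs to change: the cv2.dnn SSD output already carries normalized corner coordinates, whereas each YOLO output row starts with a normalized center-x, center-y, width and height. A small sketch of the two decodings with hypothetical rows (the variable names are illustrative, not taken from the script):

import numpy as np

W, H = 500, 375                                # example frame size after resizing

# MobileNet-SSD style row: columns 3:7 are already normalized (left, top, right, bottom)
ssd_row = np.array([0.0, 15.0, 0.9, 0.24, 0.21, 0.36, 0.59])
(startX, startY, endX, endY) = (ssd_row[3:7] * np.array([W, H, W, H])).astype("int")

# YOLO style row: the first four values are normalized center-x, center-y, width, height
yolo_row = np.array([0.30, 0.40, 0.12, 0.38])
center_x, center_y = int(yolo_row[0] * W), int(yolo_row[1] * H)
w, h = int(yolo_row[2] * W), int(yolo_row[3] * H)
startX, startY = center_x - w // 2, center_y - h // 2
endX, endY = startX + w, startY + h            # these are the corners dlib.rectangle wants

As an aside, classID returned by np.argmax is an integer index, so the comparison classID != "person" in the detection loop below is always true; filtering on CLASSES[classID] == "person" is presumably what was intended.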

from mylib.centroidtracker import CentroidTracker
from mylib.trackableobject import TrackableObject
from imutils.video import VideoStream
from imutils.video import FPS
from mylib.mailer import Mailer
from mylib import config, thread
import time, schedule, csv
import numpy as np
import argparse, imutils
import time, dlib, cv2, datetime
from itertools import zip_longest
import os
t0 = time.time()

def run():

        # construct the argument parse and parse the arguments
        ap = argparse.ArgumentParser()
        ap.add_argument("-i", "--input", required=True,
                help="path to input video")
        ap.add_argument("-o", "--output", required=True,
                help="path to output video")
        ap.add_argument("-y", "--yolo", required=True,
                help="base path to YOLO directory")
        ap.add_argument("-c", "--confidence", type=float, default=0.5,
                help="minimum probability to filter weak detections")
        ap.add_argument("-t", "--threshold", type=float, default=0.3,
                help="threshold when applyong non-maxima suppression")
        ap.add_argument("-s", "--skip-frames", type=int, default=30,
                help="# of skip frames between detections")
        args = vars(ap.parse_args())

        # load the COCO class labels our YOLO model was trained on
        labelsPath = os.path.sep.join([args["yolo"], "yolov3.txt"])
        CLASSES = open(labelsPath).read().strip().split("\n")
        # initialize a list of colors to represent each possible class label
        np.random.seed(42)
        COLORS = np.random.randint(0, 255, size=(len(CLASSES), 3),dtype="uint8")
        # derive the paths to the YOLO weights and model configuration
        weightsPath = os.path.sep.join([args["yolo"], "yolov3.weights"])
        configPath = os.path.sep.join([args["yolo"], "yolov3.cfg"])
        # load our YOLO object detector trained on COCO dataset (80 classes)
        # and determine only the *output* layer names that we need from YOLO
        print("[INFO] loading YOLO from disk...")
        net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
        ln = net.getLayerNames()
        ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

        # if a video path was not supplied, grab a reference to the ip camera
        if not args.get("input", False):
                print("[INFO] Starting the live stream..")
                vs = VideoStream(config.url).start()
                time.sleep(2.0)

        # otherwise, grab a reference to the video file
        else:
                print("[INFO] Starting the video..")
                vs = cv2.VideoCapture(args["input"])

        # initialize the video writer (we'll instantiate later if need be)
        writer = None

        # initialize the frame dimensions (we'll set them as soon as we read
        # the first frame from the video)
        W = None
        H = None

        # instantiate our centroid tracker, then initialize a list to store
        # each of our dlib correlation trackers, followed by a dictionary to
        # map each unique object ID to a TrackableObject
        ct = CentroidTracker(maxDisappeared=40, maxDistance=50)
        trackers = []
        trackableObjects = {}

        # initialize the total number of frames processed thus far, along
        # with the total number of objects that have moved either up or down
        totalFrames = 0
        totalDown = 0
        totalUp = 0
        x = []
        empty=[]
        empty1=[]

        # start the frames per second throughput estimator
        fps = FPS().start()

        if config.Thread:
                vs = thread.ThreadingClass(config.url)

        # loop over frames from the video stream
        while True:
                # grab the next frame and handle if we are reading from either
                # VideoCapture or VideoStream
                frame = vs.read()
                frame = frame[1] if args.get("input", False) else frame

                # if we are viewing a video and we did not grab a frame then we
                # have reached the end of the video
                if args["input"] is not None and frame is None:
                        break

                # resize the frame to have a maximum width of 500 pixels (the
                # less data we have, the faster we can process it), then convert
                # the frame from BGR to RGB for dlib
                frame = imutils.resize(frame, width = 500)
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

                # if the frame dimensions are empty, set them
                if W is None or H is None:
                        (H, W) = frame.shape[:2]

                # if we are supposed to be writing a video to disk, initialize
                # the writer
                if args["output"] is not None and writer is None:
                        fourcc = cv2.VideoWriter_fourcc(*"MJPG")
                        writer = cv2.VideoWriter(args["output"], fourcc, 30,
                                (W, H), True)

                # initialize the current status along with our list of bounding
                # box rectangles returned by either (1) our object detector or
                # (2) the correlation trackers
                status = "Waiting"
                rects = []

                # check to see if we should run a more computationally expensive
                # object detection method to aid our tracker
                if totalFrames % args["skip_frames"] == 0:
                        # set the status and initialize our new set of object trackers
                        status = "Detecting"
                        trackers = []

                        # convert the frame to a blob and pass the blob through the
                        # network and obtain the detections
                        blob = cv2.dnn.blobFromImage(frame, 1/255, (416, 416), swapRB=True, crop=False)
                        net.setInput(blob)
                        detections = net.forward(ln)
                        # initialize our lists of detected bounding boxes, confidences,
                        # and class IDs, respectively
                        boxes = []
                        confidences = []
                        classIDs = []

                        # loop over the detections
                        for output in detections:
                                for detection in output:
                                        scores = detection[5:]
                                        classID = np.argmax(scores)
                                        confidence = scores[classID]
                                        if confidence > args["confidence"]:
                                                if classID != "person":
                                                        center_x = int(detection[0] * W)
                                                        center_y = int(detection[1] * H)
                                                        w=int(detection[2] * W)
                                                        h=int(detection[3] * H)
                                                        x = int (center_x - w / 2)
                                                        y = int (center_y - h /2 )
                                                        boxes.append([x, y, w, h])
                                                        confidences.append(float(confidence))
                                                        classIDs.append(classID)
                        indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.4, 0.3)
                        for i in range(len(boxes)):
                                if i in indexes:
                                        startX, startY, endX, endY = boxes[i]
                                        tracker = dlib.correlation_tracker()
                                        rect = dlib.rectangle(startX, startY, endX, endY)
                                        tracker.start_track(rgb, rect)
                                        trackers.append(tracker)

                # otherwise, we should utilize our object *trackers* rather than
                # object *detectors* to obtain a higher frame processing throughput
                else:
                        # loop over the trackers
                        for tracker in trackers:
                                # set the status of our system to be 'tracking' rather
                                # than 'waiting' or 'detecting'
                                status = "Tracking"

                                # update the tracker and grab the updated position
                                tracker.update(rgb)
                                pos = tracker.get_position()

                                # unpack the position object
                                startX = int(pos.left())
                                startY = int(pos.top())
                                endX = int(pos.right())
                                endY = int(pos.bottom())

                                # add the bounding box coordinates to the rectangles list
                                rects.append((startX, startY, endX, endY))

                # draw a horizontal line in the center of the frame -- once an
                # object crosses this line we will determine whether they were
                # moving 'up' or 'down'
                cv2.line(frame, (0, H // 2), (W, H // 2), (0, 0, 0), 3)
                # cv2.putText(frame, "-Prediction border - Entrance-", (10, H - ((i * 20) + 200)),
                #        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1)

                # use the centroid tracker to associate the (1) old object
                # centroids with (2) the newly computed object centroids
                objects = ct.update(rects)

                # loop over the tracked objects
                for (objectID, centroid) in objects.items():
                        # check to see if a trackable object exists for the current
                        # object ID
                        to = trackableObjects.get(objectID, None)

                        # if there is no existing trackable object, create one
                        if to is None:
                                to = TrackableObject(objectID, centroid)

                        # otherwise, there is a trackable object so we can utilize it
                        # to determine direction
                        else:
                                # the difference between the y-coordinate of the *current*
                                # centroid and the mean of *previous* centroids will tell
                                # us in which direction the object is moving (negative for
                                # 'up' and positive for 'down')
                                y = [c[1] for c in to.centroids]
                                direction = centroid[1] - np.mean(y)
                                to.centroids.append(centroid)

                                # check to see if the object has been counted or not
                                if not to.counted:
                                        # if the direction is negative (indicating the object
                                        # is moving up) AND the centroid is above the center
                                        # line, count the object
                                        if direction < 0 and centroid[1] < H // 2:
                                                totalUp += 1
                                                empty.append(totalUp)
                                                to.counted = True

                                        # if the direction is positive (indicating the object
                                        # is moving down) AND the centroid is below the
                                        # center line, count the object
                                        elif direction > 0 and centroid[1] > H // 2:
                                                totalDown += 1
                                                empty1.append(totalDown)
                                                #print(empty1[-1])
                                                x = []
                                                # compute the sum of total people inside
                                                x.append(len(empty1)-len(empty))
                                                #print("Total people inside:", x)
                                                # if the people limit exceeds over threshold, send an email alert
                                                if sum(x) >= config.Threshold:
                                                        cv2.putText(frame, "-ALERT: People limit exceeded-", (10, frame.shape[0] - 80),
                                                                cv2.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 255), 2)
                                                        if config.ALERT:
                                                                print("[INFO] Sending email alert..")
                                                                Mailer().send(config.MAIL)
                                                                print("[INFO] Alert sent")

                                                to.counted = True


                        # store the trackable object in our dictionary
                        trackableObjects[objectID] = to

                        # draw both the ID of the object and the centroid of the
                        # object on the output frame
                        text = "ID {}".format(objectID)
                        cv2.putText(frame, text, (centroid[0] - 10, centroid[1] - 10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
                        cv2.circle(frame, (centroid[0], centroid[1]), 4, (255, 255, 255), -1)

                # construct a tuple of information we will be displaying on the frame
                info = [
                ("Exit", totalUp),
                ("Enter", totalDown),
                ("Status", status),
                ]

                info2 = [
                ("Total people inside", x),
                ]

                # Display the output
                for (i, (k, v)) in enumerate(info):
                        text = "{}: {}".format(k, v)
                        cv2.putText(frame, text, (10, H - ((i * 20) + 20)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 2)

                for (i, (k, v)) in enumerate(info2):
                        text = "{}: {}".format(k, v)
                        cv2.putText(frame, text, (265, H - ((i * 20) + 60)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)

                if writer is not None:
                        writer.write(frame)

                # Initiate a simple log to save data at end of the day
                if config.Log:
                        datetimee = [datetime.datetime.now()]
                        d = [datetimee, empty1, empty, x]
                        export_data = zip_longest(*d, fillvalue = '')

                        with open('Log.csv', 'w', newline='') as myfile:
                                wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
                                wr.writerow(("End Time", "In", "Out", "Total Inside"))
                                wr.writerows(export_data)


                # show the output frame
                cv2.imshow("Real-Time Monitoring/Analysis Window", frame)
                key = cv2.waitKey(1) & 0xFF

                # if the `q` key was pressed, break from the loop
                if key == ord("q"):
                        break

                # increment the total number of frames processed thus far and
                # then update the FPS counter
                totalFrames += 1
                fps.update()

                if config.Timer:
                        # Automatic timer to stop the live stream. Set to 8 hours (28800s).
                        t1 = time.time()
                        num_seconds=(t1-t0)
                        if num_seconds > 28800:
                                break

        # stop the timer and display FPS information
        #fps.stop()
        #print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
        #print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))


        # # if we are not using a video file, stop the camera video stream
        # if not args.get("input", False):
        #       vs.stop()
        #
        # # otherwise, release the video file pointer
        # else:
        #       vs.release()

        # close any open windows
        cv2.destroyAllWindows()


##learn more about different schedules here: https://pypi.org/project/schedule/
if config.Scheduler:
        ##Runs for every 1 second
        #schedule.every(1).seconds.do(run)
        ##Runs at every day (9:00 am). You can change it.
        schedule.every().day.at("9:00").do(run)

        while 1:
                schedule.run_pending()

else:
        run()
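On the second traceback (TypeError: 'int' object is not iterable at the zip_longest call): zip_longest(*d) needs every element of d to be iterable. In the listing above, x starts out as the list that holds the "Total people inside" count, but the detection branch reassigns x = int(center_x - w / 2), so by the time the log is written x can be a plain integer. Below is a tiny self-contained sketch of that failure mode (the values are made up); keeping the count in a separately named list so the detection loop cannot clobber it would be one way to avoid the clash.

from itertools import zip_longest

# every column handed to zip_longest must be iterable
d_ok = [["2021-01-01 18:00"], [3, 5], [2], [1]]
print(list(zip_longest(*d_ok, fillvalue="")))   # works: every column is a list

d_bad = [["2021-01-01 18:00"], [3, 5], [2], 7]  # last column is a plain int
try:
    list(zip_longest(*d_bad, fillvalue=""))
except TypeError as err:
    print(err)                                  # TypeError: the integer column cannot be iterated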