Python OpenCV - Template matching using the live camera feed frame as input

Tags: python, python-2.7, opencv, camera

After trying out OpenCV on Android a while back, I'm getting into it again. This time I'm trying OpenCV 2 with Python 2. So far I've been able to use it to get the live camera feed, and in a separate project I've been able to implement template matching, where I supply a parent image plus a small image that exists inside the parent, match the sub-image within the parent, and then output another image with red rectangles drawn on the matches.

Below is the code for the template matching. It's nothing special; it's the same as the one on the OpenCV website:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# load the parent image (colour) and the template (grayscale)
img_rgb = cv2.imread('mario.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('mario_coin.png', 0)
w, h = template.shape[::-1]

# match the template and keep every location scoring above the threshold
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold)

# draw a red rectangle (BGR: 0,0,255) around every match and save the result
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2)
cv2.imwrite('res.png', img_rgb)
Then, for my live camera feed, I have the following:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image, then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
So far both pieces of code work perfectly fine, independently of each other. What I tried was to insert the template matching code into the camera-stream code, in the part before the frame gets displayed.

Here's what I came up with:

from picamera.array import PiRGBArray
from picamera import PiCamera
from matplotlib import pyplot as plt

import time
import cv2
import numpy as np


# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

template = cv2.imread('mario_coin.png', 0)


# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr",
                                       use_video_port=True):
    # grab the raw NumPy array representing the image,
    # then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # we do something here
    # we get the image or something then run some matching
    # if we get a match, we draw a square on it or something
##    img_rbg = cv2.imread('mario.jpg')
    img_rbg = image

##    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    img_gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)



    w, h = template.shape[::-1]

    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)

    threshold = 0.8

    loc = np.where(res >= threshold)

    for pt in zip(*loc[::-1]):
##        cv2.rectangle(img_rbg, pt, (pt[0] + w, pt[1] + h),
##                      (0,0,255), 2)
        cv2.rectangle(image, pt, (pt[0] + w, pt[1] + h),
                      (0,0,255), 2)

##    image = img_rgb


    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
What I'm trying to do is take the image coming from the camera and use it in my earlier template matching algorithm instead of cv2.imread(sample.png).

But what happens is that the camera turns on for a second (indicated by its light), then turns off again and the program stops.

I really have no idea what's going on. Does anyone have any pointers on how to use the live camera feed as the input for template matching?


I'm using a Raspberry Pi 2 with the v1.3 camera.

I've run into the same problem. The issue is the variable res: the first time the script starts, res is empty, so comparing the empty variable inside the np.where call doesn't work. So you should add either:

  • a condition (if res:)
  • or an exception handler (try: ... except:); a sketch of that variant follows the code below
I don't have a Pi at hand right now, so here is the same example with a laptop webcam and OpenCV:

import cv2
import numpy as np

name = 'find.png' 
template = cv2.imread(name,0)
face_w, face_h = template.shape[::-1]

cv2.namedWindow('image')

cap = cv2.VideoCapture(0)

threshold = 1
ret = True

while ret:
    ret, img = cap.read()

    # flip the image horizontally (optional)
    img = cv2.flip(img,1)

    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)

    # res can be empty the first time through, so guard before comparing
    if len(res):
        location = np.where(res >= threshold)
        for pt in zip(*location[::-1]):
            # put a rectangle on the recognized area
            cv2.rectangle(img, pt, (pt[0] + face_w, pt[1] + face_h), (0, 0, 255), 2)

    cv2.imshow('image',img)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break
cv2.destroyAllWindows()
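
For the second option, here is a minimal sketch of the try/except variant. It is a drop-in replacement for the if len(res): block inside the while loop above and assumes the same img, res, threshold, face_w and face_h as in that example:

    # try/except variant: if res is empty or the comparison/drawing fails,
    # skip the rectangles for this frame and just show the raw image
    try:
        location = np.where(res >= threshold)
        for pt in zip(*location[::-1]):
            cv2.rectangle(img, pt, (pt[0] + face_w, pt[1] + face_h), (0, 0, 255), 2)
    except Exception:
        pass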

I actually solved it myself; I forgot I had posted a question here. Here is the code that ended up working for me:

from picamera.array import PiRGBArray
from picamera import PiCamera
from matplotlib import pyplot as plt

import time
import cv2
import numpy as np


# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

template = cv2.imread('mario_coin.png', 0)


# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr",
                                       use_video_port=True):
    # grab the raw NumPy array representing the image,
    # then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # we do something here
    # we get the image or something then run some matching
    # if we get a match, we draw a square on it or something
    img_rgb = image

    # frames from capture_continuous(format="bgr") are BGR, so use BGR2GRAY here
    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)


    w, h = template.shape[::-1]

    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)

    threshold = 0.8

    loc = np.where(res >= threshold)

    for pt in zip(*loc[::-1]):
        cv2.rectangle(image, (pt[1], pt[0]), (pt[1] + w, pt[0] + h),
                      (0,0,255), 2)

    # show the frame
    cv2.imshow("Frame", img_rgb)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
