
Python: How to get a list of newly appeared objects in an image using OpenCV?


I am trying to determine the list of new objects that have appeared in a photo. The plan is to take several cropped images from the original image and feed them into a neural network for object detection. Right now, I am stuck on extracting the objects that newly appear in the frame.

import cv2 as cv
import matplotlib.pyplot as plt

def mdisp(image):
    plt.imshow(image)
    plt.show()

im1 = cv.imread('images/litter-before.jpg')
mdisp(im1)
print(im1.shape)
im2 = cv.imread('images/litter-after.jpg')
mdisp(im2)
print(im2.shape)
backsub1=cv.createBackgroundSubtractorMOG2()
backsub2=cv.createBackgroundSubtractorKNN()
fgmask = backsub1.apply(im1)
fgmask = backsub1.apply(im2)
print(fgmask.shape)
mdisp(fgmask)
new_image = im2 * (fgmask[:,:,None].astype(im2.dtype))
mdisp(new_image)

Ideally, I would like to get cropped pictures of the items inside the red circle. How can I achieve that with OpenCV?


Here's one possible approach: directly subtracting the two frames. The idea is to first convert the images to grayscale, then blur them a little to ignore noise, subtract the two frames, threshold the difference, and finally look for the biggest blobs above a certain area threshold.

Let's see:

import cv2
import numpy as np

# image path
path = "C:/opencvImages/"
fileName01 = "01.jpg"
fileName02 = "02.jpg"

# Read the two images in default mode:
image01 = cv2.imread(path + fileName01)
image02 = cv2.imread(path + fileName02)

# Store a copy of the last frame for results drawing:
inputCopy = image02.copy()

# Convert BGR images to grayscale:
grayscaleImage01 = cv2.cvtColor(image01, cv2.COLOR_BGR2GRAY)
grayscaleImage02 = cv2.cvtColor(image02, cv2.COLOR_BGR2GRAY)

# Apply a median blur to smooth the images a little:
filterSize = 5
imageMedian01 = cv2.medianBlur(grayscaleImage01, filterSize)
imageMedian02 = cv2.medianBlur(grayscaleImage02, filterSize)
Now we have grayscale, blurred frames. Next, we need to compute the difference between them. I don't want to lose data, so I have to be careful with the data types here. Remember that these are grayscale, uint8 matrices, but the difference can produce negative values. Let's convert the matrices to floats, take the difference, and then convert the result back to uint8:

# uint8 to float32 conversion:
imageMedian01 = imageMedian01.astype('float32')
imageMedian02 = imageMedian02.astype('float32')

# Take the difference and convert back to uint8
imageDifference = np.clip(imageMedian01 - imageMedian02, 0, 255)
imageDifference = imageDifference.astype('uint8')
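
As a side note, if you prefer to avoid the float round-trip, OpenCV's cv2.subtract performs saturating subtraction on uint8 arrays (anything that would fall below 0 is clamped to 0), which should give an equivalent one-directional difference. A minimal sketch, assuming the uint8 grayscale frames and filterSize from above (grayMedian01, grayMedian02 and imageDifferenceAlt are just illustrative names):

# Re-blur the uint8 grayscale frames (before any float32 conversion):
grayMedian01 = cv2.medianBlur(grayscaleImage01, filterSize)
grayMedian02 = cv2.medianBlur(grayscaleImage02, filterSize)

# Saturating subtraction: negative results are clamped to 0,
# so no float conversion or clipping is needed:
imageDifferenceAlt = cv2.subtract(grayMedian01, grayMedian02)
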
This gives you the frame difference:

Let's threshold it to get a binary image. I'm using a threshold value of 127 because it is the center of the 8-bit range:

threshValue = 127
_, binaryImage = cv2.threshold(imageDifference, threshValue, 255, cv2.THRESH_BINARY)
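
If a fixed value of 127 turns out to be too strict or too lenient for your lighting conditions, Otsu's method can pick the threshold automatically from the histogram of the difference image. A minimal sketch of that variation, using the same imageDifference as above:

# Let Otsu's method choose the threshold automatically; the first
# return value is the threshold it actually selected:
otsuValue, binaryImage = cv2.threshold(imageDifference, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu selected threshold:", otsuValue)
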
This is the binary image:

We are looking for the biggest blobs here, so let's find the blobs/contours and filter out the small ones. Let's set the minimum area to 10 pixels:

# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
    cv2.connectedComponentsWithStats(binaryImage, connectivity=4)

# Set the minimum pixels for the area filter:
minArea = 10

# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]

# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')

# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(filteredImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

contours_poly = [None] * len(contours)
boundRect = []

# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    if hierarchy[0][i][3] == -1:
        contours_poly[i] = cv2.approxPolyDP(c, 3, True)
        boundRect.append(cv2.boundingRect(contours_poly[i]))


# Draw the bounding boxes on the (copied) input image:
for i in range(len(boundRect)):
    print(boundRect[i])
    color = (0, 255, 0)
    cv2.rectangle(inputCopy, (int(boundRect[i][0]), int(boundRect[i][1])), \
                  (int(boundRect[i][0] + boundRect[i][2]), int(boundRect[i][1] + boundRect[i][3])), color, 1)
Check out the results:
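
Since the original goal was to get cropped pictures of the new items, each bounding box can simply be sliced out of the second image and handed to the detection network. A minimal sketch, reusing boundRect and image02 from the code above (newObjectCrops is just an illustrative name):

# Crop every detected bounding box from the (unannotated) second image:
newObjectCrops = []
for (x, y, w, h) in boundRect:
    crop = image02[y:y + h, x:x + w]
    newObjectCrops.append(crop)
    # Each crop can now be fed to the object-detection network,
    # or written to disk for inspection:
    # cv2.imwrite("crop_" + str(x) + "_" + str(y) + ".png", crop)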


Please provide clear input images.

The image was captured with a Raspberry Pi. I tried to improve the quality, but it didn't get much better. I do see the difference with the current image, can we draw a rectangle around it?

Please provide the files litter-before.jpg and litter-after.jpg so that others can debug your code.

The originally uploaded files litter-before.jpg and litter-after.jpg are the ones I used. Thanks!! Exactly what I was looking for. I made a few changes to enlarge the box area, and added a variable for the findContours return: img, contours, hierarchy = cv2.findContours.

@Ravi No problem, my friend. Glad I could help.