
Removing the background of an image using OpenCV Python


I have two images, one with only the background and the other with the background plus a detectable object (in my case, a car). Here are the images:

I am trying to remove the background so that only the car remains in the resulting image. Here is the code I am using to try to get the desired result:

import numpy as np
import cv2


original_image = cv2.imread('IMG1.jpg', cv2.IMREAD_COLOR)
gray_original = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
background_image = cv2.imread('IMG2.jpg', cv2.IMREAD_COLOR)
gray_background = cv2.cvtColor(background_image, cv2.COLOR_BGR2GRAY)

foreground = np.absolute(gray_original - gray_background)
foreground[foreground > 0] = 255

cv2.imshow('Original Image', foreground)
cv2.waitKey(0)
The image obtained by subtracting the two images is:

Here is the problem: the resulting image should contain only the car. Also, if you look closely at the two images, you will notice that they are not exactly identical; the camera moved slightly, so the background was disturbed a little. My question is: how can I subtract the background from these two images? I do not want to use the grabCut or backgroundSubtractorMOG algorithms right now, because I do not yet know what happens inside those algorithms.

What I want is to obtain a result image like the one below.


Also, if possible, please suggest a general approach for doing this, not just for this specific case, i.e. one where I have the background in one image and the background plus the object in a second image. What is the best way to do this? Sorry for such a long question.

The problem is that you are subtracting arrays of unsigned 8-bit integers. This operation can overflow.

To demonstrate:

>>> import numpy as np
>>> a = np.array([[10,10]],dtype=np.uint8)
>>> b = np.array([[11,11]],dtype=np.uint8)
>>> a - b
array([[255, 255]], dtype=uint8)
Since you are using OpenCV, the simplest way to achieve your goal is to use cv2.absdiff(), which computes the absolute per-pixel difference and therefore does not wrap around.
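For example, a minimal sketch (reusing the file names from the question; the threshold value of 20 is an assumption you will likely need to tune):

import cv2

gray_original = cv2.cvtColor(cv2.imread('IMG1.jpg', cv2.IMREAD_COLOR), cv2.COLOR_BGR2GRAY)
gray_background = cv2.cvtColor(cv2.imread('IMG2.jpg', cv2.IMREAD_COLOR), cv2.COLOR_BGR2GRAY)

# absdiff computes |a - b| for each pixel, so there is no uint8 wrap-around
foreground = cv2.absdiff(gray_original, gray_background)

# Threshold the difference to suppress small changes caused by noise or camera shake
_, mask = cv2.threshold(foreground, 20, 255, cv2.THRESH_BINARY)

cv2.imshow('Foreground mask', mask)
cv2.waitKey(0)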


I solved your problem using OpenCV's watershed algorithm. You can find the theory behind watershed and examples of its use in the OpenCV documentation.

First, I selected a few points (markers) to dictate where the object I want to keep is and where the background is. This step is manual and can vary a lot from image to image. It also requires some repetition until you get the desired result. I suggest using a tool to get the pixel coordinates. Then I created an empty integer array of zeros with the same size as the car image, and assigned a value to the pixels at the marker positions (1: background, [255, 192, 128, 64]: car parts).

Note: when I downloaded your image I had to crop it to get the one with the car. After cropping, the image size is 400x601. This may not be the size of your image, so the markers will be off.

After that I applied the watershed algorithm. The first input is your image and the second input is the marker image (zero everywhere except at the marker positions). The result is shown in the image below.

I set all pixels with a value greater than 1 to 255 (the car), and the rest (the background) to zero. Then I dilated the obtained image with a 3x3 kernel to avoid losing information on the outline of the car. Finally, I used the dilated image as a mask on the original image with the cv2.bitwise_and() function, and the result is shown in the following image:

Here is my code:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image
img = cv2.imread("/path/to/image.png", 3)

# Create a blank image of zeros (same dimension as img)
# It should be grayscale (1 color channel)
marker = np.zeros_like(img[:,:,0]).astype(np.int32)

# This step is manual. The goal is to find the points
# which create the result we want. I suggest using a
# tool to get the pixel coordinates.

# Dictate the background and set the markers to 1
marker[204][95] = 1
marker[240][137] = 1
marker[245][444] = 1
marker[260][427] = 1
marker[257][378] = 1
marker[217][466] = 1

# Dictate the area of interest
# I used different values for each part of the car (for visibility)
marker[235][370] = 255    # car body
marker[135][294] = 64     # rooftop
marker[190][454] = 64     # rear light
marker[167][458] = 64     # rear wing
marker[205][103] = 128    # front bumper

# rear bumper
marker[225][456] = 128
marker[224][461] = 128
marker[216][461] = 128

# front wheel
marker[225][189] = 192
marker[240][147] = 192

# rear wheel
marker[258][409] = 192
marker[257][391] = 192
marker[254][421] = 192

# Now we have set the markers, we use the watershed
# algorithm to generate a marked image
marked = cv2.watershed(img, marker)

# Plot this one. If it does what we want, proceed;
# otherwise edit your markers and repeat
plt.imshow(marked, cmap='gray')
plt.show()

# Make the background black, and what we want to keep white
marked[marked == 1] = 0
marked[marked > 1] = 255

# Use a kernel to dilate the image, to not lose any detail on the outline
# I used a kernel of 3x3 pixels
kernel = np.ones((3,3),np.uint8)
dilation = cv2.dilate(marked.astype(np.float32), kernel, iterations = 1)

# Plot again to check whether the dilation is according to our needs
# If not, repeat by using a smaller/bigger kernel, or more/less iterations
plt.imshow(dilation, cmap='gray')
plt.show()

# Now apply the mask we created on the initial image
final_img = cv2.bitwise_and(img, img, mask=dilation.astype(np.uint8))

# cv2.imread reads the image as BGR, but matplotlib uses RGB
# BGR to RGB so we can plot the image with accurate colors
b, g, r = cv2.split(final_img)
final_img = cv2.merge([r, g, b])

# Plot the final result
plt.imshow(final_img)
plt.show()

If you have a lot of images you will probably need to create a tool to annotate the markers graphically, or even an algorithm to find the markers automatically.
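As a rough sketch of such an annotation tool (this is my own assumption, using matplotlib's interactive ginput(); the file path is a placeholder), you could collect the background marker coordinates by clicking on the image:

import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("/path/to/image.png", 3)

# Show the image and collect clicks until Enter is pressed (n=-1 means unlimited clicks)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title("Click background points, then press Enter")
background_points = plt.ginput(n=-1, timeout=0)
plt.close()

marker = np.zeros_like(img[:, :, 0]).astype(np.int32)
for x, y in background_points:
    # ginput returns (x, y) as floats; the marker array is indexed as [row][column]
    marker[int(y)][int(x)] = 1

# Repeat the clicking step with other values (e.g. 255) for the object,
# then pass img and marker to cv2.watershed() as in the code above.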

I suggest using OpenCV's grabCut algorithm. You first draw a few strokes on the foreground and the background, and keep doing so until the foreground is sufficiently separated from the background. It is covered here:
and in this video:
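For reference, here is a minimal grabCut sketch using rectangle initialisation instead of drawn strokes (the file name and the bounding rectangle are placeholders you would adapt to your image):

import cv2
import numpy as np

img = cv2.imread('IMG1.jpg')
mask = np.zeros(img.shape[:2], np.uint8)

# Temporary model arrays required by grabCut
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)

# Rough bounding box around the object: (x, y, width, height)
rect = (50, 100, 450, 250)

cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)

# Keep the pixels marked as definite or probable foreground
mask2 = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype('uint8')
result = cv2.bitwise_and(img, img, mask=mask2)

cv2.imshow('grabCut result', result)
cv2.waitKey(0)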

Comments:

Are the images exactly identical at the pixel level? Try whether using a threshold, e.g. foreground[foreground > 20] = 255, improves your result. Could you attach the input images in a usable form?

This question already has a suitable answer, so why did you assign a bounty @DHShah01?

@ZdaR it doesn't work.

I guess your subtraction is not producing only the car, but a strange combination of the car and the background. Right? If so, what you need is a mask to apply to your original image.

This is a cool result. What kind of algorithm would you use to find the markers automatically? Any avenue of investigation would help. Cheers.

The easiest way is to create an annotation tool (a GUI) in which you click on the parts of the image you want to capture. If there are many similar images, you can use the same marker points and then correct potential offsets. A fully automated system would need some knowledge of what you are looking for, so I would go with an ML algorithm that, given a word (i.e. car), tries to mark the object in the picture. Maybe DL surpasses all other ML options here, since many steps are needed to achieve this (i.e. image classification, marker recognition, etc.).

@TasosGlrs this is a very clever solution. Well done.

How do I choose the markers? Please give some instructions about this, Tasos.

I am trying to do the same thing on Android, can you help me?