Python: How can I count the different kinds of particles in an image using cv2?

Tags: python, opencv, image-processing, object-detection, opencv3.0

I have an image of cereal items:

The image contains:

  • 3 walnuts
  • 3 raisins
  • 3 pumpkin seeds
  • 27 similar-looking cereal grains

I want to count them separately using OpenCV; I don't want to recognize them, just count them. So far I have tuned the adaptiveThreshold method so that it counts all the seeds, but I am not sure how to count them separately. Here is my script:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image as grayscale:
img = cv2.imread('/Users/vaibhavsaxena/Desktop/Screen Shot 2021-04-27 at 12.22.46.png', 0)
#img = cv2.fastNlMeansDenoisingColored(img,None,10,10,7,21)

# Adaptive threshold to separate the seeds from the background:
windowSize = 31
windowConstant = 40
mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, windowSize, windowConstant)
plt.imshow(mask)

# Connected-component areas (skip the background label 0):
stats = cv2.connectedComponentsWithStats(mask, 8)[2]
label_area = stats[1:, cv2.CC_STAT_AREA]

min_area, max_area = 345, max(list(label_area))  # min/max for a single circle
singular_mask = (min_area < label_area) & (label_area <= max_area)
circle_area = np.mean(label_area[singular_mask])

# Each blob contributes round(area / single-seed area) seeds:
n_circles = int(np.sum(np.round(label_area / circle_area)))

print('Total circles:', n_circles)

Total circles: 36
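For context, the counting trick at the end of the script estimates how many touching seeds a merged blob contains by dividing each blob's area by the typical area of a single isolated seed and rounding. A minimal sketch of the idea, using made-up areas (all values here are hypothetical):

import numpy as np

# Hypothetical connected-component areas: two isolated seeds and one
# merged blob roughly twice the size of a single seed.
label_area = np.array([350, 360, 720])

# Treat blobs in a "singular" size range as one seed each (range is hypothetical):
singular_mask = (345 < label_area) & (label_area <= 400)
circle_area = np.mean(label_area[singular_mask])   # ~355

# Each blob contributes round(area / single-seed area) seeds:
n_circles = int(np.sum(np.round(label_area / circle_area)))
print('Total circles:', n_circles)                 # 4

This only gives a total, though; it does not separate walnuts, raisins, pumpkin seeds and grains.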
As HansHirse suggested, your lighting is not good; try to normalize the conditions in which you take your photos. However, there is a way to normalize the illumination, at least to a degree, and make it as even as possible. The method is called gain division. The idea is that you try to build a model of the background and then weight each input pixel by that model. The output gain should be relatively constant over most of the image. Let's give it a try:

# imports:
import cv2
import numpy as np

# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()

# Get local maximum:
kernelSize = 30
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
localMax = cv2.morphologyEx(inputImage, cv2.MORPH_CLOSE, maxKernel, None, None, 1, cv2.BORDER_REFLECT101)

# Perform gain division
gainDivision = np.where(localMax == 0, 0, (inputImage/localMax))

# Clip the values to [0,255]
gainDivision = np.clip((255 * gainDivision), 0, 255)

# Convert the mat type from float to uint8:
gainDivision = gainDivision.astype("uint8")
Be careful with the data types and their conversions. This is the result:

As you can see, most of the background is now uniform, which is nice, because now we can apply a simple thresholding method. Let's try Otsu's thresholding to get a good binary mask of the elements:

# Convert RGB to grayscale:
grayscaleImage = cv2.cvtColor(gainDivision, cv2.COLOR_BGR2GRAY)

# Get binary image via Otsu:
_, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
print("Elements found: "+str(len(rectanglesList)))
Elements found: 37
This yields the following binary mask:

The mask can be improved by applying some morphology. Let's try a gentle closing operation to join the blobs:

# Set kernel (structuring element) size:
kernelSize = 3
# Set morph operation iterations:
opIterations = 2

# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))

# Perform closing:
binaryImage = cv2.morphologyEx( binaryImage, cv2.MORPH_CLOSE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101 )
This is the result:

Ok, now, just for completeness, let's compute the bounding rectangles of all the elements. We can also filter out blobs of small area and store each bounding rectangle in a list:

# Find the blobs on the binary image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the bounding rectangles here:
rectanglesList = []

# Look for the outer bounding boxes (no children):
for _, c in enumerate(contours):

    # Get blob area:
    currentArea = cv2.contourArea(c)
    # Set a min area threshold:
    minArea = 100

    if currentArea > minArea:

        # Approximate the contour to a polygon:
        contoursPoly = cv2.approxPolyDP(c, 3, True)
        # Get the polygon's bounding rectangle:
        boundRect = cv2.boundingRect(contoursPoly)

        # Store rectangles in list:
        rectanglesList.append(boundRect)

        # Get the dimensions of the bounding rect:
        rectX = boundRect[0]
        rectY = boundRect[1]
        rectWidth = boundRect[2]
        rectHeight = boundRect[3]

        # Set bounding rect:
        color = (0, 0, 255)
        cv2.rectangle( inputImageCopy, (int(rectX), int(rectY)),
                   (int(rectX + rectWidth), int(rectY + rectHeight)), color, 2 )

        cv2.imshow("Rectangles", inputImageCopy)
        cv2.waitKey(0)
The final image looks like this:

This is the total number of detected elements:

print("Elements found: "+str(len(rectanglesList)))
Elements found: 37

As you can see, that is one false positive: a bit of a grain's shadow was detected as an actual grain. Adjusting the minimum area may get rid of that problem. Or, if you go on to classify each grain, you could filter out this kind of noise.
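As a concrete illustration of the "adjust the minimum area" suggestion, the stored rectangles could be re-filtered with a larger threshold before counting; the 300-pixel value below is just a hypothetical placeholder that would need tuning on the real image:

# Hypothetical post-filter: drop blobs whose bounding box is too small
# (e.g. thin shadow fragments). The 300 px threshold is a placeholder.
minBoxArea = 300
filteredRectangles = [r for r in rectanglesList if r[2] * r[3] > minBoxArea]
print("Elements found after filtering: " + str(len(filteredRectangles)))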


First, retake the image with the light source placed somewhere near the camera's position. The heavy shadows cast by light coming from the left complicate things a hundredfold. Thresholding will then be easy, and the individual object classes can be distinguished by contour size (walnuts) and color/saturation (raisins, pumpkin seeds [maybe]); whatever remains is the cereal grains.
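A minimal sketch of that classification idea, assuming the original BGR photo (inputImage) and a clean binary mask of the seeds (binaryImage, e.g. from the Otsu step above) are available; every threshold below is a hypothetical placeholder that would need tuning:

import cv2
import numpy as np

# Assumptions: inputImage is the original BGR photo and binaryImage is a
# clean binary mask of the seeds (for example, from the Otsu step above).
hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV)
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

counts = {"walnuts": 0, "raisins": 0, "pumpkin seeds": 0, "grains": 0}

for c in contours:
    area = cv2.contourArea(c)
    if area < 100:                        # ignore tiny noise blobs
        continue

    # Mean HSV color inside this blob:
    blobMask = np.zeros(binaryImage.shape, dtype="uint8")
    cv2.drawContours(blobMask, [c], -1, 255, -1)
    meanHsv = cv2.mean(hsvImage, mask=blobMask)   # (H, S, V, 0)

    # Hypothetical decision rules (all thresholds are placeholders):
    if area > 5000:                       # largest contours -> walnuts
        counts["walnuts"] += 1
    elif meanHsv[2] < 80:                 # very dark blobs -> raisins
        counts["raisins"] += 1
    elif meanHsv[1] < 60:                 # pale, low-saturation blobs -> pumpkin seeds
        counts["pumpkin seeds"] += 1
    else:                                 # everything else -> cereal grains
        counts["grains"] += 1

print(counts)

If the expected totals (3 / 3 / 3 / 27) are known, the thresholds can be tuned until the buckets match them.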