
Bright spot edge detection in Python

Tags: python, opencv, image-processing, computer-vision

I have an X-ray image of a circuit board, and I am trying to segment some of the components and find the voids inside them (the voids are the bright spots in the image). I managed to isolate the components, but I am having trouble getting the contours of the voids.

So far the best I have come up with is a Laplacian edge detector combined with Gaussian and median filters, but it still detects far too much noise. How can I get rid of it?

On the first image you can see the contours I got using Otsu thresholding, which is by far my best result, but I don't think it is a good approach, because the user cannot influence the behaviour at all since the threshold is computed automatically. At the top of that image the contour does not enclose the whole void (the white spot).
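
For reference, the Otsu step is roughly the following (a simplified sketch, not my exact code; the file name and the manual offset are only placeholders):

import cv2

# hypothetical input: the grayscale crop of a single component
cropped = cv2.imread("component_crop.png", cv2.IMREAD_GRAYSCALE)

# Otsu computes the threshold automatically (roughly the approach behind image 1)
otsu_value, mask = cv2.threshold(cropped, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# possible user-adjustable variant: start from Otsu's value and add a manual offset
offset = 10  # exposed to the user
_, mask_manual = cv2.threshold(cropped, otsu_value + offset, 255, cv2.THRESH_BINARY)

# the contours and the hierarchy are the last two return values in both OpenCV 3.x and 4.x
contours, hierarchy = cv2.findContours(mask_manual, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)[-2:]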

Images 2 through 8 show the steps of how I modify the image. I am using a Gaussian blur and a median blur, which may introduce a lot of noise, but even without them the result is basically the same. The last steps are Laplacian edge detection and a morphological closing.

Is there a better way to do this?

Here are my input parameters:

package.voids.contours, package.voids.hierarchy = self.find_voids_inside_component(
    cropped,
    clahe_clip_limit=1,
    clahe_tile_grid_size=(3, 3),
    laplacian_ksize=11,
    closing_ksize=2,
    closing_iterations=2,
    debug_mode=True,
    fxy=1)
And here is the function itself:

def find_voids_inside_component(self,
                                cropped,
                                clahe_clip_limit=2,
                                clahe_tile_grid_size=(3, 3),
                                laplacian_ksize=15,
                                closing_ksize=3,
                                closing_iterations=1,
                                debug_mode=False,
                                fxy=3):
    """
    This fuction calculates the ratio between the void area and the ball area
    :param fxy:
    :param closing_iterations:
    :param closing_ksize:
    :param laplacian_ksize:
    :param clahe_tile_grid_size:
    :param clahe_clip_limit:
    :param cropped:
    :param debug_mode: if True it will print additional info
    :return: contours, hierarchy
    """
    output.debug_show("Original image", cropped, debug_mode=debug_mode, fxy=fxy, waitkey=False)


    # PARAM: Median blur before enhancing image
    # get rid of salt-and-pepper noise using the median filter
    median_blur = cv2.medianBlur(cropped, ksize=5)

    # debug print
    output.debug_show("Median blur 2", median_blur, debug_mode=debug_mode, fxy=fxy, waitkey=False)

    # apply the smoothing
    # PARAM: The parameters of the Gaussian blur
    # ksize – Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd.
    #         Or, they can be zeros, and then they are computed from sigma* .
    #
    # sigmaX – Gaussian kernel standard deviation in X direction.
    #
    # sigmaY – Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to
    #          sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height ,
    #          respectively (see getGaussianKernel() for details); to fully control the result regardless
    #          of possible future
    #         modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.
    #
    # borderType – pixel extrapolation method (see borderInterpolate() for details).
    blur = cv2.GaussianBlur(cropped, (5, 5), 0)

    # debug print
    output.debug_show("Gauss blur", blur, debug_mode=debug_mode, fxy=fxy, waitkey=False)



    # improve the local contrast using CLAHE
    # create a CLAHE object (Arguments are optional).
    # PARAM: Contrast Limited Adaptive Histogram Equalization
    #  clipLimit – Threshold for contrast limiting.
    #  tileGridSize – Size of grid for histogram equalization. Input image will be divided into equally sized
    #                       rectangular tiles. tileGridSize defines the number of tiles in row and column.

    # good values (clipLimit=2.0, tileGridSize=(3, 3))
    clahe = cv2.createCLAHE(clipLimit=clahe_clip_limit, tileGridSize=clahe_tile_grid_size)
    # it is also possible to use the Gaussian blur here instead
    cl1 = clahe.apply(median_blur)

    # debug print
    output.debug_show("Enhanced Image", cl1, debug_mode=debug_mode, fxy=fxy, waitkey=False)

    # debug print -> convert gray scale to colormap
    color_map = cv2.applyColorMap(cl1, cv2.COLORMAP_JET)
    output.debug_show("Color map", color_map, debug_mode=debug_mode, fxy=fxy, waitkey=False)


    # use some edge detector to get the contours of void
    # color_map = cv2.cvtColor(color_map, cv2.COLOR_BGR2GRAY)
    # PARAM: Gaussian blur (same parameters as described at the GaussianBlur call above)
    blur = cv2.GaussianBlur(cl1, (5, 5), 0)
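    # NOTE: this Gaussian-blurred CLAHE image is not used further down; the Laplacian below is applied to median_blur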

    # PARAM: Laplacian edge detector
    # ddepth – Desired depth of the destination image.
    # ksize – Aperture size used to compute the second-derivative filters. See getDerivKernels() for details.
    #        The size must be positive and odd.
    #
    # scale – Optional scale factor for the computed Laplacian values. By default, no scaling is applied.
    #        See getDerivKernels() for details.
    #
    # delta – Optional delta value that is added to the results prior to storing them in dst .
    #
    # borderType – Pixel extrapolation method. See borderInterpolate() for details.
    # ToDo: Try more edge detectors: Sobel, Canny
    # edges = cv2.Canny(blur,threshold1=50, threshold2=100)

    edges = cv2.Laplacian(median_blur, cv2.CV_8U, ksize=laplacian_ksize)
    # abs_edges64f = np.absolute(edges)
    # edges_8u = np.uint8(abs_edges64f)

    # debug print
    output.debug_show("Edges", edges, debug_mode=debug_mode, fxy=fxy, waitkey=False)

    # use closing
    kernel = np.ones((closing_ksize, closing_ksize), np.uint8)
    closing = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel, iterations=closing_iterations)

    # debug print
    output.debug_show("Closing", closing, debug_mode=debug_mode, fxy=fxy, waitkey=True)

    # get contours
    im2, contours, hierarchy = cv2.findContours(closing, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

    # print("Number of contours: ", len(contours))

    return contours, hierarchy

I am using Python 3 and OpenCV.
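
Note: the three-value unpacking of cv2.findContours in the function above is the OpenCV 3.x signature; OpenCV 4.x returns only (contours, hierarchy). A version-independent variant (a sketch, reusing the closing image from the function):

result = cv2.findContours(closing, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# contours and hierarchy are the last two elements of the result in both OpenCV 3.x and 4.x
contours, hierarchy = result[-2], result[-1]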

What is the expected result for the given X-ray?

You could try a difference of Gaussians. I tried it after applying CLAHE and it looks promising.

It would help if you attached the sample input as a usable image (i.e. not a screenshot of a window with rounded corners and a huge drop shadow).

What about a user-defined threshold? Or pick Otsu's value and let the user adjust it afterwards? How much user interaction do you want to allow? There are techniques where the user selects a rough region and the fine segmentation is then performed automatically.
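
Following up on the difference-of-Gaussians suggestion from the comments, a rough sketch of what that could look like after CLAHE (the kernel sizes, the threshold of 15 and the file name are placeholders, not values from the question):

import cv2

cropped = cv2.imread("component_crop.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input crop

# same pre-processing as in the question: median blur followed by CLAHE
clahe = cv2.createCLAHE(clipLimit=1, tileGridSize=(3, 3))
enhanced = clahe.apply(cv2.medianBlur(cropped, 5))

# difference of Gaussians: subtract a strongly blurred copy from a lightly blurred one,
# which keeps bright blob-like structures of roughly the chosen scale (the voids)
fine = cv2.GaussianBlur(enhanced, (5, 5), 0)
coarse = cv2.GaussianBlur(enhanced, (21, 21), 0)
dog = cv2.subtract(fine, coarse)  # saturating uint8 subtraction, keeps bright-on-dark responses

# the final threshold stays user-adjustable, which addresses the Otsu concern above
_, mask = cv2.threshold(dog, 15, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)[-2:]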