
Python: not enough background filtering


I am trying to filter out the background of images showing cables. I have tried the following:

  • Convert from color to grayscale
  • Apply cv2.Laplacian, or cv2.Sobel twice, to find the edges in both directions
  • Apply thresholding with cv2.THRESH_BINARY(_INV) and cv2.THRESH_OTSU
  • Finally, try cv2.Canny together with cv2.HoughLinesP

Overall, the results are not satisfying at all. Here is an example of two images:

and the output of my script:

I also played with the values in config, but the results did not differ much.

Here is the small script I managed to put together:

    import cv2
    import matplotlib.pyplot as plt
    import numpy as np


    def img_show(images, cmap=None):
        fig = plt.figure(figsize=(17, 10))
        root = 3  # len(images) ** 0.5
        for i, img in enumerate(images):
            ax = fig.add_subplot(root, root, i + 1)
            ax.imshow(img, cmap=cmap[i])
        plt.show()


    class Config:
        scale = 0.4
        min_threshold = 120
        max_threshold = 200
        canny_min_threshold = 100
        canny_max_threshold = 200


    config = Config()


    def find_lines(img, rgb_img):
        dst = cv2.Canny(img, config.canny_min_threshold, config.canny_max_threshold)

        cdstP = np.copy(rgb_img)

        lines = cv2.HoughLinesP(dst, 1, np.pi / 180, 150, None, 0, 0)
        if lines is None:
            # HoughLinesP returns None when no lines are found
            return cdstP

        for x1, y1, x2, y2 in lines[:, 0, :]:
            cv2.line(cdstP, (x1, y1), (x2, y2), (255, 0, 0), 5)
        return cdstP


    if __name__ == "__main__":
        bgr_img = cv2.imread('DJI_0009.JPG')

        bgr_img = cv2.resize(bgr_img, (0, 0), bgr_img, config.scale, config.scale)

        rgb_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2RGB)
        gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)

        # _, threshold = cv2.threshold(gray_img, config.min_threshold, config.max_threshold, cv2.THRESH_BINARY)

        # laplacian = cv2.Laplacian(rgb_img, cv2.CV_8UC1)
        sobelx = cv2.Sobel(gray_img, cv2.CV_8UC1, 1, 0)
        sobely = cv2.Sobel(gray_img, cv2.CV_8UC1, 0, 1)
        blended = cv2.addWeighted(src1=sobelx, alpha=0.5, src2=sobely, beta=0.5, gamma=0)

        _, threshold = cv2.threshold(blended, config.min_threshold, config.max_threshold,
                                     cv2.THRESH_BINARY | cv2.THRESH_OTSU)

        p1 = find_lines(threshold, rgb_img)
        p2 = find_lines(blended, rgb_img)
        p3 = find_lines(gray_img, rgb_img)

        plots = [rgb_img, p1, p2, p3]
        cmaps = [None] + ['gray'] * (len(plots) - 1)
        img_show(plots, cmaps)
    
    
I think I need to do better filtering. However, I have also tried image segmentation, and the results were not promising at all.
Any ideas on how to improve this?
Thanks

Here is one way to do that in Python/OpenCV. I threshold, then optionally clean up with morphology. Then I get the contours and, for each contour, compute its rotated rectangle. From the dimensions of the rotated rectangle I compute the aspect ratio (largest dimension / smallest dimension) and, optionally, the area. Finally I threshold on the aspect ratio (and optionally the area) and keep only the contours that pass.

Input:

Threshold image:

Morphology cleaned image:

Result image:


I suggest you use a proper threshold, then get the contours and filter them on area and some length measure, or use connected components and filter on aspect ratio and area.

Could you elaborate on filtering on area and length measures, or on using connected components to filter on aspect ratio and area? What kind of filtering should I be looking for?

Here is the full script for the approach described in the answer above:
    import cv2
    import numpy as np
    
    image = cv2.imread("DCIM-100-MEDIA-DJI-0009-JPG.jpg")
    hh, ww = image.shape[:2]
    
    # convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    
    # create a binary thresholded image
    thresh = cv2.threshold(gray, 64, 255, cv2.THRESH_BINARY)[1]
    
    # invert so line is white on black background
    thresh = 255 - thresh
    
    # apply morphology
    kernel = np.ones((11,11), np.uint8)
    clean = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    
    # get external contours
    contours = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    
    area_thresh = ww / 2
    aspect_thresh = ww / 30
    print(area_thresh,aspect_thresh)
    print('')
    result = image.copy()
    for c in contours:

        # get rotated rectangle from contour
        # get its dimensions
        rotrect = cv2.minAreaRect(c)
        (center), (dim1, dim2), angle = rotrect
        maxdim = max(dim1, dim2)
        mindim = min(dim1, dim2)
        area = dim1 * dim2
        if mindim == 0:
            # skip degenerate contours so aspect is always defined
            continue
        aspect = maxdim / mindim
        # print(area, aspect)

        # if area > area_thresh and aspect > aspect_thresh:
        if aspect > aspect_thresh:
            # draw contour on input
            cv2.drawContours(result, [c], 0, (0, 0, 255), 3)
            print(area, aspect)
    
    # save result
    cv2.imwrite("DCIM-100-MEDIA-DJI-0009-JPG_thresh.jpg",thresh)
    cv2.imwrite("DCIM-100-MEDIA-DJI-0009-JPG_clean.jpg",clean)
    cv2.imwrite("DCIM-100-MEDIA-DJI-0009-JPG_result.jpg",result)
    
    # display result
    cv2.imshow("thresh", thresh)
    cv2.imshow("clean", clean)
    cv2.imshow("result", result)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
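
The comment above also suggests using connected components and filtering on aspect ratio and area instead of contours. That variant is not shown in the answer, so here is a minimal, untested sketch of it; the file name and thresholds are illustrative placeholders carried over from the script above, and a rotated rectangle is computed per component so the aspect ratio stays meaningful for diagonal cables:

    import cv2
    import numpy as np

    # Hypothetical connected-components variant of the same filtering idea
    image = cv2.imread("DCIM-100-MEDIA-DJI-0009-JPG.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    thresh = cv2.threshold(gray, 64, 255, cv2.THRESH_BINARY_INV)[1]
    clean = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, np.ones((11, 11), np.uint8))

    # label the white blobs; stats holds x, y, w, h, area per label
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(clean, connectivity=8)

    aspect_thresh = image.shape[1] / 30   # same heuristic as the contour version
    result_cc = image.copy()
    for label in range(1, num_labels):    # label 0 is the background
        mask = (labels == label).astype(np.uint8)
        # rotated rectangle of the component, for an orientation-independent aspect ratio
        (cx, cy), (dim1, dim2), angle = cv2.minAreaRect(cv2.findNonZero(mask))
        if min(dim1, dim2) == 0:
            continue
        aspect = max(dim1, dim2) / min(dim1, dim2)
        area = stats[label, cv2.CC_STAT_AREA]
        # keep long, thin components (likely cable segments); area could be thresholded too
        if aspect > aspect_thresh:
            result_cc[mask > 0] = (0, 0, 255)
            print(area, aspect)

    cv2.imwrite("DCIM-100-MEDIA-DJI-0009-JPG_result_cc.jpg", result_cc)

The axis-aligned width and height already present in stats would be a cheaper proxy for the aspect ratio, but they flatten out for diagonal cables, which is why the per-component rotated rectangle is kept in this sketch.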