Python: How can I improve the detection of surface defects?


First, this is my original image. I am trying to detect defects (parallel scratch lines) on a brushed aluminum surface.

Here are the steps I took:

  • Gaussian blur
  • Dilate the image
  • Convert the image to grayscale
  • Morphological closing
  • Dilate again
  • Image difference
  • Canny edge detection
  • Find contours
  • Draw a green line around the contours

Here is my code:

    import numpy as np
    import cv2
    from matplotlib import pyplot as plt
    import imutils
    path = ''
    path_output = ''
    
    img_bgr = cv2.imread(path)
    plt.imshow(img_bgr)
    
    # bgr to rgb
    img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    plt.imshow(img_rgb)
    
    # Converting to grayscale (the image is RGB at this point, so use RGB2GRAY)
    img_just_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    
    # Displaying the grayscale image
    plt.imshow(img_just_gray, cmap='gray')
    
    # Gaussian Blur
    ksize_w = 13
    ksize_h = 13
    
    img_first_gb = cv2.GaussianBlur(img_rgb, (ksize_w, ksize_h), 0, 0, cv2.BORDER_REPLICATE)
    plt.imshow(img_first_gb)
    
    # Dilate the image
    
    dilated_img = cv2.dilate(img_first_gb, np.ones((11,11), np.uint8))
    plt.imshow(dilated_img)
    
    # Converting the blurred, dilated RGB image to grayscale
    img_gray_operated = cv2.cvtColor(dilated_img, cv2.COLOR_RGB2GRAY)
    
    # Displaying the grayscale image
    plt.imshow(img_gray_operated, cmap='gray')
    
    # closing:
    kernel_closing = np.ones((7,7),np.uint8)
    img_closing = cv2.morphologyEx(img_gray_operated, cv2.MORPH_CLOSE, kernel_closing)
    plt.imshow(img_closing, cmap='gray')
    
    # dilation:
    # add pixels to the boundaries of objects in an image
    kernel_dilation = np.ones((3,3),np.uint8)
    img_dilation2 = cv2.dilate(img_closing, kernel_dilation, iterations = 1)
    plt.imshow(img_dilation2, cmap='gray')
    
    # invert the absolute difference between the original grayscale and the
    # dilated/closed image so that deviations from the background stand out
    diff_img = 255 - cv2.absdiff(img_just_gray, img_dilation2)
    plt.imshow(diff_img, cmap='gray')
    
    # canny
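    # the Canny thresholds below are derived from the median gray level,
    # using the common sigma = 0.33 "auto-Canny" heuristic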
    edgesToFindImage = img_dilation2
    
    v = np.median(img_just_gray)
    #print(v)
    sigma = 0.33
    lower_thresh = int(max(0,(1.0-sigma)*v))
    higher_thresh = int(min(255,(1.0+sigma)*v))
    
    img_edges =  cv2.Canny(edgesToFindImage, lower_thresh, higher_thresh)
    plt.imshow(img_edges, cmap='gray')
    
    kernel_dilation2 = np.ones((2,2),np.uint8)
    img_dilation2 = cv2.dilate(img_edges, kernel_dilation2, iterations = 2)
    plt.imshow(img_dilation2, cmap='gray')
    
    # find contours
    contoursToFindImage = img_dilation2
    
    cnts = cv2.findContours(contoursToFindImage.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
    # grab_contours copes with the different return values of OpenCV 3.x and 4.x
    cnts = imutils.grab_contours(cnts)
    print(type(cnts))
    print(len(cnts))
    
    # -1 for all
    cntsWhichOne = -1
    
    # -1 for infill
    # >0 for edge thickness
    cntsInfillOrEdgeThickness = 3
    
    img_drawing_contours_on_rgb_image = cv2.drawContours(img_rgb.copy(), cnts, cntsWhichOne, (0, 255, 0), cntsInfillOrEdgeThickness)
    plt.imshow(img_drawing_contours_on_rgb_image)
    
This is the result:


How can I improve this detection? Is there a more efficient way to detect the lines?

Here is one way in Python/OpenCV, if this is close to what you want: use adaptive thresholding and morphology to clean up small regions, and skip the Canny edge step.

Input:


Thresholded image:

Image after morphology cleanup:

Lines drawn on the original image:

Lines drawn on a black background:


Comments:

"How would you like to improve it? Which aspects of the current solution are you unsatisfied with?"

"@Triarion The lines are never detected reliably. Is there a way to suppress the brushed surface texture in the image? If so, I think I could use it in the image-difference step to improve the line detection."

Here is the code for the approach described above:
    import cv2
    import numpy as np
    
    # load image
    img = cv2.imread('scratches.jpg')
    
    # convert to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    # adaptive threshold 
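    # (block size 11, C = -35: a negative C raises the local-mean threshold,
    # so only pixels well above their neighborhood become white)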
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, -35)
    
    # apply morphology
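    # long horizontal kernels: closing bridges small gaps along the scratches,
    # then opening removes anything that is not a long horizontal structure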
    kernel = np.ones((3,30),np.uint8)
    morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    kernel = np.ones((3,35),np.uint8)
    morph = cv2.morphologyEx(morph, cv2.MORPH_OPEN, kernel)
    
    # get hough line segments
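    # rho = 1 pixel, theta = 30*pi/360 rad (15 degrees) angular resolution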
    threshold = 25
    minLineLength = 10
    maxLineGap = 20
    lines = cv2.HoughLinesP(morph, 1, 30*np.pi/360, threshold,
                            minLineLength=minLineLength, maxLineGap=maxLineGap)
    
    # draw lines
    linear1 = np.zeros_like(thresh)
    linear2 = img.copy()
    for [line] in lines:
        x1 = line[0]
        y1 = line[1]
        x2 = line[2]
        y2 = line[3]
        cv2.line(linear1, (x1,y1), (x2,y2), 255, 1)
        cv2.line(linear2, (x1,y1), (x2,y2), (0,0,255), 1)
    
    print('number of lines:',len(lines))
    
    # save resulting masked image
    cv2.imwrite('scratches_thresh.jpg', thresh)
    cv2.imwrite('scratches_morph.jpg', morph)
    cv2.imwrite('scratches_lines1.jpg', linear1)
    cv2.imwrite('scratches_lines2.jpg', linear2)
    
    # display result
    cv2.imshow("thresh", thresh)
    cv2.imshow("morph", morph)
    cv2.imshow("lines1", linear1)
    cv2.imshow("lines2", linear2)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
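
Regarding the comment about suppressing the brushed surface texture: the answer above does not cover it, but one minimal sketch (not from the original post) is to estimate the slowly varying background with a median blur larger than the texture period and subtract it, so that only stronger deviations such as scratches remain. The file names, the 21-pixel kernel, and the threshold of 40 below are illustrative guesses that would need tuning to the real image.

    import cv2
    import numpy as np
    
    # load image and convert to grayscale ('brushed.jpg' is a placeholder path)
    img = cv2.imread('brushed.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    # estimate the brushed background with a median blur whose kernel is larger
    # than the texture period (21 is a guess and needs tuning)
    background = cv2.medianBlur(gray, 21)
    
    # subtract the background so that mainly the scratches remain
    flat = cv2.absdiff(gray, background)
    
    # stretch the residual to the full range and threshold it
    flat = cv2.normalize(flat, None, 0, 255, cv2.NORM_MINMAX)
    _, mask = cv2.threshold(flat, 40, 255, cv2.THRESH_BINARY)
    
    # save results
    cv2.imwrite('brushed_flattened.jpg', flat)
    cv2.imwrite('brushed_mask.jpg', mask)

The resulting mask could then stand in for the image-difference step of the original pipeline before the contour or Hough stage.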