Removal of colored lines in Matlab

Tags: matlab, image-processing, image-segmentation, hough-transform

I am trying to remove a series of colored lines (specifically yellow and blue lines) from some images in Matlab. An example image can be found here:

I was able to segment out the blue line segments using basic thresholding. I was also able to segment out the bright yellow circles inside the yellow line segments using thresholding. Finally, I am removing the remaining elements of the line segments using a Hough transform with the houghlines function and a mask.

Is there a more elegant way to accomplish this, or am I stuck with this combination of methods?

Thanks

Edit: I found that the Hough transform only removes single pixels from my image rather than the entire yellow line. I considered dilating around the detected pixels and checking for similarity, but I am worried that the yellow line is too similar in color to the background (its position can change, so it will not always sit entirely over the black background it currently happens to end on). Any suggestions would be greatly appreciated.
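(A minimal sketch of the dilation idea mentioned above, added here for illustration rather than taken from the question: lineMask is an assumed logical image that is true at the pixels found by the Hough step, and the 5-by-5 square is a guessed size for covering the full line width.)

% Grow the sparse Hough detections so they cover the whole line width,
% then zero those pixels out of the image (lineMask and the 5x5 size
% are assumptions, not part of the original code)
se = strel('square', 5);
grownMask = imdilate(lineMask, se);
rgb_img(repmat(grownMask, [1 1 3])) = 0;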

%% This block was intended to deal with another data set this function
% has to analyze, but it actually ended up removing my yellow circles
% as well, making a further threshold step unnecessary so far

% Converts to a binary image containing almost exclusively lines and crosshairs
mask = im2bw(rgb_img, 0.8);

% Invert mask
mask = ~mask;

% Remove detected lines and crosshairs by setting to 0
rgb_img(repmat(~mask, [1, 1, 3])) = 0;

%% Removes blue targeting lines if present

% Define thresholds for RGB channel 3 based on histogram settings to remove
% blue lines

channel3Min = 0.000;
channel3Max = 0.478;

% Create mask based on chosen histogram thresholds
noBlue = (rgb_img(:,:,3) >= channel3Min ) & (rgb_img(:,:,3) <= channel3Max);

% Set background pixels where noBlue is false to zero.
rgb_img(repmat(~noBlue,[1 1 3])) = 0;

%% Removes any other targeting lines if present

imageGreyed = rgb2gray(rgb_img);

% Performs canny edge detection
BW = edge(imageGreyed, 'canny');

% Computes the hough transform
[H,theta,rho] = hough(BW);

% Finds the peaks in the hough matrix
P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));

% Finds any large lines present in the image
lines = houghlines(BW,theta,rho,P,'FillGap',5,'MinLength',100);

colEnd = [];
rowEnd = [];

for i = 1:length(lines)

    % Extracts line start and end points from houghlines output

    pointHold = lines(i).point1;
    colEnd = [colEnd pointHold(1)];
    rowEnd = [rowEnd pointHold(2)];

    pointHold = lines(i).point2;
    colEnd = [colEnd pointHold(1)];
    rowEnd = [rowEnd pointHold(2)];

    % Creates a line segment from the line endpoints using a simple linear regression
    fit = polyfit(colEnd, rowEnd, 1);

    % Creates index of "x" (column) values to be fed into regression
    colIndex = (colEnd(1):colEnd(2));

    rowIndex = [];

    % Obtains "y" (row) pixel values from regression

    for i = colIndex

        rowHold = fit(1) * i + fit(2);
        rowIndex = [rowIndex rowHold];

    end

    % Round regression output
    rowIndex = round(rowIndex);

    % Assemble coordinate matrix
    lineCoordinates = [colIndex; rowIndex]';

    rgbDim = size(rgb_img);

    % Create mask based on input image size
    yellowMask = ones(rgbDim(1), rgbDim(2));

    for i = 1:length(rowIndex)

        yellowMask(rowIndex(i), colIndex(i)) = 0;

    end

    % Remove the lines found by hough transform
    rgb_img(repmat(~yellowMask,[1 1 3])) = 0;

end 

end

I briefly tested the linked example, using the following image:

he = imread('HlQVN.jpg');
imshow(he)

% Convert from sRGB to L*a*b* color space
cform = makecform('srgb2lab');
lab_he = applycform(he,cform);

% Cluster on the a* and b* (color) channels only
ab = double(lab_he(:,:,2:3));
nrows = size(ab,1);
ncols = size(ab,2);
ab = reshape(ab,nrows*ncols,2);

nColors = 3;
% repeat the clustering 3 times to avoid local minima
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
                                      'Replicates',3);
pixel_labels = reshape(cluster_idx,nrows,ncols);

% Build one masked copy of the image per cluster
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);

for k = 1:nColors
    color = he;
    color(rgb_label ~= k) = 0;
    segmented_images{k} = color;
end
imshow(segmented_images{1}), title('objects in cluster 1');

This already identified the blue line quite well.
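As a follow-up (a sketch added here, not part of the original answer): assuming the blue line ends up in cluster 1, the corresponding pixels could then be zeroed out of the original image like this:

% Sketch only: the cluster index (1) is an assumption and may differ
% between runs, since kmeans assigns cluster labels arbitrarily
blueMask = (pixel_labels == 1);
he(repmat(blueMask, [1 1 3])) = 0;
imshow(he), title('blue line removed');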

This post will not address the image-processing side of the problem and will focus only on the implementation, suggesting ways to improve the existing code. Right now the code performs a polyfit computation in every loop iteration, and I am not sure that part can be vectorized. So let's instead try to vectorize the rest of the code inside the loop and hope that brings some speedup to the whole thing. The changes I would like to propose come in two steps inside the innermost loop.

1) Replace -

rowIndex = [];
for i = colIndex
    rowHold = fit(1) * i + fit(2);
    rowIndex = [rowIndex rowHold];
end
rowIndex = round(rowIndex);

with -

rowIndex = round(fit(1)*colIndex + fit(2));  % rounding kept from the original code

2) Replace -

yellowMask = ones(rgbDim(1), rgbDim(2));
for i = 1:length(rowIndex)
    yellowMask(rowIndex(i), colIndex(i)) = 0;
end
rgb_img(repmat(~yellowMask,[1 1 3])) = 0;

with -

idx1 = (colIndex-1)*rgbDim(1) + rowIndex;
rgb_img(bsxfun(@plus,idx1(:),[0:rgbDim(3)-1]*rgbDim(1)*rgbDim(2))) = 0;
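As an aside (my note, not part of the original answer): the manual linear-index computation in step 2 is equivalent to using sub2ind, which some readers may find easier to follow:

% Equivalent formulation using sub2ind (a sketch; assumes rowIndex and
% colIndex are valid integer subscripts inside the image)
idx1 = sub2ind([rgbDim(1) rgbDim(2)], rowIndex, colIndex);
rgb_img(bsxfun(@plus, idx1(:), (0:rgbDim(3)-1)*rgbDim(1)*rgbDim(2))) = 0;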

It turns out the answer involved converting the image to the L*a*b* color space and performing thresholding there. This segments out the lines with minimal loss to the rest of the image. The code is below:

    % Convert RGB image to L*a*b color space for thresholding
    rgb_img = im2double(rgb_img);
    cform = makecform('srgb2lab', 'AdaptedWhitePoint', whitepoint('D65'));
    I = applycform(rgb_img,cform);

    % Define thresholds for channel 2 based on histogram settings
    channel2Min = -1.970;
    channel2Max = 48.061;

    % Create mask based on chosen histogram threshold
    BW = (I(:,:,2) <= channel2Min ) | (I(:,:,2) >= channel2Max);

    % Determines the eccentricity for regions of pixels; basically how line-like
    % (vals close to 1) or circular (vals close to 0) the region is
    rp = regionprops(BW, 'PixelIdxList', 'Eccentricity');

    % Selects for regions which are not line segments (areas which
    % may have been incorrectly thresholded out with the crosshairs)
    rp = rp([rp.Eccentricity] < 0.99); 

    % Removes the non-line segment regions from the mask
    BW(vertcat(rp.PixelIdxList)) = false;

    % Set background pixels where BW is false to zero.
    rgb_img(repmat(BW,[1 1 3])) = 0;
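
(A side note, not from the original answer: on newer MATLAB releases the makecform/applycform pair can be replaced by rgb2lab, which also assumes a D65 white point by default; a minimal sketch with the same thresholds:)

    % Same conversion using the newer rgb2lab function (sketch only)
    I = rgb2lab(im2double(rgb_img));
    BW = (I(:,:,2) <= channel2Min) | (I(:,:,2) >= channel2Max);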

Comment: Can we see the code you have written to try to solve this? Add the code and update the question.

Comment: While this was very interesting and informative, it turns out that just converting to the Lab color space was the breakthrough I needed. I was easily able to get everything working from there.