Active Contour Model in OpenCV 3.0

I tried to implement the active contour model (snakes) algorithm in C++ with OpenCV 3. The algorithm is based on a script I wrote for MATLAB and does not work as expected. The two images below show the results of running the two algorithms.

The MATLAB script: [result image]

And OpenCV: [result image]

In both runs I used the same values for all ACM parameters, so they should return the same thing: a white circular contour. I suspect the problem is in my image-energy function, because the gradient operations in OpenCV and MATLAB are not the same. The MATLAB script for the image energy is:

function [Eext] = get_eext(wl, we, wt, image)

%External Energy
[row,col] = size(image);
eline = image; %eline is simply the image intensities

[grady,gradx] = gradient(image);
eedge = -1 *(gradx .* gradx + grady .* grady);



%masks for taking various derivatives
m1 = [-1 1];
m2 = [-1;1];
m3 = [1 -2 1];
m4 = [1;-2;1];
m5 = [1 -1;-1 1];

cx = conv2(image,m1,'same');
cy = conv2(image,m2,'same');
cxx = conv2(image,m3,'same');
cyy = conv2(image,m4,'same');
cxy = conv2(image,m5,'same');

eterm = zeros(row, col);

for i = 1:row
    for j = 1:col
        % eterm as defined in Kass et al.'s Snakes paper
        eterm(i,j) = (cyy(i,j)*cx(i,j)*cx(i,j) - 2*cxy(i,j)*cx(i,j)...
            *cy(i,j) + cxx(i,j)*cy(i,j)*cy(i,j))/((1+cx(i,j)*cx(i,j)...
            + cy(i,j)*cy(i,j))^1.5);
    end
end

Eext = (wl*eline + we*eedge + wt*eterm);
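
For reference, the quantity the double loop computes is the termination energy from Kass et al.'s snakes paper,

E_{term} = \frac{c_{yy} c_x^2 - 2 c_{xy} c_x c_y + c_{xx} c_y^2}{(1 + c_x^2 + c_y^2)^{3/2}},

with c_x, c_y, c_xx, c_yy and c_xy the image derivatives produced by the convolutions above.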
In C++, my function looks like this:

Mat get_eext(float wl, float we, float wt, Mat image){

Mat eline, gradx, grady, img_gray, eedge;

//bitdepth defined as CV_32F
image.convertTo(img_gray, bitdepth);

//Convolution Kernels
Mat m1, m2, m3, m4, m5;
m1 = (Mat_<float>(1, 2) << -1, 1);
m2 = (Mat_<float>(2, 1) << -1, 1);
m3 = (Mat_<float>(1, 3) << 1, -2, 1);
m4 = (Mat_<float>(3, 1) << 1, -2, 1);
m5 = (Mat_<float>(2, 2) << 1, -1, -1, 1);

//cvtColor(image, img_gray, CV_BGR2GRAY); <- Not required since image already in grayscale
img_gray.copyTo(eline);

Mat kernelx = (Mat_<float>(1, 3) << -0.5, 0, 0.5);
Mat kernely = (Mat_<float>(3, 1) << -0.5, 0, 0.5);

filter2D(img_gray, gradx, -1, kernelx);
filter2D(img_gray, grady, -1, kernely);

//Edge Energy
eedge = -1 * (gradx.mul(gradx) + grady.mul(grady));

//Termination Energy Convolution
Mat cx, cy, cxx, cyy, cxy, eterm, cxm1, den, cxcx, cxcxm1, cxcxcy, cxcycxy, cycycxx;
filter2D(img_gray, cx, bitdepth, m1);
filter2D(img_gray, cy, bitdepth, m2);
filter2D(img_gray, cxx, bitdepth, m3);
filter2D(img_gray, cyy, bitdepth, m4);
filter2D(img_gray, cxy, bitdepth, m5);

//element wise operations to find Eterm
cxcx = cx.mul(cx);
cxcx.convertTo(cxcxm1, -1, 1, 1);
den = cxcxm1 + cy.mul(cy);
cv::pow(den, 1.5, den);
cxcxcy = cxcx.mul(cy);
cxcycxy = cx.mul(cy);
cxcycxy = cxcycxy.mul(cxy);
cycycxx = cy.mul(cy);
cycycxx = cycycxx.mul(cxx);
eterm = (cxcxcy - 2 * cxcycxy + cycycxx);
cv::divide(eterm, den, eterm, -1);

//Image energy
Mat eext;
eext = wl*eline + we*eedge + wt*eterm;
return eext;
}
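
One quick way to test the suspicion about the gradient step (a debugging sketch of mine, not part of the original post) is to dump the OpenCV intermediates such as gradx, grady, cx and cy at full float precision and diff them against the corresponding MATLAB matrices:

#include <opencv2/opencv.hpp>
#include <string>

// Hypothetical helper: writes a matrix to YAML at full precision, since an
// 8-bit imwrite would destroy the float values being compared.
void dump_intermediate(const cv::Mat& m, const std::string& name)
{
    cv::FileStorage fs(name + ".yml", cv::FileStorage::WRITE);
    fs << name << m;
}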
As David Doria asked, here is the final version of the function get_eext after a few modifications. This version works fine for me:

Mat config_eext(float wl, float we, float wt, Mat image)
{
Mat eline, gradx, grady, img_gray, eedge;

//bitdepth defined as CV_32F
image.convertTo(img_gray, bitdepth);

//Convolution Kernels
Mat m1, m2, m3, m4, m5;
m1 = (Mat_<float>(1, 2) << 1, -1);
m2 = (Mat_<float>(2, 1) << 1, -1);
m3 = (Mat_<float>(1, 3) << 1, -2, 1);
m4 = (Mat_<float>(3, 1) << 1, -2, 1);
m5 = (Mat_<float>(2, 2) << 1, -1, -1, 1);

img_gray.copyTo(eline);

//Gradient kernels
Mat kernelx = (Mat_<float>(1, 3) << -1, 0, 1);
Mat kernely = (Mat_<float>(3, 1) << -1, 0, 1);

//Gradient in x and in y
filter2D(img_gray, gradx, -1, kernelx);
filter2D(img_gray, grady, -1, kernely);

//Edge energy as defined by Kass
eedge = -1 * (gradx.mul(gradx) + grady.mul(grady));

//Termination Energy Convolution
Mat cx, cy, cxx, cyy, cxy, eterm(img_gray.rows, img_gray.cols, bitdepth), cxm1, den, cxcx, cxcxm1, cxcxcy, cxcycxy, cycycxx;
filter2D(img_gray, cx, bitdepth, m1);
filter2D(img_gray, cy, bitdepth, m2);
filter2D(img_gray, cxx, bitdepth, m3);
filter2D(img_gray, cyy, bitdepth, m4);
filter2D(img_gray, cxy, bitdepth, m5);

//element wise operations to find Eterm
cxcx = cx.mul(cx);
cxcx.convertTo(cxcxm1, -1, 1, 1);
den = cxcxm1 + cy.mul(cy);
cv::pow(den, 1.5, den);
cxcxcy = cxcx.mul(cy);
cxcycxy = cx.mul(cy);
cxcycxy = cxcycxy.mul(cxy);
cycycxx = cy.mul(cy);
cycycxx = cycycxx.mul(cxx);
eterm = (cxcxcy - 2 * cxcycxy + cycycxx);
cv::divide(eterm, den, eterm, -1);

//Image energy
Mat eext;
eext = wl*eline + we*eedge + wt*eterm;
return eext;
}
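
For completeness, a minimal usage sketch (assumptions of mine: bitdepth is CV_32F as the comments above state; the file name and the weight values are placeholders, since the post does not show the values used):

#include <opencv2/opencv.hpp>

const int bitdepth = CV_32F;  // as stated in the comments above

cv::Mat config_eext(float wl, float we, float wt, cv::Mat image);  // defined above

int main()
{
    // Placeholder file name; any single-channel test image works.
    cv::Mat img = cv::imread("circle.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Placeholder Kass-style weights; the post does not give the values used.
    cv::Mat eext = config_eext(0.04f, 2.0f, 0.01f, img);

    // Normalize for display only; eext itself stays CV_32F.
    cv::Mat vis;
    cv::normalize(eext, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("eext.png", vis);
    return 0;
}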
Have you compared your gradient images with the MATLAB gradient images?
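
One way to run that comparison, assuming the MATLAB gradient has been exported and loaded back into a CV_32F Mat (a sketch of mine, not from the thread):

#include <opencv2/opencv.hpp>
#include <iostream>

// Reports the largest element-wise discrepancy between two gradient images.
void compare_gradients(const cv::Mat& cv_grad, const cv::Mat& matlab_grad)
{
    CV_Assert(cv_grad.size() == matlab_grad.size()
              && cv_grad.type() == matlab_grad.type());
    cv::Mat diff;
    cv::absdiff(cv_grad, matlab_grad, diff);
    double minVal = 0.0, maxVal = 0.0;
    cv::minMaxLoc(diff, &minVal, &maxVal);
    std::cout << "max abs difference: " << maxVal << std::endl;
}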
filter2D in OpenCV performs correlation. In MATLAB, conv2 performs convolution. The difference is that filter2D applies the kernel without the 180-degree rotation that MATLAB performs. If you want matching results between conv2 and filter2D, you need to rotate the kernel 180 degrees in OpenCV. On that note, Mika is right that you should compare your gradient images; they may not be the same.
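
A minimal sketch of that rotation (the wrapper name is mine, not from the thread): flipping a kernel around both axes is exactly a 180-degree rotation, and conv2(..., 'same') zero-pads at the borders, hence BORDER_CONSTANT:

#include <opencv2/opencv.hpp>

// Emulates MATLAB's conv2(src, kernel, 'same') for odd-sized kernels:
// rotate the kernel 180 degrees, then let filter2D's correlation do the rest.
cv::Mat conv2_same(const cv::Mat& src, const cv::Mat& kernel, int ddepth)
{
    cv::Mat flipped;
    cv::flip(kernel, flipped, -1);  // flipCode < 0: flip both axes = 180-degree rotation
    cv::Mat dst;
    cv::filter2D(src, dst, ddepth, flipped,
                 cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);  // zero padding like conv2
    return dst;
}

Note that for even-sized kernels such as m1, m2 and m5, conv2's 'same' alignment also shifts by one sample, so an exact match may additionally require adjusting the anchor argument.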
imgradient uses Sobel kernels for the gradient, so make sure your kernels in OpenCV are correct.

The gradient images are indeed different.

By rotating 180 degrees, do you mean using [1 -1] instead of [-1 1]?

@Andrei Sorry for the late reply. Yes, that is correct. In OpenCV you can do the 180-degree rotation by transposing first and then flipping along the columns, i.e. cv::transpose followed by cv::flip with flipCode set to 0.

Hi, thanks for posting this, it is really helpful. I have a question here: did you implement the whole algorithm yourself? I see that the active contour parameters in OpenCV and MATLAB are different. How did you map the parameters from MATLAB to OpenCV? Thanks!