
C++ Optical Flow Ignoring Sparse Motion


We are currently working on an image analysis project in which we need to identify objects that have disappeared from or appeared in a scene. Here are two pictures, one taken before an action was performed by the surgeon and the other taken afterwards.

Before: After:

First, we simply computed the difference between the two images, and here is the result (note that I added 128 to the result Mat just to get a nicer image):

(after - before) + 128
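
For reference, a minimal sketch of how that shifted difference can be computed (file and variable names here are placeholders, not from the original code):

// Minimal sketch, assuming 8-bit grayscale input images.
Mat before = imread("before.png", 0);
Mat after = imread("after.png", 0);
// Subtract in a signed type so that negative changes survive, then shift
// by 128 so the result fits back into 8 bits for display.
Mat diff16, shifted;
subtract(after, before, diff16, noArray(), CV_16S);
diff16.convertTo(shifted, CV_8U, 1.0, 128); // saturates outside [0, 255]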

The goal is to detect that the cup (red arrow) has disappeared from the scene and that the syringe (black arrow) has entered it; in other words, we should detect only the regions corresponding to objects that left or entered the scene. Also, it is obvious that the objects in the top-left of the scene shifted a bit from their initial positions. I thought about optical flow, so I used OpenCV C++ to compute Farneback's flow in order to see whether it is sufficient for our case. Here is the result we got, followed by the code we wrote:

Flow:

void drawOptFlowMap(const Mat& flow, Mat& cflowmap, int step, double, const Scalar& color)
{
    // (The body was truncated in the original post; the lines below are the
    // standard drawOptFlowMap from OpenCV's Farneback sample.)
    for (int y = 0; y < cflowmap.rows; y += step)
        for (int x = 0; x < cflowmap.cols; x += step)
        {
            const Point2f& fxy = flow.at<Point2f>(y, x);
            line(cflowmap, Point(x, y), Point(cvRound(x + fxy.x), cvRound(y + fxy.y)), color);
            circle(cflowmap, Point(x, y), 2, color, -1);
        }
}

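The call that produces the flow field was cut off in the post; here is a hedged sketch of the typical Farneback invocation (the numeric parameters are the OpenCV sample defaults, not necessarily the ones the authors used):

// Assumed invocation; grayBefore/grayAfter are the two frames in grayscale.
Mat flow, cflow;
calcOpticalFlowFarneback(grayBefore, grayAfter, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
cvtColor(grayBefore, cflow, CV_GRAY2BGR);
drawOptFlowMap(flow, cflow, 16, 1.5, Scalar(0, 255, 0));
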
You could try a two-pronged approach - the image-differencing method is very useful for detecting objects that enter and exit the scene, as long as the object's colour differs from the background's. What strikes me is that it would be greatly improved if you could remove the objects that merely moved before applying the method.

There is a great OpenCV method for object detection that finds points of interest in an image and can be used to detect the translation of an object -

1. Run the image-comparison OpenCV code to highlight the moving objects in both images

2. Colour in the detected objects on the other picture with the same set of pixels as the background (or something similar), to reduce the image difference caused by the moved objects

3. Find the image difference, which should now leave the large primary objects and smaller artifacts left over from the moved objects

4. Threshold for objects of a specific size detected in the image difference

5. Compile a list of plausible candidates


There are other options for object tracking, so there may be code you like better, but I think this process should suit what you are doing.
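
A rough sketch of what steps 1, 3 and 4 above might look like (all names and thresholds below are my own placeholders, not from the answer; step 2, painting the moved objects with background pixels, is application-specific and omitted):

Mat before = imread("before.png", 0);
Mat after = imread("after.png", 0);

// Steps 1/3: difference the images and threshold the changed pixels.
Mat diff, bw;
absdiff(before, after, diff);
threshold(diff, bw, 30, 255, THRESH_BINARY); // threshold value is a guess

// Step 4: keep only connected regions above a minimum size.
vector<vector<Point> > contours;
findContours(bw, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

// Step 5: the surviving bounding boxes form the candidate list.
vector<Rect> candidates;
for (size_t i = 0; i < contours.size(); i++)
    if (contourArea(contours[i]) > 500) // minimum-size threshold, arbitrary
        candidates.push_back(boundingRect(contours[i]));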

It is assumed that the goal here is to identify regions with appeared/disappeared objects, but not the ones that are present in both pictures and merely moved.

Optical flow should be a good way to go, as you have already done. However, the issue is how the outcome is evaluated. As opposed to a pixel-to-pixel difference, which shows no tolerance to rotation/scaling variance, you could do feature matching (SIFT etc.).

Here are the good features to track obtained from your before image:

GoodFeaturesToTrackDetector detector;
vector<KeyPoint> keyPoints;
vector<Point2f> kpBefore, kpAfter;
detector.detect(imageBefore, keyPoints);
KeyPoint::convert(keyPoints, kpBefore); // convert to Point2f for the LK tracker

Instead of dense optical flow, you could use a sparse flow and track only the features:

vector<uchar> featuresFound;
vector<float> err;
calcOpticalFlowPyrLK(imageBeforeGray, imageAfterGray, kpBefore, kpAfter, featuresFound, err, Size(PATCH_SIZE, PATCH_SIZE));
The output includes the featuresFound and err values. I simply used a threshold here to distinguish between features that moved and unmatched features belonging to disappeared objects:

vector<KeyPoint> kpNotMatched;
for (int i = 0; i < kpBefore.size(); i++) {
    if (!featuresFound[i] || err[i] > ERROR_THRESHOLD) {
        kpNotMatched.push_back(KeyPoint(kpBefore[i], 1));
    }
}
Mat output;
drawKeypoints(imageBefore, kpNotMatched, output, Scalar(0, 0, 255));  

The remaining incorrectly matched features can be filtered out. Here I used simple mean filtering plus thresholding to get the mask of the newly appeared region:

Mat mask = Mat::zeros(imageBefore.rows, imageBefore.cols, CV_8UC1);
for (int i = 0; i < kpNotMatched.size(); i++) {
    mask.at<uchar>(kpNotMatched[i].pt) = 255;
}
blur(mask, mask, Size(BLUR_SIZE, BLUR_SIZE));
threshold(mask, mask, MASK_THRESHOLD, 255, THRESH_BINARY);

Then its convex hull is found to show the region in the original image (in yellow):

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );

vector<vector<Point> > hull( contours.size() );
for( int i = 0; i < contours.size(); i++ ) {
    convexHull(Mat(contours[i]), hull[i], false);
}
for( int i = 0; i < contours.size(); i++ ) {
    drawContours( output, hull, i, Scalar(0, 255, 255), 3, 8, vector<Vec4i>(), 0, Point() );
}

Simply doing it the reverse way (matching from imageAfter to imageBefore) gives the regions that appeared.
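
For completeness, a hedged sketch of that reverse pass (the names below are my own placeholders, following the same pattern as above):

// Reverse pass (assumed, not from the answer): detect features in the
// "after" image and track them back into the "before" image; features that
// fail to match mark the regions that appeared.
vector<KeyPoint> keyPointsAfter;
detector.detect(imageAfter, keyPointsAfter);
vector<Point2f> ptsAfter, ptsBack;
KeyPoint::convert(keyPointsAfter, ptsAfter);

vector<uchar> foundRev;
vector<float> errRev;
calcOpticalFlowPyrLK(imageAfterGray, imageBeforeGray, ptsAfter, ptsBack, foundRev, errRev, Size(PATCH_SIZE, PATCH_SIZE));

vector<KeyPoint> kpAppeared;
for (int i = 0; i < (int)ptsAfter.size(); i++) {
    if (!foundRev[i] || errRev[i] > ERROR_THRESHOLD) {
        kpAppeared.push_back(KeyPoint(ptsAfter[i], 1));
    }
}
// kpAppeared then goes through the same blur/threshold/hull steps as above.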

Here is what I tried:

  • Detect the regions that have undergone a change. For this I used simple frame differencing, thresholding, morphological operations and convex hull.
  • Find the feature points of those regions in both images and see if they match. A good match in a region indicates that it has not undergone a significant change; no match means the two regions are now different. For this I used BOW and the Bhattacharyya distance.
The parameters may need tuning. I used values that just worked for the two sample images. As the feature detector/descriptor I used SIFT (non-free). You can try other detectors and descriptors (a sketch of a free alternative follows after the code below).

Difference image:

Regions:

Changes (red: insertion/removal, yellow: sparse motion):

// for non-free modules SIFT/SURF
// (this snippet assumes #include <opencv2/opencv.hpp> and
// #include <opencv2/nonfree/nonfree.hpp>, with using namespace cv and std)
cv::initModule_nonfree();

Mat im1 = imread("1.png");
Mat im2 = imread("2.png");

// downsample
/*pyrDown(im1, im1);
pyrDown(im2, im2);*/

Mat disp = im1.clone() * .5 + im2.clone() * .5;
Mat regions = Mat::zeros(im1.rows, im1.cols, CV_8U);

// gray scale
Mat gr1, gr2;
cvtColor(im1, gr1, CV_BGR2GRAY);
cvtColor(im2, gr2, CV_BGR2GRAY);
// simple frame differencing
Mat diff;
absdiff(gr1, gr2, diff);
// threshold the difference to obtain the regions having a change
Mat bw;
adaptiveThreshold(diff, bw, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY_INV, 15, 5);
// some post processing
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(bw, bw, MORPH_CLOSE, kernel, Point(-1, -1), 4);
// find contours in the change image
Mat cont = bw.clone();
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(cont, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));
// feature detector, descriptor and matcher
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SIFT");
Ptr<DescriptorExtractor> descExtractor = DescriptorExtractor::create("SIFT");
Ptr<DescriptorMatcher> descMatcher = DescriptorMatcher::create("FlannBased");

if( featureDetector.empty() || descExtractor.empty() || descMatcher.empty() )
{
    cout << "featureDetector or descExtractor or descMatcher was not created" << endl;
    exit(0);
}
// BOW
Ptr<BOWImgDescriptorExtractor> bowExtractor = new BOWImgDescriptorExtractor(descExtractor, descMatcher);

int vocabSize = 10;
TermCriteria terminate_criterion;
terminate_criterion.epsilon = FLT_EPSILON;
BOWKMeansTrainer bowTrainer( vocabSize, terminate_criterion, 3, KMEANS_PP_CENTERS );

Mat mask(bw.rows, bw.cols, CV_8U);
for(size_t j = 0; j < contours.size(); j++)
{
    // discard regions that a below a specific threshold
    Rect rect = boundingRect(contours[j]);
    if ((double)(rect.width * rect.height) / (bw.rows * bw.cols) < .01)
    {
        continue; // skip this region as it's too small
    }
    // prepare a mask for each region
    mask.setTo(0);
    vector<Point> hull;
    convexHull(contours[j], hull);
    fillConvexPoly(mask, hull, Scalar::all(255), 8, 0);

    fillConvexPoly(regions, hull, Scalar::all(255), 8, 0);

    // extract keypoints from the region
    vector<KeyPoint> im1Keypoints, im2Keypoints;
    featureDetector->detect(im1, im1Keypoints, mask);
    featureDetector->detect(im2, im2Keypoints, mask);
    // get their descriptors
    Mat im1Descriptors, im2Descriptors;
    descExtractor->compute(im1, im1Keypoints, im1Descriptors);
    descExtractor->compute(im2, im2Keypoints, im2Descriptors);

    if ((0 == im1Keypoints.size()) || (0 == im2Keypoints.size()))
    {
        // mark this contour as object arrival/removal region
        drawContours(disp, contours, j, Scalar(0, 0, 255), 2);
        continue;
    }

    // bag-of-visual-words
    Mat vocabulary = bowTrainer.cluster(im1Descriptors);
    bowExtractor->setVocabulary( vocabulary );
    // get the distribution of visual words in the region for both images
    vector<vector<int>> idx1, idx2;
    bowExtractor->compute(im1, im1Keypoints, im1Descriptors, &idx1);
    bowExtractor->compute(im2, im2Keypoints, im2Descriptors, &idx2);
    // compare the distributions
    Mat hist1 = Mat::zeros(vocabSize, 1, CV_32F);
    Mat hist2 = Mat::zeros(vocabSize, 1, CV_32F);

    for (int i = 0; i < vocabSize; i++)
    {
        hist1.at<float>(i) = (float)idx1[i].size();
        hist2.at<float>(i) = (float)idx2[i].size();
    }
    normalize(hist1, hist1);
    normalize(hist2, hist2);
    double comp = compareHist(hist1, hist2, CV_COMP_BHATTACHARYYA);

    cout << comp << endl;
    // low BHATTACHARYYA distance means a good match of features in the two regions
    if ( comp < .2 )
    {
        // mark this contour as a region having sparse motion
        drawContours(disp, contours, j, Scalar(0, 255, 255), 2);
    }
    else
    {
        // mark this contour as object arrival/removal region
        drawContours(disp, contours, j, Scalar(0, 0, 255), 2);
    }
}
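
As the answer notes, other detectors and descriptors can be swapped in through the same OpenCV 2.4 factory API. Here is a hedged sketch using ORB, my own substitution rather than the answer's choice, for when the non-free module is unavailable:

// Free alternative (an assumption, not from the answer): ORB instead of SIFT.
// Note: BOWKMeansTrainer clusters float descriptors, so the binary ORB
// descriptors must be converted to CV_32F before clustering and matching.
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("ORB");
Ptr<DescriptorExtractor> descExtractor = DescriptorExtractor::create("ORB");
Ptr<DescriptorMatcher> descMatcher = DescriptorMatcher::create("FlannBased");

// ...then, after each descExtractor->compute(...) call:
im1Descriptors.convertTo(im1Descriptors, CV_32F);
im2Descriptors.convertTo(im2Descriptors, CV_32F);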