
Incorrect Robust Local Optical Flow (RLOF) coordinates in Python output (doc SparseRLOFOpticalFlow)


I am trying to use the Robust Local Optical Flow (RLOF) function () of the OpenCV library. To do this, I extract feature points from the video frames with OpenCV's FAST function. My video is 640×480 at 30 frames per second. Then, over the whole video, I run FAST on frame n and frame n-1 in my main function, and pass the feature points detected by FAST on image n and image n-1 to RLOF, together with the various parameters I found by reading the algorithm's documentation.

Also, the number of feature points detected is not exactly the same from one frame to the next, which I believe is why the option setUseInitialFlow(True) causes this OpenCV error:

cv2.error: OpenCV(4.4.0) /tmp/pip-req-build-b_zf9wbm/opencv_contrib/modules/optflow/src/rlofflow.cpp:372: error: (-215:Assertion failed) nextPtsMat.checkVector(2, CV_32F, true) == npoints in function 'calc'
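The assertion simply states that when an initial flow is supplied, nextPts must be a float32 array holding exactly as many 2-D points as prevPts. A minimal pure-NumPy sketch of that precondition (the function name here is made up for illustration):

```python
import numpy as np

def can_use_initial_flow(prev_pts, next_pts):
    """True only if next_pts can serve as an initial flow estimate:
    float32, and the same number of 2-D points as prev_pts."""
    prev_pts = np.asarray(prev_pts)
    next_pts = np.asarray(next_pts)
    return bool(next_pts.dtype == np.float32
                and next_pts.reshape(-1, 2).shape == prev_pts.reshape(-1, 2).shape)

# FAST rarely detects the same number of points on two consecutive
# frames, so the check usually fails and setUseInitialFlow must stay off:
p0 = np.zeros((500, 2), np.float32)  # stand-in for keypoints of frame n-1
p1 = np.zeros((480, 2), np.float32)  # stand-in for keypoints of frame n
print(can_use_initial_flow(p0, p1))  # False
```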

My code runs correctly, and my variable keypoints_old does hold the coordinates from FAST (more than 500 of them):

However, the coordinates returned by the RLOF function, which I use to measure the amount of motion between the two images below, are not consistent at all. According to the OpenCV documentation, this output is where I should get the point coordinates. I first thought these values might represent the amount of motion, but from reading the documentation that should not be the case:

keypoints_new = [[-1.0420260e+20  4.5582838e-41]
 [-1.0420260e+20  4.5582838e-41]
 [ 3.5204788e-38  0.0000000e+00]
 ...
 [-3.5464977e-24  4.5582838e-41]
 [-3.5465167e-24  4.5582838e-41]
 [-3.5465356e-24  4.5582838e-41]] 
My question is: how do I use this function correctly? Why do the coordinates of the points found in the second image bear no logical relation to those of the previous (first) image? What is wrong with my code? Thank you for your answers.
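Values like -1.0420260e+20 are typical of uninitialized float32 memory, i.e. an output buffer that was never written, which often points at a point array in a layout the Python binding does not expect. A hedged sketch of the usual normalization (plain NumPy; whether this cures this particular opencv-contrib build is an assumption):

```python
import numpy as np

def to_cv_points(pts):
    """Normalize point coordinates to the contiguous N x 1 x 2 float32
    layout that OpenCV's sparse optical-flow bindings expect."""
    return np.ascontiguousarray(
        np.asarray(pts, dtype=np.float32).reshape(-1, 1, 2))

pts = [[10, 20], [30, 40]]         # e.g. output of cv.KeyPoint_convert
cv_pts = to_cv_points(pts)
print(cv_pts.shape, cv_pts.dtype)  # (2, 1, 2) float32
```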

My code:

import argparse

import cv2 as cv
import numpy as np

def FAST(img):
    # Initiate FAST object with default values
    fast = cv.FastFeatureDetector_create()
    keypoints = fast.detect(img, None)
    pts = cv.KeyPoint_convert(keypoints)  # KeyPoint objects -> N x 2 float32 coords
    return pts

def RLOF(old_frame, new_frame, keypoints_old, keypoints_new):

    # Parameters for RLOF

    instance = cv.optflow.RLOFOpticalFlowParameter_create()
    # instance.setUseInitialFlow(True)
    instance.setMaxIteration(30)
    instance.setNormSigma0(3.2)
    instance.setNormSigma1(7.0)
    instance.setLargeWinSize(21)
    instance.setSmallWinSize(9)
    instance.setMaxLevel(9)
    instance.setMinEigenValue(0.0001)
    instance.setCrossSegmentationThreshold(25)
    instance.setGlobalMotionRansacThreshold(10.0)
    instance.setUseIlluminationModel(True)
    instance.setUseGlobalMotionPrior(False)

    # nextPts is left as None: the point counts from FAST differ between
    # frames, so keypoints_new cannot be used as an initial flow estimate.
    keypoints_new, st, err = cv.optflow.calcOpticalFlowSparseRLOF(
        old_frame,
        new_frame,
        keypoints_old, None, rlofParam=instance,
        forwardBackwardThreshold=0)

    return keypoints_new

def main():
    parser = argparse.ArgumentParser(description='Process some video.')
    parser.add_argument('file_path', type=str, help='video file path')
    args = parser.parse_args()
    cap = cv.VideoCapture(args.file_path)
    fps = cap.get(cv.CAP_PROP_FPS)
    frame_count = int(cap.get(cv.CAP_PROP_FRAME_COUNT))
    ret, new_frame = cap.read()  # frame 0
    ret, new_frame = cap.read()  # frame 1
    p1 = FAST(new_frame)
    print('frame_count = {}'.format(frame_count))
    for i in range(frame_count):
        old_frame = new_frame
        p0 = p1
        ret, new_frame = cap.read()  # frames 2, 3, 4, ...
        if not ret:
            break
        p1 = FAST(new_frame)
        # keep the result: the tracked positions of p0 in new_frame
        keypoints_new = RLOF(old_frame, new_frame, p0, p1)
    cap.release()

if __name__ == '__main__':
    main()
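For reference, once calcOpticalFlowSparseRLOF succeeds it also returns a status vector, and a common pattern is to keep only the points that were actually tracked before computing motion. A sketch of that filtering step with NumPy (the arrays here are made-up stand-ins for the function's outputs):

```python
import numpy as np

# Illustrative stand-ins for the outputs of calcOpticalFlowSparseRLOF:
keypoints_old = np.float32([[10, 10], [50, 60], [100, 120]]).reshape(-1, 1, 2)
keypoints_new = np.float32([[12, 11], [51, 62], [0, 0]]).reshape(-1, 1, 2)
st = np.array([[1], [1], [0]], dtype=np.uint8)  # 1 = tracked, 0 = lost

good_old = keypoints_old[st.ravel() == 1]
good_new = keypoints_new[st.ravel() == 1]
flow = good_new - good_old  # per-point motion vectors
print(flow.reshape(-1, 2))
```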

Do you have a sample of the images you are using for tracking?