How can I store the video objects detected in each frame into folders in MATLAB?
Tags: matlab, video, tracking, motion

I want to store the objects detected inside the yellow boxes, saving each object together with its label name in a separate folder, e.g. object 1 in folder 1 and object 2 in folder 2. I only want to save the detected objects from each frame. Image:
function [centroids, bboxes, mask] = detectObjects(frame)
    % Detect foreground.
    mask = obj.detector.step(frame);
    % Apply morphological operations to remove noise and fill in holes.
    mask = imopen(mask, strel('rectangle', [3,3]));
    mask = imclose(mask, strel('rectangle', [15, 15]));
    mask = imfill(mask, 'holes');
    % Perform blob analysis to find connected components.
    [~, centroids, bboxes] = obj.blobAnalyser.step(mask);
end
%% Predict New Locations of Existing Tracks
% Use the Kalman filter to predict the centroid of each track in the
% current frame, and update its bounding box accordingly.
function predictNewLocationsOfTracks()
    for i = 1:length(tracks)
        bbox = tracks(i).bbox;
        % Predict the current location of the track.
        predictedCentroid = predict(tracks(i).kalmanFilter);
        % Shift the bounding box so that its center is at
        % the predicted location.
        predictedCentroid = int32(predictedCentroid) - bbox(3:4) / 2;
        tracks(i).bbox = [predictedCentroid, bbox(3:4)];
    end
end
%% Assign Detections to Tracks
% Assigning object detections in the current frame to existing tracks is
% done by minimizing cost. The cost is defined as the negative
% log-likelihood of a detection corresponding to a track.
%
% The algorithm involves two steps:
%
% Step 1: Compute the cost of assigning every detection to each track using
% the |distance| method of the |vision.KalmanFilter| System object(TM). The
% cost takes into account the Euclidean distance between the predicted
% centroid of the track and the centroid of the detection. It also includes
% the confidence of the prediction, which is maintained by the Kalman
% filter. The results are stored in an MxN matrix, where M is the number of
% tracks, and N is the number of detections.
%
% Step 2: Solve the assignment problem represented by the cost matrix using
% the |assignDetectionsToTracks| function. The function takes the cost
% matrix and the cost of not assigning any detections to a track.
%
% The value for the cost of not assigning a detection to a track depends on
% the range of values returned by the |distance| method of the
% |vision.KalmanFilter|. This value must be tuned experimentally. Setting
% it too low increases the likelihood of creating a new track, and may
% result in track fragmentation. Setting it too high may result in a single
% track corresponding to a series of separate moving objects.
%
% The |assignDetectionsToTracks| function uses the Munkres' version of the
% Hungarian algorithm to compute an assignment which minimizes the total
% cost. It returns an M x 2 matrix containing the corresponding indices of
% assigned tracks and detections in its two columns. It also returns the
% indices of tracks and detections that remained unassigned.
function [assignments, unassignedTracks, unassignedDetections] = ...
        detectionToTrackAssignment()
    nTracks = length(tracks);
    nDetections = size(centroids, 1);
    % Compute the cost of assigning each detection to each track.
    cost = zeros(nTracks, nDetections);
    for i = 1:nTracks
        cost(i, :) = distance(tracks(i).kalmanFilter, centroids);
    end
    % Solve the assignment problem.
    costOfNonAssignment = 20;
    [assignments, unassignedTracks, unassignedDetections] = ...
        assignDetectionsToTracks(cost, costOfNonAssignment);
end
%% Update Assigned Tracks
% The |updateAssignedTracks| function updates each assigned track with the
% corresponding detection. It calls the |correct| method of
% |vision.KalmanFilter| to correct the location estimate. Next, it stores
% the new bounding box, and increases the age of the track and the total
% visible count by 1. Finally, the function sets the invisible count to 0.
function updateAssignedTracks()
    numAssignedTracks = size(assignments, 1);
    for i = 1:numAssignedTracks
        trackIdx = assignments(i, 1);
        detectionIdx = assignments(i, 2);
        centroid = centroids(detectionIdx, :);
        bbox = bboxes(detectionIdx, :);
        % Correct the estimate of the object's location
        % using the new detection.
        correct(tracks(trackIdx).kalmanFilter, centroid);
        % Replace predicted bounding box with detected
        % bounding box.
        tracks(trackIdx).bbox = bbox;
        % Update track's age.
        tracks(trackIdx).age = tracks(trackIdx).age + 1;
        % Update visibility.
        tracks(trackIdx).totalVisibleCount = ...
            tracks(trackIdx).totalVisibleCount + 1;
        tracks(trackIdx).consecutiveInvisibleCount = 0;
    end
end
%% Update Unassigned Tracks
% Mark each unassigned track as invisible, and increase its age by 1.
function updateUnassignedTracks()
    for i = 1:length(unassignedTracks)
        ind = unassignedTracks(i);
        tracks(ind).age = tracks(ind).age + 1;
        tracks(ind).consecutiveInvisibleCount = ...
            tracks(ind).consecutiveInvisibleCount + 1;
    end
end
%% Delete Lost Tracks
% The |deleteLostTracks| function deletes tracks that have been invisible
% for too many consecutive frames. It also deletes recently created tracks
% that have been invisible for too many frames overall.
function deleteLostTracks()
    if isempty(tracks)
        return;
    end
    invisibleForTooLong = 20;
    ageThreshold = 8;
    % Compute the fraction of the track's age for which it was visible.
    ages = [tracks(:).age];
    totalVisibleCounts = [tracks(:).totalVisibleCount];
    visibility = totalVisibleCounts ./ ages;
    % Find the indices of 'lost' tracks.
    lostInds = (ages < ageThreshold & visibility < 0.6) | ...
        [tracks(:).consecutiveInvisibleCount] >= invisibleForTooLong;
    % Delete lost tracks.
    tracks = tracks(~lostInds);
end
%% Create New Tracks
% Create new tracks from unassigned detections. Assume that any unassigned
% detection is a start of a new track. In practice, you can use other cues
% to eliminate noisy detections, such as size, location, or appearance.
function createNewTracks()
    centroids = centroids(unassignedDetections, :);
    bboxes = bboxes(unassignedDetections, :);
    for i = 1:size(centroids, 1)
        centroid = centroids(i,:);
        bbox = bboxes(i, :);
        % Create a Kalman filter object.
        kalmanFilter = configureKalmanFilter('ConstantVelocity', ...
            centroid, [200, 50], [100, 25], 100);
        % Create a new track.
        newTrack = struct(...
            'id', nextId, ...
            'bbox', bbox, ...
            'kalmanFilter', kalmanFilter, ...
            'age', 1, ...
            'totalVisibleCount', 1, ...
            'consecutiveInvisibleCount', 0);
        % Add it to the array of tracks.
        tracks(end + 1) = newTrack;
        % Increment the next id.
        nextId = nextId + 1;
    end
end
%% Display Tracking Results
% The |displayTrackingResults| function draws a bounding box and label ID
% for each track on the video frame and the foreground mask. It then
% displays the frame and the mask in their respective video players.
function displayTrackingResults()
    % Convert the frame and the mask to uint8 RGB.
    frame = im2uint8(frame);
    mask = uint8(repmat(mask, [1, 1, 3])) .* 255;
    minVisibleCount = 8;
    if ~isempty(tracks)
        % Noisy detections tend to result in short-lived tracks.
        % Only display tracks that have been visible for more than
        % a minimum number of frames.
        reliableTrackInds = ...
            [tracks(:).totalVisibleCount] > minVisibleCount;
        reliableTracks = tracks(reliableTrackInds);
        % Display the objects. If an object has not been detected
        % in this frame, display its predicted bounding box.
        if ~isempty(reliableTracks)
            % Get bounding boxes.
            bboxes = cat(1, reliableTracks.bbox);
            % Get ids.
            ids = int32([reliableTracks(:).id]);
            % Create labels for objects indicating the ones for
            % which we display the predicted rather than the actual
            % location.
            labels = cellstr(int2str(ids'));
            predictedTrackInds = ...
                [reliableTracks(:).consecutiveInvisibleCount] > 0;
            isPredicted = cell(size(labels));
            isPredicted(predictedTrackInds) = {' predicted'};
            labels = strcat(labels, isPredicted);
            % Draw the objects on the frame.
            frame = insertObjectAnnotation(frame, 'rectangle', ...
                bboxes, labels);
            % Draw the objects on the mask.
            mask = insertObjectAnnotation(mask, 'rectangle', ...
                bboxes, labels);
        end
    end
    % Display the mask and the frame.
    obj.maskPlayer.step(mask);
    obj.videoPlayer.step(frame);
end
%% Summary
% This example created a motion-based system for detecting and
% tracking multiple moving objects. Try using a different video to see if
% you are able to detect and track objects. Try modifying the parameters
% for the detection, assignment, and deletion steps.
%
% The tracking in this example was solely based on motion with the
% assumption that all objects move in a straight line with constant speed.
% When the motion of an object significantly deviates from this model, the
% example may produce tracking errors. Notice the mistake in tracking the
% person labeled #12, when he is occluded by the tree.
%
% The likelihood of tracking errors can be reduced by using a more complex
% motion model, such as constant acceleration, or by using multiple Kalman
% filters for every object. Also, you can incorporate other cues for
% associating detections over time, such as size, shape, and color.
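%
% As a minimal sketch of the constant-acceleration variant mentioned above,
% the filter could be configured as follows (the numeric noise parameters
% are illustrative only and must be tuned for your video):
%
%   kalmanFilter = configureKalmanFilter('ConstantAcceleration', ...
%       centroid, [200, 50, 5], [100, 25, 10], 100);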
displayEndOfDemoMessage(mfilename)
end
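To answer the question of saving each detected object into its own folder: a minimal sketch is to crop each track's bounding box out of the current frame with `imcrop` and write it with `imwrite`, once per frame. This assumes it runs inside the per-frame loop (e.g. in `displayTrackingResults`, after `reliableTracks` is computed); `frameCount` is a hypothetical counter you would maintain yourself, and the folder/file names are illustrative.

```matlab
% Assumed context: 'frame' is the current video frame, 'reliableTracks'
% holds the tracks to save, and 'frameCount' is a counter you increment
% once per frame. Folder and file names below are illustrative.
outDir = 'detected_objects';
for k = 1:numel(reliableTracks)
    t = reliableTracks(k);
    % One folder per tracked object, named after its track id.
    objDir = fullfile(outDir, sprintf('object_%d', t.id));
    if ~exist(objDir, 'dir')
        mkdir(objDir);
    end
    % Cut out just the yellow-box region ([x y w h]) from the frame.
    crop = imcrop(frame, t.bbox);
    if ~isempty(crop)
        imwrite(crop, fullfile(objDir, sprintf('frame_%04d.png', frameCount)));
    end
end
```

`imcrop` clips a box that extends past the image border to the visible part, so the empty-crop check only matters when a predicted box falls entirely outside the frame.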