I use these two functions as the base of my tracking algorithm.
// 1. detect the features
cv::goodFeaturesToTrack(gray_prev, // the image
    features,    // the output detected features
    max_count,   // the maximum number of features
    qlevel,      // quality level
    minDist);    // min distance between two features

// 2. track features
cv::calcOpticalFlowPyrLK(
    gray_prev, gray, // 2 consecutive images
    points_prev,     // input point positions in the first image
    points_cur,      // output point positions in the 2nd image
    status,          // tracking success
    err);            // tracking error
cv::calcOpticalFlowPyrLK takes a vector of points from the previous image as input and returns the corresponding points in the next image. Suppose I have an arbitrary pixel (x, y) in the previous image; how can I calculate the position of this pixel in the next image using the OpenCV optical flow function?
As you write, cv::goodFeaturesToTrack takes an image as input and produces a vector of points which it deems "good to track". These are chosen for their ability to stand out from their surroundings and are derived from Harris corners in the image. A tracker would normally be initialised by passing the first image to goodFeaturesToTrack and obtaining a set of features to track. These features can then be passed to cv::calcOpticalFlowPyrLK as the previous points, along with the next image in the sequence; it will produce the next points as output, and these in turn become the input points in the next iteration.
If you want to track a different set of pixels (rather than features generated by cv::goodFeaturesToTrack or a similar function), simply provide these to cv::calcOpticalFlowPyrLK along with the next image (see the sketch after the example below).
Very simply, in code:
// Obtain first image and set up two feature vectors
cv::Mat image_prev, image_next;
std::vector<cv::Point2f> features_prev, features_next;
std::vector<uchar> status;        // tracking success for each feature
std::vector<float> err;           // tracking error for each feature

const int max_count = 500;        // example value: maximum number of features
const double qlevel = 0.01;       // example value: quality level
const double minDist = 10.0;      // example value: min distance between two features

image_next = getImage();          // assumed to return a single-channel (grayscale) frame

// Obtain initial set of features
cv::goodFeaturesToTrack(image_next, // the image
    features_next,                  // the output detected features
    max_count,                      // the maximum number of features
    qlevel,                         // quality level
    minDist);                       // min distance between two features

// Tracker is initialised and initial features are stored in features_next
// Now iterate through the rest of the images
for(;;)
{
    image_prev = image_next.clone();
    features_prev = features_next;
    image_next = getImage();        // Get next image

    // Find the position of the features in the new image
    cv::calcOpticalFlowPyrLK(
        image_prev, image_next,     // 2 consecutive images
        features_prev,              // input point positions in the first image
        features_next,              // output point positions in the second image
        status,                     // tracking success
        err);                       // tracking error

    if ( stopTracking() ) break;
}
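If you instead want to follow one arbitrary pixel (x, y), as asked above, the same call works with a one-element vector of points. Here is a minimal sketch (not part of the original code); it assumes the two frames are consecutive grayscale images and wraps the call in a hypothetical trackPixel helper:

#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: given two consecutive grayscale frames and a pixel
// (x, y) in the previous frame, estimate its position in the next frame.
// Returns false if the optical flow for that point could not be found.
bool trackPixel(const cv::Mat& image_prev, const cv::Mat& image_next,
                float x, float y, cv::Point2f& new_position)
{
    std::vector<cv::Point2f> points_prev(1, cv::Point2f(x, y));
    std::vector<cv::Point2f> points_next;
    std::vector<uchar> status;
    std::vector<float> err;

    cv::calcOpticalFlowPyrLK(image_prev, image_next, // 2 consecutive images
                             points_prev,            // the single input point
                             points_next,            // its position in the next image
                             status,                 // tracking success flag
                             err);                   // tracking error

    if (status.empty() || status[0] == 0)
        return false;                                // the point was lost

    new_position = points_next[0];
    return true;
}

The same pattern scales to any set of pixels you choose yourself: put them all in points_prev and check the corresponding entry of status for each output point before using it.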