I'm trying to use the OpenCV stitcher class to stitch multiple frames from a stereo setup, in which neither camera moves. I'm getting poor stitching results when running across multiple frames. I've tried a few different ways, which I'll try to explain here.
Using stitcher.stitch()
Given a stereo pair of views, I ran the following code for some frames (VideoFile is a custom wrapper for the OpenCV VideoCapture object):
VideoFile f1( ... );
VideoFile f2( ... );
std::vector<cv::Mat> currentFrames;
cv::Mat output_frame;
cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
for( int i = 0; i < num_frames; i++ ) {
    currentFrames.push_back(f1.frame());
    currentFrames.push_back(f2.frame());
    stitcher.stitch( currentFrames, output_frame );
    // Write output_frame, put it in a named window, etc...
    f1.next_frame();
    f2.next_frame();
    currentFrames.clear();
}
This gave really quite good results on each individual frame, but because the parameters are re-estimated for every frame, when the frames are put together in a video you can see small jumps in the stitching wherever the estimates differ slightly.
Using estimateTransform() & composePanorama()
To get past the problem with the above method, I decided to try estimating the parameters only on the first frame, then using composePanorama() to stitch all subsequent frames.
bool have_transform = false;
cv::Stitcher::Status status;
for( int i = 0; i < num_frames; i++ ) {
    currentFrames.push_back(f1.frame());
    currentFrames.push_back(f2.frame());
    if( ! have_transform ) {
        status = stitcher.estimateTransform( currentFrames );
        have_transform = true;
    }
    status = stitcher.composePanorama( currentFrames, output_frame );
    // ... as above
}
Sadly there appears to be a bug (documented here) causing the two views to move apart in a very odd way, as in the images below:
Frame 1:
Frame 2:
...
Frame 8:
Clearly this is useless, but I thought it might just be down to the bug, which essentially multiplies the intrinsic parameter matrix by a constant every time composePanorama() is called. So I made a minor patch to stop this happening, but then the stitching results were poor. Patch below (modules/stitching/src/stitcher.cpp), results afterwards:
243 for (size_t i = 0; i < imgs_.size(); ++i)
244 {
245 // Update intrinsics
246 // change following to *=1 to prevent scaling error, but messes up stitching.
247 cameras_[i].focal *= compose_work_aspect;
248 cameras_[i].ppx *= compose_work_aspect;
249 cameras_[i].ppy *= compose_work_aspect;
Results:
Does anyone have a clue how I can fix this problem? Basically I need to work out the transformation once, then use it on the remaining frames (we're talking 30mins of video).
I'm ideally looking for advice on patching the stitcher class, but I'd be willing to try hand-coding a different solution. An earlier attempt, which involved finding SURF points, matching them and computing the homography, gave fairly poor results compared to the stitcher class, so I'd rather use the stitcher if possible.
So in the end, I hacked about with the stitcher.cpp code and got something close to a solution (though not perfect: the stitching seam still moves about quite a lot, so your mileage may vary).
Changes to stitcher.hpp
Added a new function setCameras() at line 136:
void setCameras( std::vector<detail::CameraParams> c ) {
    this->cameras_ = c;
}
Added a new private member variable to keep track of whether this is our first estimation:
bool not_first;
Changes to stitcher.cpp
In estimateTransform() (line ~100):
this->not_first = false;
images.getMatVector(imgs_);
// ...
In composePanorama() (line ~227):
// ...
compose_work_aspect = compose_scale / work_scale_;

// Update warped image scale, but only on the first composition
if( !this->not_first ) {
    warped_image_scale_ *= static_cast<float>(compose_work_aspect);
    this->not_first = true;
}

w = warper_->create((float)warped_image_scale_);
// ...
Code calling the stitcher object:
So basically, we create a stitcher object and get the transform on the first frame, storing the camera matrices outside the stitcher class. The stitcher will then corrupt the intrinsic matrix somewhere along the line, messing up the next frame, so before composing each frame we simply reset the cameras using the ones we extracted from the class.
Be warned: I had to add some error checking in case the stitcher couldn't produce an estimate with the default settings - you may need to iteratively decrease the confidence threshold using setPanoConfidenceThresh(...) before you get a result.
cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
std::vector<cv::detail::CameraParams> cams;
std::vector<cv::Mat> currentFrames;
cv::Stitcher::Status status;
bool have_transform = false;
for( int i = 0; i < num_frames; i++ ) {
    currentFrames.push_back(f1.frame());
    currentFrames.push_back(f2.frame());
    if( ! have_transform ) {
        status = stitcher.estimateTransform( currentFrames );
        have_transform = true;
        cams = stitcher.cameras();
        // some code to check the status of the stitch and handle errors...
    }
    stitcher.setCameras( cams );
    status = stitcher.composePanorama( currentFrames, output_frame );
    // ... Doing stuff with the panorama
    f1.next_frame();
    f2.next_frame();
    currentFrames.clear();
}
Please be aware that this is very much a hack of the OpenCV code, which is going to make updating to a newer version a pain. Unfortunately I was short of time so a nasty hack was all I could get round to!