I have found some basic working examples on stitching via OpenCV for panoramic images. I have also found some useful documentation in the API docs, but I can't figure out how to speed up the processing by providing additional information.
In my case, I generate a set of images in a 20x20 grid of individual frames, for a total of 400 images to be stitched into a single large one. This takes an enormous amount of time on a modern PC, so it would likely take hours on a developer board.
Is there any way to give the OpenCV instance information about the images, such as the relative positioning of the images on the grid, which I know in advance? The only API call I see so far just adds all the images indiscriminately to a queue via vImg.push_back().
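For reference, the basic pattern from the linked examples looks roughly like this (a minimal sketch against the 2.4-era Stitcher API; the file names and the loading loop are placeholders):

```cpp
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>  // <opencv2/stitching.hpp> on OpenCV 3.x

int main()
{
    std::vector<cv::Mat> vImg;
    cv::Mat rImg;

    // Every frame is added indiscriminately -- the stitcher gets no hint
    // about how the images are laid out relative to each other.
    for (int i = 0; i < 400; ++i)
    {
        cv::Mat img = cv::imread(cv::format("frame_%03d.png", i)); // placeholder names
        if (!img.empty())
            vImg.push_back(img);
    }

    cv::Stitcher stitcher = cv::Stitcher::createDefault(/*try_use_gpu=*/false);
    cv::Stitcher::Status status = stitcher.stitch(vImg, rImg);

    if (status == cv::Stitcher::OK)
        cv::imwrite("panorama.png", rImg);
    return 0;
}
```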
References
<http://docs.opencv.org/modules/stitching/doc/stitching.html>
<http://feelmare.blogspot.ca/2013/11/opencv-stitching-example-stitcher-class.html>
<http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/>
I did some work with the stitching pipeline and, though I do not consider myself an expert in the field, I did get better performance (and better results as well) by adjusting each step of the pipeline separately. As you can see in the pipeline diagram from the documentation, the Stitcher class is essentially a wrapper around this pipeline.
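One way to see that split in code is that the high-level class exposes the two halves of the pipeline separately: estimateTransform() runs the registration part (feature finding, matching, camera estimation) on downscaled copies, and composePanorama() does the expensive warping and blending. A rough sketch against the 2.4-era API (error handling mostly omitted):

```cpp
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>  // <opencv2/stitching.hpp> on OpenCV 3.x

void stitchInTwoSteps(const std::vector<cv::Mat> &images, cv::Mat &pano)
{
    cv::Stitcher stitcher = cv::Stitcher::createDefault(false);

    // Registration: feature finding, pairwise matching, camera estimation,
    // bundle adjustment and wave correction, all on downscaled copies.
    if (stitcher.estimateTransform(images) != cv::Stitcher::OK)
        return;

    // Compositing: warping, exposure compensation, seam finding and blending
    // at (close to) full resolution -- usually the most expensive part.
    stitcher.composePanorama(pano);
}
```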
Some interesting parts you can adjust are the resizing steps (there comes a point where more resolution just means more computation time and less accurate features), the matching process, and (though this is just a guess) providing good camera parameters instead of having them estimated. That last one means calibrating the camera before doing the stitching, but it is not really hard. Here is a reference: OpenCV Camera Calibration and 3D Reconstruction.
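As a hedged illustration of where those knobs live on the high-level Stitcher (names from the 2.4-era API; the row-major 20x20 ordering and the concrete values are assumptions, not recommendations), something along these lines lowers the registration resolution and, for your case, restricts matching to frames that are neighbours in the known grid instead of all ~80,000 unordered pairs:

```cpp
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>  // <opencv2/stitching.hpp> on OpenCV 3.x

// Build an NxN mask that only allows matching between grid neighbours.
// Assumes the images are ordered row-major in a rows x cols grid.
static cv::Mat gridMatchingMask(int rows, int cols)
{
    const int n = rows * cols;
    cv::Mat mask(n, n, CV_8U, cv::Scalar(0));
    for (int r = 0; r < rows; ++r)
    {
        for (int c = 0; c < cols; ++c)
        {
            int i = r * cols + c;
            if (c + 1 < cols) mask.at<uchar>(i, i + 1) = 1;    // right neighbour
            if (r + 1 < rows) mask.at<uchar>(i, i + cols) = 1; // bottom neighbour
        }
    }
    return mask;
}

int main()
{
    std::vector<cv::Mat> vImg;  // fill with the 400 frames in row-major order
    cv::Mat pano;

    cv::Stitcher stitcher = cv::Stitcher::createDefault(false);

    // Registration and seam estimation run on downscaled copies; lower
    // resolutions (in megapixels) mean fewer features and faster matching.
    stitcher.setRegistrationResol(0.3);
    stitcher.setSeamEstimationResol(0.1);
    stitcher.setCompositingResol(cv::Stitcher::ORIG_RESOL);

    // Require higher confidence before two images are considered connected.
    stitcher.setPanoConfidenceThresh(1.0);

    // Only attempt to match frames that are adjacent in the 20x20 grid.
    // Note: OpenCV 3.x expects a cv::UMat here (use mask.getUMat(cv::ACCESS_READ)).
    stitcher.setMatchingMask(gridMatchingMask(20, 20));

    cv::Stitcher::Status status = stitcher.stitch(vImg, pano);
    if (status == cv::Stitcher::OK)
        cv::imwrite("pano.png", pano);
    return 0;
}
```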
Again: I am not an expert; this is just based on my experience as an intern doing some experiments with the library!