I am developing an Augmented Reality SDK based on OpenCV. I had trouble finding tutorials on the topic: which steps to follow, which algorithms to use, how to write fast and efficient code for real-time performance, etc.
So far I have gathered the following information and useful links.
Download the latest release version.
You can find installation guides here (platforms: Linux, Mac, Windows, Java, Android, iOS).
Online documentation.
For beginners, here is a simple augmented-reality example in OpenCV. It is a good starting point.
For anyone looking to build a well-designed, state-of-the-art SDK, here are some general steps that any marker-tracking augmented-reality application should follow, using OpenCV functions.
Main program: creates all classes, handles initialization, and captures frames from video.
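A minimal sketch of that main loop, assuming a hypothetical `AR_Engine` class and a generic frame source (all names here are illustrative; in a real application the source would be `cv2.VideoCapture`, but synthetic frames are used so the sketch runs without a camera):

```python
import numpy as np

class AR_Engine:
    """Placeholder engine: a real one would detect the marker and estimate pose."""
    def process(self, frame):
        # Real implementation: feature extraction, matching, pose estimation.
        return frame.mean()  # dummy result so the loop is runnable

def run(frame_source, engine, max_frames=100):
    """Main loop: pull frames from the source and hand each one to the engine."""
    results = []
    for i, frame in enumerate(frame_source):
        if i >= max_frames:
            break
        results.append(engine.process(frame))
    return results

# Synthetic stand-in for cv2.VideoCapture: five black 640x480 BGR frames.
frames = (np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(5))
out = run(frames, AR_Engine())
```

Keeping the capture loop separate from the engine makes it easy to swap the frame source (camera, video file, test images) without touching the tracking code.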
AR_Engine class: controls the parts of an augmented-reality application. There should be two main states:
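One way to sketch that state handling (the two states are assumed here to be detection and tracking, which is the common split for marker-based AR; adapt to however your engine defines them):

```python
from enum import Enum

class State(Enum):
    DETECTING = 1  # no marker located yet: search the whole frame
    TRACKING = 2   # marker found: update the pose frame to frame

class AR_Engine:
    """Illustrative engine skeleton holding the current tracking state."""
    def __init__(self):
        self.state = State.DETECTING

    def update(self, marker_found):
        # Switch state based on whether the marker was found in this frame.
        self.state = State.TRACKING if marker_found else State.DETECTING
        return self.state

engine = AR_Engine()
s1 = engine.update(marker_found=True)
s2 = engine.update(marker_found=False)
```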
Also, there should be an algorithm for finding the position and orientation of the camera in every frame. This is achieved by computing the homography between the marker detected in the scene and a 2D image of the marker we have processed offline. This method is explained here (page 18). The main steps for pose estimation are:
Load the camera's intrinsic parameters, previously extracted offline through calibration.
Load the pattern (marker) to track: it is an image of the planar marker we are going to track. It is necessary to detect keypoints and generate descriptors for this pattern so that later we can match them against features from the scene. Algorithms for this task:
For every frame update, run a detection algorithm to extract features from the scene and generate descriptors. Again, we have several options.
Find matches between the pattern and scene descriptors.
Find the homography matrix from those matches. RANSAC can be used to separate inliers from outliers in the set of matches.
Extract the camera pose from the homography.