SfM consists of estimating the 3D structure of a scene from 2D image sequences. In a first step, correspondences between images must be found using features such as corner points, which are tracked from one image to the next. These trajectories are then used to simultaneously refine the positions of the 3D world points and the motion of the camera.
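As a purely illustrative sketch of this correspondence step, the snippet below detects corner points with OpenCV and tracks them into the next frame with pyramidal Lucas–Kanade optical flow. The frame file names are placeholders.

```cpp
// Minimal sketch of the correspondence step: detect corners in one frame and
// track them into the next with pyramidal Lucas-Kanade optical flow.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Placeholder file names; replace with two consecutive frames of a sequence.
    cv::Mat prev = cv::imread("frame_000.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame_001.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || next.empty()) {
        std::cerr << "could not load the two frames" << std::endl;
        return 1;
    }

    // Detect corner points in the first frame.
    std::vector<cv::Point2f> prevPts, nextPts;
    cv::goodFeaturesToTrack(prev, prevPts, 500, 0.01, 10);

    // Track them into the second frame; `status` flags the points found again.
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, next, prevPts, nextPts, status, err);

    // Keep only the successfully tracked correspondences; these trajectories
    // feed the later pose and structure estimation.
    std::vector<cv::Point2f> p1, p2;
    for (size_t i = 0; i < status.size(); ++i) {
        if (status[i]) {
            p1.push_back(prevPts[i]);
            p2.push_back(nextPts[i]);
        }
    }
    std::cout << p1.size() << " correspondences tracked" << std::endl;
    return 0;
}
```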
A kernel is an association between three elements:
The reprojection error was defined by @hartley2003multiple. It is a geometric error corresponding to the image distance between a projected point and a measured one. It is used to quantify how closely an estimate of a 3D point $\hat{\mathbf{x}}$ recreates the point's true projection $\mathbf{x}$, and is measured as the image distance $d(\mathbf{x}, \hat{\mathbf{x}})$ between the two points.
The function $d$ can vary according to the application. For instance, when estimating a homography between two views, the problem consists of finding, for a set of measured correspondences $\mathbf{x}_i \leftrightarrow \mathbf{x}'_i$, the homography $\hat{H}$ and the pairs of perfectly matched points $\hat{\mathbf{x}}_i$ and $\hat{\mathbf{x}}'_i$ (i.e. satisfying $\hat{\mathbf{x}}'_i = \hat{H}\hat{\mathbf{x}}_i$) that minimize the total error function:

$$\sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}'_i, \hat{\mathbf{x}}'_i)^2$$
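For intuition, the following sketch evaluates the reprojection error of a single point with OpenCV: an estimated 3D point is projected using assumed camera parameters and compared against a measured image point. All numeric values are invented.

```cpp
// Sketch: reprojection error of one triangulated point, computed as the
// Euclidean image distance between its predicted and measured projections.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical intrinsics and camera pose (rvec is a Rodrigues vector).
    cv::Matx33d K(800, 0, 320,
                  0, 800, 240,
                  0,   0,   1);
    cv::Vec3d rvec(0.0, 0.02, 0.0), tvec(0.1, 0.0, 0.0);

    // Estimated 3D point and the measured image point x.
    std::vector<cv::Point3d> Xhat = {{0.2, -0.1, 5.0}};
    cv::Point2d x_measured(384.8, 224.5);

    // Predicted projection \hat{x} of the 3D estimate.
    std::vector<cv::Point2d> xhat;
    cv::projectPoints(Xhat, rvec, tvec, K, cv::noArray(), xhat);

    // Reprojection error d(x, \hat{x}).
    cv::Point2d diff = x_measured - xhat[0];
    double error = std::hypot(diff.x, diff.y);
    std::cout << "reprojection error: " << error << " px" << std::endl;
    return 0;
}
```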
This algorithm uses only five point correspondences between two calibrated views to estimate the relative camera pose. It yields up to ten real solutions (candidates), which must be filtered: only the solutions for which the triangulated 3D points lie in front of both cameras are kept. This test is called the cheirality check, cheirality being the property of a 3D point that specifies whether it lies in front of or behind a given camera.
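A minimal sketch of this step using OpenCV's calib3d API (shown only for illustration, not the SfM module's own libmv-based implementation) follows: `cv::findEssentialMat` wraps a five-point solver inside RANSAC, and `cv::recoverPose` applies the cheirality check to select the valid $(R, t)$ candidate. The synthetic scene and intrinsics are made up.

```cpp
// Sketch: relative pose from point correspondences via the essential matrix,
// with the cheirality check performed inside cv::recoverPose.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical pinhole intrinsics.
    cv::Matx33d K(800, 0, 320,
                  0, 800, 240,
                  0,   0,   1);

    // Synthetic 3D points in front of the first camera.
    std::vector<cv::Point3d> X = {
        {0.1, 0.2, 4.0}, {-0.3, 0.1, 5.0}, {0.4, -0.2, 6.0}, {-0.1, -0.3, 4.5},
        {0.2, 0.3, 5.5}, {0.0, 0.0, 5.0}, {0.5, 0.1, 4.2}, {-0.4, 0.2, 6.3}
    };

    // Ground-truth relative motion: small rotation about Y plus a sideways translation.
    cv::Vec3d rvec(0, 0.05, 0), tvec(0.5, 0, 0);

    // Project the points into both views.
    std::vector<cv::Point2d> x1, x2;
    cv::projectPoints(X, cv::Vec3d(0, 0, 0), cv::Vec3d(0, 0, 0), K, cv::noArray(), x1);
    cv::projectPoints(X, rvec, tvec, K, cv::noArray(), x2);

    // Estimate the essential matrix with a five-point solver wrapped in RANSAC.
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(x1, x2, K, cv::RANSAC, 0.999, 1.0, mask);

    // recoverPose decomposes E into the candidate poses and keeps the one whose
    // triangulated points lie in front of both cameras (the cheirality check).
    cv::Mat R, t;
    int inliers = cv::recoverPose(E, x1, x2, K, R, t, mask);

    std::cout << "inliers passing cheirality: " << inliers << "\n"
              << "R = " << R << "\nt = " << t << std::endl;
    return 0;
}
```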
There are two basic estimators for the essential matrix $\mathbf{E}$:
Fortunately, OpenCV provides built-in support for Eigen3, the linear algebra library on which libmv is built, so it is not too difficult to achieve interoperability between them.
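For example, OpenCV ships conversion helpers in `opencv2/core/eigen.hpp`; the sketch below moves a matrix from `cv::Mat` to an `Eigen::Matrix3d` and back.

```cpp
// Sketch of cv::Mat <-> Eigen interoperability using OpenCV's eigen.hpp helpers.
#include <Eigen/Core>
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp>  // cv::cv2eigen / cv::eigen2cv
#include <iostream>

int main()
{
    // A 3x3 matrix on the OpenCV side.
    cv::Mat R_cv = cv::Mat::eye(3, 3, CV_64F);

    // Convert to an Eigen matrix (e.g. to hand it to libmv-style code).
    Eigen::Matrix3d R_eig;
    cv::cv2eigen(R_cv, R_eig);

    // ... operate with Eigen ...
    R_eig *= 2.0;

    // Convert back to cv::Mat.
    cv::Mat R_back;
    cv::eigen2cv(R_eig, R_back);

    std::cout << R_back << std::endl;
    return 0;
}
```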
Bundle adjustment is an optimization procedure that refines initial estimates of the camera and structure parameters to find the values that most accurately predict the locations of the observed points in the set of available images. The problem is stated as follows:
Assuming that $n$ 3D points $p_i$ are observed in $m$ images $I_j$, let $\mathbf{x}_{ij}$ denote the measured projection of point $i$ in image $j$, $\mathbf{a}_j$ the parameters of camera $j$, and $\mathbf{b}_i$ the parameters of point $p_i$. Bundle adjustment minimizes the total reprojection error with respect to all 3D point and camera parameters:

$$\min_{\mathbf{a}_j,\,\mathbf{b}_i} \sum_{i=1}^{n} \sum_{j=1}^{m} v_{ij}\, d\big(\mathbf{Q}(\mathbf{a}_j, \mathbf{b}_i),\, \mathbf{x}_{ij}\big)^2$$

where $\mathbf{Q}(\mathbf{a}_j, \mathbf{b}_i)$ is the predicted projection of $p_i$ on $I_j$, $v_{ij}$ equals 1 if point $i$ is visible in image $j$ and 0 otherwise, and $d(\mathbf{x}, \mathbf{y})$ denotes the Euclidean distance between the image points $\mathbf{x}$ and $\mathbf{y}$.
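To make the cost structure concrete, here is a minimal bundle adjustment sketch written with the Ceres Solver (a solver choice not dictated by the text, and not the libmv implementation). The camera parameterization, angle-axis rotation plus translation and focal length, follows Ceres' own bundle adjustment example, and all numeric values are invented.

```cpp
// Minimal bundle adjustment sketch: each residual is the difference between
// the predicted projection Q(a_j, b_i) and the observed image point x_ij.
#include <ceres/ceres.h>
#include <ceres/rotation.h>
#include <iostream>

struct ReprojectionError {
  ReprojectionError(double obs_x, double obs_y) : obs_x(obs_x), obs_y(obs_y) {}

  template <typename T>
  bool operator()(const T* const camera, const T* const point, T* residuals) const {
    // camera[0..2]: angle-axis rotation, camera[3..5]: translation, camera[6]: focal.
    T p[3];
    ceres::AngleAxisRotatePoint(camera, point, p);
    p[0] += camera[3];
    p[1] += camera[4];
    p[2] += camera[5];
    const T xp = p[0] / p[2];
    const T yp = p[1] / p[2];
    residuals[0] = camera[6] * xp - obs_x;
    residuals[1] = camera[6] * yp - obs_y;
    return true;
  }

  double obs_x, obs_y;
};

int main() {
  // One camera (7 parameters) and two 3D points; the observations are roughly
  // consistent with the initial values, all of them made up.
  double camera[7] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 800.0};
  double points[2][3] = {{0.1, 0.0, 4.0}, {-0.2, 0.1, 5.0}};
  double observations[2][2] = {{20.5, 0.3}, {-31.8, 16.2}};

  ceres::Problem problem;
  for (int i = 0; i < 2; ++i) {
    ceres::CostFunction* cost =
        new ceres::AutoDiffCostFunction<ReprojectionError, 2, 7, 3>(
            new ReprojectionError(observations[i][0], observations[i][1]));
    problem.AddResidualBlock(cost, nullptr, camera, points[i]);
  }

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_SCHUR;  // exploits the BA structure
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  std::cout << summary.BriefReport() << std::endl;
  return 0;
}
```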
The Levenberg–Marquardt algorithm (LMA) solves non-linear least squares (NLLS) problems. It interpolates between the Gauss–Newton algorithm and the method of gradient descent. The LMA is more robust than Gauss–Newton: in many cases it finds a (local, not global) minimum even when started very far from it. However, it tends to be slower than Gauss–Newton, depending on the starting parameters. The LMA can also be viewed as Gauss–Newton using a trust-region approach.
It is applied to solve curve-fitting problems such as: given a set of $m$ empirical data pairs of independent and dependent variables $(x_i, y_i)$, optimize the parameters $\boldsymbol\beta$ of the model curve $f(x, \boldsymbol\beta)$ so that the sum of the squares of the deviations

$$S(\boldsymbol\beta) = \sum_{i=1}^{m} \left[y_i - f(x_i, \boldsymbol\beta)\right]^2$$

becomes minimal.
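As a worked example, the following self-contained sketch runs a basic Levenberg–Marquardt loop, without any library, to fit the hypothetical model $f(x, \boldsymbol\beta) = a\,e^{b x}$ to synthetic data by solving the damped normal equations at each iteration.

```cpp
// Small Levenberg-Marquardt curve fit in plain C++: fit y = a * exp(b * x)
// to synthetic data. A sketch of the method, not a production solver.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  // Synthetic data generated from a = 2.5, b = 0.8 with small perturbations.
  std::vector<double> xs = {0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4};
  std::vector<double> ys = {2.48, 2.95, 3.41, 4.05, 4.78, 5.52, 6.55, 7.68};

  double a = 1.0, b = 0.1;   // initial guess, deliberately far from the answer
  double lambda = 1e-3;      // LM damping factor

  auto cost = [&](double pa, double pb) {
    double s = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
      double r = ys[i] - pa * std::exp(pb * xs[i]);
      s += r * r;
    }
    return s;
  };

  for (int iter = 0; iter < 100; ++iter) {
    // Build J^T J and J^T r for the 2-parameter model.
    double JTJ[2][2] = {{0, 0}, {0, 0}}, JTr[2] = {0, 0};
    for (std::size_t i = 0; i < xs.size(); ++i) {
      double e = std::exp(b * xs[i]);
      double r = ys[i] - a * e;             // residual
      double Ja = e, Jb = a * xs[i] * e;    // d f / d a, d f / d b
      JTJ[0][0] += Ja * Ja; JTJ[0][1] += Ja * Jb;
      JTJ[1][0] += Jb * Ja; JTJ[1][1] += Jb * Jb;
      JTr[0] += Ja * r;     JTr[1] += Jb * r;
    }
    // Damped normal equations: (J^T J + lambda * diag(J^T J)) * delta = J^T r.
    double A00 = JTJ[0][0] * (1 + lambda), A11 = JTJ[1][1] * (1 + lambda);
    double A01 = JTJ[0][1], A10 = JTJ[1][0];
    double det = A00 * A11 - A01 * A10;
    double da = ( A11 * JTr[0] - A01 * JTr[1]) / det;
    double db = (-A10 * JTr[0] + A00 * JTr[1]) / det;

    // Accept the step only if it reduces the cost; otherwise raise the damping,
    // which pushes the step toward gradient descent.
    if (cost(a + da, b + db) < cost(a, b)) {
      a += da; b += db; lambda *= 0.5;
    } else {
      lambda *= 10.0;
    }
  }
  std::printf("fitted a = %.3f, b = %.3f\n", a, b);
  return 0;
}
```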