When the images are close enough, one can use yet another algorithm: the Iterative Closest Point (ICP) [5,41]. The basic principle is the following: for each scene point, we look for the closest point in the model under the current transformation, compute a new rigid transformation from these matches, and iterate the process until convergence.
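The loop above can be sketched as follows. This is a minimal point-to-point ICP in Python, not the authors' implementation: the closest-point search uses a k-d tree, and the rigid transformation is re-estimated from the matches by the standard SVD (Kabsch) least-squares solution.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so the result is a rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(scene, model, max_iters=50, tol=1e-10):
    """Iterate closest-point matching and rigid re-estimation until convergence."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(model)
    prev_err = np.inf
    for _ in range(max_iters):
        moved = scene @ R.T + t                # apply current transformation
        dist, idx = tree.query(moved)          # closest model point per scene point
        err = np.mean(dist ** 2)
        if abs(prev_err - err) < tol:          # converged: error stopped decreasing
            break
        prev_err = err
        # Re-estimate the cumulative transform from the current matches.
        R, t = best_rigid_transform(scene, model[idx])
    return R, t
```

When the initial displacement is smaller than the typical inter-point spacing, the very first set of matches is already correct and the loop converges in a couple of iterations.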
Of course, since we have more geometric information than just the point position, we use a generalization: the Iterative Closest Feature. The idea is to perform the closest-point search in a higher-dimensional space. In our case, this space is made of the extremal point position, the trihedron (n, t1, t2), and the unary invariants k1 and k2. The important point is to define an appropriate metric on this space in order to combine the different units of measurement efficiently. In our algorithm, this is done using the inverse of the covariance matrix of the features. This matrix can be re-estimated after convergence and the whole process iterated. However, we did not observe a critical influence of the covariance matrix values, as long as they approximately respect the variation range of the different components.
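The metric step can be illustrated as follows. This sketch (an assumption about the construction, not the authors' code) treats each extremal point as a generic feature vector stacking position, trihedron, and invariants; using the inverse covariance as the metric amounts to a Mahalanobis distance, which can be implemented by whitening the features with a Cholesky factor of the inverse covariance and then running an ordinary Euclidean closest-point query.

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_features(scene_feats, model_feats, cov):
    """Closest-feature search under the Mahalanobis metric d^2 = (x-y)^T cov^{-1} (x-y).

    Whitening with L, where cov^{-1} = L L^T, makes the Euclidean distance in
    the whitened space equal the Mahalanobis distance in the original space,
    so a standard k-d tree handles the mixed units (mm, unit vectors,
    curvatures) on an equal footing.
    """
    L = np.linalg.cholesky(np.linalg.inv(cov))
    tree = cKDTree(model_feats @ L)
    dist, idx = tree.query(scene_feats @ L)
    return idx, dist
```

With a diagonal covariance this simply rescales each component by its standard deviation, which is why the exact values matter little as long as they reflect the variation range of each component.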