Substructure Matching with Frame Features

In [15], we returned to geometric hashing, but with the idea of using all the geometric information carried by the features while handling their uncertainty carefully so that the algorithm remains robust (see [30] for an analysis of recognition algorithms with uncertain geometric features). In addition to the position of each point, we can use the normal vector n and the two principal directions t1 and t2 of the surface to constitute a local coordinate system, or frame.
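
As an illustration only, here is a minimal sketch of such a frame in Python, assuming that the point position, the normal n, and the principal directions t1 and t2 are already available from the surface extraction and form an orthonormal basis; the names Frame and make_frame are ours, not part of the original method.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Frame:
    origin: np.ndarray    # 3D position of the (extremal) point
    rotation: np.ndarray  # 3x3 matrix whose columns are t1, t2, n

def make_frame(point, n, t1, t2):
    """Assemble a local coordinate system (frame) from a point, its surface
    normal and its two principal directions (assumed orthonormal)."""
    R = np.column_stack([t1, t2, n])
    return Frame(origin=np.asarray(point, dtype=float), rotation=R)
```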

In this context, each medical image is modeled by a set of frames and the matching problem is to find the correspondences between two subsets of frames that are in the same configuration in the two "images," up to a global rigid transformation.

Invariant Representation: Preprocessing Step

To obtain an invariant representation with respect to the global position and orientation of the considered structure, we can express the configuration of all frames relative to one frame (called the basis). For efficiency, this representation is stored in a hash table and, for correctness, we include the uncertainty of each invariant. As only part of the frames are in the same configuration in the two images, the one chosen as the basis may not be present in the other image. The preprocessing step is thus repeated with each frame as the basis.
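
The following sketch illustrates one possible way to build such an invariant representation, assuming the Frame class above; the quantization step q and the binning scheme are illustrative choices and do not reproduce the uncertainty handling of the actual method.

```python
def relative_frame(basis, frame):
    """Rigid transformation expressing `frame` in the coordinate system of `basis`
    (invariant to a global rigid transformation of the whole structure)."""
    R = basis.rotation.T @ frame.rotation
    t = basis.rotation.T @ (frame.origin - basis.origin)
    return R, t

def hash_key(R, t, q=2.0):
    """Coarsely quantize the invariant into a hash-table key."""
    return tuple(np.round(t / q).astype(int)) + tuple(np.round(R.ravel(), 1))

def preprocess(model_frames, q=2.0):
    """Build the hash table, repeating the step with each model frame as the basis."""
    table = {}
    for b, basis in enumerate(model_frames):
        for f, frame in enumerate(model_frames):
            if f != b:
                key = hash_key(*relative_frame(basis, frame), q)
                table.setdefault(key, []).append((b, f))  # (model basis, model frame)
    return table
```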

Recognition Step

Choosing a frame of the second structure (the scene) as the basis, we compute the invariant representation and retrieve, thanks to the hash table, the compatible couples of model frames. If the basis belongs to a common substructure, then a significant number of frames are in the same configuration with respect to it. We then match the model and scene bases (Fig. 6).

This process is repeated with every extremal point as the basis to find its possible matches in the model, and we keep only the matches that score above a given threshold (typically 10% of the number of extremal points).
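
A corresponding sketch of the recognition step, under the same assumptions as the preprocessing sketch above (the 10% ratio is the threshold mentioned in the text; the voting scheme itself is a simplification):

```python
def recognize(table, scene_frames, min_ratio=0.10, q=2.0):
    """Vote for (scene basis, model basis) pairs through the hash table and keep
    only the bases confirmed by enough frames."""
    votes = {}
    for sb, basis in enumerate(scene_frames):
        for sf, frame in enumerate(scene_frames):
            if sf != sb:
                key = hash_key(*relative_frame(basis, frame), q)
                for mb, mf in table.get(key, []):
                    votes[(sb, mb)] = votes.get((sb, mb), 0) + 1
    threshold = min_ratio * len(scene_frames)
    return {pair: n for pair, n in votes.items() if n >= threshold}
```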

Clustering Compatible Matches and Verification

For each individual match, we maintain during the recognition step an estimate of the associated transformation by fusing the transformations between confirming frames. To group matches belonging to the same (rigid) substructure, we run a very simple clustering algorithm on the associated transformations. Indeed, up to measurement errors, frames should undergo a similar transformation within a single substructure. Each cluster is then refined by an iterative closest neighbor technique in which we enforce the symmetry of the matches and verify their validity with a χ² test.
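
A simplified sketch of clustering matches by their associated transformation is given below; the tolerances rot_tol and trans_tol are illustrative placeholders for the explicit uncertainty handling (and the χ² test) of the actual method.

```python
def rotation_distance(R1, R2):
    """Rotation angle of the residual rotation R1^T R2."""
    cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def cluster_transforms(transforms, rot_tol=0.1, trans_tol=5.0):
    """Greedily group rigid transformations (R, t) that agree up to the tolerances."""
    clusters = []  # each cluster is a list of indices into `transforms`
    for i, (R, t) in enumerate(transforms):
        for cluster in clusters:
            R0, t0 = transforms[cluster[0]]
            if (rotation_distance(R, R0) < rot_tol
                    and np.linalg.norm(t - t0) < trans_tol):
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```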

Matching Crest Lines

To reduce the complexity of the algorithm once more, this method exploits the structure of the extremal points: they belong to crest lines. The principle is to consider each model crest line as a different object. Indexing all model lines in the same hash table, we retrieve at recognition time the model lines corresponding to each scene crest line.
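
Building on the recognition sketch above, the per-line retrieval could be expressed as follows; the mappings from frame index to crest-line identifier (line_of_scene_frame, line_of_model_frame) are assumed inputs, not part of the original description.

```python
def line_matches(votes, line_of_scene_frame, line_of_model_frame):
    """Aggregate frame-level votes into (scene line, model line) correspondences."""
    per_line = {}
    for (sb, mb), n in votes.items():
        key = (line_of_scene_frame[sb], line_of_model_frame[mb])
        per_line[key] = per_line.get(key, 0) + n
    return per_line
```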

However, different crest line matches can correspond to different transformations. Thus, we run our clustering algorithm once again on the transformations to find the compatible line matches, and we obtain a single transformation from the model to the scene image.
