Illustration of the multimodal image matching method. (Image from the original paper)
[Research abstract] Although a number of local feature-based methods have been proposed, multimodal matching remains a challenging problem in object recognition, remote sensing, and medical image processing, where image contrast differs significantly across modalities. Local feature-based multimodal matching methods are usually intensity-based, so their matching performance suffers because intensity-based methods are sensitive to contrast variations. To address these problems, we propose a novel Multimodality Robust Line Segment Descriptor (MRLSD) and develop an MRLSD matching method. The proposed method generates MRLSD descriptors from highly equivalent corners and line segments extracted in the two multimodal images, and then performs image matching by measuring the similarity of corresponding descriptors across the two images. Because the proposed corner and line segment extraction is based on local phase and direction information, it is insensitive to contrast variations, which makes the MRLSD descriptor robust to modality changes. The descriptor achieves rotation invariance by selecting circular feature sub-regions and projecting feature vectors onto the radial direction, and scale invariance by adjusting the radius of the circular feature region according to the scale. Experimental results indicate that the proposed method achieves higher precision and repeatability than several state-of-the-art local feature-based multimodal matching methods, demonstrating its robustness on multimodal images.
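To make the matching stage concrete, here is a minimal sketch (not the authors' implementation) of descriptor-based matching as described in the abstract: a simplified descriptor is computed over a circular region around each feature, and descriptors from the two images are matched by nearest-neighbor similarity. The function names (`describe_segment`, `match_descriptors`) and the gradient-orientation histogram are illustrative assumptions; the actual MRLSD descriptor is built from local phase and direction information rather than intensity gradients.

```python
# Illustrative sketch of descriptor-based matching, assuming NumPy only.
# The descriptor below is a hypothetical stand-in for MRLSD: a normalized
# gradient-orientation histogram over a circular region around a feature.
import numpy as np

def describe_segment(patch: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Build a simple orientation-histogram descriptor over the circular
    region inscribed in `patch`. MRLSD itself uses local phase/direction
    information instead of raw intensity gradients, which is what makes
    it insensitive to contrast variations; this is only an analogy."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    # Keep only pixels inside the inscribed circle (circular feature region).
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    r = min(h, w) / 2.0
    inside = (yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2 <= r ** 2
    hist, _ = np.histogram(ang[inside], bins=n_bins, range=(0, 2 * np.pi),
                           weights=mag[inside])
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def match_descriptors(desc_a, desc_b, ratio: float = 0.8):
    """Nearest-neighbor matching with a ratio test: descriptor i in image A
    matches descriptor j in image B if j is its closest neighbor and is
    sufficiently closer than the second-closest candidate."""
    desc_a, desc_b = np.asarray(desc_a), np.asarray(desc_b)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```

The ratio test keeps only matches whose nearest neighbor is clearly better than the second-best candidate, suppressing ambiguous correspondences; the abstract only states that matching measures the similarity of corresponding descriptors, so the exact matching criterion here is an assumption.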
This study was published in Neurocomputing, 2016, 177:290-303, under the title "Multimodal image matching based on multimodality robust line segment descriptor."