TY - GEN
T1 - Robust click-point linking
T2 - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
AU - Okada, Kazunori
AU - Huang, Xiaolei
N1 - Copyright:
Copyright 2008 Elsevier B.V., All rights reserved.
PY - 2007
Y1 - 2007
AB - This paper presents robust click-point linking: a novel localized registration framework that allows users to interactively prescribe where the accuracy has to be high. By emphasizing locality and interactivity, our solution is faithful to how registration results are used in practice. Given a user-specified point, click-point linking provides a single point-wise correspondence between a data pair. In order to link visually dissimilar local regions, a correspondence is sought by using only geometrical context, without comparing local appearances. Our solution is formulated as a maximum likelihood estimation (MLE) without estimating a domain transformation explicitly. A spatial likelihood of Gaussian mixture form is designed to capture geometrical configurations between the point-of-interest and a hierarchy of global-to-local 3D landmarks that are detected using machine learning and entropy-based feature detectors. A closed-form formula is derived to specify each Gaussian component by exploiting geometric invariances under a specific group of domain transformations via RANSAC-like random sampling. A mean shift algorithm is applied to robustly and efficiently solve the local MLE problem, replacing the standard consensus step of RANSAC. Two transformation groups, pure translation and scaling/translation, are considered in this paper. We test the feasibility of the proposed approach with 16 pairs of whole-body CT data, demonstrating its effectiveness.
UR - http://www.scopus.com/inward/record.url?scp=34948878075&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34948878075&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2007.383360
DO - 10.1109/CVPR.2007.383360
M3 - Conference contribution
AN - SCOPUS:34948878075
SN - 1424411807
SN - 9781424411801
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
BT - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
Y2 - 17 June 2007 through 22 June 2007
ER -