A model-based method for indoor mobile robot
localization using monocular vision and straight-line correspondences
Omar Ait Aider, Philippe Hoppenot and Etienne Colle
Laboratoire Systèmes Complexes -
University of Evry, France, oaider|hoppenot|ecolle@iup.univ-evry.fr
Abstract
A model-based method for indoor mobile robot localization is presented herein;
this method relies on monocular vision and uses straight-line correspondences.
A classical four-step approach has been adopted (i.e. image acquisition, image
feature extraction, image and model feature matching, and camera pose
computation). These four steps will be discussed, with special focus placed on the
critical matching problem. An efficient and simple method for searching image
and model feature correspondences, which has been designed for indoor mobile
robot self-location, will be highlighted: this is a three-stage method based on
the interpretation tree search approach. During the first stage, the
correspondence space is reduced by splitting the navigable space into
view-invariant regions. During the second stage, exploiting the specificity of
the mobile robotics frame of reference, the global interpretation tree is
divided into two sub-trees; two low-order geometric constraints are then
defined and applied directly to the 2D-3D correspondences in order to improve
pruning and search efficiency.
During the last stage, the pose is calculated for each matching hypothesis and
the best one is selected according to a defined error function. Test results
illustrate the performance of this approach.
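The matching strategy summarized above (interpretation-tree search pruned by
pairwise geometric constraints, followed by pose computation and selection of
the hypothesis with the smallest error) can be illustrated with a minimal
sketch. This is not the authors' implementation: lines are reduced to 2D
orientation angles, the pairwise constraint compares relative angles between
line pairs, and a single-rotation heading estimate stands in for the full
camera pose. All function names and the tolerance value are hypothetical.

```python
import math

def angle_diff(a, b):
    # Smallest difference between two undirected line orientations (mod pi).
    d = abs(a - b) % math.pi
    return min(d, math.pi - d)

def consistent(pairing, new_pair, img, mdl, tol=0.1):
    # Low-order pairwise constraint (hypothetical): the angle between two
    # image lines must match the angle between their candidate model lines.
    i_new, m_new = new_pair
    for i, m in pairing:
        if angle_diff(angle_diff(img[i_new], img[i]),
                      angle_diff(mdl[m_new], mdl[m])) > tol:
            return False
    return True

def search(img, mdl, tol=0.1):
    """Depth-first interpretation-tree search with constraint pruning."""
    hypotheses = []
    def extend(idx, pairing, used):
        if idx == len(img):
            hypotheses.append(list(pairing))  # full interpretation reached
            return
        for m in range(len(mdl)):
            if m in used or not consistent(pairing, (idx, m), img, mdl, tol):
                continue  # prune this branch of the tree
            pairing.append((idx, m)); used.add(m)
            extend(idx + 1, pairing, used)
            pairing.pop(); used.remove(m)
    extend(0, [], set())
    return hypotheses

def estimate_heading(pairing, img, mdl):
    # Toy 1-DOF "pose": heading that best rotates model lines onto image
    # lines (doubled-angle trick handles the mod-pi ambiguity of lines).
    s = sum(math.sin(2 * (img[i] - mdl[m])) for i, m in pairing)
    c = sum(math.cos(2 * (img[i] - mdl[m])) for i, m in pairing)
    return 0.5 * math.atan2(s, c)

def best_hypothesis(img, mdl, hyps):
    # Last stage: compute the pose for each surviving hypothesis and keep
    # the one minimizing the alignment error function.
    best, best_err, best_th = None, float("inf"), 0.0
    for h in hyps:
        th = estimate_heading(h, img, mdl)
        err = sum(angle_diff(img[i], mdl[m] + th) for i, m in h)
        if err < best_err:
            best, best_err, best_th = h, err, th
    return best, best_th
```

With three model lines at orientations [0.0, 0.5, 1.2] and image lines
obtained by rotating them by a heading of 0.3, the pairwise constraint prunes
all incorrect branches, and the error function selects the identity pairing
with the recovered heading.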