Rainer Kümmerle, Bastian Steder, Christian Dornhege, Michael Ruhnke, Giorgio Grisetti, Cyrill Stachniss, Alexander Kleiner
Autonomous Robots, Vol. 27, 2009
Summary
The paper
proposes an objective benchmark for evaluating SLAM approaches. SLAM
(simultaneous localization and mapping) or CML (concurrent mapping
localization) is a methodology for learning maps used for mobile robot
applications. The problem is considered to be complex since the robot requires
a consistent map and has to be able to have a system which localize itself in
the map. Different SLAM techniques exist and comparison is often done with
visual comparison, specially when grid based maps are available, but still a
common performance metric must be applied. The comparison of three prominent
mapping techniques is taken into account: scan-matching, SLAM using
Rao-Blackwellized particle filter and a maximum likelihood SLAM approach.
Mapping techniques for mobile robots are classified according to their underlying
estimation technique; Kalman filters, sparse extended information filters and
least-squares error minimization approaches are the most common. Work on
performance metrics falls mainly into three groups: competitions in which robot
systems compete within a defined scenario (for example the well-known RoboCup),
collections of publicly available datasets provided for comparison, and
publications proposing scoring metrics for making proper comparisons.
Dataset-based benchmarking has reached a mature level in computer vision
research, where it is used to validate image annotation, range image
segmentation, stereo vision and correspondence algorithms, and several
well-known websites provide free datasets with ground-truth data. Different
methodologies are also applied for analyzing datasets, for example Monte Carlo
Localization (MCL) for matching 3D scans against a reference map. The authors
argue that comparing the absolute error between two trajectories does not yield
a meaningful assessment in all cases.
The method finally used for benchmarking is the following error metric:
ε(δ) = (1/N) Σ_{i,j} trans(δ_{i,j} ⊖ δ*_{i,j})² + rot(δ_{i,j} ⊖ δ*_{i,j})²,
where N is the number of relative relations, trans and rot separate (and allow
weighting of) the translational and rotational components, δ_{i,j} are the
relative displacements between poses i and j of the estimated trajectory, and
δ*_{i,j} are the corresponding ground-truth displacements. Relative
displacements are used to consider the energy needed to transform the estimate
into the ground truth, instead of just evaluating the difference between two
absolute displacements; this avoids the suboptimal assessment obtained with
absolute errors, where a single early rotational error corrupts all later
absolute poses. The relations (i, j) are selected according to the desired
level of accuracy, so the user can highlight certain properties of an
algorithm.
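To make the metric concrete, here is a minimal Python sketch that evaluates it
for 2D (x, y, θ) poses. The function names (relative_pose, metric_error), the
representation of relations as index pairs, and the decision to return the
translational and rotational terms separately are illustrative assumptions, not
the paper's reference implementation.

```python
# Minimal sketch of the relative-displacement error metric for 2D poses
# (x, y, theta). Illustrative only; not the paper's reference code.
import math

def normalize_angle(angle):
    """Wrap an angle to (-pi, pi]."""
    while angle > math.pi:
        angle -= 2.0 * math.pi
    while angle <= -math.pi:
        angle += 2.0 * math.pi
    return angle

def relative_pose(a, b):
    """Return pose b expressed in the frame of pose a (SE(2) 'ominus')."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = math.cos(a[2]), math.sin(a[2])
    return (c * dx + s * dy,
            -s * dx + c * dy,
            normalize_angle(b[2] - a[2]))

def metric_error(estimated, ground_truth, relations):
    """Average squared translational and rotational error over the
    selected relations (i, j), following the formula above."""
    trans_sq, rot_sq = 0.0, 0.0
    for i, j in relations:
        d_est = relative_pose(estimated[i], estimated[j])    # delta_ij
        d_ref = relative_pose(ground_truth[i], ground_truth[j])  # delta*_ij
        err = relative_pose(d_ref, d_est)                    # delta_ij (-) delta*_ij
        trans_sq += err[0] ** 2 + err[1] ** 2
        rot_sq += err[2] ** 2
    n = len(relations)
    return trans_sq / n, rot_sq / n
```

Summing or weighting the two returned terms reproduces ε(δ); keeping them
separate is what trans and rot refer to in the formula and lets the
translational and rotational errors be reported independently.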
For indoor problems GPS of course cannot be used; the authors therefore suggest
the Symeo system, which works indoors with an accuracy of around 5 to 10 cm.
The initial position is based on an initial guess, and from that point laser
range finders are used; at the beginning, considerable human input is needed to
eliminate wrong hypotheses. For outdoor applications GPS might be an option,
but it turns out to be too noisy and does not work in certain environments, so
satellite images are used instead to obtain ground truth. In this setting, a
Monte Carlo localization framework is used to prevent the system from
introducing inconsistent prior information; the algorithm implementing MCL is
given on page 394 of the paper.
For benchmarking approaches that do not output a trajectory estimate (in SLAM,
the trajectory is normally estimated together with the map), either the
trajectory is recovered from the sensor data or the metric proposed above is
applied to the landmark locations.
Key Concepts
SLAM
Key Results
Experiments were performed on the three methods mentioned above: scan matching
works well in small environments, the accuracy of the Rao-Blackwellized
particle filter depends on the number of samples, and the maximum likelihood
approach suffers in the presence of noise, although it performs slightly better
than the particle filter on mapping problems for which the latter cannot cope
with outdoor conditions. In conclusion, the method appears to be extremely
useful, mainly because, through data visualization, one can understand where,
why and when a given algorithm fails in its estimation.