Shuzhi S. Ge, C.C. Huang, L.C. Woon
IEEE Transactions on Industrial
Electronics, VOL.44, NO.6, December 1997
Summary
Vision sensors are extremely important for tracking and mapping solutions in robotics, and ego-motion (camera motion) estimation with omnidirectional cameras appears to be an interesting way of fulfilling this field's requirements. Wide-angle imaging has already been integrated into autonomous navigation systems, and the paper discusses the use of omnidirectional cameras for the recovery of observer motion (ego-motion). The ego-motion problem has principally been solved with a two-step algorithm: motion field estimation (computation of optic flow) and motion field analysis (extraction of camera translation and rotation from the optic flow); unfortunately, the second step is sensitive to noisy estimates of the optic flow. It is shown that a large field of view facilitates the computation of observer motion: a spherical field of view allows both the focus of expansion and the focus of contraction to exist in the image, so the authors use a spherical perspective projection model, which is convenient for fields of view greater than 180 degrees. An imaging system with a single center of projection is desirable, since it ensures the generation of pure perspective images and allows the image velocity vectors to be mapped onto a sphere. There are different methods for wide-angle imaging: rotating imaging systems (good for static scenes), fish-eye lenses (which present problems in obtaining a single center of projection), and catadioptric systems (which incorporate reflecting mirrors and have proved successful in several configurations). Hyperbolic and parabolic catadioptric systems are both able to capture at least a hemisphere of viewing directions about a single point of view; knowing the geometry of these systems and applying the spherical representation with Jacobian transformations, it is possible to map the image velocity vectors onto the sphere.
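To make the single-viewpoint property concrete, here is a minimal sketch (ours, not from the paper; the function name and the focal length f are illustrative) of why a single center of projection ensures pure perspective images: every ray passes through one point, so viewing directions can be resampled into any perspective view without knowing scene depth.

```python
import numpy as np

def sphere_to_perspective(Phat, f=1.0):
    """Resample unit viewing directions Phat (n x 3) into a pure perspective
    image with focal length f, looking along +z. Only directions with
    Phat[:, 2] > 0 (in front of the virtual camera) are valid."""
    Phat = np.asarray(Phat, dtype=float)
    return f * Phat[:, :2] / Phat[:, 2:3]   # (fX/Z, fY/Z) per direction
```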
Model
The motion field is defined as the projection of 3D velocity vectors onto a 2D surface. The rigid motion of a scene point P relative to a camera moving with translation T and rotation Ω is given by P′ = −T − Ω × P, and P̂ = P/||P|| is the projection of the scene point P onto the sphere; the next step is to differentiate the projection function with respect to time and substitute the equation above into it, obtaining U(P̂), the velocity vector on the sphere. The ego-motion problem is the estimation of Ω and T from the measured velocities Uᵢ; for this, three algorithms from the literature are mentioned: Bruss and Horn, Zhuang et al., and Jepson and Heeger (please refer to pages 1001 and 1002 for the algorithms). In order to map motion in the image onto the sphere, we need to transform the image points onto the sphere and use the Jacobian to map the image velocities.
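Carrying out that differentiation explicitly (our working, in our notation, not a formula quoted from the paper): since Ω × P is perpendicular to P̂, substituting P′ = −T − Ω × P into the derivative of P̂ = P/||P|| gives U(P̂) = ((P̂·T)P̂ − T)/||P|| − Ω × P̂. A minimal sketch of this motion field, with illustrative names throughout:

```python
import numpy as np

def sphere_motion_field(P, T, Omega):
    """Motion field on the unit sphere for scene points P (n x 3) seen by a
    camera with translational velocity T and rotational velocity Omega.
    Follows from P' = -T - Omega x P and Phat = P / ||P||."""
    r = np.linalg.norm(P, axis=1, keepdims=True)    # depth ||P||
    Phat = P / r                                    # projection onto the sphere
    trans = ((Phat @ T)[:, None] * Phat - T) / r    # depth-dependent translational part
    return trans - np.cross(Omega, Phat)            # plus rotational part

# For pure forward translation the flow vanishes at the focus of expansion:
T = np.array([0.0, 0.0, 1.0])
P = np.array([[0.0, 0.0, 5.0],   # point along T: focus of expansion, U ~ 0
              [1.0, 0.0, 5.0]])  # off-axis point: nonzero flow
U = sphere_motion_field(P, T, np.zeros(3))
```

Only the translational term depends on depth ||P||; the three cited algorithms differ essentially in how they exploit this separation of the flow into depth-dependent and depth-independent parts.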
Coordinates
θ is the polar angle between the z-axis and the incoming ray, and φ is the azimuth angle; x and y are the rectangular image coordinates with origin at the center of the image. Since the image plane is parallel to the x-y plane, φ is the same for all sensors (φ = arctan(y/x)), while the polar angle θ depends on whether the parabolic omnidirectional system, the hyperbolic omnidirectional camera, or the fish-eye lens is in use (the last does not have a single center of projection and introduces small errors). Finally, U = S J [dx/dt dy/dt]^T, where S is the transformation onto the sphere and J is the Jacobian matrix; each sensor has its own Jacobian matrix.
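A hedged sketch of that mapping (ours, not the paper's code): the parabolic backprojection below assumes a unit mirror parameter, under which the projection is an inverse stereographic map, and a numerical Jacobian stands in for the paper's sensor-specific analytic S and J.

```python
import numpy as np

def backproject_parabolic(x, y):
    """Image point -> unit viewing direction for a parabolic catadioptric
    sensor (illustrative: with unit mirror parameter this is the inverse
    stereographic map; note phi = arctan2(y, x), as in the text)."""
    s = x * x + y * y
    return np.array([2.0 * x, 2.0 * y, s - 1.0]) / (s + 1.0)

def image_velocity_to_sphere(x, y, xdot, ydot,
                             backproject=backproject_parabolic, eps=1e-6):
    """U = S J [dx/dt dy/dt]^T: map an image velocity to a tangent vector on
    the sphere. The numerical 3 x 2 Jacobian of the backprojection combines
    the roles of S and J in a single matrix."""
    J = np.column_stack([
        (backproject(x + eps, y) - backproject(x - eps, y)) / (2 * eps),
        (backproject(x, y + eps) - backproject(x, y - eps)) / (2 * eps),
    ])
    return J @ np.array([xdot, ydot])

U = image_velocity_to_sphere(0.3, 0.4, 0.01, -0.02)  # example image point and flow
```

Swapping in a hyperbolic or fish-eye backprojection changes only the function passed in, mirroring the paper's point that each sensor has its own Jacobian matrix.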
Key Concepts
Robot Vision, Computer Vision, Omnidirectional Cameras, Ego-Motion
Key Results
The camera was tested with the three models from the literature, showing that the non-linear algorithm (Bruss and Horn) is more accurate and more stable than the linear algorithms. With this paper the authors proved that these algorithms, although originally designed for planar perspective cameras, can be adapted to omnidirectional cameras by mapping the optic flow field to a sphere via an appropriate Jacobian matrix.