Masami Iwatsuki, Norimitsu Okiyama
IEEE Transactions on Robotics, Vol. 21, No. 2, April 2005
Summary
Visual servoing, a technique for controlling a
robot manipulator through feedback from visual sensors, is known to be
flexible and robust, and can derive robot motion directly from 2D visual data. A
great advantage is that the positioning accuracy of the robot is less
sensitive to calibration errors (of both the robot and the camera) and to image
measurement errors. Unfortunately,
monocular servoing that uses points as primitives suffers from the so-called camera-retreat problem: a rotation
around the optical axis causes the camera to move backwards. To
overcome this problem, the literature proposes using straight lines as
primitives, or hybrid approaches that decouple the translational and rotational
components. The authors instead propose a faster and simpler solution:
introducing cylindrical coordinates into the formulation of visual servoing and
arbitrarily shifting the position of the origin (since forcing the origin to coincide with the
optical axis makes the coordinate system workable only in the pure-rotation case).
The approach transforms the Cartesian coordinate system into a
cylindrical coordinate system, with x and y parallel to the image plane
and z parallel to the optical axis. In the Cartesian approach, a 3D point Pi
is projected onto the image plane as pi = [x y]T, and
the image-plane velocity of the feature point pi is ṗi = Ji r,
where Ji is the image Jacobian and r = [vx vy vz ωx ωy ωz]T is the velocity screw. A
control law can then be defined by taking the error e as the difference
between the current and the desired feature points and
driving it to zero through r = -λJ+e(x,y),
where λ is a constant gain and J+
is the pseudoinverse of the stacked Jacobian J.
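As a concrete sketch, the Cartesian point-based law above can be written as follows. This is a minimal illustration assuming the standard 2x6 interaction matrix for a normalized image point at depth Z; the function names and the sample feature values are ours, not the paper's:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 image Jacobian (interaction matrix) for a point
    feature p = (x, y) in normalized image coordinates at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1 + x**2), y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2,    -x * y,      -x],
    ])

def ibvs_control(points, desired, depths, lam=0.5):
    """Velocity screw r = -lambda * J^+ * e for a set of point features.
    J stacks the per-point interaction matrices; e stacks the errors."""
    J = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(J) @ e

# Example: four point features with a small error in x (values are made up)
pts = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
des = [(0.12, 0.1), (-0.08, 0.1), (-0.08, -0.1), (0.12, -0.1)]
r = ibvs_control(pts, des, depths=[1.0] * 4)
print(r)  # 6-vector [vx, vy, vz, wx, wy, wz]
```

Stacking the per-feature Jacobians and using the pseudoinverse is the usual way to handle more than three features, where the system is overdetermined.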
Method
In
the cylindrical coordinate system, (ξ,η) are the origin-shift parameters, and x' and y'
are the coordinates of the shifted image feature pi
described previously. In the new cylindrical system, p(ρ,φ) and its velocity are computed by applying a rotation matrix
to the image-plane velocity. As in the previous computation, the Jacobian matrix
and the velocity screw are introduced in the same fashion, taking into
account the rotation matrix that was not needed before.
Regarding the control law, the error vector ei can be
computed as before (this time using the radius ρ and the argument φ as coordinates), and in a
similar fashion we obtain ȓ = -λUJ+e(ρ,φ), where U is the orthogonal matrix.
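The origin shift and the change to (ρ,φ) coordinates can be sketched as below. The function names, and the placement of the 1/ρ scaling in the angular row of the rotation matrix, are our assumptions about a standard cylindrical parameterization, not the paper's exact notation:

```python
import numpy as np

def to_cylindrical(x, y, xi, eta):
    """Shift the image-plane origin by (xi, eta), then express the
    feature in cylindrical coordinates (rho, phi)."""
    xp, yp = x - xi, y - eta
    return np.hypot(xp, yp), np.arctan2(yp, xp)

def cylindrical_jacobian(J_cart, rho, phi):
    """Map the Cartesian 2x6 image Jacobian into (rho, phi) rates:
      rho_dot =  cos(phi)*x_dot + sin(phi)*y_dot
      phi_dot = (-sin(phi)*x_dot + cos(phi)*y_dot) / rho
    i.e. a rotation by phi, with the angular row scaled by 1/rho."""
    U = np.array([[np.cos(phi),        np.sin(phi)],
                  [-np.sin(phi) / rho, np.cos(phi) / rho]])
    return U @ J_cart

# Example: a feature at (1, 1) with no origin shift
rho, phi = to_cylindrical(1.0, 1.0, 0.0, 0.0)
print(rho, phi)  # sqrt(2), pi/4
```

Shifting (ξ,η) away from the optical axis is what keeps ρ and φ well defined for features near the image center, which is the point of the paper's shiftable origin.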
It
is proven that the Cartesian approach is a particular case of the cylindrical
representation, obtained when ξ tends to minus infinity
and η tends to 0. The paper also introduces an approach
for choosing the values of the origin-shift parameters. Normalizing
the homogeneous coordinates of the image-plane position of
each feature point into a vector m, we can form the least-squares error E(R) = Σi=1..n ||mgi - R msi||2, where mgi
is the initial image-plane position of feature i and msi is the
desired image-plane position of feature i. R is the rotation matrix given by
the product of V and U, the orthogonal matrices computed by the
singular value decomposition of the correlation matrix M. From the rotation
matrix we can obtain the orientation of the axis of rotation and therefore p0 = [lx/lz
ly/lz] (where li is the component of the rotation
axis along axis i). The coordinates of p0 are the ones
used to set the origin-shift parameters.
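This axis-recovery step can be sketched as a standard orthogonal-Procrustes (Kabsch-style) computation. The array layout, the U/V ordering, and the reflection guard are our assumptions; the paper's own derivation may differ in detail:

```python
import numpy as np

def rotation_axis_shift(m_g, m_s):
    """Least-squares rotation R between initial (m_g) and desired (m_s)
    normalized feature directions (n x 3 arrays), plus the image-plane
    point p0 where the rotation axis pierces the image plane."""
    M = m_s.T @ m_g                        # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt    # R maps m_g onto m_s
    # Rotation axis direction l from the skew-symmetric part of R
    l = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    p0 = l[:2] / l[2]                      # (lx/lz, ly/lz)
    return R, p0
```

The ratio (lx/lz, ly/lz) is where the rotation axis intersects the z = 1 image plane, which is why it is the natural place to shift the cylindrical origin.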
Key Concepts
Machine Vision, Visual Servoing
Key Results
The cylindrical system
with a shiftable origin has been tested and compared with the Cartesian system,
producing the most efficient camera motion among the tested schemes; it works for
translation, rotation, their combinations in 2D and 3D, and general 3D motion
with non-coplanar feature points.