
Sunday, September 30, 2012

Ego-Motion and Omnidirectional Cameras


J. Gluckman, S.K. Nayar
Proceedings of the Sixth International Conference on Computer Vision (ICCV), 1998
Summary
                  Vision sensors are extremely important for tracking and mapping solutions in robotics, and omnidirectional cameras appear to be an interesting way of fulfilling the requirements of ego-motion (camera motion) estimation. Wide-angle imaging has already been integrated into autonomous navigation systems, and the paper discusses the use of omnidirectional cameras for the recovery of observer motion (ego-motion). The ego-motion problem has been solved principally with a two-step algorithm: motion field estimation (computation of optic flow) and motion field analysis (extraction of camera translation and rotation from the optic flow); unfortunately the second step is sensitive to noisy estimates of optic flow. It has been proved that a large field of view facilitates the computation of observer motion: a spherical field of view allows both the focus of expansion and the focus of contraction to exist in the image, so the authors use a spherical perspective projection model, which is convenient for a field of view greater than 180 degrees. An imaging system with a single center of projection is desirable, since it ensures the generation of pure perspective images and allows the image velocity vectors to be mapped onto a sphere. There are different methods for wide-angle imaging: rotating imaging systems (good for static scenes), fish-eye lenses (which present problems in obtaining a single center of projection) and catadioptric systems (which incorporate reflecting mirrors and in several cases have proved successful). Hyperbolic and parabolic catadioptric systems are both able to capture at least a hemisphere of viewing directions about a single point of view; knowing the geometry of these systems and applying the spherical representation with the use of Jacobian transformations, it is possible to map the image velocity vectors onto the sphere.
·       Model
The motion field is defined as the projection of 3D velocity vectors onto a 2D surface. The rigid motion of a scene point P relative to a moving camera is given by P' = -T - Ω × P, where T is the camera translation and Ω its rotation, and P̂ denotes the projection of the scene point P onto the sphere. The next step is to differentiate the projection function with respect to time and substitute the rigid-motion equation above into it, obtaining U(P̂), the velocity vector on the sphere. The ego-motion problem concerns the estimation of Ω, T and the Ui; three algorithms from the literature are considered for this: Bruss and Horn, Zhuang et al., and Jepson and Heeger (please refer to pages 1001 and 1002 for the algorithms). In order to map motion in the image onto the sphere, we need to transform the image points onto the sphere and use the Jacobian to map the image velocities. The coordinate θ is the polar angle between the z-axis and the incoming ray and φ is the azimuth angle, while x and y are the rectangular coordinates of the image with the origin at the center of the image. Since the image plane is parallel to the x-y plane, φ is the same for all sensors ( φ = arctan(y/x) ), while the polar angle changes depending on whether the parabolic omnidirectional system, the hyperbolic omnidirectional camera or the fish-eye lens (which does not have a single center of projection and introduces small errors) is in use. Finally U = SJ[dx/dt  dy/dt]T, where S is the transformation onto the sphere and J is the Jacobian matrix; each sensor has its own Jacobian matrix.
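The spherical motion field above can be sketched numerically. The following is a minimal illustration, not the paper's code: it projects a scene point onto the unit sphere, applies the rigid-motion relation P' = -T - Ω × P, and differentiates the projection to get the velocity vector U(P̂); the specific point, translation and rotation values are invented for the example.

```python
import numpy as np

def sphere_projection(P):
    """Project a scene point P onto the unit viewing sphere."""
    return P / np.linalg.norm(P)

def motion_field(P, T, Omega):
    """Velocity U of the spherical projection P_hat induced by camera
    translation T and rotation Omega, from P' = -T - Omega x P."""
    P_hat = sphere_projection(P)
    P_dot = -T - np.cross(Omega, P)           # velocity of the scene point
    # differentiating P_hat = P/|P| gives a vector tangent to the sphere:
    U = (P_dot - np.dot(P_hat, P_dot) * P_hat) / np.linalg.norm(P)
    return P_hat, U

# illustrative values: a point ahead of the camera, pure forward translation
P = np.array([1.0, 2.0, 5.0])
T = np.array([0.0, 0.0, 1.0])
Omega = np.zeros(3)
P_hat, U = motion_field(P, T, Omega)
# U is tangent to the sphere, so U · P_hat is (numerically) zero
```

The tangency of U to the sphere is exactly the property the ego-motion algorithms exploit when fitting Ω and T to the flow field.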
Key Concepts
Robot vision, Computer Vision, Omnidirectional Cameras, Ego-Motion
Key Results
The camera has been tested with the three models from the literature, showing that the non-linear algorithm (Bruss and Horn) is more accurate and more stable than the linear algorithms. With this paper the authors proved that these algorithms, although originally designed for planar perspective cameras, can be adapted to omnidirectional cameras by mapping the optic flow field onto a sphere via an appropriate Jacobian matrix.

Saturday, September 29, 2012

Adaptive Neural Network Control of Robot Manipulators in Task Space


Shuzhi S. Ge, C.C. Huang, L.C. Woon
IEEE Transactions on Industrial Electronics, VOL.44, NO.6, December 1997
Summary
                  Flexibility is one of the main issues in a production facility, and much research has been done to enable it in control systems. Computed Torque Control is an intuitive scheme whose objective is to cancel the nonlinear dynamics of the manipulator system, but it requires the exact dynamic model, which means it would not be flexible enough; this is why adaptive control methods have been investigated to overcome the need for a priori knowledge. The application of Neural Networks is interesting: they work well in many systems and, if used with parameterized variables, they can be applied in different environments. The problem is extended to Task Space (Cartesian Space), where the end-effector position and force must be controlled. In order to build the model, the GL (Ge-Lee) matrix and its operator are introduced (pages 746-747).
·       Model
In the field of control engineering, neural networks are used for approximating a given nonlinear function f(y) up to a small error tolerance. The neural network is based on 3 layers: an input layer (n nodes), a hidden layer (l nodes) and an output layer (m nodes). For each hidden node a Gaussian function is defined: ai = exp(-(y-μi)T(y-μi)/σi2), with μi being the center vector and σi2 the variance of the Gaussian. The output is the vector of Gaussian functions coming out of the hidden layer, weighted by W. It has been proven that any continuous function can be uniformly approximated by a linear combination of Gaussians.
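The approximation idea can be sketched as follows. This is an illustrative example, not the paper's network: the centers, widths and the least-squares fit of the weights are assumptions made for the demo, and the target function sin(y) merely stands in for an unknown nonlinearity f(y).

```python
import numpy as np

def rbf_hidden(y, centers, sigmas):
    """Hidden-layer activations a_i = exp(-(y-mu_i)^T (y-mu_i) / sigma_i^2)."""
    d = y - centers                                  # (l, n) offsets to centers
    return np.exp(-np.sum(d * d, axis=1) / sigmas ** 2)

def rbf_output(y, centers, sigmas, W):
    """Network output: hidden Gaussians weighted by W."""
    return W.T @ rbf_hidden(y, centers, sigmas)

# illustrative setup: 10 Gaussians spread over [0, 2*pi], equal widths
centers = np.linspace(0.0, 2.0 * np.pi, 10).reshape(-1, 1)
sigmas = np.full(10, 1.0)

# fit the output weights by least squares to approximate f(y) = sin(y)
ys = np.linspace(0.0, 2.0 * np.pi, 100).reshape(-1, 1)
A = np.stack([rbf_hidden(y, centers, sigmas) for y in ys])   # (100, 10)
W, *_ = np.linalg.lstsq(A, np.sin(ys[:, 0]), rcond=None)

err = np.max(np.abs(A @ W - np.sin(ys[:, 0])))
# a linear combination of Gaussians approximates sin uniformly well
```

In the paper the weights are not fitted offline like this but adapted online by the control law; the sketch only shows why a Gaussian network is a universal approximator in practice.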
In modelling a robot manipulator, the dynamics are described in the following manner: D(q)q''+C(q,q')q'+G(q)=τ, where D is the symmetric positive definite inertia matrix, C is the Coriolis matrix, G is the vector of gravitational forces and τ is the joint torque vector. D(q) and G(q) are functions of q only, therefore they are treated as static neural networks and the equation previously introduced can be adapted to describe them (page 748), while C is described by a dynamic neural network and therefore q' is needed to model it; the parameter z=[qT q'T]T∈R2n is used. A general controller can then be constructed, demonstrating that no Jacobian matrix is required (for the control law please refer to page 749); Kr provides the proportional-derivative type control and ks·sgn(r) compensates for the tracking error. In the closed-loop system, e and e' tend stably to 0 as t goes to infinity; the presence of the sgn(·) function introduces chattering, which needs to be minimized.
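The role of the switching term and its chattering can be illustrated with a small sketch. This is not the paper's control law (which is given at page 749): the filtered error r = e' + Λe, the gains and the saturation width φ below are assumptions introduced only to show why sgn(r) chatters and how a saturation function is commonly used to soften it.

```python
import numpy as np

def control_correction(e, e_dot, Lambda, Kr, ks, use_saturation=False, phi=0.05):
    """PD-plus-switching correction term (illustrative, not the paper's law).

    r combines position and velocity error; -Kr r is the PD-type part and
    -ks*sgn(r) rejects bounded uncertainty at the cost of chattering."""
    r = e_dot + Lambda @ e                     # filtered tracking error
    if use_saturation:
        switch = np.clip(r / phi, -1.0, 1.0)   # smooth sgn inside a boundary layer
    else:
        switch = np.sign(r)                    # hard switching -> chattering
    return -Kr @ r - ks * switch

# illustrative one-joint example
e = np.array([0.1])
e_dot = np.array([0.0])
Lambda = np.array([[2.0]])
Kr = np.array([[5.0]])
u = control_correction(e, e_dot, Lambda, Kr, ks=0.1)
# r = 0.2, so u = -5*0.2 - 0.1*sign(0.2) = -1.1
```

Near r = 0 the hard sgn term flips between -ks and +ks at every sample, which is the chattering the summary mentions; the clipped (saturated) variant trades a small residual error for smooth torques.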
Key Concepts
Artificial Neural Networks, Control Systems
Key Results
The Neural Network model for robot control has been applied in simulation, in comparison with a basic PD controller (non-adaptive control). The simulation involved a two-link manipulator, and the vectors M, P and L were introduced. The vector M is M = P + pl·L, where P is the payload vector, pl is the payload at the l-th link and L = [l12 l22 l1l2 l1 l2]T. For the D and G functions a 100-node static neural network is used, while for C a 200-node dynamic neural network is used.
In the simulation, the non-adaptive controller shows a significant tracking error and cannot handle changes, while adaptive control greatly reduces the tracking error thanks to the learning mechanism provided by the neural network methodology.
The actual D and C are shown not to converge to the optimal D and C, while the actual G converges to its optimal value; this is due to the fact that the desired trajectory is not persistently exciting, as is common in real-world applications.
In conclusion, the model provides a good control system without the need for time-consuming computations to obtain the Jacobian, which describes the inverse kinematics necessary in traditional control systems.

Thursday, September 27, 2012

Theory and Applications of Neural Networks for Industrial Control Systems


Toshio Fukuda, Takanori Shibata
IEEE Transactions on Industrial Electronics, Vol.39,No 6, December 1992
Summary
                  The human brain is composed of about 10^10 neurons, which allow it to have the following unique characteristics: parallel processing of information, a learning function, self-organization capabilities and associative memory, making it good at information processing. Ideally, researchers want to obtain a similar decision-making approach and control system for robots; an artificial neural network is basically a connection of many linear and nonlinear neuron models in which information is processed in a parallel manner. In the literature there have been approaches aiming at "mindlike" machines (based on the idea of interconnecting models in the same manner as biological neurons), at cybernetics (whose main principles concern the relationship between engineering principles, feedback and brain function) and at the idea of "manufacturing a learning machine". The literature goes further, covering recognition systems up to the Hopfield Net (a system of first-order nonlinear differential equations that minimizes a certain energy function).
·       Models
Each neuron's output is obtained by summing all the inputs to the neuron, each multiplied by its connection weight, and subtracting the bias. Nets are classified into two categories: recurrent nets (in which multiple neurons are interconnected with feedback) and feed-forward nets (which present a hierarchical structure).
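The neuron model just described can be sketched in a few lines. The sigmoid activation and the particular inputs, weights and bias below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs minus bias, through a nonlinear activation."""
    net = np.dot(weights, inputs) - bias      # weighted sum minus bias
    return 1.0 / (1.0 + np.exp(-net))         # sigmoid nonlinearity

y = neuron_output(np.array([1.0, 0.5]), np.array([0.8, -0.4]), 0.1)
# net = 0.8*1.0 - 0.4*0.5 - 0.1 = 0.5, so y = sigmoid(0.5) ≈ 0.622
```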
The Hopfield Net is a recurrent net with feedback paths, aimed basically at stabilizing (minimizing) a certain potential field. For this purpose the state of the system and the potential field are used (refer to page 475 for the formulas); the system tends to equilibrium as time goes to infinity. This kind of setup allows parallel operation and is a case of associative memory, since the system tends to move towards its stored equilibrium points.
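A discrete-time sketch of this associative-memory behavior follows. It is illustrative only (the paper discusses the continuous formulation): Hebbian weights store two invented binary patterns, and asynchronous updates descend the energy function until the state settles at a stored equilibrium, recovering a pattern from a corrupted version.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix storing the given +/-1 patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)            # no self-connections
    return W

def recall(W, state, sweeps=10):
    """Asynchronous updates; each flip lowers (or keeps) the energy."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

# two illustrative 6-bit patterns stored as equilibria
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]], dtype=float)
W = train_hopfield(patterns)

noisy = patterns[0].copy()
noisy[0] *= -1                           # corrupt one bit
restored = recall(W, noisy)
# the net settles back to the stored pattern: restored == patterns[0]
```

Settling into the nearest stored equilibrium is exactly the "associative memory" property the summary refers to.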
Neural networks are applied in different fields, for example the Travelling Salesman Problem, for which, though, only a suboptimal solution can be obtained (since the Hopfield transition is based on a least-mean descent of the energy function, it may get stuck in local minima). The Boltzmann Machine is another case, in which each neuron operates with a certain probability, so it too can minimize the energy function, as the Hopfield Net does. Further implementations include the feed-forward neural network trained by back-propagation, which uses gradient search to minimize the error computed as the mean difference between the desired output and the actual one. Back-propagation basically constitutes the learning phase: the overall system first uses the input vector to produce its own output vector, then computes the difference between the desired output and the actual one and adjusts the weights according to the Delta Rule. The initial weights have to be initialized, and small random values are used (for the back-propagation algorithm please refer to page 478).
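The delta-rule learning loop above can be sketched for a single sigmoid neuron. This is an illustrative example, not the paper's algorithm (which is at page 478): the logical-OR task, the learning rate and the epoch count are assumptions chosen for the demo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0.0, 1.0, 1.0, 1.0])            # logical OR as target output

w = rng.normal(scale=0.1, size=2)              # small random initial weights
b = 0.0
eta = 1.0                                      # learning rate

for _ in range(2000):
    for x, target in zip(X, d):
        y = sigmoid(w @ x + b)                 # forward pass: actual output
        delta = (target - y) * y * (1 - y)     # error through sigmoid derivative
        w += eta * delta * x                   # Delta Rule weight adjustment
        b += eta * delta

preds = sigmoid(X @ w + b)
# after training, rounding the outputs reproduces the OR truth table
```

The same error-times-derivative signal, propagated backwards layer by layer, is what the full back-propagation algorithm computes for multilayer nets.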
The Adaptive Critic is an extended method for learning applications based on an associative search element and a single adaptive critic element (the first being the action network and the latter the critic network, whose output is a reward or punishment for the first network). Learning methods can be offline (carrying unnecessary training), online (with initialization problems) or feedback-error learning (which suffers from a lack of knowledge of the system).
Key Concepts
Artificial Neural Networks, Backpropagation, Delta rule, Robot Learning
Key Results
Neural networks are applied in vision and speech recognition, design and planning, control applications (supervised control, where sensors input information; inverse control, which learns the inverse dynamics of a system; and neural adaptive control, which predicts future outputs of a system) and knowledge processing (where databases can also be used for initialization and for supervising the net).

Tuesday, September 25, 2012

A Systematic Approach to Predict Performance of Human-Automation System


IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications and Reviews, Vol.37, NO.4, July 2007
Summary
                  One of the key issues in human-robot interaction is which tasks are better performed by humans, which by robots and which by a combination of the two, and with which level of cooperation. The paper discusses an approach for predicting the system performance resulting from humans and robots performing repetitive tasks in a collaborative manner. Two main factors affect performance: the approximation of the decline in performance associated with the constant mental/resource load required to complete a task, and the quantification of how well an agent achieves a certain task. It is known that automation is highly discouraged in the presence of unexpected events or uncertainty, where cooperation with a human can improve the overall performance of the system. The author introduces HumAns (Human Automation system performance), which evaluates the effects of workload on human performance and estimates the performance derived from a given task allocation. The procedure consists of 4 steps: 1) decompose the scenario into primitive major functional tasks; 2) estimate the performance of both human and robot; 3) calculate the performance score based on satisfaction and effects on both human and robot; 4) compute a composite task score to enable tradeoff studies in order to allocate tasks between humans and robots.
The method used for scenario decomposition is the task diagram interview sequence (part of the Applied Cognitive Task Analysis, ACTA, technique): the scenario is broken into 3 to 6 functional primitives, the cognitive skill/mental demands are derived (through a classification with 3 macro-levels: perception, cognition and motor activities) and a task diagram then gives an overview. The primitives are selected independently from each other to emphasize different aspects. Performance metrics are based on workload values and execution time components, the first being related to the decline in performance associated with the mental/resource load required for task completion. The performance metrics are computed for each primitive found previously. ACT-R is the framework used here for modeling human cognition (it studies how cognition works in different scenarios, based on assumptions that come from psychology experiments). At this point each primitive has an execution time and a workload, so these further steps can be done: 1) calculate the execution rank for each of the three zones into which the data (execution time vs. ranking value) is divided (a logarithmic function, given at page 597); 2) calculate the workload, considering it distributed logarithmically and dependent on the execution rank (page 597); 3) calculate the composite task score. To obtain the composite task score a Markov Decision Process is used, in which each agent, either robot or human, chooses individual actions that maximize an optimization function for the entire system; this already incorporates both the workload ranking values and the execution time rankings, and the algorithm is introduced briefly in the paper (page 598).
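A generic value-iteration sketch shows the kind of MDP optimization involved in the allocation step. This is NOT the paper's algorithm (see page 598): the two states ("task pending" / "task done"), the two actions ("assign to human" / "assign to robot") and all probabilities and rewards below are invented purely to illustrate how an optimal allocation policy falls out of maximizing a value function.

```python
import numpy as np

# P[a, s, s']: transition probabilities (invented for illustration)
P = np.array([
    [[0.2, 0.8], [0.0, 1.0]],   # assign to human: finishes pending task w.p. 0.8
    [[0.5, 0.5], [0.0, 1.0]],   # assign to robot: finishes pending task w.p. 0.5
])
# R[a, s]: immediate rewards; the human action costs more (workload)
R = np.array([
    [-2.0, 0.0],
    [-1.0, 0.0],
])
gamma = 0.9                      # discount factor

V = np.zeros(2)
for _ in range(200):             # value iteration to a fixed point
    Q = R + gamma * (P @ V)      # Q[a, s]: value of action a in state s
    V = Q.max(axis=0)            # best achievable value per state
policy = Q.argmax(axis=0)        # best action in each state
# with these numbers the robot is preferred for the pending task (policy[0] == 1)
```

With the invented numbers the robot's lower per-step cost outweighs its lower completion probability; changing the workload rewards flips the allocation, which is exactly the tradeoff study the composite task score enables.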
Key Concepts
Human-Robot Cooperation, Performances of Cooperation
Key Results
Two different setups have been tested: tele-operation control and fully autonomous control. HumAns shows that as time elapses the task score of teleoperation decreases, while for a fully automated robot the score stays fairly constant; comparing the two, we could say that for short elapsed times teleoperation is more convenient than full automation, which is no longer true in the long, repetitive case. HumAns appears to be a very important and useful method to predict the workload on human and machine and the consequent effects on the overall system; still, studies on the effects on accuracy and repeatability have to be performed.

Whose Job is it anyway? A Study of Human-Robot Interaction in a Collaborative Task


Pamela J. Hinds, Teresa L. Roberts, Hank Jones
Human-Computer Interaction, Volume 19, 2004
Summary
                  Human-robot cooperation is growing more and more, and researchers have supposed that humans may prefer working with human-like robots rather than machine-like ones, although, according to the authors, no test had been done up to the paper's date (2004). The paper investigates the links between human likeness, status (subordinate, peer or supervisor) and these dimensions. Researchers are divided mainly into two camps: according to Brooks [2002], humanoids will have better communication chances than machine-like robots, while opponents believe that humanoid features may result in unrealistic expectations and in some cases even fear. This research also addresses the case of under-reliance, it having been shown (Gawande, 2002) that people tend to resist technologies that are programmed to augment human decision making. Another aspect covered is the level of responsibility that people assume for a certain task under certain conditions and with a certain robot cooperator.
The authors performed statistical tests on five hypotheses: 1a) people rely on a human-like robot partner more than on a machine-like one; 1b) people will feel less responsible for the task when collaborating with a human-like robot partner than with a machine-like one; 2a) people will rely on the robot partner more when it is characterized as a supervisor than when it is characterized as a subordinate; 2b) people will feel less responsible for a task when collaborating with a robot partner who is a supervisor than with a robot partner who is a subordinate or a peer; 3) people will feel the greatest amount of responsibility when collaborating with a machine-like robot subordinate. To test these hypotheses the researchers performed experiments to verify the influence of human likeness and status on human perception; the robot operated in Wizard-of-Oz conditions (teleoperated) without the participants being told.
The experiments were performed with the same robot, in one condition fitted with human-like features such as a nose, ears, mouth and eyes, it having been demonstrated (DiSalvo, Gemperle, Forlizzi and Kiesler, 2002) that these are the characteristics that most affect the perception of human-likeness; the status was communicated to the testers beforehand through written instructions (as successfully done previously by Sande, 1986).
The experiment analyzed, through videotape analysis, the attribution of credit and blame, especially using the concept of shared social identity and analyzing the language used by the testers while working together with the robot.
Key Concepts
Human-Robot Cooperation, Team-working, Humanoids, Robot impact on humans
Key Results
The results show multiple aspects. First of all, not unexpected is the preference humans have for working with other humans rather than robots, but the differences regarding responsibility, attribution of blame and attribution of credit between the human-like robot and the machine-like robot appear not to be statistically significant. Hypotheses 1a and 1b therefore appear to be confirmed. It is interesting to notice how users tend to communicate more with machine-like robots, since people perceive less common ground between themselves and the robot (Fussell & Krauss, 1992). It has also been shown that people relied more on a peer robot than on a subordinate or supervisor robot (when the robot is a supervisor, humans tend to blame it for mistakes and attribute successes to themselves), and that people feel much more responsible for the task when cooperating with a machine-like robot. These results suggest that the appearance of the robot matters according to the degree of responsibility required: when people need to retain more responsibility, a machine-like robot would be the better choice (Robert et al., 1994), while in highly hazardous and risky environments humanoids may be a good choice, so that people can more easily delegate responsibilities to them.