
Saturday, October 6, 2012

A study of neural network based inverse kinematics solution for a three joint robot


Rasit Koker, Camil Oz, Tarik Cakar, Huseyin Ekiz
Robotics and Autonomous Systems, Vol. 49, 2004

Summary
The inverse kinematics problem for a robotic manipulator consists of obtaining the joint values required to reach a desired end-effector position and orientation. There are three main approaches to solving it: geometric, algebraic and iterative.
The algebraic method does not guarantee closed-form solutions; the iterative method converges to only a single solution, which depends on the starting point; and the geometric method guarantees closed-form solutions only if closed-form solutions for the first three joints of the manipulator exist geometrically. A small illustration of a closed-form solution is sketched below.
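To make the closed-form idea concrete, here is a minimal Python sketch (not from the paper) of a geometric solution for a planar two-link arm; the link lengths and target point are invented for illustration, and the paper's three-joint robot is not this arm.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form (geometric) IK for a planar two-link arm.

    Returns the two joint angles (elbow-down branch) that place
    the end effector at (x, y), or raises if the point is unreachable.
    """
    d2 = x * x + y * y
    # Law of cosines for the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Example: unit links, target at (1, 1) -> theta1 = 0, theta2 = pi/2.
print(two_link_ik(1.0, 1.0, l1=1.0, l2=1.0))
```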
Artificial Neural Networks (ANNs), instead, have the advantage of being flexible and adaptable, and are especially useful when short computation times are required, as in real-time operation.
In direct kinematics, given the joint angle vector at time t and the geometric parameters of an n-d.o.f. manipulator, the position and orientation of the end effector must be found; the inverse kinematics problem is the reverse one of finding the joint angle vector. An ANN is a parallel, distributed information-processing system whose units are connected via one-way signal flow channels. The ANN stores samples with a distributed coding, forming a trainable non-linear system.
Back-propagation is used as the learning algorithm: a network of input, hidden and output layers (with no constraint on the number of hidden layers), with a strong mathematical foundation based on gradient-descent learning. The NET value of a perceptron (neuron) in a hidden layer is obtained by multiplying the inputs by the weights and adding the bias; with i indexing the input nodes, j the nodes to the right of the i-th node, and k the nodes to the right of the j-th node: NETj = Σi IiWij + WBj and OUTj = 1/(1 + e^(-NETj)), where NETj is the net internal activity of neuron j and OUTj its output value after the activation. The same computation on the k nodes gives the outputs of the network.
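As an illustration of this forward pass, here is a minimal Python sketch using the 3-40-3 layer sizes reported later in the summary; the random initialization and the example input are assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the paper's summary: 3 inputs, 40 hidden, 3 outputs.
n_in, n_hid, n_out = 3, 40, 3

# Weights W and biases WB; small random initial values (an assumption,
# the paper does not state its initialization scheme).
W_ji = rng.normal(scale=0.1, size=(n_hid, n_in))   # input  -> hidden
WB_j = np.zeros(n_hid)
W_kj = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output
WB_k = np.zeros(n_out)

def sigmoid(net):
    # OUT = 1 / (1 + e^(-NET))
    return 1.0 / (1.0 + np.exp(-net))

def forward(I):
    """Forward pass: NET_j = sum_i I_i W_ji + WB_j, then the sigmoid."""
    OUT_j = sigmoid(W_ji @ I + WB_j)      # hidden-layer outputs
    OUT_k = sigmoid(W_kj @ OUT_j + WB_k)  # network outputs
    return OUT_j, OUT_k

I = np.array([0.2, 0.5, 0.1])  # example input (a scaled Cartesian position)
print(forward(I)[1])
```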
The difference between the target and the actual output of the network is the core of the back-propagation algorithm: through error propagation the network is capable of improving itself by learning. From the output layer back to the hidden layer, the error gradient of neuron k is δk = OUTk(1 - OUTk)(TARGETk - OUTk), and the change in the related weight is ΔWkj(n+1) = ηδkOUTj + α[ΔWkj(n)], from which WkjNEW = WkjOLD + ΔWkj(n+1).
Once this computation is performed, similar calculations are done from the hidden layer to the input layer: δj = OUTj(1 - OUTj)Σk δkWkj, with the change in the related weight ΔWji(n+1) = ηδjOUTi + α[ΔWji(n)], from which WjiNEW = WjiOLD + ΔWji(n+1). The bias terms shift the activation function, which speeds up the learning process; for the output layer ΔWBk(n+1) = ηδk + α[ΔWBk(n)], so that WBkNEW = WBkOLD + ΔWBk(n+1), and similarly for the hidden layer ΔWBj(n+1) = ηδj + α[ΔWBj(n)], so that WBjNEW = WBjOLD + ΔWBj(n+1).
As can be seen, training is carried out first through a forward pass, where NET and OUT are calculated, and then through a backward pass, where the error is propagated back through the connection weights. The process is iterative and is repeated until the difference between TARGET and OUT is reduced below a chosen value.
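Putting the two passes together, here is a compact, self-contained sketch of the iterative training process under the update rules above; the learning rate, momentum rate, training pair and stopping threshold are placeholders (the paper chose η and α experimentally).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 40, 3
eta, alpha = 0.3, 0.9  # learning and momentum rates (placeholder values)

W_ji = rng.normal(scale=0.1, size=(n_hid, n_in)); WB_j = np.zeros(n_hid)
W_kj = rng.normal(scale=0.1, size=(n_out, n_hid)); WB_k = np.zeros(n_out)
dW_ji = np.zeros_like(W_ji); dWB_j = np.zeros_like(WB_j)
dW_kj = np.zeros_like(W_kj); dWB_k = np.zeros_like(WB_k)

sigmoid = lambda net: 1.0 / (1.0 + np.exp(-net))

# One made-up training pair (position in, joint angles out, scaled to (0, 1)).
I = np.array([0.2, 0.5, 0.1])
TARGET = np.array([0.3, 0.7, 0.4])

for n in range(10000):
    # Forward pass: compute NET and OUT for both layers.
    OUT_j = sigmoid(W_ji @ I + WB_j)
    OUT_k = sigmoid(W_kj @ OUT_j + WB_k)

    # Backward pass: delta_k = OUT_k (1 - OUT_k)(TARGET_k - OUT_k),
    # then delta_j = OUT_j (1 - OUT_j) sum_k delta_k W_kj.
    delta_k = OUT_k * (1.0 - OUT_k) * (TARGET - OUT_k)
    delta_j = OUT_j * (1.0 - OUT_j) * (W_kj.T @ delta_k)

    # Weight changes with momentum: dW(n+1) = eta*delta*OUT + alpha*dW(n).
    dW_kj = eta * np.outer(delta_k, OUT_j) + alpha * dW_kj
    dW_ji = eta * np.outer(delta_j, I) + alpha * dW_ji
    dWB_k = eta * delta_k + alpha * dWB_k
    dWB_j = eta * delta_j + alpha * dWB_j
    W_kj += dW_kj; W_ji += dW_ji; WB_k += dWB_k; WB_j += dWB_j

    if np.max(np.abs(TARGET - OUT_k)) < 1e-4:  # stop at a chosen error
        break

print(n, OUT_k)
```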
Key Concepts
Artificial Neural Network, Inverse Kinematics
Key Results
A cubic polynomial is used to describe the trajectory, θ(t) = a0 + a1t + a2t² + a3t³, from which speed and acceleration can be obtained by differentiation. The ANN reaches a final error of 0.000121; the number of perceptrons (3 input, 40 hidden and 3 output) together with the learning and momentum rates (η and α) are decided experimentally.
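As a worked example of the cubic trajectory, here is a short sketch computing the coefficients of θ(t) for one joint; the boundary angles and duration are invented, and zero start/end velocities are an assumption (the usual boundary conditions for a point-to-point cubic).

```python
import numpy as np

def cubic_coeffs(theta0, thetaf, tf):
    """Coefficients of theta(t) = a0 + a1 t + a2 t^2 + a3 t^3
    with theta(0) = theta0, theta(tf) = thetaf, zero start/end velocity."""
    a0 = theta0
    a1 = 0.0
    a2 = 3.0 * (thetaf - theta0) / tf**2
    a3 = -2.0 * (thetaf - theta0) / tf**3
    return a0, a1, a2, a3

a0, a1, a2, a3 = cubic_coeffs(theta0=0.0, thetaf=np.pi / 2, tf=2.0)
t = np.linspace(0.0, 2.0, 5)
theta = a0 + a1 * t + a2 * t**2 + a3 * t**3   # position
theta_dot = a1 + 2 * a2 * t + 3 * a3 * t**2   # speed
theta_ddot = 2 * a2 + 6 * a3 * t              # acceleration
print(theta)
```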
