Toshio Fukuda, Takanori Shibata
IEEE Transactions on Industrial Electronics, Vol. 39, No. 6, December 1992
Summary
The human brain is composed of about 10^10 neurons, which give it several unique characteristics: parallel processing of information, a learning function, self-organization capabilities, and associative memory, all of which make it well suited to information processing. Ideally, researchers want to obtain a similar decision-making approach and control system for robots. An artificial neural network is basically a connection of many linear and non-linear neuron models in which information is processed in a parallel manner. The literature contains earlier approaches to obtaining «mindlike» machines (based on the idea of interconnecting models in the same manner as biological neurons), cybernetics (whose main principles see a relationship between engineering principles, feedback, and brain function), and the idea of “manufacturing a learning machine”. The literature goes further, covering recognition systems and arriving at the Hopfield net (a set of first-order non-linear differential equations that minimize a certain energy function).
Models
Each neuron’s output is obtained by summing all of its inputs, applying a weight to each input, and subtracting a bias. Nets are classified into two categories: recurrent nets (in which multiple neurons are interconnected with feedback) and feed-forward nets (which have a hierarchical structure).
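The neuron model described above (weighted sum of inputs minus a bias, passed through a non-linearity) can be sketched as follows; the sigmoid activation and the specific numbers are illustrative assumptions, not values from the paper:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs minus the bias, passed through a sigmoid."""
    net = sum(w * x for w, x in zip(weights, inputs)) - bias
    return 1.0 / (1.0 + math.exp(-net))

# example: two inputs, two weights, one bias (arbitrary illustrative values)
y = neuron_output([1.0, 0.5], [0.4, -0.2], 0.1)
```

The output is a single scalar in (0, 1); a full net simply wires many such units together, either with feedback (recurrent) or in layers (feed-forward).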
The Hopfield net is a recurrent net with feedback paths, whose dynamics minimize a certain potential (energy) function. For this purpose the state of the system and the potential field are used (refer to page 475 for the formulas); the system tends toward equilibrium as time goes to infinity. This setup allows parallel operation and acts as an associative memory, since the system moves toward stored equilibrium points.
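A minimal discrete sketch of this associative-memory behavior, assuming Hebbian (outer-product) training and asynchronous threshold updates rather than the paper’s continuous differential-equation formulation:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; zero diagonal removes self-feedback."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, steps=20):
    """Asynchronous updates; each flip lowers the network energy."""
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

stored = np.array([[1, -1, 1, -1, 1, -1]])   # one stored pattern
W = train_hopfield(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])     # one bit flipped
restored = recall(W, noisy)
```

Starting from a corrupted input, the state slides down the energy function to the nearest stored equilibrium point, which is exactly the associative-memory property described above.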
Neural networks are applied in different fields, for example the Travelling Salesman Problem, for which, however, only a suboptimal solution can be obtained (since the Hopfield transition is based on a least-mean, gradient-descent style algorithm and may get stuck in local minima). The Boltzmann machine is another application: each neuron operates with a certain probability, so it too can minimize an energy function, as in the Hopfield net.
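The probabilistic neuron that distinguishes the Boltzmann machine from the deterministic Hopfield update can be sketched as below; the temperature parameter and sigmoid form follow the standard Boltzmann rule, which the summary only alludes to:

```python
import math, random

def stochastic_update(net_input, temperature):
    """Neuron turns on with probability sigmoid(net / T) (Boltzmann rule).

    At high T the neuron behaves almost randomly, which lets the net jump
    out of local minima; at low T it approaches the deterministic
    Hopfield threshold update.
    """
    p_on = 1.0 / (1.0 + math.exp(-net_input / temperature))
    return 1 if random.random() < p_on else 0

# near-deterministic at low temperature with a clearly positive input
state = stochastic_update(5.0, 0.01)
```

Gradually lowering the temperature (annealing) is what gives the Boltzmann machine a chance of escaping the local minima that trap the plain Hopfield transition.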
Further implementations include the feedforward neural network trained with the back-propagation technique, which uses gradient search to minimize the error, computed as the mean difference between the desired output and the actual one. Back-propagation basically constitutes the learning phase: the overall system first uses the input vector to produce its own output vector, then computes the difference between the desired output and the actual one and adjusts the weights according to the Delta Rule. The initial weights have to be initialized, and small random values are used (for the back-propagation algorithm please refer to page 478).
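The learning phase just described (forward pass, error between desired and actual output, delta-rule weight adjustment, small random initial weights) can be sketched as a minimal two-layer back-propagation loop; the XOR task, layer sizes, and learning rate are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# illustrative training set: XOR (not linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# small random initial weights, as the summary recommends
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def forward(X):
    H = sigmoid(X @ W1)      # hidden layer
    Y = sigmoid(H @ W2)      # output layer
    return H, Y

_, Y0 = forward(X)
initial_error = np.mean((T - Y0) ** 2)

lr = 0.5
for _ in range(5000):
    H, Y = forward(X)
    # delta rule: error signal times local derivative, propagated backwards
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH

_, Y1 = forward(X)
final_error = np.mean((T - Y1) ** 2)
```

Each iteration performs exactly the cycle the summary describes: produce an output vector, compare it with the desired one, and adjust the weights down the error gradient.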
The adaptive critic appears to be an extended method for learning applications, built from an associative search element and a single adaptive critic element (the first being the action network and the latter being the critic network, whose output is a reward or punishment for the first network). Learning methods can be offline (carrying unnecessary training), online (with initialization problems), or feedback-error learning (which suffers from a lack of knowledge of the system).
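The interaction between the two elements (the action network explores, the critic predicts reward and its prediction error reinforces or punishes the action network) can be sketched as a toy loop; the one-dimensional task, noise level, and learning rate are all hypothetical choices, not the paper’s setup:

```python
import random

random.seed(0)

# hypothetical task: action +1 earns reward, action -1 earns punishment
def reward(action):
    return 1.0 if action > 0 else -1.0

w_actor = 0.0    # associative search element (action network) weight
v_critic = 0.0   # adaptive critic element's reward prediction
lr = 0.1

for _ in range(200):
    noise = random.gauss(0.0, 0.5)           # exploration noise
    action = 1 if w_actor + noise > 0 else -1
    r = reward(action)
    critic_error = r - v_critic              # internal reinforcement signal
    v_critic += lr * critic_error            # critic learns to predict reward
    w_actor += lr * critic_error * action    # actor is rewarded or punished
```

The critic converges toward the expected reward, and its shrinking prediction error is the reward/punishment signal that steers the action network, which is the division of labor the summary attributes to the two elements.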
Key Concepts
Artificial Neural Networks, Backpropagation, Delta Rule, Robot Learning
Key Results
Neural networks are applied in vision and speech recognition, design and planning, control applications (supervised control, where sensors provide input information; inverse control, which learns the inverse dynamics of a system; and neural adaptive control, which predicts the future outputs of a system), and knowledge processing (where databases can also be used for initialization and for supervising the net).