
Thursday, October 11, 2012

Robotics and Autonomous Systems


Eduardo Ianez, José Maria Azorin, Andrés Ubeda, José Manuel Ferrandez, Eduardo Fernandez
Robotics and Autonomous Systems, 58, 2010
Summary
Brain-Computer Interface (BCI) methods allow operators to generate control commands from electroencephalography (EEG) signals by recording "mental tasks".
Brain activity produces electrical signals (measured through EEG), magnetic signals (measured through magnetoencephalography, MEG) and metabolic signals (measured through Positron Emission Tomography, PET, or Functional Magnetic Resonance Imaging, fMRI). Techniques are divided into invasive and non-invasive: in the first case microelectrodes are implanted directly in the brain (used for animals and considered more precise), while in the second case the electrodes only have cutaneous contact (used for human applications, being ethically preferable).
BCIs can further be divided into synchronous and asynchronous: the former checks the EEG and takes a decision within fixed time windows, while the latter allows the mental task to be performed at any time, which is why it is chosen by the authors. The proposed BCI uses the Wavelet Transform to extract the relevant features from the EEG, together with a new Linear Discriminant Analysis (LDA) classifier to distinguish the different EEG patterns.
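As an illustration of this kind of pipeline, the minimal sketch below extracts per-band wavelet energies from single EEG windows and feeds them to an LDA classifier; it assumes the PyWavelets and scikit-learn libraries and uses hypothetical random data, so it is only a sketch of the technique, not the authors' implementation.

    # Minimal sketch of a wavelet-feature + LDA pipeline (not the paper's code).
    # Assumes PyWavelets and scikit-learn; 'windows' and 'labels' are hypothetical.
    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def wavelet_features(window, wavelet="db4", level=4):
        """Decompose one EEG window and keep a simple energy per sub-band."""
        coeffs = pywt.wavedec(window, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])

    rng = np.random.default_rng(0)
    windows = rng.standard_normal((60, 256))   # hypothetical 1 s EEG segments
    labels = rng.integers(0, 2, size=60)       # hypothetical task labels (2 classes)

    X = np.vstack([wavelet_features(w) for w in windows])
    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print(clf.predict(X[:5]))                  # predicted classes for 5 windows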
The experiments require 16 scalp electrodes, 9 of which are concentrated on the top center of the scalp, while the rest support preprocessing. The features coming from the Wavelet Transform are then processed by the LDA classifier, which is based on Fisher's linear discriminant and uses statistics and automatic learning to find the best linear combination of features. The method guarantees maximum separability between classes, but four different models are required to classify simultaneously among the three classes (rest state, right mental task and left mental task): model 1 takes class 1 as the right task and class 2 as the left task plus rest state; model 2 takes class 1 as the left task and class 2 as the right task plus rest state; model 3 takes class 1 as the rest state and class 2 as the right and left tasks; model 4 takes class 1 as the right task and class 2 as the left task.
For the four models a decision system is then built according to the following rule: for models 1 to 3, one point is assigned if the vector belongs to class 1, 0.5 points if it belongs to class 2 and 0 points in the uncertainty region; for model 4, one point is assigned whether the vector belongs to class 1 or class 2 and 0 points if it falls in the uncertainty region (the algorithm is shown at page 1251).
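The point assignment described above can be sketched as follows; the class conventions mirror the four models listed earlier, while the representation of the uncertainty region and the mapping of the accumulated points to a final right/left/rest decision are illustrative assumptions, not the paper's exact algorithm.

    # Illustrative scoring of the four LDA models (not the algorithm at page 1251).
    # A model output is 1 (class 1), 2 (class 2) or None (uncertainty region).
    def model_points(output, model_index):
        """Points contributed by one model for the current feature vector."""
        if output is None:                # uncertainty region: no points
            return 0.0
        if model_index in (1, 2, 3):      # models 1-3: 1 point for class 1, 0.5 for class 2
            return 1.0 if output == 1 else 0.5
        return 1.0                        # model 4: 1 point for class 1 or class 2

    outputs = {1: 1, 2: None, 3: 2, 4: 1}     # hypothetical outputs for one vector
    total = sum(model_points(out, idx) for idx, out in outputs.items())
    print(total)   # the paper's full algorithm maps these points to right/left/rest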
Tests have been performed in three settings: offline simulation (moving a dot on screen), online simulation, and online practice (moving a FANUC LR Mate 200iB robotic arm with 6 d.o.f.).
In the offline mode more errors are recorded; later, within the same session, users performed further 50 s runs online, and the third session uses the robotic arm, obtaining final scores with the method previously introduced.
The offline results clearly show that it is the worst way of performing the tasks: errors are larger and the success percentage is therefore low, since the user does not receive any feedback from the system.
The online method, which uses a Matlab interface, appears more effective, with fewer errors reported; the remaining problem is that EEG signals are time-variant, which is why the adjustments computed at the initial calibration become inaccurate over time. It has been demonstrated that with training the final results achieved are much better.
The robotic arm, programmed using C++, appears to be well controlled, showing the same problem as the online simulation, namely the time variance of the EEG.
Key Concepts
EEG control systems
Key Results
The paper has introduced the different steps of the BCI: the Wavelet Transform, the LDA-based classifier and finally the decision-making algorithm. For future implementations the time variance of the EEG signals has to be compensated and more degrees of freedom should be added to command the robotic arm. The proposed method is intended mainly for people with physical disabilities.

A helmet mounted display system with active gaze control for visual telepresence


Julian P. Brooker, Paul M. Sharkey, John P. Wann, Annaliese M. Plooy
Mechatronics, 9, 1999
Summary
Teleoperation has interesting potential, enabling human operators to perform delicate physical manipulation without having to be physically present where the task is done; it therefore becomes fundamental to provide an unobstructed and natural viewpoint in order to obtain high performance and safety.
There is scientific evidence that proprioceptive information for hand localization may be biased by the stereo-vergence typical of human vision; two solutions may therefore be possible: using a stereographic display or using a helmet-mounted display containing a separate lightweight image display for each eye.
Using a helmet-mounted display has the advantage of allowing the user to gaze around a scene without head movement being restricted, while the problems, common to teleoperation in general, are related to the low resolution of LCD screens.
In designing a head-mounted display device, two variables have to be taken into account: 1) the inter-camera distance (ICD) and the inter-display distance (IDD), which have to be matched to the observer's inter-pupil distance (IPD); 2) the field of view (FOV) of the camera configuration, which has to be the same as the FOV of the display configuration.
In telepresence environments a high-quality binocular image of elements in the near viewing field is more important than a wide-angled view of the distant viewing field; the FOV must therefore be reduced, and as it is reduced the percentage of overlap occurring on an object in the near viewing field also decreases.
To obtain this result the cameras have to verge; a problem occurring when the two mounted cameras verge is that the viewed object tends to appear distorted to the human observer. We can compute d (the minimum distance at which the target is fully viewed) as d = (p + w) / (2 tan(a/2)), where w is the target's width, p is the operator's IPD and a is the horizontal angle of view.
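A quick numerical check of this relation, with illustrative values for the IPD, the target width and the angle of view (not values taken from the paper), might look as follows.

    # Minimum viewing distance d = (p + w) / (2 * tan(a / 2)), as given above.
    # The numbers are illustrative, not taken from the paper.
    import math

    p = 0.065             # operator IPD in metres (assumed)
    w = 0.10              # target width in metres (assumed)
    a = math.radians(40)  # horizontal angle of view (assumed)

    d = (p + w) / (2 * math.tan(a / 2))
    print(f"minimum viewing distance: {d:.3f} m")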
A possible solution for camera vergence could involve tracking the operator's eyes so that the camera geometry is optimized to the situation, but the screens and cameras would then have to move synchronously, which involves multiple problems.
Another solution may be electronic image translation: this would avoid mechanical moving parts, horizontal image translation could be achieved with minimal additional image-processing hardware, and the translation system is likely to perform better than a mechanical one; the main problems are that images may appear deformed and that LCD displays may have low resolution, which led the authors to finally opt for mechanical display positioning.
With a mechanical system it has to be taken into account that the displays have to face the pupil directly and therefore rotate with it; infrared sensors are used for this purpose. Potential problems are related to the small movements the eye performs, such as flicks, drifts and tremors: it has been shown that a system capable of tracking all these movements is too sensitive and therefore unable to maintain a proper and stable track.
The eye-tracking apparatus for each eye is located behind a half-silvered mirror; in the prototype built by the authors the problem of the overall size of the system has not been addressed.
The control algorithm works as an iterative process that tunes the parameters of a PID controller in order to give damped responses on the camera axes.
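The paper does not give the controller code; a generic discrete PID loop of the kind that could drive a camera axis toward a damped response is sketched below, with gains and time step as placeholder values rather than the authors' tuned parameters.

    # Generic discrete PID controller (a sketch, not the authors' implementation).
    # Gains and time step are placeholders; in the paper they are tuned iteratively
    # until the camera-axis response is suitably damped.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)            # placeholder gains
    command = controller.update(setpoint=10.0, measurement=8.5)  # axis drive signal
    print(command)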
Key Concepts
Virtual Reality
Key Results
The system has been tested and allows teleoperation to be performed with good quality and results; the ICD and IDD have to be calibrated to match the operator's IPD, as well as the rotational centers.

Wearable Obstacle Avoidance Electronic Travel Aids for Blind: A Survey


Dimitrios Dakopoulos, Nikolaos G. Bourbakis
IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications and Reviews, Vol. 40, No. 1, 2010
Summary
The paper describes different technologies for visually impaired people, making a comparative analysis of which system appears to be the most complete.
The systems are generally categorized according to their objective: vision enhancement, vision replacement and vision substitution; the paper focuses on this last category. Vision substitution technologies fall into three subcategories: electronic travel aids, electronic orientation aids and position locator devices; the authors focus mainly on the first, without the use of GPS.
Echolocation appears to be one of the first methods: it uses two ultrasonic sensors and converts the information into stereo audible sound sent through earphones.
NavBelt is another system using ultrasonic sensors, creating a map of angles and of the distance to any object within each angle; it has two modes, a guidance mode and an image mode.
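As a rough illustration of the kind of angle/distance map such ultrasonic systems build, the snippet below keeps the nearest reported distance per angular sector; the sensor geometry and readings are assumptions, not NavBelt's actual processing.

    # Rough sketch of a polar obstacle map from ultrasonic readings
    # (illustration only, not NavBelt's actual processing).
    readings = [(-60, 2.1), (-30, 1.4), (0, 0.9), (30, 3.0), (60, 2.5)]  # (angle deg, distance m), assumed

    polar_map = {}
    for angle, distance in readings:
        # keep the closest obstacle seen in each angular sector
        polar_map[angle] = min(distance, polar_map.get(angle, float("inf")))

    print(polar_map)   # e.g. the 0-degree sector has an obstacle at 0.9 m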
Another method is the vOICe, in which images captured by a camera are converted into a sound mapping and communicated through headphones; it has the advantage of being small, light and relatively cheap.
The University of Stuttgart project uses a sensor module with a detachable cane, cameras and a keyboard, all connected to a portable computer, together with a digital compass, a 3D inclinometer and a loudspeaker. The computer contains software for color, distance and size detection and is able to work wirelessly; the main issue is that the technology still appears to have limitations.
The FIU project is a sonar and compass unit with six ultrasonic range sensors pointing in six radial directions and uses a 3D sound rendering machine to communicate with the user; in this case the navigation speed appears to be slow.
The Virtual Acoustic Space is a study that produces a sound map of the environment, consisting of two color micro-cameras attached to the frame of conventional eyeglasses, a processor and headphones; it appears convenient for its size.
The Navigation Assistance for Visually Impaired (NAVI) system consists of a video camera, a single-board processing system, batteries and a vest. The idea behind it is that humans focus on objects in front of the center of vision, so it is important to distinguish between background and obstacles: the video processing uses a fuzzy learning vector quantization neural network to classify the pixels as background or object, the object pixels are then enhanced and the background suppressed, and finally the information is split into left and right parts and transformed into stereo sound. The University of Guelph proposed a project that converts vision information into a tactile glove, where tactile vibration is delivered to each finger according to where the obstacle is located.
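A very simplified sketch of the final left/right-to-stereo idea is given below: the classification and enhancement stages are omitted and the intensity mapping is an assumption for illustration, not NAVI's actual processing.

    # Simplified sketch of the left/right-to-stereo mapping described above
    # (classification and enhancement omitted; the mapping is illustrative).
    import numpy as np

    # hypothetical binary obstacle mask, 1 = obstacle pixel
    frame = (np.random.default_rng(1).random((48, 64)) > 0.8).astype(float)

    left_half, right_half = np.hsplit(frame, 2)
    left_intensity = left_half.mean()    # fraction of obstacle pixels on the left
    right_intensity = right_half.mean()  # fraction of obstacle pixels on the right

    print(f"left ear volume ~ {left_intensity:.2f}, right ear volume ~ {right_intensity:.2f}")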
GuideCane is a project in which a common cane for the blind is enhanced with sensors (there are also different commercial products going in this direction).
The Electro-Neural Vision System has the characteristic of being able to communicate with the human through neural stimulation, but its equipment has to be held in order to achieve results.
Other tactile systems have been developed, such as the Tactile Handle and the Tactile Vision System, the first requiring excessive training and not leaving the hands free.
Further on, Tyflos is a system born in the mid-90s for integrating different technologies.
Key Concepts
Vision systems for visually impaired people.
Key Results
No method yet appears to satisfy the free-hands, free-ears, wearability and simplicity requirements all at once.

Safety issues in modern applications of robots


S.P. Gaskill, S.R.G. Went
Reliability Engineering and System Safety, 53, 1996
Summary
Industrial robot design and selection is today closely related to the robot's final application and to its integration with the other machines; several hazards are involved due to the movements and high energies.
One extreme difficulty in taking decisions related to safety is the fact that all robots use programmable electronic systems (PES), which are complex to evaluate. Programmable electronic systems are reliable and offer wide functionality, wider than hard-wired control systems; however, they are complex and it may be impossible to give any prediction regarding safety, since systematic failures, especially software faults, could cause unexpected actions from the device.
For this reason EN60204-1:1993, sub-clause 12.3.5, 'Safety of Machinery - Electrical Equipment of Machines: Part 1 "General Requirements"', expresses a preference for hard-wired electro-mechanical components for emergency stop functions; where this is not possible, other measures should be used (for example self-diagnostic checking features). On June 14th 1989 the European Community signed the Machinery Directive (known in the UK as "The Supply of Machinery Regulations 1992"), which today is the basic directive in the robotic field; it requires that certain technical documentation, including the technical construction file, be available at any time for inspection.
At present every robot maker has to provide the proper documentation, including conformity of the integration into the cell where the robot is installed; all documents must also remain available for at least 10 years from the first machine's operation.
The European law in the mentioned regulation provides an appendix, the EHSR (Essential Health and Safety Requirements), where general terms for operators' safety are given. Currently EN775:1992 (ISO 10218), 'Manipulating Industrial Robots – Safety', bases the technical measures for prevention of accidents on two principles: 1) absence of people in the safeguarded space during automatic operation; 2) elimination of hazards, or at least their reduction, during interventions (e.g., teaching, program verification) in the safeguarded space.
Acceptable levels of safety are regulated by IEC1508, 'Functional safety: safety-related systems', which tends to minimize risk; however, tests and trials alone are still not enough to prove the safety of a robot, and therefore a risk-based, quality approach throughout the lifecycle of the machine is followed.
In the international standard IEC1508, systematic errors are taken into consideration under the concept of "target safety integrity levels", which are chosen according to the amount of risk reduction attributed to the related system in order to reduce the overall risk to a tolerable level.
In European Union law there are primarily two ways to conform to the required technical measures:
Designers are assisted by the standard EN292, 'Safety of Machinery - Basic Concepts for Design', which interprets the EHSRs (Essential Health and Safety Requirements) and guides the producer through: determining the boundary of the system (space, time); identifying and describing the nature and consequences of the constraints of the system (especially the hazards related to human-robot interaction during the life cycle of the robot); assigning a risk level to each possible hazard; and finally ensuring that safety is adequate.
Safeguarding is regulated by EN775, 'Manipulating Industrial Robots - Safety', where prevention of accidents is based on two fundamental principles: absence of people in the safeguarded space during automatic operation, and elimination of hazards, or at least their reduction, during interventions (such as teaching, program verification) in the safeguarded space. The standard, organized in different steps, uses the safety lifecycle as a key framework defining: the objectives to be achieved, the requirements to meet the objectives, the scope of each phase, the required inputs for each phase and the deliverables needed to comply with the requirements.
Key Concepts
Standards and Regulations

Wednesday, October 10, 2012

Switching Between Collaboration Levels in a Human-Robot Target Recognition System


Itshak Tkach, Avital Bechar, Yael Edan
IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications and Reviews, Vol. 41, No. 6, November 2011

Summary
Human operators (HO) excel in recognition capabilities, therefore a combined system in which robot and human work together may perform well in this kind of task. Bechar et al. previously defined four human-robot collaboration levels for target recognition tasks in unstructured environments, based on the assumption that integrating an HO in a robotic system tends to reduce the complexity of the robotic system. Performance in a human-robot environment is mainly affected by the state of the human, the environmental conditions and the system parameters. Depending on the conditions of the working environment, it may be advantageous to switch between collaboration levels, as proposed in this paper, which introduces a logical controller that considers the relevant parameters in order to maintain maximum performance.
The human-robot environment is defined as a system composed of the HO subsystem, where the human perceives information from a display, and the robot subsystem, which comprises the autonomous operations defined through programming. The HO can decide how and when to intervene in the current work state at any collaboration level (which, following Bechar et al., are: the human operator detects and marks targets alone, the human operator supervises the robot, the human operator completes the robot's detections, and the robot acts autonomously).
The objective function is expressed through the operation cost V_IS = V_HS + V_MS + V_FAS + V_CRS + V_TS, where V_HS is the gain from detecting targets, V_MS is the cost of missed targets, V_FAS is the penalty for false alarms, V_CRS is the benefit gained from correct rejections and V_TS is the cost of time and actions. The terms V_FAS and V_MS have negative values (they are penalties), so the idea is to improve them through enhancements of the technology's characteristics; the other terms are estimated through their probabilities. The system time is computed as a superposition of all the possible time contributions weighted by their probabilities (time for the human to confirm the robot's hits, time for the human to detect additional targets, time for the human to correct the robot's false alarms, time for the human to mark false alarms, time for the robot to process the image and to produce hits or false alarms); this time computation is important for the calculation of V_TS. The parameters used for the calculation of the objective function are divided into four categories: human performance parameters, robot performance parameters, task performance parameters and environmental parameters.
The controller is designed to switch between collaboration levels so as to provide the optimal collaboration level; the human-robot system then becomes a closed-loop system with a logical controller which, by comparing the score of the optimal collaboration level (OCL) with the current collaboration level (CCL), applies the change and provides a manipulated input u(t) to the process. The assumptions are that the human does not influence the robot, that human and robot do not influence the target, that a new image is sampled after a target is achieved, that noise and signals have the same distribution and that the system inputs are acquired prior to each image sample. The switching objective function is then V_IS = (V_IS,optimal - V_IS,current) + t_response × V_t + Ψ × V_p, where V_p is the penalty for switching earlier than the nominal switching frequency, Ψ is the deviation from the switching frequency and V_t is the penalty for the system's response time.
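A hedged sketch of this cost bookkeeping and of the switching test might look as follows; the probability-weighted estimation of each term is reduced to plain illustrative numbers and the threshold logic is an assumption, not one of the paper's algorithms.

    # Sketch of the operation cost and switching test summarized above
    # (term estimation and thresholding are simplified assumptions).
    def operation_cost(v_hs, v_ms, v_fas, v_crs, v_ts):
        """V_IS = V_HS + V_MS + V_FAS + V_CRS + V_TS (gains positive, penalties negative)."""
        return v_hs + v_ms + v_fas + v_crs + v_ts

    def switching_value(v_is_optimal, v_is_current, t_response, v_t, psi, v_p):
        """(V_IS_optimal - V_IS_current) + t_response * V_t + psi * V_p."""
        return (v_is_optimal - v_is_current) + t_response * v_t + psi * v_p

    # illustrative numbers for the current and the optimal collaboration level
    current = operation_cost(v_hs=8.0, v_ms=-2.0, v_fas=-1.5, v_crs=1.0, v_ts=-3.0)
    optimal = operation_cost(v_hs=9.0, v_ms=-1.0, v_fas=-1.0, v_crs=1.5, v_ts=-3.5)

    gain = switching_value(optimal, current, t_response=0.5, v_t=-0.2, psi=1.0, v_p=-0.3)
    if gain > 0:   # switch only when the expected benefit outweighs the penalties
        print("switch to the optimal collaboration level, gain =", round(gain, 2))
    else:
        print("stay at the current collaboration level")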
Different switching algorithms are proposed (pages 961-962).
Key Concepts
Team Working, Human Robot Cooperation
Key Results
Dynamic switching may not increase performance in 100% of the conditions, and the global optimum may not be reached because of the influence of local optimums, but the approach works under different probability settings and greatly increases system performance in most of the simulated scenarios.