

Thursday, October 11, 2012

Robotics and Autonomous Systems


Eduardo Iáñez, José María Azorín, Andrés Úbeda, José Manuel Ferrández, Eduardo Fernández
Robotics and Autonomous Systems, 58, 2010
Summary
                  Brain-Computer Interface (BCI) methods allow operators to generate control commands from electroencephalography (EEG) signals, through the registration of “mental tasks”.
Brain activity produces electrical signals (measured through EEG), magnetic signals (measured through magnetoencephalography, MEG) and metabolic signals (measured through positron emission tomography, PET, or functional magnetic resonance imaging, fMRI). Techniques are divided into invasive and non-invasive: in the first case microelectrodes are implanted directly in the brain (used on animals and considered more precise), while in the second case the electrodes only make cutaneous contact (used in human applications, being ethically preferable).
BCIs can further be divided into synchronous and asynchronous: the former checks the EEG and takes a decision at fixed time windows, while the latter allows commands to be thought at any time and is therefore the one chosen by the authors. The proposed BCI uses the Wavelet Transform to extract the relevant characteristics of the EEG, with a new Linear Discriminant Analysis (LDA) based classifier to distinguish the different EEG patterns.
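As a rough illustration of the feature-extraction step, the sketch below decomposes one EEG channel window with a discrete wavelet transform and keeps the per-band energies; the mother wavelet ('db4'), the depth (4 levels) and the energy features are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(eeg_window):
    """Return per-band energies of one EEG channel window.

    'db4' and level=4 are illustrative choices; the paper's exact
    mother wavelet and decomposition depth may differ.
    """
    coeffs = pywt.wavedec(eeg_window, 'db4', level=4)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])
```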
The experiments require 16 scalp electrodes, 9 of them concentrated on the top center of the scalp and the rest supporting preprocessing. The features produced by the Wavelet Transform are then processed by the LDA classifier, which is based on Fisher's linear discriminant and uses statistics and automatic learning to find the best linear combination of the features. The method guarantees maximum separability between two classes, so four different models are required for simultaneous classification among the three classes (rest state, right mental task and left mental task): model 1 takes class 1 as the right task and class 2 as the left task plus rest state; model 2 takes class 1 as the left task and class 2 as the right task plus rest state; model 3 takes class 1 as the rest state and class 2 as the right and left tasks; model 4 takes class 1 as the right task and class 2 as the left task.
From the four models a decision system is then built according to the following rule: for models 1 to 3, one point is assigned if the vector belongs to class 1, 0.5 if it belongs to class 2 and 0 in the uncertainty region; for model 4, one point is assigned whether the vector belongs to class 1 or class 2, and 0 in the uncertainty region (the algorithm is shown at page 1251).
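To make the four-model combination concrete, here is a minimal sketch using scikit-learn's LDA, with hypothetical labels (0 = rest, 1 = right, 2 = left) and one possible reading of the scoring rule; the uncertainty band and the way class-2 points are shared between the remaining classes are illustrative choices, not the paper's exact algorithm.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_models(X, y):
    """Train the four binary LDA models (X: wavelet features, y: 0/1/2 labels)."""
    models = []
    for positive in (y == 1, y == 2, y == 0):        # models 1-3
        clf = LinearDiscriminantAnalysis()
        clf.fit(X, positive.astype(int))
        models.append(clf)
    nonrest = y != 0                                 # model 4: right vs. left only
    clf4 = LinearDiscriminantAnalysis()
    clf4.fit(X[nonrest], (y[nonrest] == 1).astype(int))
    models.append(clf4)
    return models

def classify(models, x, band=0.5):
    """Combine the four models with a point-scoring rule (one reading of the paper's)."""
    points = {"right": 0.0, "left": 0.0, "rest": 0.0}
    class1 = ("right", "left", "rest")               # class 1 of models 1-3
    for clf, c1 in zip(models[:3], class1):
        score = clf.decision_function(x.reshape(1, -1))[0]
        if score > band:                             # vector belongs to class 1
            points[c1] += 1.0
        elif score < -band:                          # class 2: 0.5 shared by the others
            for other in points:
                if other != c1:
                    points[other] += 0.5
    score4 = models[3].decision_function(x.reshape(1, -1))[0]
    if score4 > band:                                # model 4, class 1 = right
        points["right"] += 1.0
    elif score4 < -band:                             # model 4, class 2 = left
        points["left"] += 1.0
    return max(points, key=points.get)
```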
Tests were performed in three settings: offline simulation (moving a dot on a screen), online simulation, and online practice (moving a FANUC LR Mate 200iB robotic arm with 6 d.o.f.).
In the offline setting more errors are recorded; later in the same session users performed a further 50 s of activity online, and the third session saw the use of the robotic arm, final scores being obtained with the method introduced above.
The offline results clearly show that it is the worst way of performing the tasks: errors are larger and the success percentage is therefore low, since the user does not receive any feedback from the system.
The online method, which uses a Matlab interface, appears to be more effective, with fewer errors reported; the problem is that EEG signals are not time-invariant (they are non-stationary), which is why the adjustments computed on the data at initial calibration become wrong over time. It has been demonstrated that with training the final results achieved are much better.
The robot arm, programmed using C++, appears to be controlled well, showing the same problems as the online simulation because of the non-stationarity of the EEG.
Key Concepts
EEG control systems
Key Results
The paper has introduced the different steps of a BCI built on the Wavelet Transform, an LDA-based classifier and, finally, an algorithm for decision making. For future implementations the time variation (non-stationarity) of the EEG signals has to be compensated, and more degrees of freedom should be added to command the robotic arm. The proposed method is intended mainly for people with physical disabilities.

Monday, October 8, 2012

Human-robot cooperation control for installing heavy construction materials


Seung Yeol Lee, Kye Young Lee, Sang Heon Lee
Autonomous Robots, No. 22, 2007

Summary
                  Automation and robotics in construction have come under discussion among researchers because of the importance of safety, productivity, quality and the work environment. The first robot applied in the construction field appeared in 1983, followed by other examples of robots using pneumatic actuators and servo motors in hybrid or master-slave solutions. With the trend toward taller and more dangerous buildings, advances in new materials have been made, but the installation process remains largely assigned to manpower. An automation system (ASCI: Automation System for Curtain Wall Installation) has been developed for mechanized construction to enable simple and precise installation, especially with regard to safety improvement.
Automation in construction has characteristics which make it very different from manufacturing; processes are rarely highly repetitive, so the requirements for this system are: the robot must follow the operator in various works, it must share the work space with the operator, the operator's and the robot's forces must be coordinated, and an intuitive operation method reflecting the dexterity of the operator should be provided. Human-robot cooperation has been of interest to many fields of application: in the 1960s the Department of Defense enhanced the capability of carrying heavy material with a “Suit of Armor”, and similar ideas were carried on later.
The paper introduces a robot control method for the installation of heavy construction materials in cooperation with a human operator: the system allows the operator to handle heavy material by exerting an operational force amplified by a certain power-assist ratio. Target dynamics model the interactions among human, robot and environment, and an impedance controller is designed considering two cases, constrained and unconstrained, which differ in the presence or absence of environment contact. In the unconstrained condition the operator handles heavy materials in an obstacle-free area; the force is measured by a force (torque) sensor, Fh(Th) is the force (torque) generated by the interaction between the operator and the heavy material, and Mpt(Mot) and Bpt(Bot) are respectively the n x n positive-definite diagonal inertia and damping matrices for position (orientation). The desired dynamics, given the input Fh(Th), are obtained by impedance by means of λp and λo, respectively the power-assist ratios of position and orientation. The stiffness matrix K is not considered, so that the system does not behave like a spring.
The constrained condition differs in the presence in the impedance of Fe(Te), the external force (torque) arising from contact; if contact occurs, a generalized active impedance is considered. High stability is achieved through damping, but too much damping may decrease the mobility of the system, so Mpt(Mot) and Bpt(Bot) are adjusted according to the requirements of the operator, and appropriate values are found through simulation. The motion control is made stiff so as to enhance disturbance rejection, so a reference frame and a desired frame are introduced into the previous models. Experiments were carried out requiring the operator to trace a circle by applying force on the robot's gripper.
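As a rough sketch of the unconstrained power-assist law described above, the snippet below integrates a one-degree-of-freedom admittance Mpt·a + Bpt·v = λp·Fh; the numerical values of the inertia, damping, assist ratio and sample time are illustrative assumptions, not the values tuned in the paper.

```python
# 1-DOF sketch of the unconstrained power-assist admittance:
#   M_p * a + B_p * v = lambda_p * F_h
# All numerical values are illustrative, not the paper's tuned ones.
M_p, B_p, lambda_p = 10.0, 40.0, 5.0   # kg, N*s/m, assist ratio
dt = 0.002                              # control period [s]

def step(v, f_h):
    """One control cycle: new velocity given the measured human force f_h."""
    a = (lambda_p * f_h - B_p * v) / M_p
    return v + a * dt

# A constant 5 N push accelerates the load until damping balances it.
v = 0.0
for _ in range(5000):
    v = step(v, 5.0)
print(f"steady-state velocity = {v:.3f} m/s (analytic: {lambda_p * 5.0 / B_p:.3f})")
```

Varying M_p and B_p in this toy model shows the mobility side of the trade-off discussed in the Key Results below: a larger inertia lets a small force carry the load far but reacts sluggishly, while heavier damping slows the load down.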
Key Concepts
Human Robot Interaction, Human Robot Cooperation, Impedance Control, Control Systems
Key Results
The results show that as Mpt(Mot) increases, stability decreases (although with a larger Mpt an object can be moved over a long distance with a small operational force), while the opposite holds for Bpt(Bot), making a trade-off necessary.
It is also shown that robot operation without the inner motion control (which stabilizes the system) gives inferior desired-position following performance in the unconstrained condition.
The force required from the operator gets smaller at higher values of the power-assist ratio, but this does not change Fe (the force that is reflected back in the contacting condition).

Human-Assisted Virtual Environment Modeling


J.G. Wang, Y.F. Li
Autonomous Robots, No. 6, 1999

Summary
                  The paper proposes a man-machine interaction approach based on a stereo-vision system, where the operator's knowledge about the scene is used as guidance for modelling a 3D environment.
Virtual Environment (VE) modelling appears to be a key point in many robotic systems, especially with regard to tele-robotics. There has been much research on building VEs from vision sensors while exploring unknown environments, and on semi-automatic modelling with minimum human interaction. A good example is an integrated robotic manipulator system using virtual-reality visualization to create advanced, flexible and intelligent user interfaces (Chen and Trivedi, 1993; Trivedi and Chen, 1993). An interactive modelling system was also proposed to model remote physical environments through two CCD cameras, where edge information is used for stereo matching and triangulation to extract shape information, but that system was constrained to camera motion along the Z axis only.
The proposed system is designed so that the operator can supply the missing cues about the features and information the manipulator or mobile robot may encounter. The procedure first builds local models from different viewpoints and later composes these local models into a global model of the environment; once the environment has been constructed virtually, the operator can fully concentrate on tele-operation.
Considering the use of two cameras, left and right, two transformation matrices can be obtained, [HR] and [HL], which map 3D feature points expressed in the world frame W to their known image coordinates. Assuming the 3D vector in W is [V3D] and the corresponding 2D image vector is [V2D], then [H] = [V2D][V3D]T([V3D][V3D]T)-1, where H can be decomposed into left and right matrices. Conversely, once [HR] and [HL] are available, the position [X] = [x, y, z] of a feature in W can be calculated from its corresponding image coordinates [xa, ya] and [xb, yb] as [X] = ([A]T[A])-1[A]T[B], where [A] and [B] are built from the image coordinates and the calibration matrices.
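A minimal numpy sketch of that least-squares triangulation, under the assumption that [HR] and [HL] are 3x4 projective matrices mapping homogeneous world coordinates to image coordinates (the summary does not spell out their exact form):

```python
import numpy as np

def triangulate(H_L, H_R, pt_L, pt_R):
    """Least-squares 3D position of one feature seen in both cameras.

    H_L, H_R : assumed 3x4 projection matrices of the left/right cameras
    pt_L, pt_R : (x, y) image coordinates of the same feature
    Each view contributes two linear equations in X = [x, y, z]; the
    stacked system is solved as X = (A^T A)^-1 A^T B, as in the summary.
    """
    rows, rhs = [], []
    for H, (u, v) in ((H_L, pt_L), (H_R, pt_R)):
        # u = (H[0]·[X,1]) / (H[2]·[X,1])  =>  (u·H[2,:3] - H[0,:3])·X = H[0,3] - u·H[2,3]
        rows.append(u * H[2, :3] - H[0, :3]); rhs.append(H[0, 3] - u * H[2, 3])
        rows.append(v * H[2, :3] - H[1, :3]); rhs.append(H[1, 3] - v * H[2, 3])
    A, B = np.array(rows), np.array(rhs)
    return np.linalg.solve(A.T @ A, A.T @ B)
```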
A major difficulty in stereo vision, however, is the correspondence problem between the feature points of the two images, whose automatic solution is not robust. A human operator can instead identify objects in most scenes and prompt the vision system to locate some object attributes or special corresponding features, so that the image coordinates can be deduced and the 3D position in W calculated.
A binocular stereo-vision system, after being guided by the operator to the prompted corresponding features, can then construct local models of objects directly. The system works by recognizing primitive solids, from which composite models can later be computed. The authors introduce the cuboid (for which four points are detected) and the sphere (determined in 3D space through knowledge of its radius and center), both obtained through geometric calculations and transformations. Vertices of objects are found through the intersection of corresponding lines; for other, more complicated objects the operator's guidance can be used. In general a single viewpoint cannot fully represent a 3D object, so more than one is required and Multi-Viewpoint Modelling is used: from two positions (for instance A and B) a transformation M (and its inverse) relates the views, and in determining M the rotation and translation are solved separately, as sketched below. If C and C' represent the coordinate relationships at viewpoints A and B, then C' = M'C and W = M'W'; after some computation M = [R T], with R the rotational component and T the translational component.
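As an illustration of solving the rotation and translation separately, here is a sketch using the standard SVD-based (Kabsch) procedure on corresponding 3D points; this is a common stand-in for such problems, not necessarily the paper's own derivation of M:

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate R, T such that Q[i] ≈ R @ P[i] + T for matching 3D points.

    P, Q : (n, 3) arrays of corresponding points from viewpoints B and A.
    Rotation and translation are solved separately: R via the SVD-based
    (Kabsch) procedure on the centered points, T from the centroids.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cQ - R @ cP
    return R, T
```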
Key Concepts
Machine vision, Human Robot Interaction
Key Results
Performance can be evaluated either through the difference between points and their images or through the difference between the measured and real sizes of objects. The system also works for insertion tasks with an error of 0.6 mm; should a more precise system be needed, force sensing would be required. Operators can use this methodology to observe the real environment from any viewpoint in the virtual-reality system.

Information Sharing via Projection Function for Coexistence of Robot and Human


Yujin Wakita, Shigeoki Hirai, Takashi Suehiro, Toshio Hori
Autonomous Robots, No. 10, 2001

Summary
                  The authors introduce a concept of safety based on the intelligent augmentation of robotic systems. In previous studies the authors introduced tele-robotic systems (1992, 1995, 1996), where a robot is operated from another position with no physical contact and monitored through a television, and intelligent monitoring (1992), a system conveying only the required information through selection of the data. An extension of the latter was the snapshot function (1995), where a laser pointer helps in teaching mode to estimate the deviation of the position, while the operator can move the robot, teaching the estimated relative deviation. A further development is the projection function proposed here (2001), where a robot and a human jointly operate on a Digital Desk, a special environment provided with a projector mounted perpendicular to the working table and a speaker. The aim of this research is to achieve intelligent augmentation in order to prevent and avoid undesirable contact; information sharing is a fundamental aspect of cooperative tasks between a person and a robot (Wakita, 1998). The experiment tests a human and a robot operating through mainly 5 states (initial, approach, grasp, release and final); the main issues to be solved are: the person does not know the delivery coordinates, the person must keep holding the object until it is released, and the person might be frightened by the robot's movement.
The projection function consists of projecting onto the table the simulated images of the moving robot, so that the human operator knows the robot's trajectory in real time and understands the delivery path. Force sensors in the robot's fingers allow the robot to understand when the object has been grasped by the operator (a rough sketch of this hand-over logic follows). A new teaching method is also introduced: the operator activates the teaching mode by touching the robot's hand; then, instead of physically moving the manipulator, the projected image of the robot follows the operator's hand to the destination. The advantage is that only the model is required and no robot movement; the robot confirms through the speaker that the taught trajectory has been saved.
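A minimal sketch of that five-state hand-over, assuming a force threshold on the finger sensors triggers the release; the threshold value and the transition conditions are illustrative, not taken from the paper:

```python
from enum import Enum, auto

class Phase(Enum):
    INITIAL = auto()
    APPROACH = auto()
    GRASP = auto()
    RELEASE = auto()
    FINAL = auto()

RELEASE_FORCE = 2.0  # [N] illustrative threshold, not from the paper

def next_phase(phase, finger_force, at_delivery_point):
    """Advance the five-state hand-over described in the summary.

    finger_force is the pull measured by the finger force sensors; the
    robot lets go (GRASP -> RELEASE) once the human tugs on the object.
    """
    if phase is Phase.INITIAL:
        return Phase.APPROACH
    if phase is Phase.APPROACH and at_delivery_point:
        return Phase.GRASP
    if phase is Phase.GRASP and finger_force > RELEASE_FORCE:
        return Phase.RELEASE
    if phase is Phase.RELEASE:
        return Phase.FINAL
    return phase
```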
The force sensors are an efficient communication channel only during grasping; visual monitoring appears to be necessary for the entire delivery task.
It can be observed that humans in cooperation require visual feedback in order to know that their motion and activity have been understood: each person expects to be observed during their actions. Visual information therefore appears to be extremely important for perception, and it enhances the safety of the system.
The Digital Desk helps once again in monitoring and signalling between the robots and the humans in the system: while operating, a symbol (in the experiment a white rectangle) is projected onto the operator's hand when the robot has detected an action, so that the human is aware that the robot knows about his or her presence.
To perform the experiment, a CCD camera was used for detecting the human's hand and the robot's position, with a video projector (SANYO LP-SG60) mounted on the ceiling parallel to the camera.
As programmed, the system projects a white rectangle onto the human's hand once the CCD camera and the computer have performed the detection, while a stationary hand is recognized as the delivery position.
Key Concepts
Human-Robot Interaction, Human-Robot Cooperation, Team Working
Key Results
The experiment usefully highlights the importance of communication between robots and humans working together, a communication that also needs visual feedback in order to ensure safety. A large part of communication is in fact performed not only through direct messages but also through indirect feedback showing that the message has been properly received. Future research may add further information to the system.

Saturday, October 6, 2012

Toward a Framework for Human-Robot Interaction


Sebastian Thrun
Human-Computer Interaction, No. 19, 2004

Summary
                  The field of robotics has undergone considerable change since it first appeared as a complete science; robots now perform many assembly and transportation tasks, often equipped with minimal sensing and computing, slaved to a repetitive task. The future increasingly sees the introduction of service robots, mainly thanks to the falling cost of many of the required technologies and to increasing autonomy capabilities.
Robotics is a broad discipline, so definitions of the science are not unique; a general one was given by the author in a previous paper (Thrun, 2002): a system of robotic sensors, actuators and algorithms. The United Nations has categorized robotics into three fields: industrial robotics, professional service robotics and personal service robotics.
Industrial robotics is the earliest commercial success; an industrial robot operates by manipulating its physical environment, is computer controlled and works in industrial settings (for example on conveyor belts).
Industrial robotics started in the 1960s with the first commercial manipulator, the Unimate; later, in the 1970s, Nissan Corporation automated an entire assembly line with robots, starting a real “robotics revolution”. Today the ratio of human workers to robots is approximately 10:1 (the automotive industry is by far the largest application of robotics). However, industrial robots are not intended to operate directly with humans.
Professional service robots are the youngest kind of robot and are designed to assist people, for instance in environments inaccessible to humans or in tasks whose speed and precision requirements cannot be met by human operators (as is becoming more common in surgery).
Personal service robots have the highest expected growth rate today; they are designed to assist people in domestic tasks and in recreational activities, and these robots are often humanoid.
In all three of these fields the two main drivers are cost and safety; these appear to be the central challenges of robotics.
Autonomy refers to a robot's ability to accommodate variations in its environment, and it is a very important factor in human-robot interaction. Industrial robots are not considered highly autonomous: they are usually employed for repetitive tasks and can therefore be pre-programmed. A different scenario is that of service robots, where the complexity of the environment forces a highly autonomous design, since they have to predict environmental uncertainties, detect and accommodate people, and so on.
There is of course also a cost issue, which requires personal robots to be low-cost; they are therefore the most challenging, since they need high levels of autonomy at low cost. In human-robot interaction the interface mechanism becomes extremely important. Industrial robots are often limited in this respect: they are hard-programmed, and programming languages and simulation software act as intermediaries between the robot and the human. Service robots require richer interfaces, and indirect and direct interaction methods are therefore distinguished: indirect interaction consists of a person operating a robot through commands, while direct interaction consists of a robot taking decisions on its own while working alongside a human.
Different technologies exist to achieve the different methods of communication; an interesting example is Robonaut (Ambrose et al., 2001), a master-slave design demonstrating how a robot can cooperate with astronauts on a space station. Speech synthesizers and screens are also interesting direct-interaction methods.
Investigating humanoid appearance, together with the social aspects of service robots, is another important direction researchers are pursuing today for the future of robotics.
Key Concepts
Human Robot Interaction, Human Robot Cooperation