
Showing posts with label Safety. Show all posts

Monday, October 8, 2012

Human-Assisted Virtual Environment Modeling


J.G. Wang, Y.F. Li
Autonomous Robots, No. 6, 1999

Summary
                  The paper proposes a man-machine interaction approach based on a stereo-vision system, where the operator’s knowledge about the scene is used as guidance for modelling a 3D environment.
Virtual Environment (VE) modelling is a key point in many robotic systems, especially with regard to tele-robotics. There has been much research on how to build a VE from vision sensors while exploring unknown environments, and on semi-automatic modelling with minimal human interaction. A good example is an integrated robotic manipulator system using virtual reality visualization to create advanced, flexible and intelligent user interfaces (Chen and Trivedi, 1993; Trivedi and Chen, 1993). An interactive modelling system was also proposed to model remote physical environments through two CCD cameras, where edge information is used for stereo-matching and triangulation to extract shape information; that system, however, was constrained to camera motion along the Z axis only.
The proposed system is designed so that the operator need only provide minimal cues about the features and information the manipulator or mobile robot may encounter. The procedure first builds local models from different viewpoints and later composes these local models into a global model of the environment; once the environment has been constructed virtually, the operator can fully concentrate on tele-operation.
Considering the use of two cameras, left and right, two transformation matrices [HR] and [HL] can be obtained; these relate the known image coordinates of feature points to their 3D coordinates in the world frame W. Denoting a 3D vector in W as [V3D] and the corresponding 2D image vector as [V2D], the transformation satisfies [H] = [V2D][V3D]^T ([V3D][V3D]^T)^-1, where [H] can be decomposed into left and right matrices. Conversely, once [HR] and [HL] are available, the position [X] = [x, y, z] of a feature in W can be calculated from its corresponding image coordinates [xa, ya] and [xb, yb] as [X] = ([A]^T[A])^-1 [A]^T[B], where [A] and [B] are built from the image coordinates and the transformation matrices.
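The least-squares triangulation step can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: H_L and H_R are assumed to be 3x4 projection matrices (the example values below are made up), and the solve implements [X] = ([A]^T[A])^-1 [A]^T[B].

```python
import numpy as np

def triangulate(H_L, H_R, uv_L, uv_R):
    """Recover a 3D point X from its two image projections by solving
    A X = B in the least-squares sense, i.e. X = (A^T A)^-1 A^T B."""
    rows_A, rows_B = [], []
    for H, (u, v) in ((H_L, uv_L), (H_R, uv_R)):
        # Each camera gives two linear equations in (x, y, z), obtained by
        # cross-multiplying u = (H[0].[X,1]) / (H[2].[X,1]) (same for v).
        for img, r in ((u, 0), (v, 1)):
            rows_A.append(img * H[2, :3] - H[r, :3])
            rows_B.append(H[r, 3] - img * H[2, 3])
    A, B = np.array(rows_A), np.array(rows_B)
    return np.linalg.lstsq(A, B, rcond=None)[0]
```

For instance, with a left camera at the origin and a right camera shifted one unit along X, the point (1, 2, 5) projects to (0.2, 0.4) and (0.0, 0.4), and triangulating those coordinates returns the original point.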
A major difficulty in stereo vision, however, is the correspondence problem between feature points in the two images, which is not robust. A human operator can identify objects in most scenes and prompt the vision system to locate and detect object attributes or specific corresponding features, so that the image coordinates can be deduced and the 3D position in W calculated.
A binocular stereo vision system, guided by an operator to find corresponding prompted features, can be used to construct local models of objects directly. The system works by recognizing primitive solids, from which composite models can later be computed. The authors introduce the cuboid (for which four points are detected) and the sphere (determined in 3D space through knowledge of its radius and center), both obtainable through geometrical calculations and transformations. Vertices of objects are found through the intersection of corresponding lines; for other, more complicated objects the operator’s guidance can be used. In general a single viewpoint cannot successfully represent a 3D object, so more than one is required and Multi-Viewpoint Modelling is used. From two positions (for instance A and B) a transformation M is obtained; to determine M, rotation and translation are solved separately. If C and C’ represent the camera coordinate frames at viewpoints A and B, then C’=M’C and W=M’W’; after some computation M = [R T], with R the rotational component and T the translational component.
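The viewpoint transformation M = [R T] can be estimated from matched 3D points seen from the two viewpoints. The paper solves rotation and translation separately; one standard way to do that (shown here as an illustration, not necessarily the authors’ exact method) is the SVD-based Kabsch alignment:

```python
import numpy as np

def estimate_R_T(P, Q):
    """Find R (rotation) and T (translation) such that Q ~ R @ P + T,
    given corresponding 3D point sets P and Q of shape (3, N)."""
    cp = P.mean(axis=1, keepdims=True)          # centroids
    cq = Q.mean(axis=1, keepdims=True)
    # SVD of the cross-covariance of the centered point sets.
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    # Guard against an improper rotation (reflection).
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    T = cq - R @ cp                              # translation from centroids
    return R, T
```

Applying a known rotation and translation to a synthetic point cloud and running `estimate_R_T` on the pair recovers the original R and T.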
Key Concepts
Machine vision, Human Robot Interaction
Key Results
Performance can be studied either via the difference between points and their images or via the difference between measured and real object sizes. The system also works in insertion tasks with an error of 0.6 mm; should a more precise system be needed, force sensing would be required. Operators can use this methodology to observe the real environment from any viewpoint on the virtual reality system.

Saturday, October 6, 2012

Toward a Framework for Human-Robot Interaction


Sebastian Thrun
Human-Computer Interaction, No.19, 2004

Summary
                  The field of robotics has undergone considerable change since it first appeared as a complete science; robots now perform many assembly and transportation tasks, often equipped with minimal sensing and computing and slaved to a repetitive task. The future will increasingly see the introduction of service robots, mainly thanks to reductions in the cost of many required technologies and to increases in autonomy capabilities.
Robotics is a broad discipline and definitions of this science are therefore not unique; a general definition was given by the author in a previous paper (Thrun, 2002): a system of robotic sensors, actuators and algorithms. The United Nations has categorized robotics into three fields: industrial robotics, professional service robotics and personal service robotics.
Industrial robotics is the earliest commercial success; an industrial robot operates by manipulating its physical environment, is computer controlled and operates in industrial settings (for example on conveyor belts).
Industrial robotics started in the 1960s with the first commercial manipulator, the Unimate; later, in the 1970s, Nissan Corporation automated an entire assembly line with robots, starting a real “robotic revolution”. Today the ratio of human workers to robots is approximately 10:1 (the automotive industry is definitely the one with the biggest application of robotics). However, industrial robots are not intended to operate directly with humans.
Professional service robots are the younger kind of robot and are designed to assist people, perhaps in hardly accessible environments or in tasks where the required speed and precision cannot be met by human operators (as is becoming more common in surgery).
Personal service robots possess the highest expected growth rate today; they are designed to assist people in domestic tasks and recreational activities, and often these robots are humanoids.
In all three of these fields the two main drivers are cost and safety; these appear to be the challenges of robotics.
Autonomy refers to a robot’s ability to accommodate variation in the environment and is a very important factor in human-robot interaction. Industrial robots are not considered highly autonomous: they are typically employed for repetitive tasks and can therefore be programmed. A different scenario is that of service robots, where the complexity of the environment requires them to be designed to be very autonomous, since they have to predict environmental uncertainties, detect and accommodate people, and so on.
Of course there is also a cost issue, which requires personal robots to be low-cost; this makes them the most challenging, since they need high levels of autonomy at low cost. In human-robot interaction the interface mechanisms become extremely important. Industrial robots are often limited here: they are hard-programmed, and programming languages and simulation software act as intermediaries between the robot and the human. Service robots naturally require richer interfaces, and a distinction is made between indirect and direct interaction methods. Indirect interaction consists of a person operating a robot through commands, while direct interaction consists of a robot taking decisions on its own, working in parallel with a human.
Different technologies exist to achieve different methods of communication; an interesting example is Robonaut (Ambrose et al., 2001), a master-slave system demonstrating how a robot can cooperate with astronauts on a space station. Speech synthesizers and screens are also interesting direct interaction methods.
Humanoid form and appearance, together with the social aspects of service robots, are also important topics that researchers are investigating today for the future of robotics.
Key Concepts
Human Robot Interaction, Human Robot Cooperation

Friday, October 5, 2012

Trajectory Prediction for Moving Objects Using Artificial Neural Networks


Pierre Payeur, Hoang Le-Huy, Clement M. Gosselin
IEEE Transactions on Industrial Electronics, Vol. 42, No. 2, 1995
Summary
                  Prediction of object trajectories and positions is very important for industrial robots and servo systems, where this kind of information can be used for control, capture and other observation issues.
Parts’ trajectories are often unknown, and the object-catching problem arises, for example, in a typical loading/unloading task where a robot grasps an object (for instance a box) from a slide or a conveyor.
In such a condition of course accuracy is strictly important, both for achieving the goal and for safety purposes.
An optimal off-line method is path planning based on cubic polynomials with dynamics added (Sahar and Hollerbach, 1986), but this method involves a lot of computation and, being done off-line, is not flexible.
The authors propose the use of Artificial Neural Networks (ANN), which can be computed faster and most of all can learn the system, being adaptable and flexible. The problem can be defined as the prediction of the trajectory of a moving object in real time, with minimum error, with no collision on an arbitrary path.
·       Model
Time t0 is defined as the time at which the manipulator starts moving; the prediction is done over the period T. The trajectory model is defined by a cubic function: χ(t) = (1/6)α0·t³ + (1/2)β0·t² + γ0·t + χ0, where χ represents the position, orientation, velocity or acceleration according to need.
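The cubic model and its derivatives can be sketched directly; differentiating once and twice gives the velocity and acceleration at any time (the coefficient values used below are arbitrary examples, not from the paper):

```python
# Cubic trajectory model: chi(t) = (1/6)a0 t^3 + (1/2)b0 t^2 + g0 t + chi0.
def chi(t, a0, b0, g0, chi0):
    return a0 * t**3 / 6 + b0 * t**2 / 2 + g0 * t + chi0

# First derivative: velocity along the trajectory.
def chi_dot(t, a0, b0, g0):
    return a0 * t**2 / 2 + b0 * t + g0

# Second derivative: acceleration along the trajectory.
def chi_ddot(t, a0, b0):
    return a0 * t + b0
```

At t = 0 the three functions reduce to χ0, γ0 and β0 respectively, which is how the coefficients relate to the initial position, velocity and acceleration.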
Neural networks are trained to determine the values of the coefficients α0, β0, γ0, which must be recalculated at each change of trajectory. Pre- and post-conditioning avoid the need for a large network, so that in the end there are 3 inputs, two hidden layers of 20 cells each and one output cell.
The global predictive structure multiplies this basic topology by the number of degrees of freedom. The network is modeled so that it obtains information about the change in position rather than the exact position; this reduces calculations, but pre- and post-conditioning are then required to ensure the start and continuation of the calculations. Sampling with too short a period T should also be avoided. The preconditioning module computes differences between the known positions and feeds the network a pseudo-displacement value; the network processes it and generates the anticipated pseudo-variation of the position or orientation, and the post-conditioning module then finds velocity and acceleration through kinematics.
The data fed to the neural nets is normalized so that the maximum value stays below the activation function maxima (in this case a bipolar sigmoidal activation function ranging from -1.0 to 1.0). Between two successive samples there is a maximum coordinate variation ∆pmax, while the resolution limit is the minimum detectable variation ∆presolution, so that the number of possible steps to consider is STsingle = 2∆pmax/∆presolution + 1, which globally must be raised to the third power since there are three inputs. The size of the data set can be reduced since, with a constant sampling rate, two successive position or orientation variations are always of similar amplitude, so a maximum difference can be assumed; tables (Appendix C) help with the calculations shown at page 151. Once the input triplets are processed with the cubic model already introduced (to ensure that the output data presented to the network are exact), they are normalized between ±0.8 to avoid the extremes; at that point the back-propagation algorithm and a pseudo-random presentation order are used for training (triplets are not presented in a fixed order, but the entire set must be used within one epoch).
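The input-space sizing and the ±0.8 normalization described above can be sketched as follows (the numeric values are arbitrary examples, not the paper’s):

```python
def steps_single(dp_max, dp_resolution):
    # Number of distinguishable displacement steps between two samples:
    # ST_single = 2 * dp_max / dp_resolution + 1.
    return int(round(2 * dp_max / dp_resolution)) + 1

def normalize(dp, dp_max, bound=0.8):
    # Map a pseudo-displacement into [-bound, +bound] so it stays away
    # from the extremes of the bipolar sigmoid activation.
    return bound * dp / dp_max

st = steps_single(dp_max=10.0, dp_resolution=0.1)  # 201 steps per input
total = st ** 3                                     # cubed for three inputs
```

The cubing of ST_single is what makes exhaustive training sets impractical without the amplitude-similarity reduction mentioned above.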
Key Concepts
Artificial Neural Network, Path Planning, Trajectory Prediction
Conclusion
The method appears to be optimal, working as well as the polynomial trajectory model while being more flexible. Anticipated values are also more reliable, and variances are more similar between axes, so that errors can be reduced and controlled.

Tuesday, October 2, 2012

Human-Centered Robot Navigation - Towards a Harmoniously Human-Robot Coexisting Environment


Chi-Pang Lam, Chen-Tun Chou, Kuo
IEEE Transactions on Robotics, Vol. 27, No. 1, February 2011
Summary
                  Harmonious coexistence of robots with humans is of interest for improving HRI, yet the navigation of robots in human environments is rarely studied. Two main issues have to be addressed: the robot should be able to move autonomously and safely in a human-robot environment in order to complete a specific task, and the robot should behave in a both human- and robot-friendly manner during operations; this last issue is the one covered in the authors’ research. The literature offers different solutions, such as traffic rules (which would require regulating humans’ behavior, which of course is not possible), artificial potential fields (which risk getting stuck in local minima), nearness diagram (ND) navigation (a reactive navigation approach based on laser scanners), people tracking and so on.
The authors propose human-centered sensitive navigation (HCSN), imposing 6 harmonious rules for human-robot coexistence in the same environment: 1) Collision-free rule; 2) Interference-free rule; 3) Waiting rule; 4) Priority rule; 5) Intrusion rule; 6) Human rule (humans have the highest priority). Robots may then have their own internal rules. Six sensitive fields also have to be defined. The Human Sensitive Field surrounds the human and assumes an egg shape, being longer ahead of the human (elliptic) and shorter behind (circular) since it must consider human motion; this field has the highest priority. The Stationary Robot Working Field surrounds a robot working (or just waiting) in a fixed location and has the shape of a disc; it is second in the priority rule. The Movable Robot Working Field surrounds a robot working while moving; it is third in the priority rule and has a donut shape. The Robot Normal Field surrounds an idle or non-working robot; it presents a disc shape and has the lowest priority.
The Human-Robot Stationary Joint Field represents a human and a robot working together in a stationary position; it assumes a circular shape centered midway between the human and the robot. The Human-Robot Moving Joint Field also assumes an egg shape (with the longest axis facing the operator). The two combination fields assume the same priority as the Human Sensitive Field. For simplicity the 6 fields are denominated, in the same order: H1, R1, R2, R3, HR1, HR2. The HCSN design consists of a VLH (Virtual Laser Histogram) which collects all the virtual distance data within 5 meters; this information is used for sensitive-field sensing, while situation identification, together with the sensed sensitive-field information, feeds the motion planner, which in turn feeds the controller, providing the final output.
Unfortunately humans are not easy to localize, and paradoxically the information available about them is much poorer than that about robots. The motion planner uses ND navigation to find a suitable free walking area, while the potential field approach is later used to establish different potential fields near the chosen walking area. Different cases are presented; it must be noted that in the priority rule the highest priority may not necessarily be assigned to the closest obstacle: a risk factor is applied in order to anticipate future closest obstacles and ensure smooth motion.
The priority rule is then used in the calculation of the potential field, whose negative gradient is taken to define the attractive force. Similar computations yield the repulsive force and the repulsive force associated with the soft field (the flexible field regarding R2 and R3). The overall force is the sum of the three and determines the sub-goal position, subject to a look-ahead distance D0 which is kept smaller inside the sensitive field, while the gain is kept smaller for a more effective look-ahead distance.
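The attractive/repulsive force combination can be sketched with a standard potential-field formulation (the gains and influence range below are made-up illustrations, not the paper’s values; the paper additionally weights fields by priority and adds a soft-field term):

```python
import numpy as np

def attractive_force(pos, goal, k_att=1.0):
    # Negative gradient of the quadratic potential 0.5*k_att*|pos-goal|^2.
    return -k_att * (pos - goal)

def repulsive_force(pos, obstacle, k_rep=1.0, d0=2.0):
    # Negative gradient of 0.5*k_rep*(1/d - 1/d0)^2, active only
    # inside the obstacle's influence range d0.
    diff = pos - obstacle
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(diff)
    return k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

def total_force(pos, goal, obstacles):
    # Overall force = attractive term + sum of repulsive terms.
    f = attractive_force(pos, goal)
    for ob in obstacles:
        f = f + repulsive_force(pos, ob)
    return f
```

The sub-goal is then placed a look-ahead distance D0 along the direction of the total force; an obstacle farther than d0 contributes no repulsion, which is why only nearby fields shape the motion.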
Key Concepts
Navigation, Human-Robot Cooperation, Human-Robot Interaction
Key Results
The method, although complex, can be performed smoothly in real-time environments.

Real-Time Safety for Human – Robot Interaction


Dana Kulic, Elisabeth A. Croft
International Conference on Advanced Robotics (ICAR), 2005
Summary
                  Typical industrial robotic applications see robots segregated in order to ensure safety; this is not acceptable in increasingly flexible environments which require interaction with operators while still ensuring safe conditions. A complete system ensuring safety must incorporate the following features: safe mechanical design, human-friendly interfaces, and safe planning and control strategies; the authors focus on the third aspect. The literature offers mainly three methods to mitigate risk: redesigning to eliminate the hazard, controlling the hazard through electronic or physical safeguards, and warning the operator during operation or by training. The last has been proven ineffective, especially in the presence of untrained users, while the first addresses reducing the consequences of a possible impact rather than avoiding the impact itself.
The key issue is then to understand when safety is threatened. This can be done using tactile and force sensors to identify unplanned contact, or by considering every object in the environment as an obstacle and using a real-time obstacle avoidance strategy. The proposed algorithm introduces a danger index: if the level of danger is low the plan can proceed, otherwise a corrective decision is taken. The safety system thus performs in a one-step-ahead fashion, moving towards the location with the lowest danger.
The danger index is the product of a distance factor, a velocity factor and an inertia factor; using a product ensures that no evasive action is taken when any single factor indicates safe conditions, so the robot does not evade unnecessarily.
The distance factor is obtained considering a scaling factor kD and is built so that when the distance between the critical point and the object is large (greater than Dmax) the factor goes to 0. The velocity factor is obtained in a similar fashion. The inertia factor is the ratio between the effective inertia at the critical point and the maximum safe value of the robot inertia.
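The danger index can be sketched as follows. The exact scaling functions here are simplified assumptions, not the paper’s formulas; what is preserved is the key structural property: the index is a product of the three factors, so it vanishes whenever any single factor indicates safe conditions.

```python
def distance_factor(d, k_d=1.0, d_max=1.0):
    # Goes to 0 when the critical point is farther than d_max.
    if d <= 0.0:
        return float('inf')
    return k_d * max(1.0 / d - 1.0 / d_max, 0.0) ** 2

def velocity_factor(v, k_v=1.0, v_min=0.0):
    # 0 when the approach velocity is below v_min, growing above it.
    return k_v * max(v - v_min, 0.0) ** 2

def inertia_factor(inertia, inertia_max_safe=1.0):
    # Ratio of effective inertia at the critical point to the safe maximum.
    return inertia / inertia_max_safe

def danger_index(d, v, inertia):
    # Product form: no evasive action unless ALL factors signal danger.
    return distance_factor(d) * velocity_factor(v) * inertia_factor(inertia)
```

For example, with these illustrative defaults a critical point two units away (beyond Dmax) yields a zero index even at nonzero velocity, while a close, approaching point yields a positive index.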
·       Model
A one-dimensional example shows the robot’s behavior when there are no obstacles, one obstacle, and two obstacles on the robot’s two sides. In the two-obstacle case the robot is capable of avoiding the obstacles and reaches a stable point with no oscillation.
The full algorithm (considering the whole robot arm) results in a local sequential planner; backtracking cannot be considered since this is real-time motion. In this case the damping force is obtained as Fd = -Bq’, where q’ is the measured joint velocity. To calculate distances a set-of-spheres representation is used, a conservative method that ensures safety.
Parameter selection is based on the physical characteristics of the robotic arm: Dmin can be estimated from the maximum robot deceleration and velocity, and Vmax can be estimated from injury indices such as the Gadd Severity Index or the Head Injury Criterion.
Dmax is based on the physical size of the robot, and Vmin must be smaller than 0 in order to ensure stability in the system.
Ranges for the distance and velocity boundaries should be large, so that there is time to react before the danger is imminent; for the velocities this also reduces the effective damping.
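A worked example of the Dmin estimate mentioned above: if the robot can brake at a maximum deceleration a_max from its maximum velocity v_max, the stopping distance gives a lower bound for Dmin (the numeric values below are illustrative, not from the paper):

```python
def d_min_estimate(v_max, a_max):
    # Stopping distance under constant deceleration:
    # v_max^2 / (2 * a_max), a lower bound for D_min.
    return v_max ** 2 / (2.0 * a_max)

d_min = d_min_estimate(v_max=1.0, a_max=2.0)  # 0.25 m for these values
```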
Key Concepts
Safety, danger index
Key Results
Simulation confirmed the expected behavior, showing no oscillation in the case of two obstacles in the robot’s proximity. The set of spheres makes the experimental system conservative, so that highly accurate position sensing is not required. The method can be used for redundant and non-redundant manipulators, generating robot motion by minimizing the danger index.