Robotic Assistance to Disabled and Elderly People

P. Hoppenot, E. Colle
IMACS'2000, Lausanne, Session 170, abstract p. 261, 21-25 August 2000.

Abstract: Assistance to disabled people is developing thanks to new technologies, and mobile robotics is one of them. In collaboration with the AFM (French Association against Myopathies), we are developing a project based on a manipulator arm mounted on a mobile base. The present work deals with the control of the robot's displacement. Low-cost constraints impose the choice of sensors with modest perception capabilities: ultrasonic sensors, odometry and a low-cost camera for information feedback and goal tracking. The approach is developed in two stages. The first consists in giving the robot the greatest possible autonomy. The second is the study of Man-Machine Co-operation (MMC).

For the first stage, three action means have been studied. Firstly, the robot has planning capabilities. In a partially known environment (the architect's plan of the flat), it can compute a path from its current position to the goal fixed by the human operator. Two kinds of automatic planning are possible, depending on how the person points out the goal. The first consists in finding a path to a point designated directly on the plan of the environment. A visibility graph and the A* algorithm are used. The cost function can be the shortest distance, but it is also possible to penalise parts of the flat where navigation or localisation is known to be difficult. The second way of designating a goal is through the camera: it can track an object chosen by the person and keep it in the middle of the image by moving in the pan and tilt directions.
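To make the planning step concrete, the following is a minimal sketch of an A* search over a visibility graph whose edge costs are Euclidean distances, optionally scaled by a penalty where navigation or localisation is known to be difficult. The node names, the toy flat layout and the penalty factor are illustrative assumptions, not the data used in the project.

```python
# A* over a small visibility graph with penalised edges (illustrative sketch).
import heapq
import math

def a_star(graph, coords, start, goal, penalty=None):
    """graph: node -> list of neighbours (visibility edges).
    coords: node -> (x, y).  penalty: (node, node) -> cost multiplier."""
    penalty = penalty or {}

    def h(n):  # admissible heuristic: straight-line distance to the goal
        return math.dist(coords[n], coords[goal])

    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nb in graph[node]:
            step = math.dist(coords[node], coords[nb])
            step *= penalty.get((node, nb), 1.0)   # penalise risky areas
            g_nb = g + step
            if g_nb < best.get(nb, float("inf")):
                best[nb] = g_nb
                heapq.heappush(open_set, (g_nb + h(nb), g_nb, nb, path + [nb]))
    return None, float("inf")

# Toy flat: the edge through the corridor is penalised because
# localisation is assumed to be difficult there.
coords = {"start": (0, 0), "corridor": (2, 0), "door": (2, 2), "goal": (4, 2)}
graph = {"start": ["corridor"], "corridor": ["start", "door"],
         "door": ["corridor", "goal"], "goal": ["door"]}
path, cost = a_star(graph, coords, "start", "goal",
                    penalty={("start", "corridor"): 2.0})
print(path, round(cost, 2))
```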

The robot's second action mean is its navigation capability. The path planned above has to be followed by the robot to reach the goal. Two behaviours are used. The first is goal attraction, meaning that the robot moves towards the goal. The speed of the robot depends on the inverse of the distance between the robot and the goal and on the robot's orientation. This behaviour is sufficient on its own if the environment is completely known. When there are unknown obstacles on the planned path, obstacle avoidance must be performed. Ultrasonic sensors make it possible to detect the presence of obstacles, and this information prevents the robot from hitting an object. A fuzzy logic controller is used to perform this behaviour, because it is very close to the way a human avoids obstacles. Very simple if-then rules have been defined for this task, for example: if the distance on the right is greater than the distance on the left, then turn right. The universe of discourse is divided into five subsets, from zero to large for the distances measured by the sensors and from negative big to positive big for the speed command. The controller has been tested on two different robots, showing its insensitivity to different kinds of measurements.
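The sketch below illustrates the two behaviours: a goal-attraction term whose forward speed depends on the inverse of the distance to the goal and on the heading error, and a fuzzy obstacle-avoidance rule of the kind quoted above, fed by left and right ultrasonic readings. The membership functions, the speed shaping and the defuzzification are illustrative assumptions, not the controller actually reported.

```python
# Goal attraction plus a tiny fuzzy steering rule base (illustrative sketch).
import math

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Universe of discourse for an ultrasonic distance (m): five fuzzy subsets
# (assumed breakpoints), from "zero" to "large".
DIST_SETS = {
    "zero":   lambda d: tri(d, -0.1, 0.0, 0.3),
    "small":  lambda d: tri(d, 0.1, 0.4, 0.8),
    "medium": lambda d: tri(d, 0.5, 1.0, 1.5),
    "big":    lambda d: tri(d, 1.2, 1.8, 2.5),
    "large":  lambda d: tri(d, 2.0, 3.0, 4.1),
}

def turn_command(d_left, d_right):
    """Rule of the type 'if the distance on the right is greater than on
    the left, turn right'; output is a turn rate in [-1, 1]."""
    free_right = max(DIST_SETS[s](d_right) for s in ("big", "large"))
    free_left = max(DIST_SETS[s](d_left) for s in ("big", "large"))
    blocked = max(DIST_SETS[s](min(d_left, d_right)) for s in ("zero", "small"))
    num = free_right * (+1.0) + free_left * (-1.0)
    return blocked * num / (free_right + free_left + 1e-6)  # weighted average

def goal_attraction(robot_xy, goal_xy, robot_heading, v_max=0.4):
    """Forward speed shaped by the inverse of the distance to the goal
    and by the heading error (assumed shaping)."""
    dx, dy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - robot_heading
    v = v_max * max(0.0, math.cos(heading_err)) / (1.0 + dist)
    return v, heading_err

v, err = goal_attraction((0, 0), (2, 1), 0.0)
turn = turn_command(d_left=0.4, d_right=1.9)   # obstacle close on the left
print(round(v, 2), round(err, 2), round(turn, 2))
```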

The third capability of the robot is localisation. To plan a trajectory and follow it, the robot must know its position. This is performed at two levels: on-line and off-line. On-line localisation consists in computing the position of the robot during a mission; here, odometry and ultrasonic measurements are used. Odometry is a very simple system in which the initial position must be known. Its main drawback is that the localisation error increases over time, because of the incremental computation. The idea is to use ultrasonic measurements to correct these errors on-line: ultrasonic sensors give distances of only medium quality, but the error is bounded. Off-line localisation consists in finding the position of the robot when it is lost, which is useful when on-line localisation fails. Here, only ultrasonic measurements are used, and the problem is to match them against the knowledge of the environment. The position is computed in three steps. The pre-processing step merges measurements to build segments. The second step assumes that the room is rectangular: the computed segments are merged to build rectangles that are matched with the known environment. At that step, several positions of the robot are still possible. The last step chooses the best solution: a cost function first reduces the number of solutions to two (because of the symmetry of the rectangle), and the remaining ambiguity is resolved using the door as a discriminating element.
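The following sketch illustrates the on-line part: an incremental odometry update for a differential-drive base, whose error accumulates step by step, followed by a bounded correction taken from an ultrasonic range to a wall of the known plan. The wheel geometry, the fixed fusion gain and the wall model are illustrative assumptions, not the method actually implemented.

```python
# Dead-reckoning odometry with a simple ultrasonic correction (illustrative sketch).
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """Incremental update from the two wheel displacements (m)."""
    d = (d_left + d_right) / 2.0              # displacement of the centre
    d_theta = (d_right - d_left) / wheel_base
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

def correct_with_wall(y_est, ultrasonic_range, wall_y, sensor_offset=0.0):
    """The ultrasonic range to a known wall gives an absolute (if noisy)
    lateral position with bounded error; blend it with the drifting
    odometric estimate using a fixed gain (assumed fusion rule)."""
    y_meas = wall_y - (ultrasonic_range + sensor_offset)
    return 0.7 * y_est + 0.3 * y_meas

x, y, th = 0.0, 0.0, 0.0
for _ in range(100):                          # drift accumulates over time
    x, y, th = odometry_step(x, y, th, 0.051, 0.050, wheel_base=0.4)
y = correct_with_wall(y, ultrasonic_range=1.48, wall_y=1.5)
print(round(x, 2), round(y, 3), round(th, 3))
```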

Once the machine has been given the maximum autonomy, the issue is how the person uses the robot. The second stage of the assistance is therefore the study of Man-Machine Co-operation (MMC). The aim is to perform a mission (a mobile robot displacement) by combining the robot's capabilities with the person's possibilities. The main problem is then task allocation between the two intelligent entities. Each has planning, navigation and localisation abilities, and both can be used, the aim being to satisfy the person. Indeed, the disabled person does not want the robot "to do for her/him" but wants to act. The idea is thus to propose to the person, depending on the handicap, different solutions to reach the goal, from pure teleoperation to fully autonomous displacement of the robot, as sketched below. Enhanced-reality techniques are used in the Man-Machine Interface (MMI) to present feedback information to the human supervisor (ultrasonic sensor measurements overlaid on the flat plan). Video image feedback allows the person to be immersed in the robot's reality during the mission. A more specific study has been carried out on localisation error detection, which is very important for automatic planning and navigation.
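A minimal sketch of the task-allocation idea follows: the same displacement mission can be run in several modes, from pure teleoperation to fully autonomous navigation, and the person chooses the level of assistance. The mode names and the blending rule are illustrative assumptions, not the paper's actual scheme.

```python
# Choosing the level of assistance for a displacement mission (illustrative sketch).
from enum import Enum

class Mode(Enum):
    TELEOPERATION = 0   # the person drives; the robot only reports its sensors
    ASSISTED = 1        # the person drives; the robot vetoes imminent collisions
    AUTONOMOUS = 2      # the robot plans, navigates and localises on its own

def blend_command(mode, human_cmd, robot_cmd, obstacle_close):
    """Return the (forward speed, turn rate) command actually sent to the base."""
    if mode is Mode.TELEOPERATION:
        return human_cmd
    if mode is Mode.ASSISTED:
        return (0.0, human_cmd[1]) if obstacle_close else human_cmd
    return robot_cmd    # Mode.AUTONOMOUS

print(blend_command(Mode.ASSISTED, (0.3, 0.1), (0.2, 0.0), obstacle_close=True))
```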

Key words: Disabled people assistance, man-machine co-operation, mobile robotics.