Nov 10, 2008
The first learning and communicating robots, such as Sony's Aibo, interact through simple means: sensors under the legs, one on the back, another on the chest, plus a camera and a microphone that allow them to link a face and a voice to the signs of interest they receive. Humans can understand the robots by reading their expression indicators (diode colours, emitted sounds, etc.). Using networks and computer data exchange, robots can also pass their experience on to one another. Physical actions from humans towards robots produce quick and fairly reliable effects: a pat on the chest attaches a criterion of kindness to the person they see, and a tap on a sensor signals that the last action was bad (with a large margin of error, since the human does not always know which programmes are running). In this way, these machines learn and adapt their behaviour to the contexts in which they find themselves. The result is already remarkable: the same robot that follows a mother's voice in one environment will follow a cat communicating with it in another.

But the big markets watching learning and communicating robotics cannot be satisfied with such vague approximations. In the medical sector, actions must produce the same effect every time, and interpretations must be unambiguous, as close as possible to a shared language and frames of reference. A robot assisting students in dental surgery must be able to read grimaces of pain, interpret gestures, and warn of the nearness of nerves. A robot tasked with finding victims in a collapsed building must know how to communicate so that neither human nor robot has to guess what to say or do, sometimes without any contact at all. And machines from different makers operating far from human beings, on Mars for example, will have to be able to talk to each other.
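The pat-and-tap feedback described above can be sketched as a toy reward-association loop. This is only an illustration of the general idea, not Aibo's actual software; the class name, the learning rate, and the person identifiers are all assumptions made for the example.

```python
# Toy illustration (NOT Aibo's real implementation) of learning from
# simple physical feedback: a pat raises an affinity score for the
# person currently in view, a tap lowers it, and the robot follows
# whoever has the highest learned affinity.

class FeedbackLearner:
    """Hypothetical reward-association learner; all names are assumptions."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.scores = {}  # perceived face/voice id -> learned affinity

    def pat(self, person_id):
        # A pat on the chest reinforces kindness towards the person in view,
        # nudging the score towards +1.
        current = self.scores.get(person_id, 0.0)
        self.scores[person_id] = current + self.learning_rate * (1.0 - current)

    def tap(self, person_id):
        # A tap on a sensor signals the last interaction was bad,
        # nudging the score towards -1.
        current = self.scores.get(person_id, 0.0)
        self.scores[person_id] = current - self.learning_rate * (1.0 + current)

    def preferred(self):
        # The robot follows whoever has the highest learned affinity.
        return max(self.scores, key=self.scores.get) if self.scores else None


robot = FeedbackLearner()
robot.pat("mother")
robot.pat("mother")
robot.tap("stranger")
print(robot.preferred())  # the "mother" association now dominates
```

The large margin of error mentioned above shows up naturally here: the feedback only adjusts whatever association the robot happens to have active, so a mistimed pat or tap reinforces the wrong thing.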
These topics are the main focus of the research of Professor Minoru Asada of Osaka University. He is convinced that robots must be as close as possible to human beings, so that interaction is as transparent as possible and learning can take place by imitation. His studies span, among other things, domains as varied as the expression of feelings, sympathy, and the textures and reactions of skin.
Posted by Bruno Rives at 01:17