Robots are increasingly entering our daily lives. They can be incredibly useful devices (bionic limbs, robotic lawnmowers, or couriers delivering food to people in quarantine), or simply entertaining (robot dogs, dancing toys and acrobatic drones). Perhaps the future possibilities of robots are limited only by our imagination.
But what happens when robots don’t do what we want them to do, or do it in a way that causes harm? For example, what if a bionic arm is involved in an accident? This article looks at what makes such incidents distinctive and how they should be investigated.
Robot accidents are worrying for two reasons.
First, the growing number of robots naturally means a growing number of incidents involving them.
Second, we are getting better at building more complex robots, and the more complex the design, the harder it is to work out what could have gone wrong.
Most robots run on some form of artificial intelligence (AI). AI systems are capable of making human-like decisions (though those decisions may be objectively good or bad) and can perform a variety of tasks, from identifying an object to interpreting speech.
An AI is trained to make choices for the robot based on information from large datasets. It is then tested for accuracy (that is, how well it does what we want it to do), and only then is it given real-world problems.
AI can be designed in different ways. Imagine a robot vacuum cleaner. It can be designed so that whenever it bumps into something, it changes course and moves off in a random direction. Or another approach can be used: let the vacuum cleaner survey its surroundings, find obstacles, cover every area of the floor, and return to its charging base.
Where the first cleaner simply reacts to input from its bump sensors, the second records that input in an internal mapping system. In both cases, the AI receives information and makes a decision based on it.
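The two navigation strategies can be contrasted in a minimal sketch. Everything here (the grid world, function and parameter names) is a hypothetical illustration, not the control software of any real vacuum cleaner:

```python
import random

# Four headings on a grid: north, south, east, west.
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def random_bounce_policy(position, heading, is_blocked):
    """Strategy 1: on collision, pick a new random direction."""
    if is_blocked(position, heading):
        heading = random.choice(DIRECTIONS)
    return heading

def mapping_policy(position, visited, is_blocked):
    """Strategy 2: keep an internal map and prefer uncovered, unblocked cells."""
    visited.add(position)  # the internal map: cells already cleaned
    for heading in DIRECTIONS:
        nxt = (position[0] + heading[0], position[1] + heading[1])
        if nxt not in visited and not is_blocked(position, heading):
            return heading  # move toward floor not yet covered
    return random.choice(DIRECTIONS)  # everything nearby is covered; wander
```

The key difference for an investigator: the first policy leaves no internal state to inspect, while the second carries a map (`visited`) that records what the robot believed about its surroundings.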
The more complex the tasks a robot can perform, the more types of information it has to interpret. It may also have to weigh several sources of the same type of data, such as, in the case of auditory data, a live voice, a radio, and the sound of the wind.
As robots become more complex and act on a wider variety of information, determining exactly which data a robot was acting on becomes ever more important, especially when harm has been done.
As with any product, things can go wrong with robots. Sometimes the problem is internal: the robot fails to recognize a voice command, say. Sometimes it is external: the robot’s sensor is damaged. And sometimes it is both, as when a robot that was never designed to work on carpet “trips” over it. Robot accident investigations must consider all possible causes.
While it is inconvenient when a robot is damaged by something going wrong, we are far more concerned when a robot causes harm to a human, or fails to prevent it. Imagine a bionic hand failing to grip a hot drink and spilling it over its wearer, or a caregiver robot failing to raise the alarm when its frail user falls.
Why is investigating a robot accident different from investigating a human one? Robots have no motives; instead, we want to know why a robot made the decision it did, given the particular set of inputs it had.
In the bionic arm example, was there a miscommunication between the user and the arm? Did the robot confuse several signals? Did the arm jam? And when the person fell in front of the caregiver robot, did the machine fail to “hear” the call for help over a noisy fan? Or did it fail to recognize the user’s speech?
A black box for the robot.
Robot accident investigation has one key advantage over human accident investigation: a witness can be built into the robot itself. Commercial aircraft carry black boxes for exactly this purpose: built to withstand a crash, they provide data on why it happened. This information is incredibly valuable not only for understanding incidents but also for preventing them.
RoboTIPS, a project focused on responsible innovation for social robots (those that interact with people), pursues a similar goal. Its result is an “ethical black box” that records the robot’s inputs and the actions it takes in response.
The device is tailored to the specific type of robot it is installed in and can record all the information the robot acts on, whether voice or visual data. It is being tested on a variety of robots, in both laboratory and simulated-accident conditions. The aim is for the ethical black box to become standard for robots of all makes and applications.
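Conceptually, such a recorder can be sketched as an append-only log that pairs each set of sensor inputs with the action the robot chose. The sketch below is purely illustrative: the real RoboTIPS device is hardware, and none of these names come from the project.

```python
import json
import time

class EthicalBlackBox:
    """Minimal sketch of an append-only input/action recorder.
    (Hypothetical illustration; not the actual RoboTIPS design.)"""

    def __init__(self):
        self._log = []

    def record(self, inputs, action):
        # One timestamped entry: what the robot sensed, what it decided.
        self._log.append({"t": time.time(), "inputs": inputs, "action": action})

    def export(self):
        # After an incident, investigators read the log back as JSON.
        return json.dumps(self._log, indent=2)

# Example: a caregiver robot logging an audio event and its (in)action.
box = EthicalBlackBox()
box.record({"audio_db": 72, "speech_recognized": None}, "no_alert")
```

The append-only design matters: because entries are never overwritten, the log preserves the sequence of inputs leading up to an accident, just as a flight recorder does.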
While the data recorded by the ethical black box would still need to be interpreted after an accident, having that data at all is essential to conducting an investigation.
A detailed analysis of every incident helps ensure the same mistakes are not repeated. A tool like the ethical black box will let us not only build better robots, but also innovate responsibly.