

A robot can't kill you?

Today there are already cases of robots being involved in human deaths. The recent fatal accident at the Volkswagen plant in Baunatal, Germany, attracted particular media attention: a worker was caught by the robot's manipulator arm and pressed against a metal plate. The incident is strikingly similar to one of the first recorded fatal industrial accidents on a robotic line, which occurred 34 years ago.

Similar incidents have happened before and will happen again. Even though safety standards keep tightening, and the probability of an accident in any given human-robot interaction keeps falling, such events will occur more often, simply because of the ever-growing number of robots in factories.

That is why it is so important to cover such incidents properly, describing them with accurate data and appropriate language. Yet the incident in Baunatal is essentially being presented as a case of "a robot killing an employee." Many media outlets have reported it this way, but this framing is a delusion bordering on irresponsibility. It would be far more accurate to report it as "a worker died in an accident involving a robot."

Admittedly, a message worded this way is less attention-grabbing, but that is exactly the point. Despite science fiction, and despite what may happen in the distant future, today's robots have no real intentions, emotions, or goals. And contrary to the latest panicked statements, they are not going to acquire these capabilities in the near future.

They can "kill" only in the sense that a hurricane, a car, or a gun can kill a person. Robots cannot kill the way some animals can, let alone the way human killers do. Yet murder is probably what comes to mind for most people who encounter the headline "a robot killed an employee."

High stakes

Defending the accuracy of wording is not an academic exercise in pedantry; the stakes today are genuinely high. On the one hand, unfounded fears of robots could lead to another, unnecessary "winter" in the development of artificial intelligence if funding for research is cut. That would delay, or deny us altogether, the significant benefits robots can bring not only to industry but to society as a whole.

But even if you do not believe in the benefits of robots, you should still care about describing the problem accurately. As things stand, robots are not responsible; only humans are responsible for what robots do. However, as robots spread, they will increasingly appear to be fully autonomous agents with intentions of their own, and so it will seem that they can and should bear responsibility.

Perhaps a time will eventually come when that appearance matches reality, but it will be preceded by a long period (which has already begun) during which these outward signs are false. Even now we are trying to carve up our relationship with robots so that we are responsible for "this" while they are responsible for "that." In doing so we risk turning robots into scapegoats while relieving their developers, integrators, and users of responsibility.

Making the right robots, or making robots the right way?

It is not only reports about robots that must be worded correctly and accurately. Politicians and vendors, as well as the researchers and developers who create the robots of today and tomorrow, need to refrain from rash moves. Instead of asking "How do we make the right robots?", we should be asking "How do we make robots the right way?"

If this subtle nuance of language is taken on board, it will lead to big changes in how robots are designed. For example, to give robots moral laws, we would have to equip them with a human level of common sense to apply those laws, which would be far harder to achieve. Instead of chasing such a baffling design standard, we could strive to build machines that embody their developers' own moral values, just as we try to practice ethical design outside robotics.

In the Volkswagen accident, a company representative reportedly said that "initial findings show that human error, not problems with the robot, was to blame." Other sources reported that it was more a human error than the robot's "fault" or "responsibility." Such wording invites the conclusion that under other circumstances the robot could be the culprit.

Even if it had been a "problem with the robot", whether defective components, faulty assemblies, software bugs, or flaws in installation or operating procedures, such failures would still trace back to human error. Yes, there are industrial accidents in which no single person or group is to blame. But we should not ascribe agency to robots in order to release their creators from responsibility. Not yet.
