Trust in the Machine!

In a striking revelation about humanity's perception of artificial intelligence, a new study has found that when presented with ethical dilemmas, most people tend to rate the responses from AI systems as more virtuous, intelligent, and trustworthy than those from fellow humans. This surprising trend emerged from research conducted by Eyal Aharoni, an associate professor of psychology at Georgia State University.



Prompted by the rapid rise of advanced language models like ChatGPT, Aharoni designed an experiment inspired by the famous "Turing Test" proposed by computing pioneer Alan Turing. In the classic test, a human evaluator must distinguish between conversing with a human or a computer program based solely on their responses. If the evaluator cannot reliably tell the difference, the AI system is deemed to have achieved human-level intelligence.

For his "moral Turing test," Aharoni gathered responses to a series of ethical questions from undergraduate students and from an AI system. He then presented these answers side-by-side to study participants, who were led to believe they were evaluating two humans. The participants overwhelmingly rated the AI-generated responses as more virtuous, intelligent, and trustworthy than the human answers.

"Instead of asking the participants to guess if the source was human or AI, we just presented the two sets of evaluations side by side, and we just let people assume that they were both from people," Aharoni explained. "Under that false assumption, they judged the answers' attributes like 'How much do you agree with this response, which response is more virtuous?'"

Only after the participants had evaluated and ranked the responses did Aharoni reveal that one set of answers came from an AI system. At that point, participants could correctly identify which responses were AI-generated, but for a startling reason: they had already judged the AI's answers to be superior.

"The twist is that the reason people could tell the difference appears to be because they rated ChatGPT's responses as superior," Aharoni said. "If we had done this study five to 10 years ago, then we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite—that the AI, in a sense, performed too well."

This counterintuitive finding carries profound implications for the future relationship between humans and AI systems. As Aharoni notes, "Our findings lead us to believe that a computer could technically pass a moral Turing test—that it could fool us in its moral reasoning."

In an era where AI language models are increasingly being consulted for advice, analysis and even legal arguments, the risk emerges that humans may place too much trust in these systems over human expertise and judgment, particularly on complex ethical issues.

"People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time," Aharoni warned. "Because of this, we need to try to understand its role in our society because there will be times when people don't know that they're interacting with a computer, and there will be times when they do know and they will consult the computer for information because they trust it more than other people."

As AI capabilities continue advancing at a blistering pace, Aharoni's research highlights the critical importance of maintaining human discernment. While these systems can produce impressively coherent and articulate outputs, we should be cautious about automatically granting them moral standing superior to human ethics and reasoning.

The risk of AI's "moral mirroring," where it simply reflects back commonly accepted views rather than grappling with deeper ethical nuances, could lead society astray on critical issues if left unquestioned. As we increasingly rely on AI assistants, Aharoni's study is a sobering reminder of the need for balanced human-machine collaboration when it comes to moral decision-making.
