Humans are ready to take advantage of benevolent AI

Humans expect AI systems to be benevolent and trustworthy. At the same time, a new study reveals that people are reluctant to cooperate and compromise with machines, and will even exploit them.

Imagine yourself driving on a narrow road in the near future when another car suddenly emerges from a bend ahead. It is a self-driving car with no occupants on board. Will you push forward and assert your right of way, or give way and let it pass? At present, most of us behave kindly in such situations involving other people. Will we show the same kindness towards self-driving cars?

Using methods from behavioral game theory, an international team of researchers at LMU Munich and the University of London conducted large-scale online studies to find out whether people behave as cooperatively with artificial intelligence (AI) systems as they do with fellow humans.

Cooperation holds a society together. It often requires us to compromise with others and to accept the risk that they will let us down. Traffic is a good example: we lose a little time when we let others pass in front of us, and we are outraged when others fail to return the kindness. Will we do the same with machines?

Exploiting the machine without feeling guilty

The study, published in the journal iScience, found that people place the same level of trust in AI as they do in humans when they first encounter it.

The difference emerges afterwards: people are far less ready to reciprocate with AI, and instead exploit its benevolence for their own benefit. Returning to the traffic example, a human driver will give way to another human, but not to a self-driving car.

The study identifies this unwillingness to compromise with machines as a new challenge for the future of human-AI interaction.

Dr. Jurgis Karpus, a behavioral game theorist and philosopher at LMU Munich and first author of the study, explains: “We modeled different types of social encounters and found a consistent pattern. People expected artificial agents to be as cooperative as fellow humans, but they did not return the agents’ benevolence as much and exploited the AI more than they exploited humans.”

Drawing on perspectives from game theory, cognitive science, and philosophy, the researchers found that “algorithm exploitation” is a robust phenomenon. They replicated the finding across nine experiments with nearly 2,000 human participants.

Each experiment examined a different type of social interaction and allowed participants to decide whether to compromise and cooperate or to act selfishly. The participants’ expectations of the other player were also measured. In the well-known Prisoner’s Dilemma, people must trust that the other party will not let them down. They accepted this risk with humans and AI alike, but they betrayed the AI’s trust far more often in order to earn more money.
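To make the incentive structure concrete, here is a minimal sketch of a one-shot Prisoner’s Dilemma in Python. The payoff numbers are standard textbook values chosen for illustration, not the stakes used in the study; they show why defecting against a partner who is expected to cooperate is individually tempting.

```python
# Minimal one-shot Prisoner's Dilemma sketch (illustrative payoffs only;
# these are standard textbook values, not the ones used in the study).

PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: both do well
    ("cooperate", "defect"):    (0, 5),  # I am exploited: worst for me
    ("defect",    "cooperate"): (5, 0),  # I exploit a cooperator: best for me
    ("defect",    "defect"):    (1, 1),  # mutual defection: both do poorly
}

def my_payoff(my_move: str, their_move: str) -> int:
    """Return my payoff for a single round."""
    return PAYOFFS[(my_move, their_move)][0]

# If I expect my partner (human or AI) to cooperate, defecting pays more:
print(my_payoff("defect", "cooperate"))     # 5
print(my_payoff("cooperate", "cooperate"))  # 3
```

In these terms, the study’s finding is that participants placed the same initial bet on cooperation whether the partner was human or AI, but chose to defect far more often once they believed the AI would cooperate regardless.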

“Cooperation is sustained by a mutual bet: I trust you will be kind to me, and you trust I will be kind to you. The biggest worry in our field is that people will not trust machines. But we show that they do!” says Professor Bahrami, a social neuroscientist at LMU and one of the senior researchers on the study. “They are fine with letting the machine down, though, and that is the big difference. People don’t feel very guilty when they do that,” he adds.

Benevolent AI can backfire

Biased and unethical AI has made plenty of headlines, from the 2020 exam-grading fiasco in the UK to biased algorithms in judicial systems, but this new research raises a different concern. Industry and legislators strive to ensure that artificial intelligence is benevolent, yet benevolence can backfire.

If people believe that an AI is programmed to be benevolent towards them, they will be less inclined to cooperate with it. Some accidents involving self-driving cars may already offer real-life examples: drivers recognize an autonomous vehicle on the road and expect it to give way, while the self-driving vehicle expects the usual compromises between drivers to hold.

“Algorithm exploitation has further consequences down the road. If humans are reluctant to let a polite self-driving car join from a side road, should the self-driving car be less polite and more aggressive in order to be useful?” asks Jurgis Karpus.

“Benevolent and trustworthy AI is a buzzword that excites everyone, but fixing the AI is not the whole story. If we realize that the robot in front of us will be cooperative no matter what, we will use it for our own selfish interest,” says Professor Ophelia Deroy, a philosopher and senior author of the study, who also works with the Peace Research Institute Oslo in Norway on the ethical implications of integrating autonomous robot soldiers alongside human soldiers. “Compromises are the oil that makes society work. For each of us, it looks like only a small act of self-interest; for society as a whole, it can have much bigger repercussions. If no one lets self-driving cars join the traffic, they will create their own traffic jams on the side, and will not make transport easier.”

Source: https://www.sciencedaily.com/releases/2021/06/210610135534.htm