People couldn’t turn off this robot after it begged ‘No!’

A group of scientists asks participants in a study to perform a simple task. Switch off Nao, the cute, human-like robot in front of you. Just turn it off. Easy enough.

Until the robot starts begging. “No! Please do not turn me off!” It confesses that it’s scared of the dark. And roughly a third of the humans confronted with that reaction simply refuse to turn it off.

The study, published in the journal PLOS One, involved 89 volunteers who completed tasks with help from Nao. They were led to believe the sessions were mainly about using a series of questions to improve Nao’s intelligence, but that was a cover story. The real purpose is right there in the study’s title: “Do a robot’s social skills and its objection discourage interactants from switching the robot off?”

Turns out, they kind of do.

Forty-three of the study participants were confronted with Nao begging not to be turned off. The robot’s plea convinced 13 of them to leave it on. The other 30 did switch Nao off, but took about twice as long to do so as participants who never heard the objection and simply turned it off.

As the study’s abstract explains, “People were given the choice to switch off a robot with which they had just interacted. The style of the interaction was either social (mimicking human behavior) or functional (displaying machinelike behavior). Additionally, the robot either voiced an objection against being switched off or it remained silent.

“Results show that participants rather let the robot stay switched on when the robot objected. After the functional interaction, people evaluated the robot as less likable, which in turn led to a reduced stress experience after the switching off situation. Furthermore, individuals hesitated longest when they had experienced a functional interaction in combination with an objecting robot. This unexpected result might be due to the fact that the impression people had formed based on the task-focused behavior of the robot conflicted with the emotional nature of the objection.”

In other words, robots that are made to look and act more human tend to draw from us, if not a fully human response, then at least one that treats them as something a little more than a machine.

Some of the responses from people who were moved by Nao’s reaction and didn’t turn it off included “I felt sorry for him” and “Because Nao said he does not want to be switched off.”

A key point: Aike Horstmann, a student at the University of Duisburg-Essen who led the study, told The Verge that the takeaway is not that we’re all easily susceptible to emotional manipulation by machines. Rather, robots will increasingly be a ubiquitous part of our world, and this kind of emotional blind spot is something we simply need to be aware of.

“I hear this worry a lot,” Horstmann told The Verge. “But I think it’s just something we have to get used to. The media equation theory suggests we react to [robots] socially because for hundreds of thousands of years, we were the only social beings on the planet. Now we’re not and we have to adapt to it. It’s an unconscious reaction, but it can change.”
