
Time to Die: Could You Kill a Robot Begging for Its Life?

By Wesley Fenlon

A study shows that people treat robots much like other humans when they display intelligence and personality.

When Sarah Connor growls out "You're terminated" and crushes the relentless machine in The Terminator, we all cheer. But when the Terminator gives that thumbs up at the end of Judgment Day, after bonding with John Connor and saving his life, we shed a tear. Even when they're not scary or badass or sympathetic movie characters, we for some reason care about robots. But it's hard to say how exactly human-bot relationships compare to relationships between humans, which is why robotics professor Christoph Bartneck designed an experiment to test how we interact with "people" made from metal and plastic and circuitry. Turns out, watching a robot beg for its life is seriously disturbing.

Bartneck's study built on the moral quandary of HAL from 2001: A Space Odyssey. "By pulling out HAL’s memory modules the computer slowly loses its intelligence and consciousness. Its speech becomes slow and its arguments approach the style of children. At the end, HAL sings Harry Dacre’s “Daisy Bell” song with decreasing tempo, signifying its imminent death," Bartneck writes. "The question if Bowman committed murder, although in self-defense, or simply conducted a necessary maintenance act, is an essential topic in the field of robot ethics...Various factors might influence the decision on switching off a robot. The perception of life largely depends on the observation of intelligent behavior."

The study grapples with the questions sci-fi has been asking for decades: Are robots alive? What constitutes life? But it's not really trying to answer those questions. Instead, it builds on previous research that was originally conducted with computers, which was, itself, based on a simple concept in human relationships: reciprocity.

Bartneck writes: "Nass and Reeves showed that computers are treated as social actors and that the rules of social conduct between humans also apply to some degree to human machine interaction. In particular they showed that the social rule of Manus Manum Lavet (“One hand washes the other”, Seneca) applies. A computer that worked hard for the user was helped more in return compared to a computer that worked less hard. It can be argued that a person’s agreeableness affects its compliance with this social rule. A more agreeable person is likely to comply more with it compared to a disagreeable person. However, it is not clear if the agreeableness of the artificial entity also has consequences for its animacy. Being perceived and treated as a social actor might lead to an increased perception of animacy. The second research question of this study is if users are more hesitant to switch off a robot that is agreeable compared to a robot that is less agreeable."

The study he refers to, carried out by Stanford professor Clifford Nass in 1996, found that participants would help a computer that had been feeding them useful answers far more than a computer programmed to be all but worthless. More interestingly, they'd actually help a computer they hadn't interacted with more than the worthless computer. It's the scientific version of a truth we know all too well: People get pissed at machines.

But what happens when that machine has human characteristics?

The iCat robots in Bartneck's study, like Nass' computers, were split into helpful and unhelpful groups, and were further divided into agreeable and non-agreeable categories: the agreeable robots were warm and friendly in conversation, the non-agreeable ones curt. They also had faces, with 13 servos moving their eyes and mouths. At the end of the experiment, participants were asked to shut down the robots, wiping their memories; as the participants turned a dial to shut off the robots, the robots' speech slowed down and they begged not to be turned off.


Every participant turned the robot off, but the study confirmed the theory of reciprocity: "The robots [perceived] intelligence had a strong effect on the users’ hesitation to switch it off, in particular if the robot acted agreeable. Participants hesitated almost three times as long to switch off an intelligent and agreeable robot (34.5 seconds) compared to an unintelligent and non agreeable robot (11.8 seconds). This does confirm the Media Equation’s prediction that the social rule of Manus Manum Lavet (One hand washes the other) does not only apply to computers, but also to robots. However, our results do not only confirm this rule, but also it suggests that intelligent robots are perceived to be more alive. The Manus Manum Lavet rule can only apply if the switching off is perceived as having negative consequences for the robot. Switching off a robot can only be considered a negative event if the robot is to some degree alive."

Watching the robot's face shut down and hearing its voice slow and slur is disturbing enough, but it's easy to imagine a study designed around a more complex bot being downright horrifying. Don't do it, Japan.