Three Laws Safe: The Morality of Autonomous Cars and Killer Robots

By Wesley Fenlon

As researchers study the risks of autonomous killer robots, we wonder how feasible the self-aware artificial intelligence of science fiction truly is.

It was little more than a popular Internet joke, and then the University of Cambridge got involved. Researchers at Cambridge founded the Centre for the Study of Existential Risk to weigh the dangers of biotechnology, artificial life, nanotechnology, and climate change. That includes a killer robot uprising, the possibility of which the Centre says would be dangerous to ignore. The Internet's jokes about Skynet have at last found some ground in real scientific research.

But what kind of robots should we be worried about, anyway? Modern military technology isn't exactly one minor breakthrough away from an all-encompassing security network like Skynet or nigh-invincible killing machines like the Terminator. Today's artificial intelligence barely qualifies as intelligence at all. Robots are not self-aware. They do not learn, adapt, or remember--at least not like humans or the robots of science fiction.

Robotics hardware still has a long way to go before it can even produce humanoid bots that stand, walk, and run without falling over on tricky terrain. Software will be the far greater challenge. Sci-fi-like artificial intelligence conveniently glosses over the immense difficulty of programming a thinking, self-aware brain.

Even so, Cambridge isn't the only group worried about a robotic rampage. Human Rights Watch has published a report outlining concerns about autonomous killer robots. While we don't have fully autonomous killer robots yet, the report warns against governments adopting autonomous bots, "which would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians." Perhaps more importantly, autonomous bots would also make it more difficult to pin responsibility on anyone for military actions.

How much responsibility does a human commander hold when a fully autonomous robot kills a civilian? How much easier will it be for dictators to conduct genocide with bots that have no emotion? Human Rights Watch believes that it will be much harder to prevent the use of autonomous bots once they've been adopted. They're probably right. The Department of Defense recently established guidelines intended to "minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements." Those are certainly valuable guidelines, but they also foretell an increased interest in autonomous military robots.

And despite Human Rights Watch's concerns, there's a strong appeal to autonomous warfare.

Soldiers will no longer have to risk their lives on the front line. Robots can be far more efficient than humans, and people and machines working in tandem can be better still. One DARPA project paired computer-assisted vision with brain activity monitoring to help human watchers act on changes they had recognized only unconsciously--things they didn't actively register as important, but that the cameras and brain monitoring could flag all the same.

All these issues ultimately tie together into one broad concern: morality. We want machines to be able to make ethical decisions, but that goes back to the problem of artificial intelligence. Will it ever be good enough to turn an autonomous soldier into something with decision-making skills? Mercy? Self-awareness or self-preservation?

The New Yorker believes that even robots that aren't built to kill will need morality in the future. Its example: Google's self-driving cars, which will likely pave the way for highways that vaguely resemble I, Robot's neatly organized sea of autonomous cars. Why would those cars need ethics? Consider the magazine's scenario:

"Your car is speeding along a bridge at fifty miles per hour when errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call."

The New Yorker references Asimov's Three Laws of Robotics, but wisely points out that the judgments Asimov's robots make to follow those laws--like deciding what counts as "harm" to a human being--are wildly beyond the capabilities of today's AI. And is that the kind of morality we want to instill in our cars, anyway? Or is there a simpler solution: building better airbags and other life-saving devices into cars, and outfitting them with a blanket behavioral procedure to minimize impact speed whenever possible?
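
What might that blanket procedure look like? Here is a minimal, purely illustrative Python sketch of a "minimize impact speed" rule for the bridge scenario; the function, the braking constant, and every number are hypothetical stand-ins, not code from any real self-driving system or from the New Yorker's article.

```python
import math
from dataclasses import dataclass

# A deliberately simplified sketch of the "blanket" rule discussed above: in an
# emergency the car always brakes as hard as it can and, if a collision is still
# unavoidable, predicts the residual impact speed so passive safety systems
# (airbags, belt pre-tensioners) can prepare. All values here are hypothetical.

MAX_BRAKING_MPS2 = 8.0  # assumed full-braking deceleration on dry pavement

@dataclass
class EmergencyPlan:
    brake: float             # brake command, 0.0 to 1.0
    collision_expected: bool
    impact_speed_mps: float  # predicted speed at the obstacle; 0 if the car stops first

def minimize_impact_speed(speed_mps: float, obstacle_distance_m: float) -> EmergencyPlan:
    """Always brake fully; report the lowest impact speed physics allows."""
    stopping_distance = speed_mps ** 2 / (2 * MAX_BRAKING_MPS2)
    if stopping_distance <= obstacle_distance_m:
        # Braking alone prevents the collision entirely.
        return EmergencyPlan(brake=1.0, collision_expected=False, impact_speed_mps=0.0)
    # v^2 = u^2 - 2*a*d gives the speed remaining after braking across the gap.
    residual = math.sqrt(speed_mps ** 2 - 2 * MAX_BRAKING_MPS2 * obstacle_distance_m)
    return EmergencyPlan(brake=1.0, collision_expected=True, impact_speed_mps=residual)

if __name__ == "__main__":
    # Fifty miles per hour is about 22.4 m/s; say the bus crosses 30 meters ahead.
    print(minimize_impact_speed(22.4, 30.0))
```

The appeal of a rule like this is that it never tries to weigh one life against another; it only asks how much kinetic energy the car can shed before contact, which is a decision today's software can actually make and today's safety hardware can act on.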

The New Yorker's last concern--that the laws of robotics treat robots as slaves, and are therefore unethical themselves--doesn't seem like it will be a real issue for decades. Only if robots become self-aware should we consider them more than machines, and that's a mighty big if. By the time our programming skills are that advanced, we may have already solved the quandaries of robot morality.