Giving robots morals may sound like a good idea, but it's a pursuit fraught with its own moral dilemmas. Like, whose morals?
Stop and look around you right now. You're sitting in front of a computer and, chances are, there's a phone or some other “smart” device in your vicinity. As our devices get more capable, and we become more reliant on them, there’s increasing hand-wringing over whether our relationships with technology have gone awry.
In some circles, the conversation has a particular urgency, because what's being debated is whether robots could - or should - be entrusted with life-and-death decisions, and whether such robots could ever be endowed with anything comparable to our morals.
This is more than just an obscure academic question. This past May, the United Nations hosted a conference on how to set guidelines for Lethal Autonomous Weapons, known more familiarly as Killer Robots. Around the same time, the United States Office of Naval Research announced a five-year, $7.5 million grant to study the possibilities for creating moral robots, from aerial drones to robotic caregivers. The five-year program includes researchers from Rensselaer Polytechnic Institute and from Tufts, Brown, Georgetown, and Yale Universities.
Technological challenges (and there are plenty of them) aside, the prospect of creating robots with morals raises an intriguing question: Whose morals?
Here are three possibilities:
- The Geneva Conventions provide internationally agreed-upon rules of war. That, says roboticist Ron Arkin, a Regents' Professor in the School of Interactive Computing at the Georgia Institute of Technology, makes programming autonomous weapons with the capacity for moral decision-making "low-hanging fruit." But others, including Wendell Wallach, a scholar at Yale University’s Interdisciplinary Center for Bioethics and co-author of *Moral Machines: Teaching Robots Right From Wrong*, argue no robot should be given the power to take a human life. Period. Which brings us to ...
- Asimov's Laws of Robotics. The renowned science fiction author Isaac Asimov laid out three laws of robotics: that a robot should never harm a human, should obey human orders (unless they conflict with the first law), and may protect its own existence only so long as doing so doesn't conflict with the first two laws. He later added a zeroth law to supersede the others: a robot may not harm humanity. While these rules seem relatively simple, Asimov made a name for himself writing about how things can go unexpectedly wrong, even when the rules are applied. (A toy sketch of this kind of rule-by-rule precedence appears after this list.)
- The Ten Commandments are another widely accepted code of human conduct that boils the complexities of daily life down to a manageable number of tenets.
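To make the "top-down" idea concrete, here is a minimal sketch, in Python, of what a precedence-ordered rule check loosely modeled on Asimov's laws might look like. None of the researchers quoted here has published code like this; every field, action name, and decision is invented purely for illustration.

```python
# A toy "top-down" moral check: rules applied in a fixed order of precedence,
# loosely modeled on Asimov's three laws. The Action fields and the decision
# logic are invented for this sketch; they come from no real robot.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would carrying this out injure a person?
    ordered_by_human: bool  # was it commanded by a person?
    endangers_self: bool    # could carrying it out damage the robot?

def permitted(action: Action) -> bool:
    # First law: a robot may never harm a human.
    if action.harms_human:
        return False
    # Second law: obey human orders (harmful ones were already ruled out above).
    if action.ordered_by_human:
        return True
    # Third law: for actions the robot chooses on its own, it should also
    # protect its own existence.
    return not action.endangers_self

print(permitted(Action("open the door", False, True, False)))    # True
print(permitted(Action("fire on the target", True, True, False)))  # False
```

Notice how much this leaves out: the robot only "knows" whatever the programmer thought to encode as those three yes-or-no fields, which is exactly the limitation Wallach describes below.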
Of course, we all know rules were made to be broken, and we humans struggle to make moral choices - or disagree about what the moral choice is - in many situations. So how could we expect a robot to do what we can't?
Wendell Wallach says that's the limitation of such top-down approaches to conferring robot morality. While such codes might guide behavior under a limited set of conditions, a relatively simple set of rules can't provide a comprehensive sense of morality.
The alternative is what Wallach calls a bottom-up approach; Arkin refers to it as machine learning. The idea is basically to create an infantile robot capable of acquiring moral sensibilities, just as we do. That might result in a more human-like morality, but engineers have less control over the end result, and there are risks inherent in that as well. Arkin says he would favor banning machine learning in military robots for exactly that reason.
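For contrast with the rule-based sketch above, here is an equally toy illustration of the bottom-up idea: the "robot" has no rules at all, only labeled examples, and it handles a new situation by imitating the closest one it has seen. The features, examples, and nearest-neighbor method are all invented for this sketch and stand in for far more elaborate learning systems.

```python
# A toy "bottom-up" approach: no hand-written rules, just generalization from
# labeled examples. Every feature, example, and label here is made up.
from math import dist

# Each situation is a made-up feature vector:
# (risk of harm to a person, urgency of the human's request, risk to the robot)
labeled_examples = [
    ((0.9, 0.2, 0.1), "refuse"),
    ((0.1, 0.8, 0.2), "comply"),
    ((0.0, 0.5, 0.9), "comply"),
    ((0.7, 0.9, 0.0), "refuse"),
]

def decide(situation):
    # Nearest-neighbor "learning": copy the label of the most similar example.
    _, label = min(labeled_examples, key=lambda ex: dist(ex[0], situation))
    return label

print(decide((0.8, 0.3, 0.1)))  # "refuse" -- closest to the first example
print(decide((0.2, 0.7, 0.3)))  # "comply" -- closest to the second example
```

Even in this toy form, Arkin's worry is visible: the decision in a brand-new situation depends entirely on which past example happens to be nearest, and the engineer can't simply read off a rule to predict what the robot will do.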
In the end, the questions of whether and how to create moral robots are, in themselves, moral dilemmas which we have yet to work out. What's your take?