Is It Ethical to Use Robots in War? Under What Circumstances? In What Roles?
In recent years, there has been an increase in the use of robots in war, including remotely controlled robots, operated in some cases by military personnel on a different continent from the machine itself. And there is the very real prospect of autonomous robots operating according to programming, making their own “decisions.”
What are the ethical implications of utilizing remotely operated weapons, including planes, bulldozers, and robots, that do not put the operator at any personal risk? What rules should or could apply to autonomous robots — or should they be banned altogether? What laws apply to the use of drones and autonomous robots? Can a robot conform to international law, including the Geneva Conventions?
Popular culture has long shown ambivalence towards military robots, especially autonomous robots. Films, books, and TV shows ranging from the humorous, e.g., "Short Circuit" (1986), to the more contemplative, e.g., Asimov’s "I, Robot" (1950), have explored some of the potential pitfalls of autonomous machines and delegating the fighting of war to machines and computer programming. We are now beginning to face some of the ethical issues raised in these works.
Military robotics is of particular interest to us at Brandeis because of the large role our neighbors in Waltham and nearby towns play in the industry:
Right here in Waltham, Foster-Miller, a division of the British firm QinetiQ, makes military robots. Boston Dynamics is developing potential military robots, noting on its website that “Organizations worldwide, from DARPA, the US Army, Navy and Marine Corps to Sony Corporation turn to Boston Dynamics for advice and for help creating the most advanced robots on Earth.” Raytheon, with global headquarters in Waltham, is developing the exoskeleton “wearable robot” at a research facility (its SARCOS subsidiary) in Salt Lake City, Utah: it increases the wearer’s strength and endurance, is described as a “robotic suit for the soldier of tomorrow,” and looks like something out of Spider-Man or Avatar. Elsewhere in Eastern Massachusetts, iRobot, based in Burlington, makes not just vacuum cleaners but also machines for military use; and MIT, in Cambridge, is involved in relevant research, including Boston Dynamics' "SquishBot."
Uses of Robots by the Military
Robots are in use and in development for many military roles. Current and planned uses include:
- Assisting wounded fighters or civilians (“Bear Robot Rescues Wounded Troops,” BBC News/Health, 6/7/07).
- Protecting or assisting warfighters (“Using Robots in Iraq to Make Missions Safer” — NPR “Talk of the Nation Science Friday” 6/23/06). The U.S. Defense Advanced Research Projects Agency (DARPA) notes on its website in a section titled “Autonomous Ground Vehicles: A Driving Force” that it is “pushing the envelope to develop ground robotic systems to assist on the battlefield while navigating safely and autonomously.”
- Observing or attacking targets remotely, reaching areas that may otherwise be inaccessible, while keeping military personnel (the operators) out of harm's way. Unmanned Aerial Vehicles (UAVs), now most commonly called drones, were used in crude form as early as the Civil War, as a history of UAVs from the U.S. Department of Defense outlines, and their use has greatly increased in recent years. Israel has remote-operated Caterpillar bulldozers, used in the 2008-09 conflict in Gaza, and recently added to the Israeli air force's arsenal a new drone “… that can remain airborne for more than 24 hours and reach as far as Iran ….” British firm BAE Systems has a product known as the "MANTIS," an unmanned aircraft system that can choose targets on its own and “… can execute its mission with a much reduced need for human intervention by understanding and reacting to its environment.” And the U.S. Marine Corps is developing “Tactical Unmanned Ground Vehicles” such as the “Gladiator.”
What Does this Mean for the Laws of War?
Many challenging questions arise in connection with the use of these technologies. What laws or regulations apply, if any? Set by whom? What are the implications of the use of these technologies for the laws of war, as established in agreements such as the Geneva Conventions? Do these agreements and laws apply to these technologies?
Perhaps the most often-cited model for laws regulating autonomous robots comes from author Isaac Asimov’s “Three Laws of Robotics,” which address issues such as not harming humans and obeying orders given by humans. (Incidentally, the Oxford English Dictionary cites Asimov as the first to use the word “robotic”; subscription may be required.) Yet these rules were designed as a literary device for works of science fiction. Laws designed for contemporary reality are largely absent.
In his paper “How Just Could a Robot War Be?” (pdf), Peter M. Asaro, of the HUMlab & Department of Philosophy at Umeå University and the Center for Cultural Analysis at Rutgers University, “… considers how robots, 'smart' bombs, and other autonomous technologies might challenge the principles of just war theory, and how international law might be designed to regulate them.” Asaro concludes that “… deep contradictions arise in the principles intended to govern warfare and our intuitions regarding the application of autonomous technologies to war fighting.”
P.W. Singer, author of "Wired for War: The Robotics Revolution and Conflict in the 21st Century," and Senior Fellow and Director of the 21st Century Defense Initiative at the Brookings Institution, was asked in an interview “How will robot warfare change our international laws of war? If an autonomous robot mistakenly takes out 20 little girls playing soccer in the street and people are outraged, is the programmer going to get the blame? The manufacturer? The commander who sent in the robot fleet?” His reply:
"I went around trying to get the answer to this sort of question meeting with people not only in the military but also in the International Committee of the Red Cross and Human Rights Watch. We're at a loss as to how to answer that question right now. The robotics companies are only thinking in terms of product liability ... and international law is simply overwhelmed or basically ignorant of this technology. There's a great scene in the book where two senior leaders within Human Rights Watch get in an argument in front of me of which laws might be most useful in such a situation. One's bringing up the Geneva Conventions and the other one's pointing to the Star Trek Prime Directive."
Singer’s take is that “… the field of robotics, it's a very young field. It's not like medicine that has an ethical code. It's not done what the field of genetics has, where it's begun to wrestle with the ethics of what they're working on and the ripple effects it has on the society. That's not happening in the robotics field, except in isolated instances.”
More from Singer: a brief lecture and an NPR interview.
There have been efforts to rectify the situation through laws (New Scientist 9/30/09: "Campaign Asks for International Treaty to Limit War Robots") and through technology, such as efforts by robotics engineer Ronald C. Arkin at the Georgia Institute of Technology, Atlanta, to develop “… an 'ethical governor,' which aims to ensure that robot attack aircraft behave ethically in combat, and is demonstrating the system in simulations based on recent campaigns by U.S. troops, using real maps from the Middle East.” In a detailed, 117-page article, "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture" (pdf), Arkin "provides the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement."
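To make the underlying idea concrete: an "ethical governor" in this sense is essentially a gate that sits between a weapon system's targeting logic and its actuators, withholding any proposed lethal action it cannot verify against the applicable constraints. The Python sketch below illustrates only that gating concept under assumed, simplified constraints (discrimination, Rules of Engagement authorization, a crude proportionality test); the names and data model are invented for the example and are not Arkin's architecture or code.

    # Illustrative sketch only -- not Arkin's implementation. The constraint
    # names and data model are hypothetical, invented for this example.
    from dataclasses import dataclass

    @dataclass
    class ProposedStrike:
        target_is_combatant: bool      # discrimination: is the target a lawful military objective?
        expected_civilian_harm: float  # anticipated collateral harm (arbitrary units)
        expected_military_value: float # anticipated military advantage (arbitrary units)
        weapon_authorized_by_roe: bool # do the Rules of Engagement permit this weapon here?

    def governor_permits(strike: ProposedStrike) -> bool:
        """Return True only if every encoded constraint is satisfied.

        The governor never selects targets; it can only withhold or release
        an action proposed by the rest of the system.
        """
        constraints = [
            strike.target_is_combatant,                                      # discrimination
            strike.weapon_authorized_by_roe,                                 # Rules of Engagement
            strike.expected_military_value > strike.expected_civilian_harm,  # crude proportionality test
        ]
        return all(constraints)

    # Example: a strike whose expected civilian harm outweighs its military value is withheld.
    strike = ProposedStrike(
        target_is_combatant=True,
        expected_civilian_harm=5.0,
        expected_military_value=1.0,
        weapon_authorized_by_roe=True,
    )
    print("release weapon" if governor_permits(strike) else "withhold: constraint violated")

The design point this toy example is meant to capture is that the veto logic is separable from, and sits downstream of, target selection: the governor never chooses targets, it can only release or withhold an action proposed elsewhere in the system.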
In "Humanoid Robotics: Ethical Considerations," the Idaho National Laboratory (operated for the U.S. Department of Energy's Office of Nuclear Energy by Battelle Energy Alliance) suggests that we must address this now, “… when there are only a handful of functional humanoids around the world — that we must decide the direction in which to push. Humanoids are the products of our own minds and hands. Neither we, nor our creations, stand outside the natural world, but rather are an integral part of its unfolding. We have designed humanoids to model and extend aspects of ourselves and, if we fear them, it is because we fear ourselves.”
The U.S. Navy's Office of Naval Research funded a preliminary report by faculty researchers at California Polytechnic State University, titled “Autonomous Military Robots: Risk, Ethics, and Design” (pdf), released in February 2009, that "addresses a range of issues, including: current and predicted states of robotics, different programming approaches (top-down, bottom-up, etc.), just-war challenges, legal responsibility, and other ethical concerns from accidental deaths to proliferation to robot rights."
In a BBC World Service piece, "Could Robots Replace Soldiers on the Battlefield?" Professor Noel Sharkey of the University of Sheffield in the United Kingdom, and co-founder of the International Committee for Robot Arms Control, raises the question of responsibility for the actions of an autonomous robot: "Humans can be held accountable; machines can't," he says. "... Supposing I have an autonomous robot, that's wandering 'round the battlefield, finding someone, and killing them. Who's responsible for that? Is the robot held accountable? How can I punish the robot? Can I switch it off? Does it care?" He also calls for ethics guidelines for robots.
Sharkey has also “... expressed his concerns that we are beginning to see the first steps towards an international robot arms race” and warned “… that it may not be long before robots become a standard terrorist weapon to replace the suicide bomber.”
Some scientists refuse military funding for their robotics research, out of concern for the uses to which their research might be put.
There are skeptics who believe these and related concerns are overblown. Dennis Gormley, a senior fellow at the Washington, D.C., office of the James Martin Center for Nonproliferation Studies and a specialist in missile systems, “… says that worries over robots-run-amok ignore the realities of military and terrorist decision making. He notes that Air Force officials in particular tend to drag their heels on technologies that might make their pilots appear obsolete. He says that would-be terrorists could potentially deliver up to several hundred pounds of explosives by converting a build-it-yourself airplane into a UAV, but adds that the conversion would require several years of technically challenging work. 'Frankly,' he says, 'I think that's beyond the capacity of any terrorist group.'”
Some Other Ethical Issues
Other ethical quandaries raised by the development and implementation of these technologies include:
- Should machines be allowed to make life-or-death decisions autonomously?
- Does fighting a war by “remote control” lessen the connection of those conducting the war to the impact of their actions? (See Frontline's “Taking Out the Taliban: Home for Dinner.”)
- With war fighting becoming increasingly like operating a videogame, what are the implications of using a videogame as a recruiting tool? (For an example of this type of recruiting, see “America’s Army,” the official Army videogame.)
Conclusion
Technology is moving fast. The desire to protect both military personnel and civilians, as well as to fight more effectively, is putting us in territory with implications we cannot fully anticipate and may not have adequately considered.