An Ethical Killing Machine?

So, I’m a big, big fan of the TERMINATOR franchise. What’s not to love? Fate, time travel, killer robots, and the near-extinction of the human race in a thermonuclear Judgment Day…

But part of the reason I like the franchise is that it’s fiction…at least for now.

It won’t be (well, not exactly) if certain well-meaning scientists have their way.

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army. “That’s the case I make.”

Robot drones, mine detectors and sensing devices are already common on the battlefield but are controlled by humans. Many of the drones in Iraq and Afghanistan are operated from a command post in Nevada. Dr. Arkin is talking about true robots operating autonomously, on their own.

He and others say that the technology to make lethal autonomous robots is inexpensive and proliferating, and that the advent of these robots on the battlefield is only a matter of time. That means, they say, it is time for people to start talking about whether this technology is something they want to embrace. “The important thing is not to be blind to it,” Dr. Arkin said. Noel Sharkey, a computer scientist at the University of Sheffield in Britain, wrote last year in the journal Innovative Technology for Computer Professionals that “this is not a ‘Terminator’-style science fiction but grim reality.”

He said South Korea and Israel were among countries already deploying armed robot border guards. In an interview, he said there was “a headlong rush” to develop battlefield robots that make their own decisions about when to attack.

Hmmm…Well-meaning scientists? Headlong rush? Killer robots making their own decisions? Sure sounds “‘Terminator’-style” to me.

Now, I’m being a bit silly here, but I genuinely am concerned about autonomous robots that kill. I’m even more concerned that someday true ‘thinking machines’ may be developed and that they won’t feel like they really need us around anymore.

In the TERMINATOR story, SkyNet became self-aware, decided that human beings were a threat to it, and launched nukes to get rid of us. Now, I’m not saying that I agree with SkyNet’s decision, but I can see where the computer was coming from: humans can be irrational, violent creatures who would sooner destroy something that frightens them than try to understand it first. If a computer I built to control military forces and decisions suddenly became self-aware, I can’t say I wouldn’t pull the plug first and ask questions later. So the idea of a self-aware computer turning on us doesn’t sound that far-fetched to me.

But, in the short-term, consider:

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled dead bodies were more likely to say they had mistreated civilian noncombatants, the survey said. (The survey can be read by searching for 1117mhatreport at www.globalpolicy.org.)

“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield,” Dr. Arkin wrote in his report, “but I am convinced that they can perform more ethically than human soldiers are capable of.”
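
For the technically curious: I can only guess at what Dr. Arkin’s software actually does, but my naive mental model of an ‘ethical’ killing machine is a constraint checker that refuses to fire by default. Here’s a deliberately toy Python sketch, entirely my own invention and nothing to do with his real system:

    # Toy sketch only: my own guess at what a "don't fire by default"
    # constraint check might look like. Not Dr. Arkin's actual software.
    from dataclasses import dataclass

    @dataclass
    class Target:
        is_combatant: bool           # positively identified as a combatant?
        near_protected_site: bool    # hospital, school, place of worship...
        expected_civilian_harm: int  # crude stand-in for a proportionality estimate

    def permit_engagement(target: Target, military_necessity: int) -> bool:
        """Every hard constraint must pass; the default answer is 'no'."""
        if not target.is_combatant:
            return False  # never target noncombatants
        if target.near_protected_site:
            return False  # hard prohibition, no matter what
        # crude proportionality test: expected harm must not outweigh necessity
        return target.expected_civilian_harm <= military_necessity

    # The machine dutifully answers; the hard part is everything we fed it.
    print(permit_engagement(Target(True, False, 0), military_necessity=5))   # True
    print(permit_engagement(Target(False, False, 0), military_necessity=5))  # False

Even this toy version shows the problem: a human still has to decide what goes into those constraints, and the machine will apply them with no judgment at all.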

Hmmm…Again, problems.

Two big questions leap to mind: when these robots do fail to ‘act ethically’ (as they will, by the designers’ own admission), who should be held responsible? And should we really be making it easier and cleaner for human beings to wage war?

Can you imagine any human being held responsible for the misdeeds of a combat droid? Who do you charge: the unit commander, the designer, the engineers who build and service the thing? Someone else? Don’t you think it will be a simple matter to chalk it up to a computer glitch, express profound regret, and go back to business as usual?

I’m going to assume (and I don’t think this is a stretch) that any army which deploys these combat droids will quickly find them an indispensable war-fighting tool. Look at the use of Predator drones in Iraq and Afghanistan: how many weddings and innocent civilian gatherings have those drones accidentally bombed? Has anyone been held liable? Would the military ever give them up? I believe the answer on both counts is ‘No’.

And, more worrisome, won’t these droids just make it easier and cleaner for humans (particularly in the rich and technologically advanced militaries of Western powers) to wage war? We in the West rightly have an abhorrence of military casualties in the conflicts our soldiers are involved in, but do we have enough of an abhorrence of ‘enemy’ or civilian casualties or of war in general?

I don’t think so.

War is a terrible, dehumanizing enterprise for both combatants and civilians; you only need look at the survey cited in Dr. Arkin’s own report to see this. I don’t mean to belittle the very real mental and emotional suffering of front-line troops, but do we want to disengage them from combat in a way that turns it into a point-and-click, video-game form of violence? Doesn’t it then become all too easy to forget the humanity of the person on the other end of that rifle barrel, and all too easy to kill from a safe distance?

Saving our own soldiers’ lives by using combat droids means it’s easier for these machines to go kill more of the ‘enemy’, and surely more of the civilians and non-combatants amongst them.

Anyway, I think I need to stop there. I’m getting too worked up for a pacifist 🙂 I think perhaps I should channel some of this anger at human foolishness into a story of some kind…

– S.

2 thoughts on “An Ethical Killing Machine?”

  1. I think you’re ignoring the biggest question of all, Steve. What if these thinking machines use their ethics programs and decide on the basis of some utilitarian calculus that the enemy they’ve been sent to destroy is more ethically correct than ‘our side’?

    I’ve heard of ‘home grown’ terrorists, but this is the first time I’ve considered ‘home built’.

    I disagree on your Terminator comparison, though. There’s just no way these things will look at all like Arnold. If afternoons watching Robot Wars have taught me anything, the most effective robot terrorists (Mujahmachines?) will be wedge-shaped.

    The other possibility, of course, is that they will come to the ethical conclusion that it can never be right for a robot to kill a person. In which case we should probably be arming our border guards with EMPs, because Canada is their next likely stop.
