trying to ban killer robots before they arrive

November 23, 2012

[image: X-47B takeoff]

Human Rights Watch has seen the future of warfare and they don’t like it, not one bit. It’s pretty much inevitable that machines will do more and more of the fighting because they’re cheap, and when one of them is destroyed by enemy fire, no one has to lose a father or a mother. Another one will roll off the assembly line and be thrown into the fray. But the problem, according to a lengthy report by HRW, is that robots can’t tell civilians from enemy combatants during a war, and so humans should be the ones deciding who gets killed and who doesn’t. Being able to distinguish civilians from hostiles is absolutely crucial because most wars fought today are asymmetric and often involve complex, loosely affiliated groups that move through a civilian population and recruit civilians, or so-called "non-state actors," to join them. How do you tell the difference, especially when you’re just a collection of circuits running code?

Just as HRW warns in its grandly titled report, robots left to make all the decisions could easily turn into indiscriminate killers, butchering everyone in sight, and no human would be accountable for their actions because one could always blame what could all too easily become a war crime on a bug or on a lack of real-world testing. But considering that humans have a hard time telling who is on whose side in Afghanistan, and faced the same problem in Iraq, barely keeping the country together until the population decided to come down hard on the worst of the sectarian militias, how well would a robot fare? HRW may be asking for an impossible goal here: making a robot better at telling civilians apart from combatants than humans who spend years learning to do just that. Of course, as a computer person, I’m intrigued by the idea, but the only viable possibility I see is to keep the entire population under constant surveillance, log their every movement, word, keystroke, and nervous tic, and parse the resulting oceans of data for patterns.

But how would that look? Excuse us, mind if we wire your building as if we’re shooting a reality show, install spyware on your computer, and tap your phones to record everything you say and do, so our supercomputer doesn’t tell a drone to lob a 1,000 pound warhead through your living room window? Something tells me that’s not a viable plan, and even then, mistakes could easily be made by both humans and robots since our intra-cultural interactions are very complex and hard to interpret with certainty. And again, we already spy on people and mistakes are still made, so it’s doubtful this technique would help, especially when we consider just how much data would come pouring in. Really, it all comes down to the fact that war is terrible and people get killed in armed conflicts. Mistakes can and will inevitably be made, robots or no robots, and asking a nation looking to automate its mechanized infantry and air force to keep on risking humans is like yelling into the wind. The only way civilians will be spared is if wars are prevented, but preventing wars is a task at which we’ve been failing spectacularly for thousands of years…

  • dar norris

    By coincidence I just read an interview with Dennis Kucinich on Gizmodo about the current state of drone warfare. Alarming to say the least.

  • Brett

    I think HRW has got it backwards. If we can drastically improve pattern recognition and intelligence in our drones, then arguably the drones will be more “Geneva Conventions”-friendly, not less. You could engineer them so that they always follow “humanitarian” rules of war, whereas humans can and do often screw up in the heat of the moment.