how not to set this bot to kill all humans

January 8, 2011

Here’s some good news on the tech skepticism front. Popular science writers are no longer taking seriously the idea of human-level AI going rogue and wiping out humanity at some indeterminate future date. Now the bad news. They’re still hyping the threats from military machines, which, while real, aren’t quite as severe as they’re made out to be, and are almost always the result of bugs in the code. We’re turning to robots for more and more on the battlefield, and those robots can and will get smarter, reacting to threats faster than humans and attacking their targets more efficiently than even computer-aided pilots. Being expendable, they’re far less emotionally and politically expensive to lose than humans, so the more robots we build, the less we’ll have to get involved in the actual fighting, and the more damage we can do remotely. However, machines are indiscriminate, and even the best programmers make mistakes. There will be accidents, and civilians can still be harmed during a shootout between enemy forces and a squad of robots. That worries tech writers and AI experts, especially because, so far, there’s no plan for coordinating current and future killer bots.

Today, there are few places where we can get a better glimpse of the future than military aviation, where rumor has it that the last fighter pilot has already been born. Within half a century, most manned fighters and bombers could be replaced by smaller, stealthier jets which fly themselves to their targets faster than they could with a human on board, and carry a greater payload since they’re not weighed down by redundant, bulky, and heavy life support systems. In experimental flights or simulations this sounds great, but in the real world, how will they operate in groups? How will they communicate without human handlers, or decide how to allocate targets among themselves? When they’re screaming towards a target at Mach 2.5 and readying to drop a bomb, how much time should humans have to intervene? There’s no guideline for any of this, and considering that the military usually seems to have a 30 page manual spelling out every step of, oh, just about everything, that may seem a little disconcerting. However, all this technology is still brand new and not exactly ready to deploy en masse. This is why, in the Popular Science article linked above, the anecdote of the Pentagon official wondering about protocols for a mass deployment of robot soldiers gives the very misleading impression that no one’s really worried about how to control military AI.

Of course that’s not really true. Runaway armed robots that seem to go rogue when they either lose their targets or suffer a lapse in communication, reverting to a default behavior in order to "fail gracefully," as programmers say, are a very real concern, and so is the need to coordinate entire squads of them and to intervene when they take the wrong course of action mid-combat. But by focusing on everything that could go wrong while ignoring the fact that these are all just prototypes being tested and fine-tuned, tech writers looking for a new, more plausible robot insurrection story amp up the existing concerns while making it seem like no one takes them seriously. What policy on wartime AI can we expect from the Pentagon when the AI in question is still an experiment taking its baby steps into the real world? When we have a real, working weapon ready to be assigned to an actual mission completely on its own, with humans only in the role of supervisors who’ll take control during an emergency, then we can start thinking of meaningful ways to coordinate robotic armies and fleets. Without the finished product in place and detailed knowledge of how it works and what it can do, a far-reaching policy on cybernetic warfare would be putting the cart before the horse. Knowing the capabilities of an unmanned fighter, bomber, or tank would let you create new requirements for vendors and specify a communications package that lets all the different units share their positions and actions.
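To make the "fail gracefully" idea concrete, here’s a toy sketch of the kind of default behavior programmers mean. Everything in it, the mode names, the inputs, the rules, is my own invention for illustration, not anything from an actual weapons system:

```python
from enum import Enum, auto

class Mode(Enum):
    ENGAGE = auto()   # actively prosecuting an assigned target
    LOITER = auto()   # safe default: circle and await new instructions
    RETURN = auto()   # head back to base

def next_mode(has_comm_link: bool, has_target_lock: bool, fuel_ok: bool) -> Mode:
    """Pick the drone's next mode, falling back to a safe state on any failure.

    The point of failing gracefully: a lost link or a lost target never
    leaves the machine free to improvise. It drops into a harmless mode
    until a human supervisor re-authorizes the engagement.
    """
    if not fuel_ok:
        return Mode.RETURN
    if not has_comm_link:    # lapse in communication: stop attacking
        return Mode.LOITER
    if not has_target_lock:  # lost the target: do not go pick a new one
        return Mode.LOITER
    return Mode.ENGAGE
```

The design choice worth noticing is that every failure path leads to a passive mode; only the fully nominal case allows the weapon to keep engaging.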

And there’s another interesting twist here. Deploying individual robots that talk to one another would require a supercomputer to issue commands across the battlefield, controlling these AIs with even more AI logic. Our somewhat inefficient methods of communication, which require us to actually write or say something, simply couldn’t keep up with the milliseconds it takes compatible computer systems to exchange vital data. This means that at some level, a computer is always making a crucial decision, even if humans issue all the important strategic orders. We just wouldn’t be fast enough to assign every target and every maneuver while the battle is underway to keep a robot from straying off target or getting a bit too close to an allied position. No matter how many layers of computers are involved, however, all it takes is an override or the proper command to freeze the machines in their tracks. All we need is to program in enough fail-safe mechanisms, and any potential SkyNet could be disabled just by flipping the power switch to off. Unless there’s a virus in the system planted by a human, but that’s a whole other, and probably very complicated, story…
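The override-plus-fail-safe arrangement described above can be sketched as a small supervisor that wraps whatever the autonomous controller proposes. Again, this is a hypothetical illustration of mine (the class, the heartbeat scheme, and the action names are all assumptions), not a real control architecture:

```python
import time

class SafetyGovernor:
    """Wraps an autonomous controller with a human override and a heartbeat.

    Two fail-safes: an explicit halt command (the "power switch to off"),
    and a watchdog that freezes the machine if the supervising station
    stops sending heartbeats, so a silent comm failure also fails safe.
    """

    def __init__(self, heartbeat_timeout: float = 2.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = time.monotonic()
        self.halted = False

    def heartbeat(self) -> None:
        """Called whenever the human supervisors check in."""
        self.last_heartbeat = time.monotonic()

    def halt(self) -> None:
        """The override: once issued, nothing autonomous gets through."""
        self.halted = True

    def decide(self, proposed_action: str) -> str:
        """Pass the autopilot's proposal through, unless a fail-safe trips."""
        if self.halted:
            return "SHUTDOWN"
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout:
            return "FREEZE"  # supervision lost, stop in your tracks
        return proposed_action
```

The machine still makes the millisecond-scale decisions, but the human side only has to do two slow things: check in periodically and, if needed, throw the switch.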

  • Alexander Kruel

    There are people who take this topic very seriously.

  • Pierce R. Butler

As if human troops never go “out of control”…

  • http://www.meetup.com/london-futurists Richie

    excellent post Mr Fish

    “All we need is to program enough fail-safe mechanisms, and any potential SkyNet would be disabled just by switching the power switch to off. Unless there’s a virus in the system planted there by a human, but that’s a whole other, and probably very complicated, story…”

    thats a hint to the writers of the terminator franchise if ever I heard it.

  • http://jmerton.blogspot.com J. Merton

    Skynet is not fiction. There really is a Skynet already operating in the UK.

  • Paul

    Of course, most combat “Robots” are robots in the way that “Robot War” TV show “Robots” are robots. Ie, remote controlled. Talk of AI is somewhat premature. Seriously, we need to start slapping people who call these things “Robots”. It’s stupid. An RC vehicle is not a robot. Even the Spirit and Opportunity rovers are not robots.

    Either that or we need a new word for actual robots.

  • Greg Fish

    “An RC vehicle is not a robot.”

    The word robot actually comes from the Czech word robota which is very close to the Russian работа, meaning “work” or “job,” and was first used in a sci-fi play. In today’s common usage, it’s meant to identify any sort of machine that performs a particular job in a human’s place.

    So yes, even RC machines are valid to call robots. Now, they may not be endowed with logic or an AI, but they don’t need to be to qualify for the title.

  • http://www.chriswarbo.tk Warbo

    “Deploying individual robots that talk to one another would require a supercomputer to issue commands across the battlefield, controlling these AIs with even more AI logic. ”

    Small computers do not need to be coordinated by bigger computers, in the same way that life doesn’t have to be created by more complex beings. In fact, centralised control is usually a bad idea, it’s just a hell of a lot simpler to reason about than a distributed, peer-to-peer computational network.

    My concern with arming robots is that robots are still far too dumb to be used in any capacity other than intelligence gathering and transportation. The reason current units are remote controlled is that it combines these two successful areas in an elegantly simple way: your guns keep getting moved to the same place as your sensors. Other than that, modern “AI” isn’t much more than statistical inference on top of blob detection, and detecting that there is something there is much easier than determining what that thing is.