
how not to set this bot to kill all humans

A lot of popular science writers who don't know how AI is developed are hyping a Skynet scenario. Don't believe them.
[image: navy drone]

Here's some good news on the tech skepticism front: popular science writers are no longer taking seriously the idea of human-level AI going rogue and wiping out humanity at some indeterminate point in the future. Now the bad news. They're still hyping the threats from military machines, which, while real, aren't quite as severe as they're made out to be and are pretty much always the result of bugs in the code. We're turning to robots for more and more on the battlefield, and those robots can and will get smarter, reacting to threats faster than humans and attacking their targets with greater efficiency than even computer-aided pilots. Being expendable, they're a lot less emotionally and politically expensive to lose than humans, so the more robots we build, the less we'll have to get involved in the actual fighting, and the more damage we can do remotely. However, machines are indiscriminate, and even the best programmers make mistakes. There will be accidents, and civilians can still be harmed during a shootout between enemy forces and a squad of robots. That worries tech writers and AI experts, especially because, so far, there's no plan for coordinating current and future killer bots.

Today, there are few places where we can get a better glimpse of the future than in military aviation, where the rumor is that the last fighter pilot has already been born. In less than half a century, most fighter and bomber operators will be replaced by smaller, stealthy jets that fly themselves to their targets much faster than they could with a human on board, and carry a greater payload since they're not weighed down by redundant, space-consuming, and heavy life support systems. In experimental flights and simulations this sounds great, but in the real world, how will they operate in groups? How will they communicate without human handlers, or decide how to allocate targets among themselves? When they're screaming toward a target at Mach 2.5 and readying to drop a bomb, how much time should humans have to intervene? There's no guideline for any of this, and considering that the military usually seems to have a 30-page manual spelling out every step for, oh, just about everything, that may seem a little disconcerting. However, all this technology is still brand new and not exactly ready to deploy en masse. This is why, in the Popular Science article linked above, the anecdote of the engaged Pentagon official wondering about the protocols for mass deployment of robot soldiers gives the very misleading impression that no one is really worried about how to control military AI.
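
To put the target allocation question in more concrete terms, here's a minimal, purely hypothetical sketch of one approach engineers could take: a greedy auction in which each drone bids its distance to a target and the closest bidder wins. Every name and number in it is invented for illustration; a real system would need authenticated links, deconfliction, and human oversight layered on top.

```python
import math

def distance(a, b):
    """Straight-line distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def allocate_targets(drones, targets):
    """Greedy auction: each target goes to the nearest unassigned drone."""
    assignments = {}
    available = dict(drones)  # drone_id -> (x, y) position
    for target_id, target_pos in targets.items():
        if not available:
            break  # more targets than drones; the rest go unassigned
        # Every available drone "bids" its distance; the lowest bid wins.
        winner = min(available, key=lambda d: distance(available[d], target_pos))
        assignments[target_id] = winner
        del available[winner]
    return assignments

drones = {"drone_1": (0, 0), "drone_2": (10, 5), "drone_3": (3, 8)}
targets = {"target_a": (9, 4), "target_b": (1, 1)}
print(allocate_targets(drones, targets))
# {'target_a': 'drone_2', 'target_b': 'drone_1'}
```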

Of course, that's not really true. Armed robots that seem to go rogue when they either lose their targets or suffer a lapse in communication, falling back on a default behavior meant to "fail gracefully," as programmers say, are a very real concern, and so is the need to coordinate entire squads of them and to intervene when they start taking the wrong course of action mid-combat. But by focusing on all the things that could go wrong and ignoring the fact that these are all just prototypes being tested and fine-tuned, tech writers looking for a new, more plausible robot insurrection story amp up the existing concerns while making it seem as if no one takes them seriously. What policy on wartime AI can we expect from the Pentagon when the AI in question is still an experiment taking its baby steps into the real world? When we have a real, working weapon ready to be assigned to an actual mission completely on its own, with humans only in the role of supervisors who'll take control during an emergency, then we can start thinking of meaningful ways to coordinate robotic armies and fleets. Without the finished product in place and detailed knowledge of how it works and what it can do, a far-reaching policy on cybernetic warfare would be putting the cart before the horse. Knowing the capabilities of an unmanned fighter, bomber, or tank would let you create new requirements for vendors and specify a communications package that lets all the different units share their positions and actions.
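
For a sense of what "failing gracefully" might mean in code, here's a minimal sketch, with invented names and modes, of a default behavior that disarms the machine whenever it loses its target lock or its communications link, rather than letting it keep acting on stale orders.

```python
from enum import Enum, auto

class Mode(Enum):
    ENGAGING = auto()   # weapons authorized, valid target lock held
    HOLDING = auto()    # weapons safe, loitering at a rally point
    RETURNING = auto()  # weapons safe, heading back to base

def choose_mode(has_target_lock, has_comms_link):
    """Pick the next mode; every failure path disarms the machine first."""
    if not has_comms_link:
        return Mode.RETURNING   # no human oversight -> go home, weapons safe
    if not has_target_lock:
        return Mode.HOLDING     # no valid target -> hold fire and wait
    return Mode.ENGAGING        # engage only when both checks pass

assert choose_mode(has_target_lock=True, has_comms_link=False) is Mode.RETURNING
assert choose_mode(has_target_lock=False, has_comms_link=True) is Mode.HOLDING
```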

And there's another interesting twist here. Deploying individual robots that talk to one another would require a supercomputer to issue commands across the battlefield, controlling these AIs with even more AI logic. Our somewhat inefficient method of communication, which requires us to actually write or say something, simply couldn't keep up with the milliseconds it takes for compatible computer systems to exchange vital data. This means that at some level there's always a computer making a crucial decision, even if humans issue all the important strategic orders. We just wouldn't be fast enough to assign every target and approve every maneuver once the battle is underway, or to keep a robot from straying off target or getting a bit too close to an allied position. No matter how many layers of computers are involved, however, we all know that all it takes is an override or the proper command to freeze machines in their tracks. Program in enough fail-safe mechanisms, and any potential Skynet could be disabled just by flipping the power switch to off. Unless there's a virus in the system planted there by a human, but that's a whole other, and probably very complicated, story…
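
To illustrate that last point, here's a minimal sketch, assuming a made-up watchdog design, of the sort of fail-safe mechanism the paragraph describes: the machine may only act while a human-side controller keeps sending heartbeats, and an explicit override halts it immediately and permanently. A real system would use authenticated, redundant links rather than this toy timer.

```python
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds of silence before the machine freezes

class Watchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.halted = False

    def heartbeat(self):
        """Called whenever a valid heartbeat arrives on the command link."""
        self.last_heartbeat = time.monotonic()

    def override_halt(self):
        """The human 'off switch': latches the machine into a permanent halt."""
        self.halted = True

    def may_act(self):
        """The control loop must check this before taking any action."""
        if self.halted:
            return False
        return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT

dog = Watchdog()
assert dog.may_act()      # fresh heartbeat, free to act
dog.override_halt()
assert not dog.may_act()  # the operator override wins, no matter what
```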

# tech // artificial intelligence / combat / killer bots / military
