
why autonomous killer robots are inevitable and why we should be pretty worried about them

Let's not sugarcoat this. We will build autonomous killer robots because we have every incentive to. So let's make sure we get them right.
[image: battle bot drone]

The peaceniks at Amnesty International have been worried about killer robots for a while, so as the international community convenes in Geneva to talk about weapons of the future, they've once again launched a media blitz about what they see as an urgent need to ban killer robots. In the future they envision, merciless killer bots mow down soldiers and civilians alike with virtually no human intervention, kind of like in the opening scene of the Robocop remake. In an age when vast global trade empires with far too much to lose by fighting each other instead use their soldiers and war machines to tackle far-flung "low intensity conflicts," in military wonk parlance, where telling a civilian apart from a combatant is no easy feat, Amnesty International raises an important issue to consider. If we build robots to kill, there's bound to be a time when they make a decision in error and end someone's life when they shouldn't have. Who will be held responsible? Was it a bug or a feature that they killed who they did? Could we prevent similar incidents in the future?

Having seen machines take on the role of perfect bad guys in countless sci-fi tales, I can't shake the feeling that a big part of the objection to autonomous armed robots comes from the innate anxiety at the idea of being killed because some lines of code ruled you a target. It's an uneasy feeling even for someone who works with computers every day. Algorithms are too often buggy and screw up edge cases far too easily. Programmers rushing to meet a hard deadline will sometimes cut corners to make something work, then never go back to fix it. They mean to, but new projects start, time gets away from them, an update breaks their code, and bugs emerge seemingly out of nowhere. Ask a roomful of programmers who have done this at least a few times in their careers to raise their hands, and almost all of them will. The few who don't are lying. When the result is a bug in a game or a mobile app, it's seldom a big deal. When it's in code deployed in an active war zone, it becomes a major problem very quickly.

Even worse, imagine bugs in the robots' security systems. Shoddy encryption, or the lack of it, was once exploited to capture live video feeds from drones on patrol. Poorly secured APIs meant to talk to a robot mid-action could be hijacked to turn the killer bot against its handlers, and as seen in pretty much every movie ever, that turn of events never has a good ending. Even good, secure APIs might not stay that way, because cybersecurity is a very lopsided game in which all the cards are heavily stacked in the hackers' favor. Security experts need to execute perfectly on every patch, update, and code change to keep their machines safe. Hackers only need to take advantage of a single slip-up or bug to gain access and do their dirty work. This is why security for a killer robot's systems could never be perfect, and the only thing its creators could do is make the machine extremely hard to hack with rigorously audited code and constantly updated, secure connections to its base station, and include a way to quickly reset or destroy it when it does get hacked.
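To make that last point a little more concrete, here's a minimal sketch, in Python and with entirely hypothetical names, keys, and thresholds, of the kind of signed, time-limited command check a link to a base station would need so a forged or replayed order gets dropped instead of executed. It's an illustration of the principle, not a description of how any real system works.

```python
# Hypothetical sketch: authenticate commands from a base station with an HMAC tag
# and a freshness check, so tampered or replayed orders are rejected.
import hmac
import hashlib
import json
import time

SHARED_KEY = b"rotate-me-frequently"   # hypothetical pre-shared key, rotated often
MAX_AGE_SECONDS = 5                    # reject stale commands to blunt replay attacks

def sign_command(command: dict, key: bytes = SHARED_KEY) -> dict:
    """Base station side: timestamp the command and attach an HMAC-SHA256 tag."""
    payload = dict(command, timestamp=time.time())
    body = json.dumps(payload, sort_keys=True).encode()
    payload["mac"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    return payload

def verify_command(message: dict, key: bytes = SHARED_KEY) -> bool:
    """Robot side: accept only fresh commands whose tag matches; otherwise fail safe."""
    mac = message.get("mac", "")
    body = json.dumps({k: v for k, v in message.items() if k != "mac"},
                      sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False                                  # forged or tampered command
    return time.time() - message.get("timestamp", 0) <= MAX_AGE_SECONDS

# Example: a properly signed order passes, a forged "engage" order is simply dropped.
order = sign_command({"action": "return_to_base"})
assert verify_command(order)
assert not verify_command({"action": "engage", "timestamp": 0, "mac": "bogus"})
```

Of course, even a scheme like this only moves the problem to protecting and rotating the shared key, which is exactly the lopsided game described above: the defenders have to get every piece right, while an attacker needs just one mistake.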

Still, all of this isn't necessarily an argument against killer robots. It's a reminder of how serious the challenges of building them are, and those challenges had better be heeded, because no matter how much it may pain pacifist groups and think tanks, these weapons are coming. They will inevitably kill civilians in war zones, but in the mind of a general, so do flesh and blood soldiers, and if well trained humans, with all the empathy and complex reasoning skills being human entails, can't get it right all the time, what hope do robots have? Plus, to paraphrase the late General Patton, you don't win wars by dying for your country but by making someone else die for theirs, and what better way to do that than by substituting machinery you don't mind losing nearly as much for your live troops in combat? I covered the "ideal" scenario for how all this would work back in the early days of this blog, and in the years since, the technology to make it possible hasn't just grown more advanced, it's practically already here. From a military standpoint, it would make little sense to throw all of that away and keep risking human lives in war zones.

And here's another thing to think about when envisioning a world where killer robots making life or death decisions dominate the battlefield. Only advanced countries could afford to build robot armies and deploy them instead of humans in conflict. Third World states would have no choice but to rely on flesh and blood soldiers, meaning that one side loses thousands of lives fighting a vast, expendable metal swarm armed with high tech weaponry able to outflank any human-held position before its defenders even have time to react. How easy would it be to start wars when soldiers no longer need to be put at risk and the other side either doesn't have good enough robots or must put humans on the front lines? If today all it takes to send thousands into combat is saying that they volunteered and their sacrifice won't be in vain, how quickly will future chicken hawks vote to send the killer bots to settle disputes, often in nations where only humans will be capable of fighting back, all but assuring the robots' swift tactical victory?

# tech // artificial intelligence / military / robots / war
