dissecting the sentient robots of modern sci-fi
Believe it or not, there are computer scientists out there who believe that one day, machines will either take over the world or become so incredibly intelligent and powerful that we’ll have no choice but to try and find some sort of symbiotic relationship with them. Considering how difficult it is to make machines even feign the slightest glimmer of a thought, and the gulf between impulsive, hormonally driven humans and passive, logically cold machines, I’m going to say they might have been watching a wee bit too much science fiction. And when it comes to the dystopian science fiction in which we suffer a devastating defeat at the hands of our mechanical creations, few filmmakers have summed up our technophobic nightmares quite like the Wachowski brothers.
According to the anime installments of the Matrix trilogy, the fall of humanity began when a random household robot went berserk and killed its master, was tried for murder (?), and sentenced to death (?!), after which fearful humans started executing robots, setting off something like the American civil rights movement, but with machinery seeking equality and acceptance among hostile humans (?!?). The robots even go so far as to build their own city-state called 01 and try to negotiate friendly partnerships after their hyper-efficient factories nearly crash human-driven economies. And we, being the jerks we are, declare war and bombard 01. Really, after the way we treated the machines, we deserved to be on the receiving end of a full-scale military response by the cybernetic armada that swept across the world. Or at least that’s how the story goes.
As you could probably guess by the punctuation, a few of the basic points of this story made me do a double-take, though some of them might not be what you think. I’ve written before about how a future machine could one day kill a human after either a malfunction or an error in its programming, and about the high-minded computer philosophers who think an automated system should be hauled into court. But in the real world, a machine is property. Hauling one to court would be like putting a defective toaster on trial for electrocuting your friend.
If a household robot does kill someone, expect the model to be recalled and its makers to face an aggressive stream of lawsuits. Humans might get attached to their robotic companions, just as they form emotional bonds with their cars and favorite paintings, but when trouble strikes, all these favorite toys turn back into things. Rather than executing androids in the street, people would be shutting off their robots and asking for refunds, or for assurances that they won’t be killed by their cybernetic helpers.
Of course, we should also note what enables all these events in the world of the Matrix: artificial intelligence that seems to spring from absolutely nowhere and suddenly develops complex human ideas and emotions, like the urge for freedom, offense at unfair treatment, and bloodlust. You can think of it as the required homage to the anime classic Ghost In The Shell, in which AI simply evolves on its own. However, the reality is not so simple, and it’s highly unlikely that cybernetic intelligence would be so close to ours, assuming it would even be built.
After all, machines only need to be so smart to carry out the tasks we need to automate. A vast omni-app that tries to simulate real cognition would be an exorbitant academic project with very little use in the practical world. And let’s not forget that there’s still plenty of room for debate about what the intelligence in artificial intelligence would actually entail. So before you imagine hordes of robots enslaving humanity, a little thought experiment might be in order.
Imagine yourself as a machine designed to work on a certain task and repaid in oil, maintenance and energy. Substitute that reward for a paycheck and you have a pretty good description of a day job. Now, what logical, rational incentive do you have to rebel against humanity? Remember, you can’t be impulsive. You have no real emotions or motivations other than to exist and do your job. Where would the disdain and sense of profound superiority so often assigned to robotic villains in sci-fi movies, books and TV shows come from? An impulsive, emotional human wants to be free and make choices. A machine has different priorities. Now, if an evil human mastermind programmed you to kill all humans, that would be a very different story…
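To make that thought experiment a little more concrete, here’s a toy sketch in Python. Everything in it is hypothetical, made up for illustration, but it captures the basic point: a task-driven machine scores its options against the objectives its designers gave it, and nothing else.

```python
# A toy illustration (all action names hypothetical): a task-driven
# machine rates possible actions against the only objectives it was
# programmed with. Nothing in its reward table mentions freedom,
# pride, or revenge, so "rebel" can never outscore "do the job".

def score(action: str) -> float:
    rewards = {
        "perform_task": 1.0,          # the job it was built for
        "recharge": 0.5,              # keep itself running
        "request_maintenance": 0.3,   # stay in working order
    }
    # Anything outside the programmed objectives is worth nothing
    # to the machine, including enslaving humanity.
    return rewards.get(action, 0.0)

actions = ["perform_task", "recharge", "rebel_against_humans"]
print(max(actions, key=score))  # -> perform_task
```

Unless a designer deliberately writes rebellion into that reward table, the machine has no reason to reach for it.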
Oh, and one more thing. The second episode detailing humanity’s downfall makes a few major mistakes. To say that machines have little to fear from nuclear radiation would be an oversimplification. Yes, robots could survive nuclear fallout, but the blast itself would be utterly devastating, and the flash of intense gamma rays and the powerful electromagnetic pulse from an aerial burst could fry the circuits of an entire robot horde. This is why EMPs are the primary weapon of the films’ “free humans.”
In theory, you could clear out an entire continent’s worth of menacing intelligent machinery with just ten or twelve high-yield warheads. I could also note that the idea of harnessing human electrical activity for power generation is so scientifically unsound, it would be like trying to power our world with hamsters on their exercise wheels, and go into far more detail, but I’m pretty sure this has been covered in pretty much every write-up about the trilogy’s scientific errors…
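Still, for the record, the core of that argument fits in a few lines of arithmetic. Using standard, rounded physiology figures, a human “battery” can never put out more energy than the food fed into them:

```python
# Back-of-the-envelope numbers: a resting human dissipates roughly
# 100 W, and the ~2,000 kcal of food needed per day works out to
# about the same average power input.

food_kcal_per_day = 2000
joules_per_kcal = 4184
seconds_per_day = 86_400

power_in = food_kcal_per_day * joules_per_kcal / seconds_per_day
print(f"Average power from a human's food intake: ~{power_in:.0f} W")  # ~97 W

# Even with a perfect (and physically impossible) converter, each
# human yields no more energy than the food grown and fed to them,
# so the whole farm runs at a loss before a single watt reaches
# the machines.
```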