
how to disassemble our machine overlords

Tech evangelists seem really worried about making friends with robots, forgetting that we can always override their programming.

[image: killer robot humor]

While going through the backlog of things I’ve been meaning to cover, I found this three-year-old article about cyberneticist and Singularitarian Kevin Warwick and how he plans to outsmart future artificial intelligences, vastly enhanced cyborgs, and other technological oddities that sound as if they were pulled from The Matrix or maybe an updated version of Blade Runner. Warwick is an interesting, if odd, character in the world of computer science, and some of his experiments, which involve various implants connecting him to all sorts of electronic devices, make for an interesting splash in the press. They’re less dangerous or profound than, say, the lamprey-brain-controlled robot, but they’re still kind of neat and really make you wonder how easy it would be to operate machines by thought or emotion in the very near future. So what has always perplexed me about Warwick’s attitude toward the future is how he goes from a media-friendly implant that lets him open doors remotely, or makes what is essentially an updated version of a mood ring, to a potentially dystopian world in which humans untouched by cybernetic implants are held in very low regard.

It would be easy to just point to the story’s writer and say that he’s quoting the more colorful bits of Warwick’s musings about the future, but the cyberneticist has been quite consistent in predicting that cyborgs will soon become the norm, and over the years he has repeated this scenario whenever he’s asked about AI, conflating cyborgs with intelligent machines, perhaps thinking that cyborgs will eventually be wired to immense supercomputers capable of sapient thought and far-reaching decision making. His thoughts fall squarely in line with those of the Singularitarians who are busily writing reams of papers about how to make a friendly and cooperative artificial intelligence, lest we be regarded by these machines as obsolete and exterminated by their thoughtlessness or outright hostility, and who justify their caution with musings from popular science, which tends to drastically simplify what actually happens around us and is often out of date with current research. It really is difficult to convey all the ins, outs, and nuances of a field to which people can devote their lives in relatively short articles intended for as wide a slice of the public as possible, and major omissions in this mode of science communication are absolutely unavoidable. But pop sci articles are usually written with strongly implied hints that there’s far more to the story and that one shouldn’t rush to take what was in Popular Mechanics or New Scientist as holy writ or a peer-reviewed body of work, like the kind you see in Nature or IEEE journals.

Strangely enough, Singularitarians seem to have taken too many metaphors comparing human brains to cutting-edge computers, tales about how brains have triumphed over brawn in nature, and praise for research that seems promising but is very far from complete or practical, and decided to weave it all into a tale of humans at a crossroads, about to create something far smarter than themselves and in need of a strategy to cope with an unruly creation they no longer understand. Fortunately, things don’t work that way in the real world. Those of us who write code and work on massive systems, ones so complex that they can seem to have a mind of their own at times, actually have a very good understanding of how they’re built, and when we don’t, it’s because we weren’t given the proper documentation, not because the system somehow outgrew its original design by suddenly deciding to modify large chunks of its source code in an effort to self-optimize. Even when we deal with big blocks of auto-generated code, like the kind used to provide detailed instructions for how to serialize objects sent between web services, there’s always a pattern we can find and reverse-engineer. It looks really overwhelming at first, especially when you have no experience with the system itself and find yourself slowly dredging through tens of thousands of lines of dense syntactic mazes, but once you learn enough to dial in a command that generates such files, everything quickly starts to make perfect sense.
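To give a rough idea of what that pattern looks like, here’s a small, hypothetical sketch in Python. The class names and fields are invented for illustration, and real stubs emitted by a schema or WSDL compiler are far more verbose, but every type in them follows the same mechanical template:

```python
# Hypothetical sketch of the kind of repetitive code a serialization tool
# generates from a service contract; names are invented, real generated
# stubs are much longer, but each type repeats the same template.

class Customer:
    def __init__(self, name=None, email=None):
        self.name = name
        self.email = email

    def to_dict(self):
        return {"name": self.name, "email": self.email}

    @classmethod
    def from_dict(cls, data):
        return cls(name=data.get("name"), email=data.get("email"))


class Order:
    def __init__(self, order_id=None, customer=None):
        self.order_id = order_id
        self.customer = customer

    def to_dict(self):
        return {
            "orderId": self.order_id,
            "customer": self.customer.to_dict() if self.customer else None,
        }

    @classmethod
    def from_dict(cls, data):
        return cls(
            order_id=data.get("orderId"),
            customer=Customer.from_dict(data["customer"]) if data.get("customer") else None,
        )


# Once you spot the template, tens of thousands of lines of this stop being
# a maze and start reading like the same page repeated over and over.
order = Order.from_dict({"orderId": 42, "customer": {"name": "Ada", "email": "ada@example.com"}})
print(order.to_dict())
```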

One of the most important things to keep in mind about machines is that they’re our tools. They can be highly complicated, specialized tools built only by certain specialists, but they are tools nonetheless. As we keep trying to do more and more with them, we make their functions more encapsulated and abstract, which can be very scary if you’re not familiar with the details of how they work. But we have books that specify where every node and every transistor on a circuit board can be found, what it does, and how it does it, and computer science programs still teach you how to do the kind of math computers do in your head, so that when they make mistakes, you can perform their calculations on a piece of paper and track where in the process they go wrong. That’s why, unlike Warwick and the Singularitarians, I’m not afraid of a robot uprising or of being displaced by waves of cyborgs for whom humans are a sub-species (whatever that means). I’ve seen how machines are built, I know how to change what they do and how they do it, and I’m just one of the millions of people with the kind of practice and education required for that. And it’s very hard to be apprehensive of something that you, and quite literally millions of others, know how to quickly take apart and rebuild to do your bidding…
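As a trivial illustration of that kind of by-hand check (a sketch in Python, not anything from the article): floating point numbers are stored as binary fractions, so a value like 0.1 can’t be represented exactly, and you can reproduce the machine’s “mistake” on paper from the values it actually stores:

```python
# A small example of checking the machine's arithmetic by hand: 0.1 and 0.2
# are stored as the nearest binary fractions, so their sum rounds to a value
# slightly above 0.3 -- a discrepancy you can rederive on paper from the
# exact stored values printed below.
from decimal import Decimal

a, b = 0.1, 0.2
print(a + b == 0.3)      # False: the machine's answer is off
print(Decimal(a))        # the exact value actually stored for 0.1
print(Decimal(b))        # the exact value actually stored for 0.2
print(Decimal(a + b))    # the rounded sum, just above 0.3
```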

# tech // artificial intelligence / computer science / dystopia / futurism

