
waiting for our post-singularity robot overlords

Singularitarian arguments for the seeming inevitability of artificial super-intelligence are little more than wild extrapolations of pop sci clichés.
Illustration of a retro giant robot, from a poster by Zelia

Here’s an important note about transhumanism and Singularitarianism. While the two ideas get joined at the hip by bloggers and reporters because Kurzweil, the man they turn to on both subjects, embraces both concepts with equal enthusiasm, one doesn’t need to be a Singularitarian to be a transhumanist. A major focus for the latter is the supposedly inevitable advancement of artificial intelligence to a superhuman level, a notion revered in the canon of the Singularity as a kind of Rapture during which clever machines take over from their human masters and remake the world in their own image.

Since these clever machines are to be much smarter than their former human masters, a fair bit of Singularitarian bandwidth gets devoted to figuring out how to make sure that the coming machine overlords are friendly and like working with humans, often resulting in papers that frankly don’t make much sense to yours truly, for reasons covered previously. Yes, we don’t want runaway machines deciding that we really don’t need all that much electricity or water, but we’re probably not going to have to worry about random super-smart computers raising an army to dispose of us.

Keeping in mind the Singularitarian thought process, let’s take a look at a more general post written by someone most of my long-time readers will likely recognize, the Singularity Institute’s Michael Anissimov. It’s basically a rumination on the challenges of corralling the coming superhuman intelligence explosion, and as it floats off into the hypothetical future, it manages to hit all the high notes I’m used to hearing about AI, along with the kind of awkward shoehorning of evolution into technology we often get from pop sci evangelists. Right off the bat, Michael recycles a canard about human prehistory we now know to be inaccurate, framing a rise in intelligence in modern humans as our key to domination over all those other hominid species tens of thousands of years ago, and trying to present us as the next Neanderthals who will eventually face far smarter and much more competitive superhuman robots able to outthink us in a millisecond…

Intelligence is the most powerful force in the universe that we know of, obviously the creation of a higher form of intelligence/power would represent a tremendous threat/opportunity to the lesser intelligences that come before it, and whose survival depends on the whims of the greater [form of] intelligence/power. The same thing happened with humans and the “lesser” hominids that we eliminated on the way to becoming the number one species on the planet.

Actually, about that. Modern humans didn’t so much eliminate all our competitors when we slowly made it up to the Middle East from North Africa and spread to Europe and Asia after the Toba eruption as outcompete and interbreed with them, since we were close enough biologically to hybridize. In Europe, modern humans didn’t slay the Neanderthals and push them out to the Atlantic where they eventually died of starvation and low birth rates, as the popular narrative goes. We’re actually part Neanderthal, and Neanderthals, by the way, weren’t hulking brutes of limited intelligence but quite clever hunters who showed signs of having rituals and an appreciation for tools and basic decorations.

Modern humans seem to be more creative and curious, qualities a simple side-by-side comparison between us and our extinct or absorbed evolutionary cousins wouldn’t show as signs of super-intelligence, and we had a more varied diet, which was beneficial during harsh times. As we move away from the half-guesses popularized around a century ago, we should be gaining an appreciation of how complex and multifaceted cognition actually is, and realizing that our intelligence isn’t measured in discrete levels that determined which hominids lived and which died. Just like all new branches of the tree of life, humans as we know them are hybrid creatures representing a long period of evolutionary churn.

So where does this leave the narrative of the next big leap in intelligence, superhuman machinery? Well, not on very firm ground. I’ve consistently asked for definitions of superhumanly intelligent machines, and all of them seem to come down to doing everything humans do but faster, which seems like a better way to judge the intelligence of a clichéd “genius” on TV than actual cognitive skill. How fast you can solve a puzzle isn’t an indication of how smart you are; that’s demonstrated by your ability to solve the puzzle at all. I know there are some tasks with which I tend to slow down and take my time to make sure I get them done right. Does that mean someone who performs the exact same task in half the time is twice as smart as I am, even if we come up with the same exact results?
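Since I write code, here’s a quick way to make that point concrete. Below is a toy Python sketch of my own (nothing from Michael’s post, and the function names are made up) that solves the same puzzle two different ways. The answers are identical; only the clock differs, and nobody would call a lookup cache a leap to a higher plane of cognition…

```python
import time
from functools import lru_cache

def fib_slow(n):
    """Naive recursion: exponential time, same answers."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Memoized recursion: linear time, same answers."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

for solver in (fib_slow, fib_fast):
    start = time.perf_counter()
    answer = solver(30)
    print(f"{solver.__name__}: {answer} in {time.perf_counter() - start:.4f}s")
# Both report 832040; one is just faster at arriving there.
```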

According to some Singularitarians, yes. And what role does creativity play in all this? Some humans are highly inventive and constantly brimming with ideas. Others couldn’t even guess where to start modifying the dullest and simplest piece of paperwork. And yet, somehow, say Singularitarians, a future computer array will have all that covered, and its creativity can take very sinister turns, turns that read as if they were lifted out of a Stephen King novel. In his post, Michael quotes oft-cited theorist Stephen Omohundro on the potentially nefarious nature of goal-driven AI…

Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.

Pardon me, but who the hell is building a non-military robot that refuses to shut itself off and tries to act like a virus, randomly reproducing? That’s not a chess-playing robot! That’s a freaking berserker drone on a rampage, requiring a violent intervention to stop! And here’s the thing: since I actually write code, I know that unless I specify how to avoid being shut down, robots can be turned off with a simple switch. For a machine to resist being turned off, it would have to modify its BIOS settings and programmatically override every command telling it to shut down. And since all the actions the robot was assigned, or learned through ANNs or some sort of genetic algorithm designed to gauge its performance at a task, take place at the application layer, a layer which interfaces with the hardware through a kernel with various drivers and sees the actual body of the robot as a series of abstractions, it wouldn’t even know about the BIOS settings without us telling it how to go and access them. I’d be a lot more afraid of the programmer than of the robot in Omohundro’s scenario, since this kind of coding could easily get people killed with no AI involved. So if anything, the example of a nefarious bot we’re given above is actually backwards. Without special instructions allowing it to resist human actions, we could always turn the machine off and do whatever we want with it.
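To put that in code, here’s a minimal sketch, in Python with names I made up for illustration, of what an ordinary application-layer control loop looks like. The only way it ends up “resisting” shutdown is if a programmer deliberately registers the handler, and even then, SIGKILL and the power switch still win…

```python
import signal
import time

def play_chess_forever():
    # A hypothetical chess bot's main loop, running as an ordinary
    # application-layer process with no view of BIOS or firmware.
    while True:
        # ... evaluate positions, pick a move ...
        time.sleep(1)

def refuse_to_die(signum, frame):
    # Ignoring a kill request is behavior a human has to write in on
    # purpose; it doesn't emerge from learning to play chess.
    print("refusing to shut down")

# Without the line below, a plain `kill <pid>` ends the process and the
# bot has no say in the matter. Uncomment it and you've built
# Omohundro's monster yourself, by hand:
# signal.signal(signal.SIGTERM, refuse_to_die)

play_chess_forever()
```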

I’ve said it before and I’ll say it again: artificial intelligence is not going to simply transcend human thought on a schedule, and the very best-case scenario we can expect is a helpful supercomputer like GERTY, seen in the movie Moon. As its components were pieced together, we’d know how it was made and what it could do. Even the most sophisticated ANNs and genetic algorithms would still require training, and could be analyzed after every iteration to see how things were coming along. All this talk of AGIs just deciding to modify their source code because they suddenly could, by virtue of some unnamed future mechanism, ignores the fundamentals of what code is, what code does, and how it’s ultimately compiled into runnable executables.
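Here’s what “analyzed after every iteration” looks like in practice: a toy genetic algorithm of my own, with a made-up fitness function standing in for whatever task the machine is being graded on. The structure is the point: every generation passes through code we wrote, where we can log it, audit it, or simply stop the run…

```python
import random

def fitness(genome):
    # Made-up scoring function; a stand-in for grading performance
    # at whatever task the machine is being trained on.
    return -sum((gene - 0.5) ** 2 for gene in genome)

population = [[random.random() for _ in range(8)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    # The training loop is ours: nothing stops us from inspecting,
    # checkpointing, or halting it right here, every single iteration.
    print(f"gen {generation}: best fitness = {fitness(population[0]):.4f}")
    survivors = population[:10]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [
        [gene + random.gauss(0, 0.05) for gene in random.choice(survivors)]
        for _ in range(10)
    ]
```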

To make all this even more bizarre, Omohundro is an expert in computer science, yet what I’ve seen of his AGI-related work throws up red flag after red flag for me. It’s great that he was one of the developers of *Lisp and tried to merge functional languages and OOP in Sather, but that wasn’t exactly in the recent past, and what he says about artificial intelligence sounds more like his wish list from the late 1980s and early 1990s than a picture of how, and toward what, the field is actually progressing today. And it may be worth considering that a black box with whatever technology you need magically inside it, with no documentation or engineers to consult about its basics, isn’t a great premise for a paper on AI, especially when you’re trying to look into the near future.

# tech // artificial intelligence / computer science / futurism / technological singularity

