waiting for our post-singularity robot overlords

January 21, 2011 — 12 Comments

There’s an important distinction to draw between transhumanism and Singularitarianism. The two ideas get tied at the hip by bloggers and reporters because Kurzweil, the man they turn to regarding both, embraces both concepts with equal enthusiasm, but one doesn’t need to be a Singularitarian to be a transhumanist. A major focus for Singularitarians is the supposedly inevitable advancement of artificial intelligence to a superhuman level, a notion revered in the canon of the Singularity as a kind of Rapture during which clever machines take over from their human masters and remake the world in their own image. Since these clever machines are to be much smarter than their former human masters, a fair bit of Singularitarian bandwidth gets devoted to making sure that the coming machine overlords are friendly and like working with humans, often resulting in papers that frankly don’t make much sense to yours truly for reasons covered previously. Yes, we don’t want runaway machines deciding that we really don’t need all that much electricity or water, but we’re probably not going to have to worry about random super-smart computers raising an army to dispose of us.

With the Singularitarian thought process in mind, let’s take a look at a post written by someone most of my long-time readers will likely recognize, the Singularity Institute’s Michael Anissimov. It’s basically a rumination on the challenges of corralling the coming superhuman intelligence explosion, and as it floats off into the hypothetical future, it manages to hit all the high notes I’m used to hearing about AI, along with the kind of awkward shoehorning of evolution into technology we often get from pop sci evangelists. Right off the bat, Michael recycles a canard about human prehistory we now know to be inaccurate, framing a rise in intelligence in modern humans as our key to domination over all the other hominid species tens of thousands of years ago, and presenting us as the next Neanderthals who will eventually face far smarter and much more competitive superhuman robots able to outthink us in a millisecond…

Intelligence is the most powerful force in the universe that we know of, obviously the creation of a higher form of intelligence/power would represent a tremendous threat/opportunity to the lesser intelligences that come before it, and whose survival depends on the whims of the greater [form of] intelligence/power. The same thing happened with humans and the “lesser” hominids that we eliminated on the way to becoming the number one species on the planet.

Actually, about that. Modern humans didn’t so much eliminate all our competitors when we slowly made it up to the Middle East from North Africa and spread to Europe and Asia after the Toba eruption as outcompete them and interbreed with them, since we were close enough biologically to hybridize. In Europe, modern humans didn’t slay the Neanderthals and push them out to the Atlantic, where they eventually died of starvation and low birth rates, as the popular narrative goes. We’re actually part Neanderthal, and Neanderthals, by the way, weren’t hulking brutes of limited intelligence but quite clever hunters who showed signs of having rituals and an appreciation for tools and basic decorations. Modern humans seem to be more creative and curious, qualities a simple side-by-side comparison between us and our extinct or absorbed evolutionary cousins wouldn’t register as signs of super-intelligence, and we had a more varied diet, which was beneficial during harsh times. As we move away from the half-guesses popularized around a century ago, we should be gaining an appreciation of how complex and multifaceted cognition actually is, and realizing that our intelligence isn’t measured in discrete levels that determined which hominids lived and which died. Just like all new branches of the tree of life, humans as we know them are hybrid creatures representing a long period of evolutionary churn.

So where does this leave the narrative of the next big leap in intelligence, superhuman machinery? Well, not on very firm ground. I’ve consistently asked for definitions of superhumanly intelligent machines, and all of them seem to come down to doing everything humans do but faster, which seems like a better way to judge the intelligence of a clichéd "genius" on TV than actual cognitive skill. How fast you can solve a puzzle isn’t an indication of how smart you are; that’s demonstrated by your ability to solve the puzzle at all. I know there are some tasks I tend to slow down on and take my time with to make sure I get them done right. Does that mean someone who performs the same task just as well in half the time is twice as smart as I am, even if we came up with the exact same results? According to some Singularitarians, yes. And what role does creativity play in all this? Some humans are highly inventive and constantly brimming with ideas. Others couldn’t even guess where to start modifying the dullest and simplest piece of paperwork. Yet somehow, say Singularitarians, a future computer array will have all that covered, and its creativity can take very sinister turns, turns that read as if they were lifted out of a Stephen King novel. In his post, Michael quotes oft-cited theorist Stephen Omohundro on the potentially nefarious nature of goal-driven AI…

Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.

Pardon me, but who the hell is building a non-military robot that refuses to shut itself off and tries to act like a virus, randomly reproducing? That’s not a chess-playing robot! That’s a freaking berserker drone on a rampage, one requiring a violent intervention to stop! And here’s the thing: since I actually write code, I know that unless I specify how to avoid being shut down, robots can be turned off with a simple switch. For a machine to resist being turned off, it would have to modify its BIOS settings and programmatically override every command telling it to shut down. All the actions the robot was assigned, or learned through ANNs or some sort of genetic algorithm designed to gauge its performance at a task, take place at the application layer. That layer interfaces with the hardware through a kernel with various drivers and sees the actual body of the robot as a series of abstractions, so the robot wouldn’t even know about the BIOS settings without us telling it how to go and access them. I’d be a lot more afraid of the programmer than of the robot in Omohundro’s scenario, since this kind of coding could easily get people killed with no AI involved. So if anything, the example of a nefarious bot we’re given above is actually backwards. Without special instructions allowing it to resist human actions, we could always turn the machine off and do whatever we want with it.
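To make that layering concrete, here’s a minimal Python sketch of the point, with hypothetical names throughout. It’s an illustration under my assumptions, not anyone’s actual robot code: the application-layer planner only ever sees the abstractions the runtime hands it, and the operating system can terminate the process regardless of what the planner is "trying" to do.

```python
# A minimal sketch (all names hypothetical) of why an application-layer agent
# can't resist a shutdown it was never given access to. The planner's entire
# world is the abstraction passed in by the runtime; power management,
# firmware, and the kill switch simply aren't part of it.

import signal
import sys

class ChessPlanner:
    """Application-layer logic: it picks moves and does nothing else."""
    def next_move(self, board_state):
        # Placeholder for search/evaluation over the board abstraction.
        return "e2e4"

def handle_shutdown(signum, frame):
    # The OS delivers this signal no matter what goal the planner is pursuing.
    # Nothing at the application layer can intercept a pulled plug or a kill -9.
    print("Shutdown requested, exiting.")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_shutdown)

planner = ChessPlanner()
while True:
    move = planner.next_move(board_state={})  # the runtime supplies the abstraction
    # ... here the runtime would pass the move to motor drivers via the kernel ...
    break  # single pass, since this is only a sketch
```

Unless someone deliberately writes code giving the planner handles on the power supply or the firmware, there is nothing in its world to "resist" with, which is the whole point.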

I’ve said it before and I’ll say it again: artificial intelligence is not going to simply transcend human thought on a schedule, and the very best case scenario we can expect is a helpful supercomputer like GERTY, seen in the movie Moon. As its components were pieced together, we’d know how it was made and what it could do. Even the most sophisticated ANNs and genetic algorithms would still require training and could be analyzed after every iteration to see how things were coming along. All this talk of AGIs deciding to modify their source code just because they suddenly could, by virtue of some unnamed future mechanism, ignores the basic fundamentals of what code is, what code does, and how it’s ultimately compiled into runnable executables. To make all this even more bizarre, Omohundro is an expert in computer science, yet what I’ve seen of his AGI-related work throws up red flag after red flag for me. It’s great that he was one of the developers of *Lisp and tried to merge functional languages and OOP in Sather, but that wasn’t exactly in the recent past, and what he says about artificial intelligence sounds more like his wish list from the late 1980s and early 1990s than a picture of how, and toward what, the field is actually progressing today. And it may be worth considering that a black box with whatever technology you need magically inside it, with no documentation or engineers to consult about its basics, isn’t a great premise for a paper on AI, especially when you’re trying to look into the near future.
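As a toy illustration of that training loop, and nothing more, here’s a hedged sketch of a genetic algorithm in Python. Every name and number in it is made up for the example; the point is simply that each generation runs because we run it, and we can log, checkpoint, or halt the whole thing at any iteration we like.

```python
# A toy genetic algorithm: evolve genomes of 8 numbers whose sum approaches 42.
# Illustrative only; the inspection point in the loop is what matters here.

import random

def fitness(genome):
    # Stand-in evaluation: closer to the target sum is better.
    return -abs(sum(genome) - 42)

def mutate(genome):
    # Perturb one randomly chosen gene by a small amount.
    g = list(genome)
    i = random.randrange(len(g))
    g[i] += random.uniform(-1, 1)
    return g

population = [[random.uniform(0, 10) for _ in range(8)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    # Inspection point: every single generation, we can examine the state,
    # dump it to disk, or stop the run entirely. Nothing proceeds on its own.
    print(f"gen {generation}: best fitness {fitness(population[0]):.3f}")
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
```

Swap the toy fitness function for a robot’s task performance and the structure is the same: iteration by iteration, under the developer’s control, with nowhere for a sudden unsupervised leap to hide.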

[ illustration from a poster by Zelia ]

  • Professor Layman

    “Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety.”

Hah! This quote gave me a good laugh. I suppose it’ll quip “Check-Mate” as it throttles the life out of its first victim, too.

  • http://www.acceleratingfuture.com/michael/blog/ Michael Anissimov

    So never, not in a million years, will AIs become independent agents, or ever surpass humans in a general sense?

  • Greg Fish

You know Michael, it’s getting really tiring to field the same questions and explain the same misconceptions every time. I really hate to pull a Searle on you, but your question doesn’t make much sense. What general sense do you mean? Computers have always surpassed humans in calculation, statistical analysis, and meta-analysis of huge data sets humans can’t even grasp. That’s what makes them such fantastic tools.

But there’s a vast chasm between exceeding our computational abilities and suddenly conquering the entire human civilization for your grand plans just because a computer wants to. One can build a machine that can do a lot of damage by accident or on purpose, but the point is that a human has to build it. It won’t build itself.

  • Russ Toelke

    I can’t see AIs becoming independent unless they breed with us or vice-versa. We will always have the on/off switch otherwise.

  • Tim

First let me say… I don’t know much about computers/AI. If you’re still listening, I would say one thing sticks out. Even if our computer could think like us but a million times faster, it would have the problem of dealing with a clunky physical world where access to energy, resources, and time is limited and rationed. The laws of nature are the laws of nature; you can’t instantly build an Empire State Building or a moon rocket even if you could think of one instantly. Even if it could conceive of a plan to wipe us out within five seconds of “waking up,” it would take years to marshal the necessary resources to carry out its plan. So unless it was like the Terminator movies, where it has autonomous control of all our weapons systems and uses them to annihilate us, we would have plenty of time to stop it. If the automated factory designed to make automobiles stops taking human commands, we could pull the plug or dynamite the factory. Personally, to turn the Matrix movies on their head, our AI would probably create its own perfect virtual world where everything happens the way it wants, retire to it, and leave the physical world to us ugly bags of mostly water.

  • http://wading-in.net/walkabout Just Al

I don’t really follow AI – I find it to be far more wishful thinking than serious research, since we have so little understanding of what human intelligence is that we’re hell and gone away from replicating it. But there are two things that strike me when I see speculations like this.

The first is how far ahead the proponents seem able to jump without even remotely considering the intervening steps. “Look, we’re going to have self-reliant machines!” What a vast industry and undertaking that would be. Machines that mine minerals, transport them, process them, then use them to produce more of themselves? And, of course, repair themselves as they wear out? We’re either talking about very versatile individual machines the size of a city block, or a couple of dozen square miles of factories solely dedicated to handling all of this (and the power demands as well). Otherwise, the self-reliant machine simply thrashes around when its shoulder joint blows out, or grinds to a halt when someone cuts power.

    Even worse, however, is how amazingly stupid such AI fanatics seem to think everyone else must be, and will continue to be throughout the centuries it would take to come to this. Do any of these starry-eyed nitwits realize what kind of base “instincts,” for want of a better word, would have to be developed to make a machine not only self-reliant, but motivated to “reproduce” and particular enough to avoid “death?” But while creating all this, humans will somehow forget to program in a safety switch? Because, yeah, while making a machine super-intelligent and self-aware, nothing bad could come of that? Gosh, it’s a good thing we have your brilliant foresight to predict where this is going to end up.

    This is like issuing warnings about building kites, because when they fly us to Jupiter we won’t be able to breathe. Meanwhile, no one yet knows how to make a kite fly.

    So never, not in a million years, will AIs become independent agents, or ever surpass humans in a general sense?

    Oh, shit Greg, he’s got you there! I wasn’t expecting that kind of crushing riposte! I guess since you can’t predict a million years into mankind’s future, his arguments make sense by default. Face it, you’re out of your league with an intellect this overwhelming.

    [Play along - he thinks he's clever. He's liable to get whiny and bratty if you ruin this impression.]

  • http://www.acceleratingfuture.com/michael/blog/ Michael Anissimov

    You didn’t answer my question. :(

    Say that I and my friends DID build such a machine, a machine that could conquer us all. Then, in a million years, could it ever exist? Yes or no? I’ll explicitly build it.

  • Greg Fish

    “Say that I and my friends DID build such a machine…”

See, again, you’re doing the very kind of black boxing I warned against in the post. Say you build this super-AGI. What technology stack will you use? What will it be able to do, and what was its original goal? How was it built? What design patterns does it use? If you want me to just nod, agree, and say that if you built the perfect machine it would totally take over, that’s not going to happen until I get some technical data about the machine in question and its architecture, and find all of it at least plausible.

  • http://wading-in.net/walkabout Just Al

    Say that I and my friends DID build such a machine, a machine that could conquer us all. Then, in a million years, could it ever exist? Yes or no? I’ll explicitly build it.

    How about if you and your friends create a speech-recognition package that actually works? That seems like a basic, fundamental step that might show you have the faintest hope of accomplishing such grandiose fantasies. And then you can design an inexpensive printer that doesn’t clog, or jam, or fail after 18 months. Is that setting the bar too high for you? Actually doing something useful? I realize these are a far cry from self-reliance, but that means it should be astoundingly easy for you, right? By the end of the year, you think?

  • Greg Fish

    “How about if you… create a speech-recognition package that actually works?”

Actually, there are a number of speech recognition programs that work quite well for the vast majority of people. Have you seen the new Google Voice with BabelFish? It looks pretty good. As for the printers, they’d be a lot better if they weren’t being built on the extreme cheap.

  • Pierce R. Butler

Speech recognition for transcription, process control, etc. = major progress.

Speech recognition as in, um, cognition – being able to verbally restate a given situation coherently, to parse the logic of oral input and look up relevant references, or just to handle the input side of Turing tests… well, that would be major-er progress, but if we had some ham we could have ham and eggs if we had some eggs.

  • Brett

    Pardon me, but who the hell is building a non-military robot that refuses to have itself shut off and tries to act like a virus to randomly reproduce?

I think the idea is that somebody will create a very intelligent AI capable of self-modification around some goal, but without specific limits on what it will do in its goal structure. In other words, the computer might decide that, as part of its goal of finding out every possible victory set-up in chess, it needs to co-opt industrial and energy resources currently used by human beings.