looking for a ghost in the machine

May 29, 2009

A short while ago, I wrote about some of the challenges involved in creating artificial intelligence and raised the question of how exactly a machine would spontaneously attain self-awareness. While I’ve gotten plenty of feedback about how far technology has come and how it’s imminent that machines will become much smarter than us, I never got any specifics as to how exactly this would happen. To me, it’s not a philosophical question because I’m used to looking at technology from a design and development standpoint. When I ask for specifics, I’m talking about functional requirements. So far, the closest thing I’ve found to an outline of the requirements for a super-intelligent computer is a paper by University of Oxford philosopher and futurist Nick Bostrom.

hive mind

The first thing Bostrom tries to do is establish a benchmark by which to grade what he calls a super-intellect and qualify his definition. According to him, this super-intellect would be smarter than any human mind in every capacity, from the scientific to the creative. It’s a pretty lofty goal because designing something smarter than yourself requires that you build something you don’t fully understand. You might have a sudden stroke of luck and succeed, but it’s more than likely that you’ll build a defective product instead. Imagine building a DNA helix from scratch with no detailed manual to go by. Even if you have all the tools and know where to find some bits of information to guide you, when you don’t know exactly what you’re doing, the task becomes very challenging and you end up making a lot of mistakes along the way.

There’s also the question of how exactly we evaluate what the term smarter means. In Bostrom’s projections, when you have an intelligent machine become fully proficient in a certain area of expertise like, say, medicine, it could combine with another machine which has an excellent understanding of physics, and so on, until all this consolidation leads to a device that knows all that we know and can use all that cross-disciplinary knowledge to gain insights we just don’t have yet. Technologically that should be possible, but the question is whether a machine like that would really be smarter than humans per se. It would be far more knowledgeable than any individual human, granted. But it’s not as if there aren’t experts in particular fields already coming together to make all sorts of cross-disciplinary connections and discoveries. What Bostrom calls a super-intellect is actually just a massive knowledge base that can mine itself for information.

The paper was last revised in 1998, when we didn’t have the enormous digital libraries we take for granted in today’s world. Those libraries seem a fair bit like Bostrom’s super-intellect in their function, and if we were to combine them and mine their depths with sophisticated algorithms which look for cross-disciplinary potential, we’d bring his concept to life. But there’s not a whole lot of intelligence there. Just a lot of data, much of which would be subject to change or revision as research and discovery continue. Just like Bostrom says, it would be a very useful tool for scientists and researchers. However, it wouldn’t be thinking on its own and giving humans advice, even if we put all this data on supercomputers which could live up to the paper’s ambitious hardware requirements. Rev it up to match the estimated capacity of our brain, the paper says, and with the proper software, watch a new kind of intellect wake up and take shape.
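To make that distinction concrete, here’s a toy sketch of what such self-mining could look like: a routine that flags terms filed under more than one field as candidates for cross-disciplinary connections. The fields and keyword lists are all invented for the example, and a real system would need far more sophisticated methods than keyword overlap.

```python
# Toy sketch of a knowledge base "mining itself" for cross-disciplinary
# leads. The fields and keyword lists below are invented for illustration.
from collections import defaultdict

corpus = {
    "medicine": ["protein", "misfolding", "disease", "aggregation"],
    "physics":  ["energy", "landscape", "folding", "aggregation"],
    "biology":  ["gene", "regulation", "protein", "disease"],
}

# Map every term to the set of fields it appears in.
term_fields = defaultdict(set)
for field, terms in corpus.items():
    for term in terms:
        term_fields[term].add(field)

# Terms spanning more than one field hint at cross-disciplinary potential.
for term, fields in sorted(term_fields.items()):
    if len(fields) > 1:
        print(f"{term!r} bridges {sorted(fields)}")
```

Run it and it dutifully reports that “aggregation” links medicine to physics and that “protein” and “disease” link medicine to biology, but there’s no understanding anywhere in that loop. It’s bookkeeping, and piling on more data doesn’t change its nature.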

According to Bostrom, the human brain operates at 100 teraflops, or 100 trillion floating point operations per second. As he predicted, computers reached this speed by 2004 and have since gone far beyond it. In fact, we have supercomputers which are as much as ten times faster. Supposedly, at these operating speeds, we should be able to write software which allows supercomputers to learn by interacting with humans and sifting through our digitized knowledge. But the reality is that we’d be trying to teach an inanimate object made of metal and plastic how to think and solve problems, something we’re born with and hone over our lifetimes. You can teach someone how to ride a bike and how to balance, but how exactly would you teach someone to understand the purpose of riding a bike? How would you tell something with no emotion, no desires, no wants and no needs why it should go anywhere? That deep layer of motivation and wiring took several billion years to appear and was honed over 600 million additional years of evolution. When we start trying to make an AI system with a mind comparable to ours, we’re effectively way behind from the get-go.
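The arithmetic behind those figures takes only a few lines. The sketch below takes Bostrom’s 100-teraflop brain estimate at face value (it’s a contested number) and compares it to IBM’s Roadrunner, which broke the petaflop barrier in 2008:

```python
# Back-of-the-envelope comparison; Bostrom's brain estimate is contested
# and used here only as stated in his paper.
brain_ops = 100e12          # 100 teraflops = 10^14 operations per second
roadrunner_ops = 1.026e15   # IBM Roadrunner (2008), first past a petaflop

print(f"Roadrunner vs. brain estimate: {roadrunner_ops / brain_ops:.1f}x")
# -> about 10x, yet nobody mistakes it for a tenth of a mind
```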

To truly create an intelligent computer which doesn’t just act as if it’s thinking or carry out mechanical actions which are easy to predict and program, we’d need to impart all that information in trillions of lines of code and trick the circuitry into deducing that it needs to behave like a living being. And that’s a job that couldn’t be done in less than a century, much less in the next 20 to 30 years as projected by Ray Kurzweil and his fans.

[ eerie illustration by Neil Blevins ]

  • Vincius

    That’s an interesting point of view. But I differ from you when you say there’s no way a human could design a device that could surpass him in intelligence and creativity. It’s not a matter of writing down every single piece of information the machines will be able to access. For me, as time goes by and computers get more and more powerful, emergence will show up, gathering information in ways we couldn’t have predicted, and they’ll become even more capable than we ever thought they could be.

    Just like our evolutionary process made us intelligent beings, the massive amount of data available on a simple hard drive would allow computers to chart their own path to perfection. Except that, being made of inorganic matter, they could evolve so much faster than us. I just don’t see why we should think our technology will always be as dumb as it is now. Someday (maybe far from today), something made of bits and bytes will start showing consciousness.

    (I’m not an expert though, and sorry for my bad English.)

  • Greg Fish

    Just to be clear, I didn’t say that humans can’t design something smarter than us. We can, but it would be very difficult because we’d be working on a system we don’t really understand.

    I’m pretty skeptical about computer evolution though. We can’t assign random biological processes to static machines built as extensions of our needs. Humans evolved to be intelligent through specific mutations and natural selection. Computers would need to be engineered to not only change their hardware and software, but to optimize them as well, and that wouldn’t really be evolution. More like digital eugenics. There’s a quick sketch of what I mean at the end of this comment.

    But again, speaking from professional curiosity, I would absolutely love to take a look at the functional requirements for a Singularity-style AI system.
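
    Here’s that sketch, a bare-bones genetic algorithm. The target string and every parameter in it are arbitrary choices for the example, and that’s exactly the point: the goal, the fitness function and the survival cutoff are all written by the engineer, not by any environment.

    ```python
    # A bare-bones genetic algorithm, to show why this is closer to
    # "digital eugenics" than to evolution: the goal and the selection
    # pressure are both defined by the programmer up front.
    import random

    LETTERS = "abcdefghijklmnopqrstuvwxyz"
    TARGET = "selfimprovement"  # an arbitrary goal, fixed by the engineer

    def fitness(candidate):
        # Engineer-defined selection pressure: characters matching TARGET.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        # Rewrite each character with a small probability.
        return "".join(random.choice(LETTERS) if random.random() < rate
                       else c for c in candidate)

    population = ["".join(random.choice(LETTERS) for _ in TARGET)
                  for _ in range(200)]

    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        survivors = population[:50]  # we also decide who "survives"
        # Keep the best candidate as-is, breed the rest from survivors.
        population = [population[0]] + [mutate(random.choice(survivors))
                                        for _ in range(199)]

    print(f"reached {population[0]!r} after {generation} generations")
    ```

    Swap in a different fitness function and the “evolution” obediently chases a different goal. Nothing open-ended happens unless we build the open-endedness in ourselves.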

  • Molly (http://burgundybelle.livejournal.com)

    I’ve always been fascinated by the possibility of AI, and hoped to see it in my lifetime, but one word in the article dashed my hopes. Motivation. That never occurred to me as a component of AI, although now I realize that’s what I was seeing in scifi all along.

  • musubk

    Musing about AI always brings me back to the conclusion that I have no idea what intelligence ‘is’. Let’s say we make some sort of super-advanced robot that’s indistinguishable from a human on the outside, even though it’s just following whatever was programmed into it. How can we objectively say a human has some quality of intelligence the robot doesn’t, if the only way we can distinguish between them is if someone tells us which is which? It makes me sway more towards thinking of us as automatons programmed by evolution than towards the robot as having gained some transcendent quality, though.

    I always liked Blade Runner, and the HAL9000 sequences of 2001 :P

  • Vincius

    Yet humans are capable of evolving not only biologically but mentally as well. Through science we can change our own bodies into something that fits our needs the best. I don’t see why, given the proper time, computers wouldn’t someday attain the ability to shape their own hardware (bodies). It was our ignorance that kept us away from bioengineering all this time, but when artificial intelligence emerges from machines, it’ll have all our centuries of research available for improvements…

    I also think we should keep in mind that our brain wasn’t designed for writing or calculating either. It just happened that, by being able to solve simple problems of survival, we could use that intelligence to achieve more than just catching bananas. You’re right, we build robots fit to our needs, but why should we believe nothing will ever show up and surprise us?

  • Greg Fish

    It’s true that our minds weren’t specifically designed to do anything. They’re just there, helping us live and reproduce, which means that our creativity is actually a side effect of evolution’s vague rules.

    But computers were specifically designed for certain tasks. They’re an extension of our needs and they’re built to make our lives more efficient and convenient. There’s not very much room for some sort of evolutionary creativity when your task is defined by very rigid rules and you’re specifically built to do certain things and nothing else.

    For robots to evolve, we would need to build them to do exactly that.

  • Pierce R. Butler

    … that’s a job that couldn’t be done in less than a century, much less in the next 20 to 30 years…

    Certainly engineers couldn’t do it in that time. Whether genetic algorithms could “build something you don’t fully understand” would be a function of how much hardware and logistical support was thrown at the problem. Whether that would constitute a “singularity” is by (convenient) definition beyond present prediction.

  • Kaptain

    Teraflop is *trillion*, not billion.

  • Loeck

    I would think that it would (1) be easier to make a “blank” program (more or less) that could learn, or (2) be more likely that some random programmer just flinging code around would accidentally make an algorithm that can “learn”.

  • Greg Fish

    “I would think that it would (1) be easier to make a “blank” program (more or less) that could learn…”

    Actually, that’s ridiculously difficult, because a computer lacks any motivation to learn. We have to invent tasks for it and specify how to do them, which takes many of the variables where innovation and intelligence could surface out of the equation.

    We can make computers learn, but they end up doing it on our terms because we have to tell them what learning is and the different ways to do it.
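
    To make that concrete, here’s about the simplest learner there is, a perceptron picking up the logical AND function. The task, the examples and the definition of error all come from us; the machine just adjusts numbers. (The setup is invented for illustration, not taken from any particular system.)

    ```python
    # A minimal perceptron learning logical AND, to show what machine
    # "learning" consists of: we pick the task, we supply the examples
    # and we define what counts as an error.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w1 = w2 = bias = 0.0
    rate = 0.1

    for _ in range(20):                    # we decide how long it trains
        for (x1, x2), target in examples:  # we decide what the truth is
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output        # we define what "wrong" means
            w1 += rate * error * x1
            w2 += rate * error * x2
            bias += rate * error

    print(w1, w2, bias)  # it "learned" AND, but only the task we framed
    ```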