Looking for a ghost in the machine
Let's review the seminal paper which gave rise to the idea that enough computers reading enough data at the right speed will eventually create a super-intelligent entity.
A short while ago, I wrote about some of the challenges involved in creating artificial intelligence and raised the question of how exactly a machine would spontaneously attain self-awareness. While I’ve gotten plenty of feedback about how far technology has come and how imminent it is that machines will become much smarter than us, I never got any specifics as to how exactly this would happen. To me, it’s not a philosophical question because I’m used to looking at technology from a design and development standpoint. When I ask for specifics, I’m talking about functional requirements. So far, the closest thing to an outline of the requirements for a super-intelligent computer is a paper by University of Oxford philosopher and futurist Nick Bostrom.
The first thing Bostrom does is establish a benchmark by which to grade what he calls a super-intellect and qualify his definition. According to him, this super-intellect would be smarter than any human mind in every capacity, from the scientific to the creative. It’s a lofty goal because designing something smarter than yourself requires building something you don’t fully understand. You might have a sudden stroke of luck and succeed, but it’s more likely that you’ll build a defective product instead. Imagine building a DNA helix from scratch with no detailed manual to go by. Even if you have all the tools and know where to find some bits of information to guide you, when you don’t know exactly what you’re doing, the task becomes very challenging and you end up making a lot of mistakes along the way.
There’s also the question of how exactly we evaluate what the term smarter means. In Bostrom’s projections, when an intelligent machine becomes fully proficient in a certain area of expertise, like, say, medicine, it could combine with another machine which has an excellent understanding of physics, and so on, until all this consolidation leads to a device that knows everything we know and can use that cross-disciplinary knowledge to gain insights we just don’t have yet. Technologically, that should be possible, but the question is whether a machine like that would really be smarter than humans per se. It would be far more knowledgeable than any individual human, granted. But it’s not as if experts in particular fields don’t already come together to make all sorts of cross-disciplinary connections and discoveries. What Bostrom calls a super-intellect is actually just a massive knowledge base that can mine itself for information.
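To see why this consolidation yields a knowledge base rather than a mind, consider a minimal sketch of what merging discipline-specific stores and mining them actually looks like. The topics and facts below are hypothetical, chosen purely for illustration:

```python
# Toy sketch: merge per-discipline knowledge stores and "mine" them for
# cross-disciplinary overlap. The result is a lookup structure, not a thinker.

medicine = {
    "protein folding": "misfolded proteins are implicated in disease",
    "imaging": "MRI exploits nuclear magnetic resonance in tissue",
}
physics = {
    "protein folding": "folding can be modeled as energy minimization",
    "imaging": "magnetic resonance arises from nuclear spin",
}

def merge(*knowledge_bases):
    """Combine discipline-specific stores into one dict keyed by topic."""
    combined = {}
    for kb in knowledge_bases:
        for topic, fact in kb.items():
            combined.setdefault(topic, []).append(fact)
    return combined

def cross_disciplinary(combined):
    """Surface topics that draw facts from more than one discipline."""
    return {t: facts for t, facts in combined.items() if len(facts) > 1}

combined = merge(medicine, physics)
for topic, facts in cross_disciplinary(combined).items():
    print(topic, "->", facts)
```

Everything the merged system "knows" was put there by humans, and the mining step is just a filter over it, which is the article's point about where the intelligence actually resides.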
The paper was last revised in 1998, before we had the enormous digital libraries we take for granted today. Those libraries resemble Bostrom’s super-intellect in their function, and if we combined them and mined their depths with sophisticated algorithms looking for cross-disciplinary potential, we’d bring his concept to life. But there wouldn’t be a whole lot of intelligence there, just a lot of data, much of it subject to change or revision as research and discovery continue. Just as Bostrom says, it would be a very useful tool for scientists and researchers. However, it wouldn’t be thinking on its own and giving humans advice, even if we put all this data on supercomputers that could live up to the paper’s ambitious hardware requirements. Rev the hardware up to match the estimated capacity of our brain, the paper says, load the proper software, and watch a new kind of intellect wake up and take shape.
According to Bostrom, the human brain operates at 100 teraflops, or 100 trillion floating point operations per second. As he predicted, computers reached this speed by 2004 and went far beyond it; in fact, we have supercomputers which are as much as ten times faster. Supposedly, at these operating speeds, we should be able to write software that allows supercomputers to learn by interacting with humans and sifting through our digitized knowledge. But in reality, we’d be trying to teach an inanimate object made of metal and plastic how to think and solve problems, something we’re born with and hone over our lifetimes. You can teach someone how to ride a bike and how to balance, but how exactly would you teach someone to understand the purpose of riding a bike? How would you tell someone with no emotions, no desires, no wants and no needs why he should go anywhere? That deep layer of motivation and wiring took several billion years to appear and was honed over an additional 600 million years of evolution. When we try to make an AI system comparable to ours, we’re way behind from the get-go.
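The hardware numbers above are easy to check as back-of-the-envelope arithmetic. The figures here come straight from the text (Bostrom's 100-teraflop estimate and the roughly tenfold-faster supercomputers), not from any independent benchmark:

```python
# Back-of-the-envelope check of the hardware figures cited in the text.

brain_estimate_flops = 100e12  # Bostrom's estimate: 100 teraflops = 1e14 FLOPS

# The article claims some supercomputers are as much as ten times faster.
supercomputer_flops = 10 * brain_estimate_flops  # ~1 petaflop

ratio = supercomputer_flops / brain_estimate_flops
print(f"Brain estimate: {brain_estimate_flops:.0e} FLOPS")
print(f"Supercomputer vs. brain estimate: {ratio:.0f}x")
```

The arithmetic shows only that raw operation counts have caught up; as the paragraph argues, nothing in these numbers supplies the motivation or wiring that the software would need.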
To truly create an intelligent computer, one which doesn’t just act as if it’s thinking or perform mechanical actions that are easy to predict and program, we’d need to encode all that information in trillions of lines of code and trick circuitry into deducing that it needs to behave like a living being. And that’s a job that couldn’t be done in less than a century, much less in the next 20 to 30 years as projected by Ray Kurzweil and his fans.