why defining a.i. is harder than it looks

February 12, 2010

Do you think that the concept of artificial intelligence is relatively new, emerging as computers grew in power, complexity and memory? Actually, the quest for creating a synthetic cognitive system began with the very first computers, huge machines that couldn’t even touch the capabilities of today’s lowest end netbooks. In 1955, Dartmouth computer scientist John McCarthy proposed to study whether it was possible to build a device that could learn, solve problems and improve its abilities. As a feature at Silicon.com shows, the answer to this question still eludes us, due in no small part to the different ways in which computer scientists tried to define artificial intelligence. And with no agreement on what intelligence actually entails, it seems that the AI of the future could be radically different from today’s popular conceptions of what it should be when it’s switched on.

There’s a reason why I keep hammering away at the lack of consensus on what constitutes intelligence in the computer world. Just like you can’t create software without knowing what it’s actually supposed to do, you can’t fully create a system whose end goal is open to debate. Yes, technically you could build something and call it AI, but plenty of people will disagree with your conception of what an AI system actually entails. Some of the experts quoted in the Silicon.com article make this massive problem in building intelligent computer agents extremely clear. We’ll start with Kevin Warwick, who is very much a Singularitarian, in case his constant experiments with turning himself into a cyborg to prepare for the future didn’t make that abundantly clear to those following his work…

By 2050 we will have gone through the Singularity and it will either be intelligent machines actually dominant – The Terminator scenario – or it will be cyborgs, upgraded humans. I really, by 2050, can’t see humans still being the dominant species. I just cannot believe that the development of machine intelligence would have been so slow as to not bring that about.

With all due respect to Professor Warwick, one of these things is not like the other. Cyborgs are not just a type of intelligent machine. They’re humans. They already exist, and they’re getting more and more advanced as the technology used to fuse flesh with machine steadily improves. To me, personally, this kind of research is one of the most amazing and intellectually stimulating areas of computer science, and I also feel that it’s not a matter of if most people will become cyborgs but when. However, that’s not going to make our species an odd minority in a technological world. Biologically we’ll be pretty much the same as we are today, evolving in the background as we always have. Maybe being cyborgs could alter the way natural selection works on us, but that’s a hypothesis in the back of my mind rather than an actual theory. The bottom line here is that we can’t just swap cyborgs or AI in for humans and use the terms interchangeably. That’s just wrong. Oh, and speaking of being wrong, there’s a doozy of a quote from the Singularity’s top general, Ray Kurzweil.

Pick up any complex product and it was designed at least in part by intelligent computer-assisted design and assembled in robotic factories with inventory levels [which are] controlled by intelligent just-in-time inventory systems. [These] algorithms automatically detect credit card fraud, diagnose electrocardiograms and blood cell images, fly and land airplanes, guide intelligent weapons and a lot more.

The reason why those algorithms seem intelligent is that there are teams of intelligent people who write them. If there were no humans telling the computers what to do, they would just sit there like bricks. For example, I was recently working on a proof of concept for a kind of physics calculator. It was given a module with all sorts of relevant formulas and ways to call these formulas. What Ray is claiming here is that the program’s ability to take a conceptual object with a certain amount of solar masses and calculate what would happen to it when it collapses into a black hole is the achievement of the application, rather than of the fact that I wrote detailed code telling the computer how to actually do the calculations. Pardon me if I’m not willing to concede my efforts to the machine, and neither is any programmer I know. And with that, let’s move on to a quote on what capacity a fully fledged AI system should have from futurist and philosopher Nick Bostrom.
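To make that point concrete before we get to Bostrom: here’s a minimal sketch of the kind of formula the calculator I described might have been handed. The function name, constant values and the specific formula (the Schwarzschild radius) are my illustration, not necessarily what the actual proof of concept contained. Notice that every ounce of “intelligence” here is the programmer’s.

```python
# Sketch of a "physics calculator" formula module. The machine isn't
# discovering anything; a human hand-coded the physics it carries out.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # mass of the Sun, kg

def schwarzschild_radius(solar_masses):
    """Radius a mass must be squeezed below to become a black hole.

    The formula r_s = 2GM/c^2 was written in by a person. The program
    merely evaluates it; it has no idea what a black hole is.
    """
    mass_kg = solar_masses * SOLAR_MASS
    return 2 * G * mass_kg / C ** 2

# A 10-solar-mass object would have to collapse to a radius of
# roughly 30 kilometers before it became a black hole.
radius_km = schwarzschild_radius(10) / 1000
print(round(radius_km, 1))
```

The computer will happily churn out the number, but attributing that result to the application rather than to whoever derived and encoded the formula is exactly the sleight of hand I’m objecting to.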

Depending on the assumptions you make you might think that the most powerful supercomputers today are just beginning to reach the lower end of the estimates for a human brain’s processing power. But it might be that they still have two, three orders of magnitude to go before we match the kind of computation power of the human brain.

There’s a special respect we should give Bostrom because he really tried to give us some requirements for a system capable of both artificial intelligence and exceeding human knowledge and brainpower. However, the idea he had in mind simply doesn’t work, for reasons detailed in an older post. As other experts in the article point out, raw processing power is meaningless because the important thing is not how fast our brain processes something, but the path it takes to turn those processes into something meaningful. That’s why IBM’s big claim about simulating the brainpower of a cat falls flat on its face when we put it to the test, and neuroscientists seem to be less than impressed, especially those trying to replicate an accurate picture of the brain. Who cares how fast you can run through a set of commands? How will a computer be able to tackle a complex and nuanced problem in which many solutions can be correct? That’s the big question. Let’s remember that most of our brainpower is used to automate tasks like walking, breathing, driving, reading and reflexes rather than to solve complex, abstract problems. That’s an ability that would take much, much more than the right number of teraflops to match. And there’s a real question of whether this would even be possible to simulate without taking on highly subjective and philosophically thorny issues like consciousness and its role in cognition…

  • jclark

    “walking, breathing, driving, reading and reflexes” — with the exception of reading — do not figure prominently in current IQ tests that are supposed to measure intelligence. I have always done well on IQ tests. That does not indicate in any way that I consider myself more intelligent than those who score lower. I am just differently configured. I know that I would not score as highly on any test of the other “tasks”. With no intention of being mean about it, consider Stephen Hawking. He is generally regarded as one of the most intelligent human beings ever to have lived.

    Are his restricted physical capabilities at least partially responsible for his superior intellect? Would a machine that lacked any consideration of the automated tasks but was capable of “solving complex, abstract problems” be “intelligent”? How would we know that we could rely on the solutions that such a machine produced? Turing nailed it shut when he proposed that a machine could be built that could not be determined to be a machine merely by asking it questions. Such a machine would not need any “intelligence”, but we could not know that.

    AI is and likely will remain an elusive and nebulous concept.

  • Greg Fish

    Would a machine that lacked any consideration of automated tasks but was capable of “solving complex, abstract problems” be “intelligent”?

    Technically yes. Just to clarify, my comment about the use of our brainpower to keep us breathing, walking and doing all the other things we do on a sort of auto-pilot was to illustrate that just trying to match the human brain in a contest of computing power is meaningless, because it’s not the processing speed but the end result of what happens in the brain that matters.

  • Pierce R. Butler

    “Intelligence” as we know it is neither digital nor chip-based, and attempts to re-create it in such forms may be as wrong-footed as trying to build a weight-bearing wall out of meat.

    Consider the still-mostly-mysterious process of human memory (probably shared with most other vertebrate organisms): we seem to reconstruct a model of past experiences, rather than pulling out a file or album. This process is easily fooled, as shown by hundreds of psychological experiments indicating that, for example, we remember a string of numbers by storing their identity and sequence in separate “places”.

    The ability to assemble ad-hoc models of the external world (and what could be more external than the past?) may be the core function of the combined mental operations we call “intelligence”. Making silicon-based calculations to approximate such imprecision and improvisation could well be tougher than any of the challenges keeping the NSA’s supercomputers up all night.