
why defining a.i. is harder than it looks

For nearly half a century, a consensus on what makes a machine intelligent has eluded computer science, and the question isn't getting any easier with time and new technology.
[Illustration: robot in deep thought, by Wade Acuff]

Do you think that the concept of artificial intelligence is relatively new, emerging as computers grew in power, complexity and memory? Actually, the quest to create a synthetic cognitive system began with the very first computers, huge machines that couldn’t even touch the capabilities of today’s lowest-end netbooks. In 1955, Dartmouth computer scientist John McCarthy proposed to study whether it was possible to build a device that could learn, solve problems and improve its abilities. As a feature at Silicon.com shows, the answer to this question still eludes us, due in no small part to the different ways in which computer scientists have tried to define artificial intelligence. And with no agreement on what intelligence actually entails, the AI of the future could be radically different from today’s popular conceptions of what it should be when it’s switched on.

There’s a reason why I keep hammering away at the lack of consensus on what constitutes intelligence in the computer world. Just as you can’t create software without knowing what it’s actually supposed to do, you can’t fully build a system whose end goal is open to debate. Yes, technically you could build something and call it AI, but plenty of people will disagree with your conception of what an AI system actually entails. Some of the experts quoted in the Silicon.com article make this massive problem in building intelligent computer agents extremely clear. We’ll start with Kevin Warwick, who is indeed a Singularitarian, in case his constant experiments with turning himself into a cyborg to prepare for the future didn’t make that abundantly clear to those following his work…

By 2050 we will have gone through the Singularity and it will either be intelligent machines actually dominant — The Terminator scenario — or it will be cyborgs, upgraded humans. I really, by 2050, can’t see humans still being the dominant species. I just cannot believe that the development of machine intelligence would have been so slow as to not bring that about.

With all due respect to Professor Warwick, one of these things is not like the other. Cyborgs are not just a type of intelligent machine. They’re humans. They already exist, and they’re getting more and more advanced as the technology used to fuse flesh with machine steadily improves. To me, this kind of research is one of the most amazing and intellectually stimulating areas of computer science, and I feel it’s not a matter of if most people will become cyborgs, but when.

However, that’s not going to make our species just an odd minority in a technological world. Biologically, we’ll be pretty much the same as we are today, evolving in the background as we always have. Maybe being cyborgs could alter the way natural selection works on us, but that’s a hypothesis in the back of my mind rather than an actual theory. The bottom line here is that we can’t simply swap humans for cyborgs or AI and use the latter two interchangeably. That’s just wrong. Oh, and speaking of being wrong, there’s a doozy of a quote from the Singularity’s top general, Ray Kurzweil.

Pick up any complex product and it was designed at least in part by intelligent computer-assisted design and assembled in robotic factories with inventory levels [which are] controlled by intelligent just-in-time inventory systems. [These] algorithms automatically detect credit card fraud, diagnose electrocardiograms and blood cell images, fly and land airplanes, guide intelligent weapons and a lot more.

The reason those algorithms seem intelligent is that teams of people wrote them. If there were no intelligent humans telling computers what to do, they would just sit there like bricks. For example, I was recently working on a proof of concept for a kind of physics calculator. It was given a module with all sorts of relevant formulas and ways to call those formulas. What Ray is claiming here is that the program’s ability to take a conceptual object with a certain number of solar masses and calculate what would happen to it when it collapses into a black hole is the achievement of the application, rather than of the detailed code I wrote telling the computer how to actually do the calculations. There’s a sketch of what I mean just below. Pardon me if I’m not willing to concede my efforts to the machine, and neither is any programmer I know.
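Here’s a minimal sketch of the kind of formula module I’m describing, assuming a simple Schwarzschild radius calculation; the constants are standard physics, but the function and names here are hypothetical stand-ins for illustration, not my actual proof of concept.

```python
# A hypothetical stand-in for the kind of formula module described above.
# Every bit of "intelligence" is in the encoded formula, not the machine.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s
SOLAR_MASS = 1.989e30  # one solar mass, kg

def schwarzschild_radius(solar_masses: float) -> float:
    """Event horizon radius for a mass collapsing into a black hole: r = 2GM/c^2."""
    mass_kg = solar_masses * SOLAR_MASS
    return 2.0 * G * mass_kg / C ** 2

if __name__ == "__main__":
    # A ten solar mass object would collapse to a horizon of roughly 29.5 km.
    print(f"{schwarzschild_radius(10.0) / 1000:.1f} km")
```

Every one of those steps exists because a human encoded the formula; the program has no concept of what a black hole is. And with that, let’s move on to a quote on what capacity a fully fledged AI system should have, from futurist and philosopher Nick Bostrom.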

Depending on the assumptions you make you might think that the most powerful supercomputers today are just beginning to reach the lower end of the estimates for a human brain’s processing power. But it might be that they still have two, three orders of magnitude to go before we match the kind of computation power of the human brain.

Bostrom deserves special respect here because he really tried to give us some requirements for a system capable of both artificial intelligence and exceeding human knowledge and brainpower. However, the idea he had in mind simply doesn’t work, for reasons detailed in an older post. As other experts in the article point out, raw processing power is meaningless because what matters is not how fast our brain processes something, but the path it takes to turn that processing into something meaningful. That’s why IBM’s big claim about simulating the brainpower of a cat falls flat on its face when put to the test, and why neuroscientists seem less than impressed, especially those trying to build an accurate picture of the brain.
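For perspective, here’s the back-of-the-envelope arithmetic behind estimates like Bostrom’s, using hypothetical round numbers: a petaflop-class supercomputer of the era set against commonly cited guesses for the brain’s equivalent processing power.

```python
import math

# Hypothetical, order-of-magnitude figures only: a ~1 petaflop
# supercomputer vs. commonly cited estimates of the brain's
# equivalent processing power. Illustrations, not measurements.
SUPERCOMPUTER_OPS = 1e15  # operations per second

brain_estimates = {"low-end estimate": 1e16, "high-end estimate": 1e18}

for label, ops in brain_estimates.items():
    gap = math.log10(ops / SUPERCOMPUTER_OPS)
    print(f"{label}: about {gap:.0f} order(s) of magnitude to go")
```

Which is exactly the problem: the size of the gap depends entirely on which estimate you pick, and none of those numbers tell you anything about how the brain actually uses its computational capacity.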

Who cares how fast you can run through a set of commands? How will a computer be able to tackle a complex and nuanced problem in which many solutions can be correct? That’s the big question. Let’s remember that most of our brainpower is used to automate tasks like walking, breathing, driving, reading and reflexes rather than to solve complex, abstract problems. That’s an ability that would take much, much more than the right number of teraflops to match. And there’s a question of whether this would even be possible to simulate without taking on highly subjective and philosophically thorny issues like consciousness and its role in cognition…

# tech // artificial intelligence / computer science / cyborg / intelligence

