
measuring a computer’s iq the singularity way

Another day, another Singularitarian idea for measuring artificial intelligence, this time through statistical problem solving.

Contrary to what you might think from my posts about the notion of the Technological Singularity, I do take the claims made by Singularitarians quite seriously and take the time to look at their arguments. Oftentimes, though, I'm reading papers that deal with very abstract ideas, offer few tangible plans for any particular system, and lean on vague, philosophical definitions with little connection to actual technical designs. Recently, however, I took a look at the work of Shane Legg at the recommendation of the Singularity Institute's Michael Anissimov, and found some real math into which to sink my teeth. Legg's goal was to come up with a way to measure intelligence in a very pure and abstract form, especially as it applies to machines, and to provide a better definition of Singularitarian terms like "super-intelligence," creating a very interesting paper along the way. But, as you probably guessed already, there are some issues with his definitions of intelligence and with what his formula truly measures.

To make a long story short, Legg measures the outcomes of an intelligent agent's performance in a probability game, based on which strategies should yield the best results and the biggest rewards over time. And that's an effective way to tackle intelligence objectively, since in the natural world, bigger and more complex brains with new abilities are encouraged through rewards such as food, water, mating, luxuries, and of course, a longer, better lifespan. But there are a few problems with applying a formula that measures reward-driven performance to a bona fide intellect, especially when it comes to AI. Humans program the strategy the machine will need to meet these intelligence tests and are basically doing all the work. Even if the machine in question does have to learn and adapt, it's following human algorithms to do so. Compare that to training any intelligent animal, which learns the task, figures out exactly what it needs to do and how, then finds shortcuts that either maximize the reward or reduce the time between rewards. Legg's formula can measure outcomes in both cases, but what it can't measure is that a computer has been "pre-wired" to do something while mice, dogs, or pigs, for example, effectively "re-wired" their brains to accomplish a new task.
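For the curious, the measure at the heart of Legg's paper, written with Marcus Hutter, sums up an agent's expected reward across every computable environment, giving simpler environments more weight. As I read it, it takes roughly this form, where K is Kolmogorov complexity:

```latex
% Legg and Hutter's universal intelligence measure, roughly as given in the paper:
% the intelligence \Upsilon of an agent \pi is its expected total reward V
% in each computable environment \mu, weighted by that environment's simplicity.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

In plain terms, an agent earns a high score by collecting a lot of reward across a lot of environments, with the easy-to-describe environments counting the most.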

The paper is keenly aware that people like me would question the "how" of the measured outcomes, not just the grading curve, and circumvents this problem by saying that the formula in question is concerned only with the outcomes. Well, that hardly seems fair, does it? After all, we can't just ignore the role of creativity or any other facets of what we commonly call intelligence, or make the task of defining and building AI easier with various shortcuts meant to lower the bar for a computer system we want to call intelligent. Just as Legg's preamble points out, standardized IQ tests which deal with certain logical and mathematical skills aren't necessarily an accurate summation of intelligence, just of some facets of it that can be consistently measured. To point this out, then go on to create a similar test one notch up in abstraction and say that how well a subject met certain benchmarks is all that matters, doesn't seem to break any new ground. And countering a pretty important question by saying that it's simply out of the work's scope seems like taking a big shortcut. Even when we cut out emotions, creativity, and consciousness, we're still left with a profound difference between an intelligent biological entity and a computer. Although patterns of neurons in brains share striking similarities with computer chips, biology and technology function in very different ways.

When we build a computer, we design it to do a certain range of things and give it instructions which anticipate a range of possible problems and events that come up during an application's execution. If we take Legg's formula and design a program to do really well at the games he outlines, adopting the strategies he defines as indicative of intelligence, who's actually intelligent in this situation? Legg and the programmers who wrote this kind of thing for a typical homework assignment in college, or the computer that's being guided and told how to navigate through the IQ test? Searle's Chinese Room analogy actually comes into play in this situation. Now, if we were to compare that to humans, who are born primed for learning and with the foundations of an intellect, playing the same games, the fundamental process behind the scenes becomes very different. Instead of just consulting a guide telling them how to solve the problems, they'll be actively changing their neural wiring after experimenting and finding the best possible strategy on their own. While we can pretend that the how doesn't matter when trying to define intelligence, the reality is that living things like us are actually guiding computers, telling them how we solve problems in code, then measuring how well we wrote the programs. To sum it up, we're indirectly grading our own intelligence by applying Legg's formula to machines. A toy example of the problem is sketched below.
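To make that concrete, here's a minimal sketch in Python of a toy reward game, a two-armed bandit of my own choosing, not one of the environments from Legg's paper. One agent is "pre-wired" with the winning strategy by its programmer; the other has to learn it by trial and error. An outcomes-only measure scores them almost identically:

```python
import random

# A toy two-armed bandit (my example, not a game from Legg's paper):
# arm 1 pays off more often than arm 0.
PAYOFF_PROBS = [0.3, 0.7]

def pull(arm):
    """Return a reward of 1 with the arm's payoff probability, else 0."""
    return 1 if random.random() < PAYOFF_PROBS[arm] else 0

def prewired_agent(rounds):
    """The programmer already knows arm 1 is best and wrote that into the code."""
    return sum(pull(1) for _ in range(rounds))

def learning_agent(rounds, explore=0.1):
    """Epsilon-greedy agent that estimates each arm's value from experience."""
    totals, counts = [0.0, 0.0], [0, 0]
    reward = 0
    for _ in range(rounds):
        if random.random() < explore or 0 in counts:
            arm = random.randrange(2)  # explore, or try untested arms first
        else:
            # Exploit whichever arm has the better observed average payoff.
            arm = 0 if totals[0] / counts[0] > totals[1] / counts[1] else 1
        r = pull(arm)
        totals[arm] += r
        counts[arm] += 1
        reward += r
    return reward

if __name__ == "__main__":
    rounds = 10_000
    print("pre-wired:", prewired_agent(rounds))
    print("learning: ", learning_agent(rounds))
```

Run it and the two scores come out within a few percent of each other, which is exactly the complaint: a grade based purely on accumulated reward can't tell the hand-coded strategy from the one that was actually figured out.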

The same can be said about a hypothetical super-intelligence, which we've encountered before in a paper by futurist Nick Bostrom, where it was very vaguely and oddly defined. Legg's definition is much more elegant, requiring that in any situation where an agent can earn a reward, it finds the correct strategy to get the most it possibly can out of the exercise. But again, apply this definition to machines and you'll find that if we know the rules of the game our AI will have to beat, we can program it to perform almost perfectly. In fact, when talking about "super-human AI," many Singularitarians seem to miss the fact that there are quite a few tasks at which computers are already far better than humans will ever be. Even an ordinary bargain bin netbook can put virtually any math whiz to shame. Try multiplying 1.234758 × 10³³ by 4.56793 × 10¹². Takes a while, doesn't it? Not for your computer, which can do it in a fraction of a millisecond. Likewise, your computer can search more than a library's worth of information in a minute, while you might spend the better part of a few months doing the same thing. Computers can do a number of tasks with super-human speed and precision. That's why we use them and rely on them. They reached super-human capabilities decades ago, but because we have to write a program to tell them how to do something, they're still not intelligent while we are.
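If you want to check both the product and the speed claim, a couple of lines of Python will do it; the timing below is just what a typical machine reports, so treat it as a ballpark:

```python
import timeit

a = 1.234758e33
b = 4.56793e12
print(a * b)  # ~5.64029e+45

# Average one floating-point multiplication over a million runs.
per_op = timeit.timeit("a * b", globals=globals(), number=1_000_000) / 1_000_000
print(f"~{per_op * 1e9:.0f} ns per multiplication")  # tens of nanoseconds, typically
```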

In fact, I think that using computers to outsource the detail-oriented, precision- and labor-intensive tasks for which evolution didn't equip our brains is in itself a demonstration of intelligence in both the logical and creative realms. In our attempts to define computer intelligence, we need to remember that computers are tools, and if we didn't have access to them, we could still find ways of going about our day-to-day tasks, while any computer without explicit directions from us would be pretty much useless. Now, when computers start writing their own code without leaving a tangled mess and optimizing their own performance without any human say in the matter, then we might be on to something. But until that moment, any attempt to grade a machine's intellect is really a roundabout evaluation of the programmers who wrote its code and the quality of their work.

# tech // artificial intelligence / computer science / intelligence

