Archives For intelligence


According to a widely reported paper by accomplished molecular geneticist Jerry Crabtree, the human species is getting ever less intelligent because our society removed the selective pressures that once nurtured intelligence and weeded out mutations that can make us dumber. This is not a new idea by any means; in fact, it’s been a science fiction trope for many years and had its own movie to remind us of the gloom and doom that awaits us if we don’t hit the books: Idiocracy. Crabtree’s addition to it revolves around some 5,000 genes he identified as playing a role in intelligence by analyzing the genetic roots of certain types of mental retardation. He then posits that because we tend to live in large, generally supportive communities, we don’t have to be very smart to reach reproductive age and have plenty of offspring. Should mutations that make us duller rear their ugly heads in the next few thousand years, there’s no selective pressure to weed them out because the now dumber future humans will still be able to survive and reproduce.

Evolution does have its downsides, true, but Crabtree ignores two major issues with his idea of humanity’s evolutionary trajectory. The first is that he overlooks beneficial mutations, along with the fact that just two or three negative mutations won’t necessarily stunt our brains. Geneticists who reviewed his paper and decided to comment say that Crabtree’s gloom and doom just isn’t warranted by the evidence he presents, and that his statistical analysis leaves a lot to be desired. The second big issue, one that I haven’t yet seen addressed, is that Crabtree doesn’t seem to have any working definition of intelligence. These are not the days of eugenicists deluding themselves about their genetic superiority to all life on Earth, and most scientifically literate people know that “survival of the fittest” wasn’t Darwin’s description of natural selection, but a catchphrase created by Herbert Spencer. Natural selection is the survival of the good enough in a particular environment, so we could well argue that as long as we’re smart enough to survive and reproduce, we’re fine.

This means that Crabtree’s description of us as the intellectual inferiors of our ancient ancestors is at best irrelevant and at worst pointless. However, it’s also very telling because it fits so well with the typical assessment of modern societies by eugenicists. They look at the great names in history, both scientific and creative, and wonder where our geniuses are. But they forget that we do have plenty of modern polymaths and brilliant scientists, and that in Newton’s day the typical person was illiterate, had no idea that there was such a thing as gravity or optics, and really couldn’t be bothered to give a damn. Also, how do we define genius anyway? With an IQ test? We know those only measure certain pattern recognition and logic skills, and anyone could learn how to score highly on them with enough practice. You can practice-test your way to becoming the next Mensa member so you can talk about being in Mensa and how high your IQ scores were, which in my experience tend to be the predominant activities of Mensa members. But they are members of an organization created to guide us dullards to a better tomorrow after all…

But if IQ scores are a woefully incomplete measure of intelligence, what isn’t? It depends on who’s doing the measuring and by what metric. One of the most commonly cited factoids from those in agreement with Crabtree is how much time is being spent on Facebook and watching reality TV instead of reading the classics and inventing warp drives or whatnot. But is what we usually tend to call book smarts necessary for survival? What we consider to be trivial knowledge for children today was once considered the realm of brilliant, highly educated nobles. Wouldn’t that make us smarter than our ancestors, because we’ve been able to parse the knowledge they accumulated to find the most useful and important theories and ideas, disseminate them to billions, and make things they couldn’t have even imagined in their day? How would Aristotle react to a computer? What would Hannibal think of a GPS? Would the deleterious genetic changes Crabtree sees as an unwelcome probability hamper our ability to run a society, and if so, how?

Without knowing how he views intelligence and how he measures it, all we have is an ominous warning, one that single-mindedly focuses on potential negatives rather than entertaining potential positives alongside them, and draws conclusions about their impact on a somewhat nebulous concept that isn’t defined well enough to support such conclusions. In fact, the jury is still out on how much intelligence is nature and how much is nurture, especially when we consider a number of failed experiments with designer babies who were supposed to be born geniuses. We can look at families of people considered to be very intelligent and note that they tend to have smart kids. But are the kids smart because their parents are smart, or because they’re driven to learn by parents who greatly value academics? We don’t know, but to evolution, all that matters is that these kids secure a mate and reproduce. To look for selection’s role beyond that seems more like an exercise in confirmation bias than a scientific investigation into the origins of human intelligence. That research is much more complex and elaborate than gene counting…


Contrary to what you might think from my posts about the notion of the Technological Singularity, I do take the claims made by Singularitarians quite seriously and take time to look at their arguments. Granted, oftentimes I’m reading papers dealing with very abstract ideas, few tangible plans for a particular system, and rather vague, philosophical definitions which have little to do with any technical designs. Recently, however, I took a look at the work of Shane Legg at the recommendation of the Singularity Institute’s Michael Anissimov, and found some real math into which to sink my teeth. Legg’s goal was to come up with a way to measure intelligence in a very pure and abstract form, especially as it applies to machines, and provide a better definition of Singularitarian terms like “super-intelligence,” creating a very interesting paper along the way. But, as you probably guessed already, there are some issues with his definitions of intelligence and what his formula truly measures…

To make a long story short, Legg measures the outcomes of an intelligent agent’s performance in a probability game based on what strategies should yield the best results and the biggest rewards over time. And that’s an effective way to tackle intelligence objectively, since in the natural world, bigger and more complex brains with new abilities are encouraged through rewards such as food, water, mating, luxuries, and of course a longer, better lifespan. But there are a few problems in applying a formula that measures reward-driven performance to a bona fide intellect, especially when it comes to AI. Humans program the strategy the machine will need to meet these intelligence tests and are basically doing all the work. Even if the machine in question does have to learn and adapt, it’s following human algorithms to do so. Compare that to training any intelligent animal, which learns the task, figures out exactly what it needs to do and how, then finds shortcuts that either maximize the reward or reduce the time between rewards. Legg’s formula can measure outcomes in both cases, but what it can’t capture is that a computer has been “pre-wired” to do something while mice, dogs, or pigs, for example, effectively “re-wired” their brains to accomplish a new task.
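
For reference, the measure proposed in the paper, as I read it, boils down to a reward-weighted sum over computable environments, with simpler environments counting more heavily toward the score:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

Here $\pi$ is the agent being graded, $E$ is the set of computable, reward-generating environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (a stand-in for how simply it can be described), and $V_\mu^\pi$ is the expected total reward the agent collects in that environment. Notice that nothing in the expression asks how the agent arrives at its strategy, which is exactly where the trouble starts.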

The paper is keenly aware that people like me would question the “how” of the measured outcomes, not just the grading curve, and it circumvents this problem by saying that the formula in question is concerned only with the outcomes. Well, that hardly seems fair, does it? After all, we can’t just ignore the role of creativity or any other facets of what we commonly call intelligence, or make the task of defining and building AI easier with various shortcuts meant to lower the bar for a computer system we want to call intelligent. Just as Legg’s preamble points out, using standardized IQ tests which deal with certain logical and mathematical skills isn’t necessarily an accurate summation of intelligence, just of some facets of it that can be consistently measured. To point this out, then go on to create a similar test one notch up in abstraction and declare that how well a subject met certain benchmarks is all that matters, doesn’t seem to break any new ground. And countering a pretty important question by saying that it’s simply out of the work’s scope seems like taking a big shortcut. Even when we cut out emotions, creativity and consciousness, we’re still left with a profound difference between an intelligent biological entity and a computer. Although patterns of neurons in brains share striking similarities with computer chips, biology and technology function in very different ways.

When we build a computer, we design it to do a certain range of things and give it instructions which anticipate a range of possible problems and events that come up during an application’s execution. If we can take Legg’s formula and design a program to do really well at the games he outlines, adopting the strategies he defines as indicative of intelligence, who’s actually intelligent in this situation? Legg and the programmers who wrote this kind of code for a typical homework assignment in college, or the computer that’s being guided and told how to navigate through the IQ test? Searle’s Chinese Room analogy actually comes into play in this situation. Now, if we were to compare that to humans, who are born primed for learning and with the foundations of an intellect, playing the same games, the fundamental process behind the scenes becomes very different. Instead of just consulting a guide telling them how to solve the problems, they’ll be actively changing their neural wiring after experimenting and finding the best possible strategy on their own. While we can pretend that the how doesn’t matter when trying to define intelligence, the reality is that living things like us are actually guiding computers, telling them how we solve problems in code, then measuring how well we wrote the programs. To sum it up, we’re indirectly grading our own intelligence by applying Legg’s formula to machines.
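
To make that concrete, here’s a minimal toy of my own devising, not anything taken from Legg’s paper: two “agents” play a simple two-armed bandit, one with the winning strategy handed to it by its programmer, the other left to discover it by trial and error. An outcome-only score can’t tell them apart once both are collecting near-maximal rewards.

```python
import random

ARM_PAYOFFS = [0.3, 0.8]   # hypothetical reward probabilities for two levers

def play(agent_choose, agent_update, rounds=10_000):
    """Run an agent through the game and return its average reward."""
    total = 0.0
    for _ in range(rounds):
        arm = agent_choose()
        reward = 1.0 if random.random() < ARM_PAYOFFS[arm] else 0.0
        agent_update(arm, reward)
        total += reward
    return total / rounds

# "Pre-wired" agent: the programmer already told it which arm pays best.
def prewired_choose():
    return 1                        # a human did the thinking for it
def prewired_update(arm, reward):
    pass                            # nothing to learn

# Learning agent: starts ignorant and re-weights its choices from experience.
estimates, counts = [0.0, 0.0], [0, 0]
def learner_choose(eps=0.1):
    if random.random() < eps:       # occasionally explore
        return random.randrange(2)
    return max(range(2), key=lambda a: estimates[a])
def learner_update(arm, reward):
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("pre-wired agent:", play(prewired_choose, prewired_update))
print("learning agent: ", play(learner_choose, learner_update))
```

Both scores come out close to 0.8, which is the whole complaint: an outcome-only measure gives full marks to the agent whose “intelligence” lives entirely in its programmer.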

The same can be said about a hypothetical super-intelligence, which we’ve encountered before in a paper by futurist Nick Bostrom where it was very vaguely and oddly defined. Legg’s definition is much more elegant, requiring that in any situation where an agent can earn a reward, it finds the correct strategy to get the most it possibly can out of the exercise. But again, apply this definition to machines and you’ll find that if we know the rules of the game our AI will have to beat, we can program it to perform almost perfectly. In fact, when talking about “super-human AI,” many Singularitarians seem to miss the fact that there are quite a few tasks in which computers are far better than humans will ever be. Even an ordinary bargain-bin netbook can put virtually any math whiz to shame. Try multiplying 1.234758 × 10^33 by 4.56793 × 10^12. Takes a while, doesn’t it? Not for your computer, which can do it in a fraction of a millisecond. Likewise, your computer can search more than a library’s worth of information in a minute while you may spend the better part of a few months to do the same thing. Computers can do a number of tasks with super-human speed and precision. That’s why we use them and rely on them. They reached super-human capabilities decades ago, but because we have to write a program to tell them how to do something, they’re still not intelligent while we are.
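
If you want to see just how lopsided that particular comparison is, a few throwaway lines of Python will do it (exact timings will vary by machine, but the point stands):

```python
import timeit

# The multiplication from the post: the answer is roughly 5.64e45
print(1.234758e33 * 4.56793e12)

# Average the cost over a million repetitions to get a per-operation figure
elapsed = timeit.timeit("a * b", setup="a = 1.234758e33; b = 4.56793e12",
                        number=1_000_000)
print(f"{elapsed / 1_000_000 * 1e9:.1f} ns per multiplication")
```

Even in an interpreted language this typically comes out to a few tens of nanoseconds per multiplication, comfortably under the “fraction of a millisecond” figure.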

In fact, I think that using computers to outsource detail-oriented, precision- and labor-intensive tasks for which evolution didn’t equip our brains is in itself a demonstration of intelligence in both logical and creative realms. In our attempts to define computer intelligence, we need to remember that computers are tools, and if we didn’t have access to them, we could still find ways of going about our day to day tasks, while any computer without explicit directions from us would be pretty much useless. Now, when computers start writing their own code without leaving a tangled mess and optimizing their own performance without any human say in the matter, then we might be on to something. But until that moment, any attempt to grade a machine’s intellect is really a roundabout evaluation of the programmers who wrote its code and the quality of their work.

[ illustration by Hossein Afzali ]


Having seen that babies seem to have an innate sense of rudimentary morality, we’ve gotten a little glimpse into the kind of research that can answer fundamental questions about what makes us who we are. And while we’re learning about the evolution of complex social interactions from infants, we can also apply a number of these findings to one of the biggest and most complex challenges in computer science: building an artificial intellect capable of passing the Turing test. Just to get a better idea of what we’re talking about, let me bring back Jeffrey, the robot whose feelings were carelessly hurt by an engineer in Intel’s Super Bowl commercial…

Now let’s break down the mechanics of this interaction. The robot hears a conversation and understands the words and the context. It identifies what the engineer is talking about with so much excitement and finds out it was being indirectly put down while being totally ignored. It gets offended and responds with sadness and an equivalent of crying. Sounds simple, right? Well, consider that Jeffrey will be science fiction for at least the next decade, if not longer, and even then, its intelligence would be roughly on par with that of a six- to eight-month-old infant who was born with the brain wiring that makes everything we listed above either innate, or achievable as soon as the child starts understanding tone and picking up on basic context. In this comparison, babies have an unfair advantage since they have millions of years of evolution on their side as well as being wired to start learning, connecting, and forming social bonds from day one. Machines start with a blank slate. What we know about the surprisingly complex psychology of infant minds seems to be telling us that AI theories which use the way babies learn seemingly from scratch as their starting points are mistaken, since humans are essentially pre-wired to do what they do and our formative years are only possible because of this.
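
Just to underline how much machinery hides in that short list, here is the same interaction written out as a hypothetical pipeline of stubs. Every name below is made up for illustration, and each one-line function stands in for a problem that is still an open research area:

```python
def transcribe(audio):                # pick the words out of a noisy room
    raise NotImplementedError
def parse_meaning(text):              # semantics and pragmatics, not keyword spotting
    raise NotImplementedError
def resolve_context(meaning):         # who is excited, about what, and about whom
    raise NotImplementedError
def appraise(meaning, context, self_model):   # notice the indirect put-down and the snub
    raise NotImplementedError
def feel(appraisal):                  # turn that appraisal into an emotional state
    raise NotImplementedError
def express(emotion):                 # slump, sigh, produce a robotic equivalent of crying
    raise NotImplementedError

def jeffrey_hears(audio, self_model):
    """The whole commercial, as a chain of things we don't yet know how to build."""
    meaning = parse_meaning(transcribe(audio))
    context = resolve_context(meaning)
    return express(feel(appraise(meaning, context, self_model)))
```

An infant gets the equivalent of most of this chain for free from evolution; a machine gets only whatever we manage to fill in.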

We could even argue that the roots of what enabled human intellect started with the very first mammals, which appeared around 200 million years ago. Nature has an enormous head start on intelligence, which evolved in squids, octopi, cetaceans, birds like parrots, as well as primates. Considering that machines have none of the plasticity, nor the advantage of undergoing eons’ worth of nature’s experiments with evolutionary algorithms, it’s a pretty big stretch for us to jump straight into trying to simulate human intelligence, which can be rather hard to define in terms of concrete functional requirements. This doesn’t mean that we’d need to wait millions of years for an intelligent system, of course. Trials can be run much faster in the lab than in nature. But rather than starting with something as nebulous and abstract as the human mind, maybe we should give the alternative method of modeling insect intelligence a shot. It would be far less resource intensive and allow us to get into the real basics of what an intellect requires, without bothering with languages and contexts right away. How? Well, as detailed in the link, the major difference between insect minds and brains like ours is the repetition of neuron circuits, which is generally thought to allow for more precise control over large bodies and to enable ever more complex mechanics and social interactions as a very useful and evolutionarily advantageous side-effect. If you can track down the right patterns, you may be one step closer to solving the mysteries of intelligence…
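
Since part of the argument rests on how cheaply evolutionary trials can be run in silico, here is a bare-bones sketch of the idea, a generic textbook-style evolutionary loop of my own rather than any particular lab’s method: selection plus mutation “discovers” an arbitrary 40-bit target trait in a fraction of a second.

```python
import random

TARGET = [1] * 40                      # an arbitrary stand-in for "a useful trait"

def fitness(genome):
    """How many positions of the genome match the target trait."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=100, generations=200, mutation_rate=0.02):
    """Bare-bones evolutionary loop: rank, keep the fittest, copy with mutation."""
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen                 # generations it took to "discover" the trait
        parents = population[: pop_size // 5]          # keep the fittest fifth
        population = [
            [(1 - g) if random.random() < mutation_rate else g
             for g in random.choice(parents)]
            for _ in range(pop_size)
        ]
    return generations

print(evolve())    # usually finishes well within the cap, in a fraction of a second
```

Nature needed billions of organisms and millions of years for its version of this loop; a laptop runs thousands of generations over a lunch break, which is the whole appeal of starting small.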


Do you think that the concept of artificial intelligence is relatively new, emerging as computers grew in power, complexity and memory? Actually, the quest for creating a synthetic cognitive system began with the very first computers, huge machines that couldn’t even touch the capabilities of today’s lowest end netbooks. In 1955, Dartmouth computer scientist John McCarthy proposed to study whether it was possible to build a device that could learn, solve problems and improve its abilities. As a feature at Silicon.com shows, the answer to this question still eludes us, due in no small part to the different ways in which computer scientists tried to define artificial intelligence. And with no agreement on what intelligence actually entails, it seems that the AI of the future could be radically different from today’s popular conceptions of what it should be when it’s switched on.

There’s a reason why I keep hammering away at the lack of consensus on what constitutes intelligence in the computer world. Just like you can’t create software without knowing what it’s actually supposed to do, you can’t fully create a system whose end goal is open to debate. Yes, technically you could build something and call it AI, but you’re going to have plenty of people who will disagree with your conception of what an AI system actually entails. Some of the experts quoted in the Silicon.com article make this massive problem in building intelligent computer agents extremely clear. We’ll start with Kevin Warwick, who is indeed a Singularitarian, in case his constant experiments with turning himself into a cyborg to prepare for the future didn’t make that abundantly clear to those following his studies…

By 2050 we will have gone through the Singularity and it will either be intelligent machines actually dominant – The Terminator scenario – or it will be cyborgs, upgraded humans. I really, by 2050, can’t see humans still being the dominant species. I just cannot believe that the development of machine intelligence would have been so slow as to not bring that about.

With all due respect to Professor Warwick, one of these things is not like the other. Cyborgs are not just a type of intelligent machine. They’re humans. They already exist and they’re getting more and more advanced as the technology used to fuse flesh with machine steadily improves. To me, personally, this kind of research is one of the most amazing and intellectually stimulating areas of computer science, and I also feel that it’s not a matter of if most people will become cyborgs but when. However, that’s not going to make our species just an odd minority in a technological world. Biologically we’ll be pretty much the same as we are today, evolving in the background as we always have. Maybe being cyborgs could alter the way natural selection will work on us, but that’s a hypothesis in the back of my mind rather than an actual theory. The bottom line here is that we can’t just swap humans for cyborgs or AI and use the latter two interchangeably. That’s just wrong. Oh, and speaking of being wrong, there’s a doozy of a quote from the Singularity’s top general, Ray Kurzweil.

Pick up any complex product and it was designed at least in part by intelligent computer-assisted design and assembled in robotic factories with inventory levels [which are] controlled by intelligent just-in-time inventory systems. [These] algorithms automatically detect credit card fraud, diagnose electrocardiograms and blood cell images, fly and land airplanes, guide intelligent weapons and a lot more.

The reason those algorithms are intelligent is that there are teams of people who write them. If there were no intelligent humans telling the computers what to do, they would just sit there like bricks. For example, I was recently working on a proof of concept for a kind of physics calculator. It was given a module with all sorts of relevant formulas and ways to call these formulas. What Ray is claiming here is that the program’s ability to take a conceptual object with a certain number of solar masses and calculate what would happen to it when it collapses into a black hole is the achievement of the application, rather than of my writing detailed code that tells the computer how to actually do the calculations. Pardon me if I’m not willing to concede my efforts to the machine, and neither is any programmer I know.
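
To make that concrete, here’s a hypothetical fragment along the lines of what I’m describing (the actual proof of concept isn’t reproduced here, so treat this as an illustration): every bit of “insight” in it is a formula a human looked up and typed in.

```python
# Constants a human looked up and typed in
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
SOLAR_MASS = 1.989e30   # kg

def schwarzschild_radius(solar_masses):
    """Radius to which a mass must collapse to become a black hole.

    The computer isn't deriving anything; it's plugging numbers into
    r_s = 2GM / c^2 exactly the way the programmer told it to.
    """
    mass_kg = solar_masses * SOLAR_MASS
    return 2 * G * mass_kg / C ** 2

print(schwarzschild_radius(1.0))    # ~2.95e3 meters for one solar mass
print(schwarzschild_radius(10.0))   # ~2.95e4 meters
```

And with that, let’s move on to a quote on what capacity a fully fledged AI system should have, from futurist and philosopher Nick Bostrom.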

Depending on the assumptions you make you might think that the most powerful supercomputers today are just beginning to reach the lower end of the estimates for a human brain’s processing power. But it might be that they still have two, three orders of magnitude to go before we match the kind of computation power of the human brain.

There’s a special respect we should give Bostrom because he really tried to give us some requirements for a system capable of both artificial intelligence and exceeding human knowledge and brainpower. However, the idea he had in mind simply doesn’t work, for reasons detailed in an older post. As other experts in the article point out, processing power is meaningless because the important thing is not how fast our brain processes something, but the path it takes to turn that processing into something meaningful. That’s why IBM’s big claim about simulating the brainpower of a cat falls flat on its face when we put it to the test, and why neuroscientists seem to be less than impressed, especially those trying to replicate an accurate picture of the brain. Who cares how fast you can run through a set of commands? How will a computer be able to tackle a complex and nuanced problem in which many solutions can be correct? That’s the big question. Let’s remember that most of our brainpower is used to automate tasks like walking, breathing, driving, reading and reflexes rather than to solve complex, abstract problems. That’s an ability that would take much, much more than the right number of teraflops to match. And there’s the question of whether this would even be possible to simulate without taking on highly subjective and philosophically thorny issues like consciousness and its role in cognition…


We don’t like insects. Besides their annoying and sometimes dangerous bites, alarming habit of carrying all sorts of diseases, and bizarre body plans that seem completely alien compared to our familiar tetrapod arrangement, they also seem to be everywhere, trying to sink their mouthparts into just about everything, from our food to our flesh. And when we try to control their populations, they quickly evolve resistance to most of our chemical weapons and carry on as usual. Not even mass extinctions seem to bother them all that much. But hey, at least they’re just little eating machines which function solely by instinct, right? Well, actually, it just so happens that scientists working with a number of insects found that bugs have some basic intelligence, and their discoveries are making us question whether bigger is necessarily better when it comes to brainpower.

Usually when we think about intelligence, we think of primates and cetaceans with large brains teeming with between 85 and 200 billion neurons. However, we’ve known for a while that not every neuron is necessary for consciousness and intelligent thought, and only some circuits of the brain actually perform cognitive tasks. So obviously, the bigger the brain, the more complex the cognitive circuits and the more elaborate the intellect, and if whales had appendages that could use tools the way we do, they’d be building interstellar spaceships by now as their immense intellects drove them to explore the universe, right? Well, in one of nature’s curveballs, it turns out that things aren’t quite that simple. Instead, large brains tend to have a lot of repetition of the same sets of neuron circuits. And that doesn’t necessarily equate to intelligence, since all those repetitions are now thought to be needed to increase the amount of control large animals like us have over our bodies by enabling much more processing to be done for everything from fine motor skills to decision-making, kind of like a computer designed to perform more tasks may need more memory and CPUs.

But a small organism doesn’t have so many cells to control and can fit some very elaborate mental circuitry in a pinhead-sized brain. Several hundred neurons give the ability to count. A few thousand create sentient, and perhaps even sapient, thought. If that’s really the case, then it seems that we’re barking up the wrong tree with cognitive computing concepts and AI projects. Instead of trying to simulate huge numbers of neurons, then bragging about it as a step towards emulating real brainpower, we should focus on those individual circuits and model the brains of insects rather than mammals. The result won’t be the charming humanoid intellect of science fiction, but a working underpinning which can be used to build up more elaborate functions that we know exist in insects at some level, just not to the extent they’re present in our brains, like high-level abstraction. It would be a much more feasible project, something that could possibly be done on a high-end laptop instead of giant and very expensive supercomputers. Still, there will be a big question of how much of a leap would need to be made between being able to count and recognize faces (something we can already program just about any robot to do), and performing complex analytical tasks.
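
As a very rough sanity check on the laptop-versus-supercomputer point, here is a toy simulation of a population of leaky integrate-and-fire point neurons. This is a standard textbook neuron model with no connectivity and nothing insect-specific about it, so take it purely as a scale argument: stepping a million of these units for a simulated second is comfortably within reach of ordinary hardware.

```python
import numpy as np

def simulate(n_neurons=1_000_000, steps=1000, dt=1.0, tau=20.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Step a population of leaky integrate-and-fire neurons driven by noisy input."""
    rng = np.random.default_rng(0)
    v = np.full(n_neurons, v_rest)
    spikes = 0
    for _ in range(steps):
        drive = rng.normal(1.2, 0.5, n_neurons)     # arbitrary noisy input current
        v += dt * (-(v - v_rest) / tau + drive)     # leaky integration toward threshold
        fired = v >= v_thresh
        spikes += int(fired.sum())
        v[fired] = v_reset                          # reset the neurons that spiked
    return spikes

print(simulate())   # total spike count; runs in well under a minute on a laptop
```

The hard part, of course, isn’t stepping the neurons, it’s wiring them into circuits that actually do something, which is exactly what studying real insect brains is supposed to teach us.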

This is where modeling an insect brain with just under a million neurons could help us, by showing how these processes are done on a very fundamental level. The next hurdle will be to determine whether we can take the results of our reconstructions and simulations and ramp them up. Regardless of whether we can do that or not, we may learn something about the evolution of intelligence. Mainly, we can ask: is arthropod cognition (or in this case hexapod cognition, since we’re dealing with insects) fundamentally different from the cognition that evolved in chordates like us, or are all brains just different incarnations of the same thing?


Have you recently taken an IQ test and got a really nice, high score in the 120 to 145 range? Good job. You’re among the top 10% of the population. But before you rush to tell everyone about your results, consider that the high score on a standardized IQ exam doesn’t say anything about your overall intelligence. And while you may be good at logic, spotting patterns and abstract reasoning, you might be a total washout in the critical thinking department and not even know it. If you want to be really intelligent, you need both academic knowledge and what we generally call street smarts. Basically, you want to be a cross between a scientist and Bugs Bunny.

This is the point of an article in New Scientist that focuses on a long-known flaw of today’s IQ tests, which are decent at grading working memory and basic logical skills but offer no way to figure out whether the test taker will actually use them in the real world. As a result, someone could get a score of 160 on one of the several standardized tests out there and still be totally incompetent in the real world, because all that ability to reason and make decisions based on logic is limited solely to test taking.

But most researchers agree that the correlation between [IQ] and successful decision-making is weak. The exception is when people are warned that they might be vulnerable to a thinking bias, in which case those with high IQs tend to do better. This, says Evans, is because while smart people don’t always reason more than others, “when they do reason, they reason better”.

So next time you’re talking to a reasonably smart person and he or she drops a major whopper on you, filled to the brim with obvious pseudoscience or nonsensical New Age woo, you have an answer as to why someone who seems so clever can be so irrational. They really are clever, but they just don’t, or won’t, apply their critical thinking skills out in the real world. But that doesn’t mean they’re doomed to languish in the world of woo until the end of time. Just like you can train to get a better IQ test score, you can also train your mind to be far more skeptical and analytical. A scientific mindset is built by training and constant questioning, and they can build one too. All they need is a good reason to start…


A study of 1.1 million people conducted by a team of health experts found that a person with a high IQ has a lower chance of dying at any given time than someone with a lower IQ score. Because the researchers were interested in whether there was some sort of correlation between IQ and unintentional injury, they primarily tracked mortality in groups of former Swedish soldiers who had to take detailed IQ tests. While it might seem like a convenience sample at first glance, the soldiers were conscripts, which meant that many of the socioeconomic factors one would need to consider for the study were accurately represented.

When combining IQ with various health indicators and causes of death, the experts found that there’s a tiny but persistent correlation between a higher IQ and a smaller chance of death in a car crash or from unintentional poisoning, suicide and disease. Even when demographics and income were accounted for, the correlation was still there. The explanation for this result rests on the idea that people with higher IQs are slightly more aware of dangers and take better care of themselves to prevent major health issues like hypertension and heart disease. If we consider that the difference in mortality is less than 6% at most, it’s not really that much better care.

There’s another issue that comes to mind when reading the study. Soldiers need to meet some sort of standard before they’re admitted into the military, so the differences between those who scored highest on the IQ tests and those who scored lowest are very likely to be minimized, limiting the randomness of the sample. So while the sample is better than a mere convenience sample, it’s still not as random as it should be. This limitation isn’t noted in the paper, but the researchers do point out that the study was focused only on men and can’t be extended to women, which is another important point. So far we only know this is a correlation for men from age 18 to their early 40s.

Finally, by focusing on soldiers and their rate of mortality by the time they reach middle age, the study doesn’t consider environments in which an IQ score to put Mensa to shame wouldn’t be of any use. For example, soldiers in active war zones and people who live in dangerous places survive partly thanks to luck. Their intelligence won’t save them from a stray bullet or a bomb placed in just the right place at just the wrong time. Incorporating these possibilities into the study may negate the small advantage high IQ scorers seem to have. Chance plays a big part in whether a person survives the day, and that’s something that needs to be considered before we pin survival solely on scores from an IQ test, no matter how elaborate.

But all that said, there is a good reason why someone would try to correlate IQ scores with the likelihood of death. Human intelligence was encouraged by natural selection precisely because a smarter creature is more aware of the risks around it and tries to mitigate them, surviving long enough to pass on its genes and, with them, its wits. The intelligence we generally measure on paper deals with a wide variety of talents and a complex body of academic knowledge. The simple smarts that help us keep an eye out for danger in our surroundings would need to be hereditary. The big question is how to reconcile academic IQ tests designed to measure abstract knowledge and talent with the primal intelligence we see in almost every mammal.

See: Batty et al. (2008). IQ in Early Adulthood, Socioeconomic Position, and Unintentional Injury Mortality by Middle Age: A Cohort Study of More Than 1 Million Swedish Men. American Journal of Epidemiology, 169(5), 606-615. DOI: 10.1093/aje/kwn381
