what babies can tell us about the future of a.i.

May 10, 2010

Having seen that babies seem to have an innate sense of rudimentary morality, we’ve gotten a little glimpse into the kind of research that can answer fundamental questions about what makes us who we are. And while we’re learning about the evolution of complex social interactions from infants, we can also apply a number of these findings to one of the biggest and most complex challenges in computer science: building an artificial intellect capable of passing the Turing test. Just to get a better idea of what we’re talking about, let me bring back Jeffrey, the robot whose feelings were carelessly hurt by an engineer in Intel’s Super Bowl commercial…

Now let’s break down the mechanics of this interaction. The robot hears a conversation and understands both the words and the context. It identifies what the engineer is talking about with so much excitement and realizes that it’s being indirectly put down while being totally ignored. It gets offended and responds with sadness and an equivalent of crying. Sounds simple, right? Well, consider that Jeffrey will be science fiction for at least the next decade, if not longer, and even then, its intelligence would be roughly on par with that of a six to eight month old infant born with the brain wiring that makes everything we listed above either innate, or achievable as soon as the child starts understanding tone and picking up on basic context. In this comparison, babies have an unfair advantage: they have millions of years of evolution on their side, and they’re wired to start learning, connecting, and forming social bonds from day one. Machines start with a blank slate. What we know about the surprisingly complex psychology of infant minds seems to be telling us that AI theories which take the way babies seemingly learn from scratch as their starting point are mistaken, since humans are essentially pre-wired to do what they do, and our formative years are only possible because of that wiring.
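To make that gap concrete, here’s a toy sketch in Python of the four steps above. Every function in it is a hypothetical stub of my own invention, since speech recognition, context parsing, and emotional appraisal are each unsolved research problems in their own right; the point is just to make the pipeline explicit.

```python
import re

# A toy sketch of Jeffrey's appraisal pipeline. Every function here is a
# hypothetical stub; none of these steps is anywhere near solved in practice.

def hear(utterance: str) -> list[str]:
    """Step 1: 'hear' the conversation (stubbed as simple tokenization)."""
    return re.findall(r"[a-z']+", utterance.lower())

def about_my_work(tokens: list[str], my_domain: str) -> bool:
    """Step 2: decide whether the excited chatter concerns my line of work."""
    return my_domain in tokens

def indirect_put_down(tokens: list[str]) -> bool:
    """Step 3: crude keyword check for praise that excludes or diminishes me."""
    praise = {"amazing", "brilliant", "exciting", "best"}
    exclusion = {"unlike", "not", "old", "obsolete"}
    return bool(praise & set(tokens)) and bool(exclusion & set(tokens))

def react(offended: bool) -> str:
    """Step 4: map the appraisal onto an emotional display."""
    return "*rolls away dejectedly*" if offended else "*keeps working*"

overheard = "Unlike our old chips, this new processor is the most amazing thing we ever built"
tokens = hear(overheard)
print(react(about_my_work(tokens, "chips") and indirect_put_down(tokens)))
```

Of course, a keyword match is a parody of what real context understanding requires, which is exactly the point: the hard part isn’t the pipeline, it’s filling in the stubs.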

We could even argue that the roots of what enabled human intellect started with the very first mammals, which appeared around 200 million years ago. Nature has an enormous head start on intelligence, which evolved in squids, octopuses, cetaceans, birds like parrots, and primates. Considering that machines have none of that plasticity, or the advantage of undergoing eons’ worth of evolutionary experiments, it’s a pretty big stretch for us to jump straight into trying to simulate human intelligence, which can be rather hard to define in terms of concrete functional requirements. This doesn’t mean that we’d need to wait millions of years for an intelligent system, of course. Trials using evolutionary algorithms can be run much faster in the lab than in nature. But rather than starting with something as nebulous and abstract as the human mind, maybe we should give the alternative method of modeling insect intelligence a shot. It would be far less resource intensive and would let us get at the real basics of what an intellect requires, without bothering with languages and contexts right away. How? Well, as detailed in the link, the major difference between insect minds and brains like ours is the repetition of neural circuits, which is generally thought to allow for more precise control over large bodies and to enable ever more complex mechanics and social interactions as a very useful and evolutionarily advantageous side effect. If you can track down the right patterns, you may be one step closer to solving the mysteries of intelligence…
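To give a flavor of what those sped-up trials look like, here’s a bare-bones evolutionary loop in Python. The bit-string genome and the target are placeholders I picked for illustration; in a real experiment the genome might encode one of those repeating neural circuits, and fitness would be scored by performance in a simulated environment.

```python
import random

# A bare-bones evolutionary trial loop. The genome encoding and target are
# illustrative placeholders, not a model of any real organism or circuit.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for "the desired behavior"

def fitness(genome):
    """Score a genome by how many genes match the target behavior."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each gene with a small probability, mimicking random mutation."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from random genomes and let selection plus mutation do the work.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:5]  # crude truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(f"generation {generation}: best genome {population[0]}")
```

Selection and mutation alone usually hit the target within a few dozen generations, and the loop never has to understand what it’s building, which is the whole trick nature pulled off, just vastly slower.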

  • Michael Anissimov (http://acceleratingfuture.com/michael/blog)

    I think you deify the mystery of intelligence a bit much. It seems to get in the way of your analysis of AI.

    Also, I can define intelligence for you right now: intelligence is the ability to pursue complex goals in complex environments. There, done. Shane Legg has compiled dozens of definitions of intelligence in his PhD thesis and the vast majority of them are in close agreement. AI designers will obviously pick a definition of intelligence and use it, and I have the feeling that the public will display high levels of consensus on appraising whether an AI is truly intelligent or not.
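    For the formal version, that thesis rolls the definitions up into the universal intelligence measure, which goes roughly like this (my paraphrase of the notation):

    $\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}$

    where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward agent $\pi$ earns in $\mu$. Smarter agents do well across more environments, with simpler environments weighted more heavily.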

  • Greg Fish

    I think you deify the mystery of intelligence a bit much.

    When looking at the biology behind the evolution of intelligence and how brains are put together, pointing out the complexity and the time scales over which these brains and the capacity for intelligent thought had to evolve probably shouldn’t be described as deification. You seem to think I’m in the same boat as Searle. I’m not. I work with design and code, and this is what shapes my approach to AI, not slack-jawed awe.

    Also, I can define intelligence for you right now: intelligence is the ability to pursue complex goals in complex environments.

    Sure, that’s a perfectly satisfactory philosophical and neurological definition, but it isn’t what I’m talking about when I refer to the functional requirements of intelligence in a computer science context. Functional requirements are detailed specs of what a particular system is supposed to do and, often, how it’s supposed to do it. You gave me the 60,000 ft. overview of the problem. I’m asking for a 10,000 ft. one, enough to visualize a system and lay out some of its main components.

    AI designers will obviously pick a definition of intelligence and use it…

    And then debate with each other about whose system is the most intelligent and whose exhibits enough complexity to meet the definition they consider the best… In the IT and comp sci worlds, designers tend to compete and debate each other’s work, and I don’t think you should expect that to change when AI is involved.