why intelligence is in the eye of the beholder
One of the Singularity Institute’s biggest quibbles with my posts about artificial intelligence seems to be my emphasis on the complexity of the brain and the need to capture that complexity when it comes to bona fide synthetic intelligence. Apparently, I’m just not functionalistic enough and don’t quite get that when you put enough functional components together, you’ll get an intelligence. Well, it may come as a bit of a surprise to a number of Singularitarians who read this blog, but I actually focus primarily on (gasp!) functionalistic models of intelligent agents, models which would work by combining simple, working building blocks into more elaborate and complex behaviors. Obviously, the most I could do is take a small step in that direction, and it’ll be a long time before this kind of research truly pays off. But yes, I really do think that the first multitasking AI system will be based on a functional model, adapting how living things do what they do into computing equivalents.
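To make that building-block idea a little more concrete, here’s a minimal sketch of how simple behaviors could be composed into a more elaborate whole, in the spirit of priority-based arbitration schemes like Brooks’ subsumption architecture. This is purely illustrative Python; every name and behavior in it is a hypothetical stand-in, not code from any real system.

```python
# A minimal, hypothetical sketch of the "building blocks" idea above:
# each behavior is a small, self-contained function that maps what the
# agent senses to a proposed action, and a simple arbiter composes them.
# None of these names come from a real library; they are illustrative only.

from typing import Callable, Optional

# A behavior looks at the world state and either proposes an action
# (a string here, for simplicity) or passes by returning None.
Behavior = Callable[[dict], Optional[str]]

def avoid_obstacle(state: dict) -> Optional[str]:
    # Highest-priority reflex: steer away if something is too close.
    if state.get("obstacle_distance", 999.0) < 1.0:
        return "turn_left"
    return None

def seek_charger(state: dict) -> Optional[str]:
    # Mid-priority drive: head for power when the battery runs low.
    if state.get("battery", 1.0) < 0.2:
        return "go_to_charger"
    return None

def wander(state: dict) -> Optional[str]:
    # Default behavior when nothing more urgent fires.
    return "move_forward"

def arbitrate(behaviors: list[Behavior], state: dict) -> str:
    # Compose the building blocks: the first behavior to propose an
    # action wins, so the ordering of the list encodes priority.
    for behavior in behaviors:
        action = behavior(state)
        if action is not None:
            return action
    return "idle"

# Stacking simple blocks yields more elaborate overall behavior.
agent = [avoid_obstacle, seek_charger, wander]
print(arbitrate(agent, {"obstacle_distance": 0.5, "battery": 0.1}))  # turn_left
print(arbitrate(agent, {"battery": 0.1}))                            # go_to_charger
print(arbitrate(agent, {}))                                          # move_forward
```

The point of the sketch is that none of the individual blocks is remotely intelligent on its own, yet stacking them by priority produces an agent that looks like it’s juggling competing goals.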
But honestly, I have trouble calling a system like that artificial intelligence, because intelligence as we know it requires a sense of self and the murky, vague, but very evident notion of consciousness. Machines, for all the things they can be made to do, can’t be conscious in the same sense we are. They don’t have feelings, or any need for feelings or emotions. Because they never evolved, but were instead built to ease computational and data management tasks, cutting down on human error and saving time and money, they’re simply inanimate objects we can turn into puppets to do our bidding. An intelligent agent that can see us, hear us, or talk to us is just following coded instructions. This is why, when many AI theorists talk about machine intelligence, what they’re really talking about are approaches to making robots and supercomputers more autonomous, requiring as little human intervention as possible between being given a complicated task and completing it. But for the Singularitarians, the view of what artificial intelligence means seems to be rather different.
They seem to be looking for machines with which they could socially bond, or machines that would mark a new and different kind of intelligent life on our planet. For Ray Kurzweil in particular, artificial intelligence is like an odd, mystical object that could make all our dreams come true and provide answers to our most complex and profound questions about the human condition. But unfortunately for the Singularitarians, the notion of AI at a certain level being either a panacea or a world-altering event is more of a philosophical position than anything else. We’re not going to have the all-encompassing machine intelligence we see in sci-fi movies or comics; it’s far too impractical, expensive, and time-consuming a project. Instead, we’re going to have custom-built, narrowly-purposed intelligent agents to handle certain tasks. We don’t need them to bond with us or have emotions. We need them to make our lives more convenient, not to be our friends. And while we do program computers that can write poetry or tell jokes, these machines are fun and theoretically useful gimmicks rather than a sign of things to come or a march towards the first synthetic super-intelligence. When it comes to tech and engineering, it’s very important to keep things in perspective, especially with something as lofty as AI.
That said, it’s almost a certainty that if we make future robots aesthetically appealing and clever enough, there will be plenty of people who’ll treat these machines more like pets than like cars or cell phones. But that bonding is only going to happen on the human side. It’s not going to be a two-way relationship, and it will almost always be a situation in which we know we can shut down the robot when it starts annoying us or we get bored with it. So how is this typical human reaction to a favorite gadget going to change the world?