when science fiction and computer science meet

September 1, 2010

Science fiction writer Ted Chiang did a good deal of research into artificial intelligence, particularly the kind of general-knowledge, omni-AI system I've been labeling completely uneconomical whenever it comes up in any practical context. In a post about his inspiration on the subject, he outlines exactly why a custom, learning, adaptive artificial intelligence system designed to do anything and everything is bound to be grossly impractical, not just from a philosophical standpoint, but from a logistical one as well. It takes far too long to actually build such a system, then train it to do whatever it is you want done. Considering that even humans can't do everything, and at some point we have to specialize in a rather narrow area of skill and expertise, you'd have to devote decades upon decades to training your fantastic machine before it could do anything really impressive.

Teaching machines is really nothing new, and there are plenty of ways to get robots and computers to make the decisions you need them to make, at least for problems computers are built to handle, like constructing complex probabilistic models and crunching numbers. But when it comes to things humans can do as organisms, computers tend to sputter. Without a mechanism for learning very quickly through repeated trial and error in each area they try to master, they may find a way to move around a lab maze, but not the real world, where they face new stimuli and interference they simply weren't designed to work around; it's all so common-sense to us that we forget to account for it. As Chiang summarizes…

[N]avigating the real world is not a problem that can be solved by simply using faster processors and more memory. There’s more and more evidence that if we want AI to have common sense, it will have to develop it in the same ways that children do: by imitating others, by trying different things and seeing what works, and most of all by accruing experience. This means that creating a useful AI won’t just be a matter of programming, although some amazing advances in software will definitely be required; it will also involve many years of training. And the more useful you want it to be, the longer the training will take.
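
To make that trial-and-error point concrete, here's a minimal sketch of the kind of learning loop Chiang is describing: tabular Q-learning on a toy corridor maze, in Python. The maze, rewards, and constants are all hypothetical stand-ins; the takeaway is that the agent only becomes competent by accruing many trials.

    # A minimal sketch of trial-and-error learning: tabular Q-learning on a
    # toy corridor "maze." All names and constants here are hypothetical.
    import random

    N_STATES = 6          # corridor cells 0..5; the goal sits at cell 5
    ACTIONS = [-1, +1]    # step left or step right
    EPISODES = 500
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

    # Q[state][action] starts at zero: the agent knows nothing at first.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    for _ in range(EPISODES):
        state = 0
        while state != N_STATES - 1:
            # Explore occasionally; otherwise exploit what has been learned.
            if random.random() < EPSILON:
                action = random.randrange(2)
            else:
                action = 0 if Q[state][0] >= Q[state][1] else 1
            next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Nudge the estimate using this one trial-and-error step.
            Q[state][action] += ALPHA * (
                reward + GAMMA * max(Q[next_state]) - Q[state][action]
            )
            state = next_state

    print("learned policy:", ["L" if q[0] > q[1] else "R" for q in Q])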

Chiang's summary is pretty much spot on, with the added bonus of noting that simply speeding up training sessions isn't an approach we could take with general artificial intelligence. Though he's wrong in saying that we're not even close to the kind of robot that could walk into the kitchen and make you eggs in the morning (we already have a few that fetch beer on command), and his reasoning for why speeding up trials wouldn't work has problems of its own (we can't equate processor speeds with the neurological limits of our bodies), his basic point is valid. Trials in the real world take a certain amount of time, and you have to be thorough to train a robot to do what you need it to do. The experiment has to be set up, the code compiled after the latest tweak, and the run itself takes time. Afterwards, you have to analyze what went right and what went wrong, tweak the code, debug and recompile it, reset your experiment, and so on.
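
As a toy model of that outer cycle, where every function is a hypothetical stand-in for a step in a real lab workflow, the cost comes from repeating the loop however many times the task demands:

    # A toy model of the experiment cycle above; every function is a
    # hypothetical stand-in for a step in a real lab workflow.
    def setup_experiment():
        return {"iterations": 0, "gain": 0.5}  # invented starting state

    def run_trial(config):
        # Stand-in for compiling the latest tweak and running the robot.
        return 0.9 * config["gain"]  # invented "score" for the run

    def analyze_and_tweak(config, score):
        # Stand-in for the post-mortem analysis and the next code change.
        config["gain"] += 0.1 * (1.0 - score)
        config["iterations"] += 1
        return config

    config, score = setup_experiment(), 0.0
    while score < 0.8 and config["iterations"] < 100:
        score = run_trial(config)
        config = analyze_and_tweak(config, score)

    print(f"reached score {score:.2f} after {config['iterations']} cycles")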

And all of this costs some very serious cash. While you spend decades whipping your AI into shape, who's to say your funding won't be cut in another financial disaster? What happens when the people who originally built the system leave to do other things? Who's going to be in charge of a general training regimen that will outlast some people's entire careers? It's much easier and more cost-effective to build specialized intelligent agents trained to do a few specific tasks quickly and extremely well. Then, maybe at some point, we could combine them into something impressive, bringing together mobile systems, rules-based and probabilistic AI, and natural speech recognition software to help us process huge reams of complex data on the fly. But even there, our hypothetical homunculus would have to be trained to focus on specific tasks rather than try to be an omni-app that needs non-stop training to keep up with the humans around it.
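
Here's a minimal sketch of that modular approach: each specialized agent does one narrow job, and a thin coordinator chains them together. Every module name and behavior below is invented purely for illustration.

    # A toy coordinator wiring together specialized "agents." Every module
    # name and behavior here is invented purely for illustration.
    from typing import Callable

    def speech_to_text(audio: str) -> str:
        # Stand-in for a real speech recognizer.
        return audio.lower()

    def rules_engine(text: str) -> str:
        # Stand-in for a rules-based expert system.
        return "fetch beer" if "beer" in text else "stand by"

    def motion_planner(command: str) -> str:
        # Stand-in for a mobile system's planner.
        return f"executing plan: {command}"

    def pipeline(data: str, stages: list[Callable[[str], str]]) -> str:
        for stage in stages:
            data = stage(data)
        return data

    print(pipeline("Robot, BEER please",
                   [speech_to_text, rules_engine, motion_planner]))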

[ illustration by Felix, aka ReginaldBull, story via John Dupuis ]

  • Pierce R. Butler

    Then, maybe at some point, we could combine them into something impressive, bringing together mobile systems, rules-based and probabilistic AI, and natural speech recognition software to help us process huge reams of complex data …

    Reportedly, Arthur C. Clarke named the wayward computer in the movie script for 2001: A Space Odyssey “Hal” as a nod to the non-existent field of Heuristic ALgorithms, an indication of the work necessary for such a gizmo. As the story turned out, there were even then still some glitches in the system…

  • Paul

    It takes decades to train the first all-purpose super-AI; the second, mere moments.

  • Greg Fish

    Paul, if you simply want to create more instances that do the exact same thing as your primary AI, then sure, you could just copy all the data from the primary system to your clones and boot them up. But why would you need exact copies of your all-purpose AI? They would all just end up doing the same exact thing as the first one, and the benefit of learning (finding your own ways of doing things) would be negated.

  • Paul

    (Been away.)

    Copies don’t have to do the same job as the original, just have the same capabilities.

    I think Chiang’s premise is based on the AI equivalent of the old ’50s computing notion of a single super-system. (Think Asimov’s “The Last Question”.) It’s the idea that as computers became more powerful and more general purpose, they’d become more centralised. In reality, as computing became more general purpose, smaller systems multiplied faster. Why wouldn’t AI do the same?

    IMO, even if we have the ability to construct a general super-AI, the ability to copy already-trained AI’s will evolve into a modular approach anyway, just to save time and keep things more understandable. Once you have one Language-AI, you shove it in everything; why language-train every specialist AI?

    (Of course, this all treats the AI as a device. If it is human style self-conscious, it’s ethically messy.)

  • Paul

    Replying to myself… lame…

    Thinking about this topic has solved a problem I’ve always had with popular sci-fi: the technologically stable, near-human-intelligence robot-servant (Robby-the-robot, C3P0). It never made sense. Given the exponential increase in computing power, 15 years after we reach 1/1000th of human intelligence, we can have human-level AI. 18 months later, double that; 15 years later, 1000 times as much. Worse, assuming different human abilities take different times to develop, by the time we crack the final human skill, every other skill will already be miles ahead. (Just as your computer can perform arithmetic billions of times faster than you, even while it doesn’t understand natural language, a robot that understands philosophy well enough to debate with will have millions of times your skill in language and rhetoric.)
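
    To spell out the arithmetic behind those figures (assuming that hypothetical 18-month doubling period):

        # Back-of-the-envelope check: 15 years at one doubling per 18
        # months is 10 doublings, and 2**10 is roughly the 1000x claimed.
        doublings = 15 / 1.5
        print(2 ** doublings)  # 1024.0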

    There would never be a period where a robot would be roughly as smart as a dumb human or smart animal. Bye bye sci-fi fantasies.

    But it suddenly occurred to me: if each “module” is essentially a full AI, then the number of AI-modules running on a single device depends on the power of the hardware. Obviously. But this is such a wasteful use of computing power that it would delay the apparent emergence of human or super-human level AI.

    In the same way, our increasing user-interface and media requirements suck up more and more of our exponentially increasing computing power, so the capabilities of our systems don’t seem to increase exponentially.

    So C3P0 might have enough computing hardware to run a super-human artificial intelligence, but instead he is constructed from thousands of sub-human AI’s running together, providing motor skills, visual perception, language, etc.

  • Greg Fish

    “Copies don’t have to do the same job as the original, just have the same capabilities.”

    For computers, a copy is basically a clone of whatever file or process you were using. It does the exact same thing by definition. If you want to take advantage of an existing AI system, you would implement it as a service, then latch onto it with the next system to take advantage of its capabilities.

    “A robot that understands philosophy enough to debate with, will have millions of times your skill in language and rhetoric.”

    Methinks you’re committing the fallacy of Singularitarians who like to tie processing speed to the potential for intelligence, then use it to project the potential skills of an AI system linearly, when your projection should be more of an S-curve.

    What does it mean to be a million times better than someone in a debate? Were you more correct in your facts to the millionth decimal point while the other person just rounded to the nearest integer? Did you get a million more facts right? Were you a million times faster in speaking, and if so, how did anyone understand you? Human skills involving memory and computation can be easily bested. But skills that require fuzzy logic and choosing between multiple viable solutions with an eye on long-term planning are something even the top expert systems won’t really be able to do better than us.

    Superhuman AI is more of a science fiction construct than a measurable benchmark for software and hardware, since how smart an AI system is compared to a human would depend on the human to whom you’re comparing it and the context in which the comparison takes place.

  • Paul

    “For computers, a copy is basically a clone of whatever file or process you were using. It does the exact same thing by definition.”

    Yes. But I meant if you spend ten years training a car-autopilot-AI, you can copy it into a million robot-cars. You don’t need to train each car for ten years. And you only need a small amount of re-training for each new car model you develop; you are not starting from scratch each time.

    (Analogy: You are doing an IT degree? Presumably you were capable of other studies given your level of high-schooling. So copies of you could be taken in different directions. There is no need to take each copy back to an infantile state and start again.)

    “Methinks you’re committing the fallacy of Singularitarians […where AI = k x CPU]”

    Yeah, I was going with their assumption, for argument’s sake.

    “when your projection should be more of an S-curve.”

    Is that because AI will be a network analogue, where doubling the number of nodes requires squaring the process-cycles-per-second? Or for another reason?

    “What does it mean to be a million times better than someone in a debate?”

    It means u loose adn i r teh winz0rz!!!1 … Ahem

    I was referring to the apparent non-linear development in computing abilities. I’ll use a different example: I don’t have a link but you’ve probably already seen some of the new robot systems that are more agile than humans. They can genuinely track objects, and move a manipulator in response, faster than any human. But only in a limited environment. So by the time they equal humans in the last category of object-recognition, or in a truly open environment, they will have already leapt far ahead in every other category, environment or ability.

    There will never be a moment where bots are roughly as good as humans. So the classic sci-fi servant robot will be impossible, even if true-AI becomes possible.

    (“True-AI”, “Super-AI”… I’m using the term AI to mean general purpose artificial minds. Since you are in the field, you use it to refer to current actual developments, like expert systems. So there’s some arguing at cross-purposes here.)

  • Ron Hale

    Wasn’t it about 20 years ago that Roger Penrose penned “The Emperor’s New Mind”? True, he was more concerned with artificial consciousness than intelligence, but he gave an exhausting argument against true AI based on his knowledge of physics, math, and quantum mechanics…

  • Greg Fish

    “I meant if you spend ten years training a car-autopilot-AI, you can copy it into a million robot-cars. You don’t need to train each car for ten years.”

    Well, yes. But here we’re talking about an intelligent expert system rather than a broad, generic AI which was the focus of the post. And like I said, you could always implement this system’s logic as a service and simply tap into it with new apps.

    “Is that because AI will be a network analogue, where doubling the number of nodes requires squaring the process-cycles-per-second?”

    Not exactly. The reason for the S-curve is that you can only push something so far before it hits a ceiling on how well it can perform a task.
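
    As a toy illustration with made-up numbers, compare a naive exponential projection against a logistic curve that saturates at a performance ceiling:

        # Toy comparison (made-up numbers): the exponential keeps climbing,
        # while the S-curve flattens near the task's ceiling.
        import math

        def exponential(t):
            return 2 ** t  # the "more hardware = more skill" assumption

        def logistic(t, ceiling=100.0, midpoint=5.0, rate=1.0):
            return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

        for t in range(0, 11, 2):
            print(t, exponential(t), round(logistic(t), 1))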

    “so by the time [robots] equal humans in the last category of object-recognition…”

    … they will be able to react faster to a visual stimulus to do what we want them to do. Being instantly able to recognize objects is about as good as you can get in that area. Where would they go from there other than into the realm of precognition?

  • http://planilhasinteligentes.com.br Rui Svensson

    There will always be some visionary who will look at things and imagine that everything could be different.

    To me, science fiction is about SCIENCE, not FANTASY. That's why many of the films and series that fueled the dreams of past and present generations, forgive me, are shot through with fantasy.

    Time travel, teleportation, mind control: all of this is just the book and film industry exploiting readers' lack of scientific literacy in order to get rich. When the public becomes more educated, when people know why life exists, why the universe is the way it is, and what the limits of science and technology are, SF writers will have to be more demanding of the fruits of their labor.

    I'm also very skeptical of many of the hypotheses and theories floated by modern science, because I've taken the trouble to study the whys more than most, and today I have some theories of my own about some of the subjects that intrigue people, even though I'm not a scientist by profession.

    Anyone with a different point of view can get back to me; I'm open to discussion.