
looking into the dark depths of a.i.

Defining artificial intelligence is a very difficult problem, both philosophically and mathematically. So it's little wonder some people interested in AI want to take some shortcuts...
[image: stargazing Wall-E]

Recently, an article in Wired profiled a rather bizarre story from the world of AI experiments, one involving an extremely passionate amateur and an academic who ran parallel experiments on common sense knowledge bases and committed suicide in eerily similar ways. Since I’m not a clinical psychologist, nor did I know either Chris McKinstry or Push Singh, I’m not going to comment on anything personal. Instead, I wanted to cover the working theories they were pursuing: the concept that intelligence is just a collection of simple working parts, something that can be cobbled together from libraries of common sense knowledge. Basically, the way their research seemed to be going, McKinstry and Singh were trying to conjure something like Wall-E from just shy of a million true/false statements and logical propositions submitted to and collected in vast databases.

We’ve seen a very similar idea from futurist Nick Bostrom, who seemed pretty confident that big knowledge bases were a key ingredient in creating AI capable of superhuman intelligence, and as was reviewed, a huge collection of facts that can be subject to change does not an intelligent agent make. This notion is also in sharp contrast with the most promising areas of AI research: frameworks based on probabilistic models that more accurately represent the way brains learn and find patterns in noisy data. As complex as it may sound, this is really not that far removed from the idea that intelligence is what emerges when disparate and very simple parts work in unison, something AI pioneer Marvin Minsky postulated in 1985. And while we still have trouble identifying what intelligence actually entails in a mechanical object, we have no evidence that even the kinds of intellect we can recognize are anything other than collections of parallel and interlocking discrete functions. In fact, this is the idea underpinning the concept of modeling AI on insect brains, and even the most out-of-left-field attempts at weaving quantum mechanics into intelligence follow this principle.
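To make the contrast concrete, here’s a minimal sketch (my own illustration, not from any of the systems discussed) of what the probabilistic approach looks like: instead of storing “the sky is blue” as a fixed fact, an agent holds a degree of belief and revises it with each observation using Bayes’ rule.

```python
# Illustrative sketch of belief updating via Bayes' rule -- the probabilistic
# alternative to storing facts as immutable true/false entries.

def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after one observation."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Start agnostic about "the sky is blue" and observe five clear days; each
# observation is far more likely if the hypothesis is true (0.9 vs. 0.2).
belief = 0.5
for _ in range(5):
    belief = update_belief(belief, likelihood_if_true=0.9, likelihood_if_false=0.2)

print(round(belief, 3))  # belief climbs toward, but never reaches, certainty
```

The point of the design is that nothing is ever pinned at exactly true or false, so a cloudy week can pull the belief back down, which is precisely the flexibility a static knowledge base lacks.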

However, as we’ve seen in the aforementioned discussion of AI models based on probabilities and degrees, a good deal of intelligence and creativity deals with an absence of rules and the knowledge that there are exceptions to many of the things we hold to be true. The sky is blue, except on cloudy days. Coffee is served hot, except for a number of coffee-based drinks designed to be served chilled. Drinking a lot of alcohol will lead to intoxication, except that different people have different tolerances. We could fill just as big a library with these loopholes and exceptions as McKinstry and Singh filled with true/false and propositional statements. There’s even a formal mathematical representation for these concepts. But there’s even more to the task than that. Rather than soak up supposedly commonsense knowledge fed to it via databases, an AI system must actively want to seek out answers to the questions to which this knowledge leads. That’s one of the true signs of intelligence: curiosity. It was part of the famous Turing test, which judges how intelligent a computer system is by how it interacts with humans. For contrast, consider McKinstry’s benchmark for AI…
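The paragraph above doesn’t name the formalism, but nonmonotonic “default logic” is one such representation: a rule holds by default unless a known exception overrides it. Here’s a minimal sketch (the names and structure are my own, purely for illustration) of the sky-is-blue example encoded that way:

```python
# A default rule that holds unless a listed exception applies -- the core idea
# behind default (nonmonotonic) logic. Illustrative sketch, not a real library.

def make_default_rule(default, exceptions):
    """Build a rule: answer `default` unless the context matches an exception."""
    def rule(context):
        for condition, override in exceptions:
            if condition(context):
                return override
        return default
    return rule

# "The sky is blue, except on cloudy days" as a default with one exception.
sky_is_blue = make_default_rule(
    default=True,
    exceptions=[(lambda ctx: ctx.get("cloudy", False), False)],
)

print(sky_is_blue({}))                 # the default case
print(sky_is_blue({"cloudy": True}))   # the exception overrides the default
```

Note how every new loophole means another entry in the exceptions list, which is exactly why a library of exceptions could grow as large as the fact base itself.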

He developed an alternative yardstick for AI, which he called the Minimum Intelligent Signal Test. The idea was to limit human-computer dialog to questions that required yes/no answers. (Is Earth round? Is the sky blue?) If a machine could correctly answer as many questions as a human, then that machine was intelligent. “Intelligence didn’t depend on the bandwidth of the communication channel; intelligence could be communicated with one bit!” he later wrote.
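To see how shallow a one-bit benchmark can be, consider this sketch of a MIST-style responder (my own toy illustration; McKinstry’s actual Mindpixel project was a crowd-sourced database, not this dictionary). Reduced to its essentials, it is nothing more than a lookup table:

```python
# A MIST-style yes/no responder reduced to its essentials: a lookup table
# mapping normalized propositions to one bit each.

knowledge_base = {
    "is earth round": True,
    "is the sky blue": True,
    "is fire cold": False,
}

def answer(question):
    """Return True/False for a known proposition, or None when it's unknown."""
    return knowledge_base.get(question.strip().lower().rstrip("?"))

print(answer("Is Earth round?"))       # answers correctly, understands nothing
print(answer("Why is the sky blue?"))  # None: anything beyond yes/no fails
```

A system like this can match a human on every stored question while having no means to ask one of its own, which is the gap the rest of this post is about.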

How intelligent would you find a library of simple yes/no answers compared to a machine that could ask you a pointed question or offer a meaningful solution to a problem you’re having with another human? Plug any of the questions above into a search engine and you’ll get these answers and far, far more. What McKinstry decided to create was a prototype of Wolfram Alpha, and even with the wealth of data contained in that system, we just can’t consider it intelligent in any way, shape or form. In the Turing test, to convincingly fool a human, a computer would have to at least feign curiosity by asking for elaborations and details, and by trying to draw conclusions from a number of relatively fuzzy concepts in order to keep the conversation going. Even though it’s not without flaws, the Turing test, done with enough sophistication and rigor, can be a fairly good way to seek out patterns and behaviors we would expect from an intelligent agent. McKinstry, and to a lesser extent Singh, replaced it with a lesser benchmark that was missing much of what we consider to be the traits of true intellect. After all, it’s not just how much you know that matters; it’s how that knowledge is sought, elaborated, and applied.

# tech // artificial intelligence / computer science / computers

