how to dismantle searle’s chinese room

The problem with John Searle's Chinese Room thought experiment is that its premise is woefully outdated in computer science terms.

If you read enough about robots and artificial intelligence, chances are you’ve heard of philosopher John Searle and his Chinese Room thought experiment. Once upon a time, I mentioned it in passing after looking at a paper that tried to mathematically define intelligent behavior, but it really does deserve more than a quick reference because it so perfectly illustrates the difference between philosophers and researchers, and between those familiar with the technology in question and those pontificating on it from the outside. Plus, since I’m somewhat known for my Singularity skepticism and intend to keep looking into the relevant topics, it provides a good reference point for future posts. So what exactly is the Chinese Room, you ask? Well, lucky for me, the good folks behind The Open University’s animated mini-series 60 Second Adventures in Thought have a wonderful summary of it, so instead of reading a paragraph of exposition, you can just watch the video.

All right, so since the man in the room just follows if-then rules and could spit out the right response with a big enough database, he doesn’t really understand the language the way humans do, and a computer doing the same thing is thus not intelligent in the way we expect humans to be. So what’s the problem here? Well, for one, Searle is famous for arguing against likening the processes of living brains to computation, claiming that the basic definitions of computation are just silly because any process with an input, a decision, and an output in that order can be cast as just another form of computation. Why and how this is wrong, and why living neurons merit an exemption from the model of computation, he has never actually been able to explain. Singularitarians often call his argument vitalist, i.e. implying that living things have a soul, that it’s this soul which grants them intelligence, and that soulless computers are hence incapable of cognition, and they have a good point. While trying to figure out why a machine couldn’t live up to an organic brain, Searle fails to consider how a machine with a very different approach to learning than a big series of if-then statements would go about his tasks.

With thousands of words in everyday usage and trillions of ways to combine them depending on context, it’s a fool’s errand to try to write out all the quadrillions of possible if-thens or case-switches. The software would take centuries to write and hours to process a single input, because it would need to parse every one of those conditions as if-thens, or at least a few trillion of them as case-switches. Instead, if you want a machine to remember something, you use an artificial neural network, which associates words and contexts into a probabilistic model that responds both to individual words and to how they’re arranged, and which can be trained in a relatively short time using just a few well-defined algorithms. Basic mathematical models for these networks have been around since 1954, and by the time Searle proposed his thought experiment, the foundations of the neural networks used today had been published six years prior. So really, his idea of a computer being spoon-fed a language was outdated when it was first proposed, and with its core premise flawed, the entire line of reasoning falls apart, since it relies on the notion of an entire language neatly indexed for replies.
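To make the contrast concrete, here’s a minimal sketch in Python. The vocabulary, training phrases, and labels are all invented for illustration: the lookup-table approach fails on any phrasing it hasn’t seen, while even a single perceptron, one of the oldest neural network models, learns weights from word associations instead of enumerating every case.

```python
# Approach 1: Searle's room as a lookup table -- every input must be
# enumerated in advance, and anything unlisted simply fails.
lookup_table = {
    "hello there": "hi!",
    "what is your name": "i am a room",
}

def room_reply(sentence):
    return lookup_table.get(sentence)  # None for any unseen phrasing

# Approach 2: a single perceptron trained on bag-of-words features.
VOCAB = ["hello", "hi", "good", "morning", "what", "is", "your", "name"]

def featurize(sentence):
    words = sentence.lower().split()
    return [1.0 if w in words else 0.0 for w in VOCAB]

# Toy training set: +1 = greeting, -1 = question (labels are illustrative).
TRAIN = [
    ("hello there", 1), ("hi friend", 1), ("good morning", 1),
    ("what is your name", -1), ("what is this", -1), ("your name please", -1),
]

weights = [0.0] * len(VOCAB)
bias = 0.0
for _ in range(20):  # perceptron learning rule: adjust weights on mistakes
    for sentence, label in TRAIN:
        x = featurize(sentence)
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1
        if pred != label:
            weights = [w + label * xi for w, xi in zip(weights, x)]
            bias += label

def classify(sentence):
    score = sum(w * xi for w, xi in zip(weights, featurize(sentence))) + bias
    return "greeting" if score > 0 else "question"

print(room_reply("good morning friend"))  # None -- not in the table
print(classify("good morning friend"))    # handled, despite never being seen
print(classify("is your name hello"))     # reads as a question from its words
```

The lookup table returns nothing for "good morning friend" because that exact string was never enumerated, while the trained perceptron correctly treats it as a greeting and treats an unseen question as a question, based purely on the weights its words accumulated during training.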

And there’s more to artificial neural networks here than simply pointing out that Searle did not appreciate what the technology he was critiquing could really do. You see, these networks are built of virtual neurons that take in signals and fire off to other virtual neurons when those signals cumulatively exceed a certain threshold. An actual neuron works in much the same way: when it receives enough stimuli to fire, it transmits its signal to other neurons nearby. So if said stimulus is an encoding of a word, it will fire through its network to trigger the appropriate contextual processing and response. Likewise, an artificial neural network can run through a cycle which evaluates the word and finds the associated set of responses to return. Now that we’ve gone this far in comparing human brains and artificial intelligence constructs, does the question of understanding language seem a tad more complex than Searle lets on? We associate words with experiences and stimuli, and so can custom-tailored artificial neural networks. Philosophers can wonder whether this comparison is fair all they want. Meanwhile, researchers will use these networks to build new machines.
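That threshold behavior is easy to sketch. Here’s a McCulloch-Pitts-style virtual neuron in Python, with weights and thresholds made up for illustration: it fires only when its weighted inputs cumulatively reach its threshold, and its output can serve as a stimulus for the next neuron down the line.

```python
def neuron(inputs, weights, threshold):
    # Sum the incoming weighted stimuli; fire (1) only at or past the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Two upstream neurons each react to their own stimulus...
a = neuron([1], [1.0], 1.0)  # stimulus present -> fires
b = neuron([0], [1.0], 1.0)  # no stimulus -> stays quiet

# ...and a downstream neuron fires only when enough of them did.
c = neuron([a, b], [1.0, 1.0], 2.0)
print(a, b, c)  # 1 0 0 -- the downstream neuron needs both inputs to fire
```

Chain enough of these units together, let a training algorithm tune the weights, and you get the associative, context-sensitive behavior described above rather than a wall of hand-written conditions.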

# tech // artificial intelligence / cognition / computer science / philosophy
