
waiting for the dawn of artificial intelligence

Intelligence seems to be an emergent property. But it doesn’t emerge from processing speed or the ability to store a certain amount of data...
[image: WALL-E]

A popular theory says that when our computers and robots reach a certain processing speed, they’ll become self-aware and start changing in ways their designers could never foresee. A humble laptop just a decade or two from now will start thinking for itself, and industrial robots will start making creative decisions, modifying themselves to be more efficient at assembling cars or aircraft. Or maybe at eviscerating their former masters. Whatever strikes their fancy and seems like a better use for their newly found creative resources. But are we really on the verge of the thinking, self-aware machine, as proponents of the Technological Singularity say?

The problem is somewhat tricky since their idea of self-aware machines rests on the measure of processing power. By itself, processing power only tells you how fast a computer or a robot could work with a fixed set of information. Some Singularity critics point out that we’re actually reaching the limits of how fast silicon-based computer chips can work, hence the prevalence of dual- and even quad-core processors to keep up with the self-fulfilling prophecy of Moore’s law, to which all computer manufacturers feel pressure to conform. It’s what people expect, after all. However, we do have prototypes of technologies that could open up completely new frontiers of processing power, from carbon nanotubes to boron-enriched industrial diamonds. We even have a theoretical basis for quantum computing. Processing power can still grow by leaps and bounds over the next several decades as these prototypes mature into fully fledged products.

Still, we’re back to the problem of whether having all that extra processing oomph is going to make computers start thinking for themselves. And the short answer is no. It doesn’t matter how fast the machine is or how complex the data it processes; what matters is what’s actually being done with that data. Those actions are ultimately decided by the code, and the code itself depends on what the analysts determine the machine needs to do and by what rules it will do it. When we build a software package or an operating system and let the computer do its thing, it’s just reading lines of code. The computer has no idea whether it exists or not; all the processing it does is electrical impulses traveling back and forth as it follows the lines of code it’s given. But actually being self-aware the same way we are, knowing that we’re alive, that we exist, and why we do what we do, involves philosophical questions that can’t be translated into code.

We’re used to computers being able to do amazing things and make life very convenient for us. But when you sit on the other end of the process, working in IT and helping design the very systems that people tend to take for granted, the first realization you have is that computers need to have every single step of even the simplest process described for them in great detail in some sort of code. Without code, without instruction, they’re just a heap of plastic, silicon and metal no more useful than a paperweight. Self-awareness? They don’t even know how to display something on screen without painstakingly detailed sets of instructions. To jump from being a piece of circuitry, useless without code, to a self-aware being with intelligence on par with ours is akin to the evolutionary leap from the first jellyfish-like animals to humans, a 600-million-year process which produced a brain complex enough for sapience. It’s very, very unlikely to happen anytime in the near future.
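To make that literal-mindedness concrete, here’s a minimal, purely illustrative sketch in Python (all names invented for this example): even putting a single letter on screen has to be spelled out cell by cell, and nothing in the program knows or cares what an “X” is.

```python
# Toy illustration: "draw an X" has to be spelled out cell by cell.
# The program has no concept of an X; it only follows the rules below.

WIDTH, HEIGHT = 5, 5

def render_x():
    """Build a 5x5 character grid forming an X, one cell at a time."""
    grid = []
    for row in range(HEIGHT):
        line = ""
        for col in range(WIDTH):
            # Light a cell only if it lies on one of the two diagonals.
            if col == row or col == WIDTH - 1 - row:
                line += "#"
            else:
                line += " "
        grid.append(line)
    return grid

for line in render_x():
    print(line)
```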

Of course, a couple of experts could make a machine seem like it’s self-aware. They could program it to react to dangerous objects by turning away or raising its limbs in defense. They could give it the ability to recognize faces, or give it rules just vague enough to let it experiment with probabilities and come up with what looks like a unique and creative way of doing something. But in reality, all those outward signs are products of our intelligence, our attempts to see how far we can play with logic to come up with something creative. If we give machines enough leeway in their programming, they’ll do something unexpected because we’ve tricked them into trying new things. But does a machine jumping through hoops we set up count as intelligent or self-aware? The answer to that is more philosophy than computer science. The problem is that we don’t really know what makes us intelligent or creative. How do we put something we don’t understand into the form of computer code?
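As a rough sketch of that kind of rule-bound “creativity” (again in Python, with a hypothetical rule table invented for illustration), the machine below only ever samples from options its designers wrote down; any surprise in its behavior is leeway we deliberately built in, not initiative.

```python
# Sketch of "machine creativity" as constrained leeway: every possible move
# and its probability come from the designers, not from the machine itself.

import random

# Hypothetical rule table: allowed reactions and their baseline weights.
MOVES = {
    "raise arm":  0.2,
    "turn away":  0.3,
    "slow down":  0.4,
    "do nothing": 0.1,
}

def react(danger_level: float) -> str:
    """Sample a reaction from designer-supplied weights, biased by how
    dangerous the detected object appears (0.0 = harmless, 1.0 = severe)."""
    weights = [weight * (1 + danger_level) if move != "do nothing" else weight
               for move, weight in MOVES.items()]
    return random.choices(list(MOVES), weights=weights, k=1)[0]

# Five runs may produce five different "decisions", but never a new move.
for _ in range(5):
    print(react(danger_level=0.8))
```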

Believe it or not, we actually have a test by which we can determine whether we may have an AI system on our hands. In a Turing Test, a real live person talks with both another human and an AI via what computer scientist and mathematician Alan Turing imagined as something like IM. If, when asked to pick which one was the machine after several conversations, the person does no better than 50% accuracy, the AI is a good analog to human intelligence. But the controversial part of the Turing Test is that Turing thought that if the AI could trick a human into thinking he was talking with another human, it must be intelligent. The big problem with that assertion has to do with how the computer tricked the human. If it was following painstaking rules of conversation programmed into it over the decades, all it did was put up a facade set up by its programmers. But if it was following a general set of ideas and coming up with its own permutations through a sophisticated computer system built to simulate human thought, then we may have succeeded in making a form of intelligence. And would that form of intelligence be self-aware? We’d need to develop another test for that…
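The “facade” case is easy to caricature: the ELIZA-style sketch below (Python again, with patterns made up for this example) can pass for conversation in short bursts purely because a programmer wrote the conversational rules in advance; there is no understanding anywhere in it.

```python
# ELIZA-style sketch of the "facade" case: hand-written patterns and canned
# replies. Any apparent conversational skill belongs to the rule author.

import re

# Each rule pairs a pattern with a reply template; the first match wins.
RULES = [
    (r"\bi feel (.+)",  "Why do you feel {0}?"),
    (r"\bi am (.+)",    "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that really the reason?"),
]

def reply(message: str) -> str:
    """Return the canned response for the first matching pattern."""
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(reply("I feel trapped by my own code"))
print(reply("I am not sure this counts as thinking"))
print(reply("What does any of this mean?"))
```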

# tech // artificial intelligence / computers / robots / technological singularity

