looking for a.i., one cortex at a time…
Recently, researchers at IBM used a supercomputer to map the cortex of a cat’s brain in high enough detail to run a simulation of what happens inside a feline mind in response to stimuli. Human brains are next in line as the team works to create new approaches to computing which could closely mimic the flexibility of organic minds. If you listen hard enough, you might hear a steady buzz of excitement from transhumanists. Obviously, a real incarnation of artificial intelligence must be on the horizon and will begin to emerge as soon as we can simulate human brains in real time. Hello machine consciousness and maybe even mind uploading that will let transhumanists break through the limitations of their fleshy bodies and hatch in a cybernetic form, right?
If you’ve been reading this blog for any length of time, you probably know that my answer is no. Computers, for all their potential power, are still tools for visualization, data collection, and automating complex tasks we can’t perform in an efficient, time- and cost-effective way. But the idea of artificial intelligence arising from simulating human brains is a persistent one, and my abbreviated take on it for Discovery Tech was met with plenty of criticism from its proponents. Their view can be summarized in the following comments…
When the approximation of what is actually happening in the brain reaches a point that there is no significant difference between the two then we have created an artificial brain. Consciousness will necessarily be created when we reach that point. […]
[The comparisons of biological and computer systems] can’t be used to conclude anything about consciousness without already assuming that only a biological brain can support the process of consciousness (which would be circular and is a religious assumption).
The same argument was also brought up by Michael Anissimov in response to my skeptical look at the ideas behind the transhumanist concept of a Technological Singularity. So if we were to take the main points of this view and sum them up, we’re left with the idea that because our brains are just a collection of data and processes, when we mimic them with a vast array of CPUs, the end result should be the same kind of data manifesting itself as a fully fledged consciousness, provided we have enough knowledge about how the brain works on a deep enough level. Any objection is then summarily dismissed as giving the brain more relevance than it really has, and as “religious in nature” since it sees human consciousness “as more than just data” to be simulated by a powerful enough machine.
Truly, when you have a hammer, all your problems look an awful lot like nails. The same happens when your realm of expertise is software. Computer science lets you work on so many things that you might start thinking computers are capable of doing pretty much anything if you make them powerful enough and give them the right code. That seems to be exactly what’s happening here, coupled with a simplified mind/body dualism which sees what’s going on in the brain as little more than data just waiting to be replicated in, or even transferred to, a compatible location. It’s all just a question of format. But here’s where we find our first major problem with this view. Data in the computer world is abstract. It can live on compatible machines because it’s designed to be shared and exchanged. It was purposefully built as a collection of usable objects that travels across networks to any computer able to read it and display its contents in an understandable form. Not so in the case of our messy, organic brains.
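Just to make that contrast concrete, here’s a quick, throwaway snippet of my own (not anything from IBM’s work) showing what “designed to be shared and exchanged” means in practice: a record serialized into a standard format can be read back by any machine that speaks that format, regardless of the hardware that wrote it.

```python
import json

# A record produced on one machine...
record = {"id": 42, "label": "stimulus A", "response_ms": 137.5}

# ...serialized into a plain-text interchange format...
payload = json.dumps(record)

# ...can be read back by any compatible system and yields the same structure.
restored = json.loads(payload)
assert restored == record
print(payload)
```

That portability is a property of deliberate design decisions about formats and protocols, which is precisely what evolution never made for the contents of a skull.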
Rather than being designed, our brains evolved their methods of data storage, and whatever comes into them gets processed by a system that doesn’t exchange it with the outside world except for tiny snippets expressed in a geographically and culturally specific language. It’s not just the data; it’s the wiring itself and all the biology that makes it happen which result in things like personality and consciousness as a response to stimuli from the world around us and the initial instructions from our unique genomes. The IBM team working on the feline brain model knows this full well and is studying the general methods of how minds tackle problems through a statistical model of what goes on in a cortex. The end result won’t be a cat brain or a cat consciousness, but a readout of which structures are involved in certain scenarios under a particular set of stimuli, along with the details of which neuron clusters were at work and for how long. The actual chemical reactions that decide on an action or think through a problem don’t take place, and the biological wiring that’s crucial to how the whole process unfolds isn’t there, just a statistical approximation of it.
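If you want a feel for what that kind of readout looks like, here’s a deliberately crude toy sketch of my own, nothing like IBM’s actual model: a handful of leaky units that accumulate whatever stimulus we choose to feed them and log when they “fire.” The output is a list of activity events over time, which is useful data, but it’s a log, not a mind.

```python
import random

random.seed(0)

N_UNITS = 50      # stand-ins for "neuron clusters"
THRESHOLD = 1.0   # activity level at which a unit "fires"
LEAK = 0.9        # fraction of accumulated activity kept each step

def run_readout(stimulus, steps=100):
    """Return a log of (time step, unit index) firing events."""
    potential = [0.0] * N_UNITS
    spikes = []
    for t in range(steps):
        for i in range(N_UNITS):
            # Each unit leaks a bit of its activity and adds the new stimulus.
            potential[i] = potential[i] * LEAK + stimulus()
            if potential[i] >= THRESHOLD:
                spikes.append((t, i))
                potential[i] = 0.0  # reset after firing
    return spikes

# Drive the toy "cortex" with random noise and summarize what it did.
spikes = run_readout(lambda: random.uniform(0.0, 0.3))
print(f"{len(spikes)} firing events recorded across {N_UNITS} units")
```

All the interesting questions, what those events mean, whether they correspond to anything like a thought, are answered by the humans reading the log, not by the program producing it.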
So why can’t we just simulate whatever we need in a computer and watch intelligence appear on its own? It’s not that simple. Even the best approximation of the human brain is a collection of code and general rules which would require constant stimulus being fed to it over years and would have to be programmed to grow and change. But what do you get in the end for all your trouble? A very detailed readout which we can try to interpret in a way that makes some sort of sense to us. A digital construct that needs puppet masters to trigger every single event isn’t what we could honestly call self-aware, and because digital signals are as far as computers can go, we’d be stuck with a huge, complicated program that returns bursts of data when we run an instance of it. Running a real-time, 24/7/365 simulation of a human brain with all the bells and whistles and artificial stimuli would be a case of extreme engineering decadence. And the end result? A digital puppet reflecting the basic rules set by the development team, enslaved by lines of code into potentially tricking outside observers.
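Here’s the puppet-master problem in miniature, again just a hypothetical sketch of my own rather than anyone’s real architecture: a driver loop in which the operators script every single event the simulated “brain” ever experiences, and all they get back is a log they then have to interpret themselves.

```python
def model_step(state, stimulus):
    # Stand-in for the entire simulated brain: a pure function of its input.
    return 0.9 * state + stimulus

readout = []
state = 0.0

# The operators decide what happens and when. No script, no experience.
script = [("tone", 0.8), ("silence", 0.0), ("light", 0.5), ("silence", 0.0)]

for t, (label, strength) in enumerate(script):
    state = model_step(state, strength)
    readout.append((t, label, round(state, 3)))

# The "result" is just this log; deciding what it means is up to us.
print(readout)
```

Nothing in that loop wants anything, notices anything, or does anything between the events we schedule for it, and scaling it up to a supercomputer doesn’t change that basic arrangement.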
I’m sorry, but just simulating everything isn’t a solution. It’s more of a mantra. Before you start talking about the glorious future of computer consciousness, just stop for a second and imagine how you’d have to set up an accurate simulation of the human brain, how you’d have to trigger its responses, and what you’d have to do with the output. If the IBM team really mapped a cat cortex, why wasn’t the simulation run in real time to produce a feline AI for the few minutes it was running? Because they’d just get similar data about 100 times faster, and all that would do is test their computing speed. And even then, the simulation should have shown some sign of independent thought when it was run in slow motion. If that had happened, do you really think the developers at IBM wouldn’t have given it any attention? Maybe just invoking modeling as the ultimate solution for everything regarding AI doesn’t resolve every concern with trying to give our computers the capacities of living things without giving them the biological parts they would need to achieve real cognition and self-awareness? Maybe instead of dismissing practical, scientific considerations as an asinine religious canard, it might be a good idea to try and visualize how something like this would actually be built?
update: as it turns out, the lead of the Blue Brain supercomputing project, Henry Markram, went nuclear after getting wind of IBM’s announcement and declared that what was really created was an oversimplified pretense of something that vaguely resembles a cat cortex, rather than a supercomputer able to simulate the brainpower of a real feline. Could this be just a case of sour grapes and a lack of professional respect? Without seeing a list of functional requirements and dev documentation, I can’t say, but it may be possible that IBM’s simulation is not exactly the cat’s meow, despite what their press release would lead us to believe…