For those who are convinced that one day we can upload our minds to a computer and emulate the artificial immortality of Ultron in the finest traditions of comic book science, there are a number of planned experiments which claim to have the potential to digitally reanimate brains from very thorough maps of their neuron connections. They’re based on Ray Kurzweil’s theory of the mind: that we are simply the sum total of the neural network in our brains, and that if we can capture it, we can build a viable digital analog that should think, act, and sound like us. Basically, the general plot of last year’s Johnny Depp flop Transcendence wasn’t built around something a room of studio writers dreamed up over a very productive lunch, but on a very real idea which some people are taking seriously enough to use in planning the fate of their bodies and minds after death. Those who are dying are now finding some comfort in the idea that should any of these experiments succeed, they can be brought back to life and reunited with the loved ones they’re leaving behind.
In both industry and academia, it can be really easy to forget that the bleeding-edge technology you study and promote can have a very real effect on very real people’s lives. Cancer patients, those with debilitating injuries that will drastically shorten their lives, and people whose genetics conspired to make their bodies fail them are starting to make decisions based on the promises spread by the media on behalf of self-styled tech prophets. For years, I’ve been writing posts and articles explaining exactly why many of these promises are poorly formed ideas that lack the requisite understanding of the problems they claim to know how to solve. And that is still very much the case, as neuroscientist Michael Hendricks felt compelled to detail for MIT Technology Review in response to the New York Times feature on whole brain emulation. His argument is a solid one, based on an actual attempt to emulate a brain we understand inside and out, in an organism we have mapped from its skin down to the individual codon: the humble nematode worm.
Essentially, Hendricks says that to digitally emulate the brain of a nematode, we need to realize that its mind runs on thousands of constant, ongoing chemical reactions in addition to the flows of electrical pulses through its neurons. We don’t know how to model those reactions or the exact effect they have on the worm’s cognition, so even with the entire immaculately accurate connectome at hand, we’re still missing a great deal of the information needed to start emulating its brain. But why should we need all that information, you ask? Can’t we just build a proper artificial neural network reflecting the nematode connectome and fire it up? After all, if we know how information navigates its brain and what all the neurons do, shouldn’t we have something up and running? To add to Hendricks’ argument that the structure of the brain itself is only a part of what makes individuals who they are and how they work, allow me to point out that this is simply not how a digital neural network is supposed to function, despite being constantly compared to our neurons.
Artificial neural networks are mechanisms that implement a mathematical formula for learning an unfamiliar task in the language of propositional logic. In essence, you define the problem space and the expected outcomes, then allow the network to weight its inputs and guess its way to an acceptable solution. You can say that’s how our brains work too, but you’d be wrong. There are parts of our brain that deal with high level logic, like the prefrontal cortex, which helps you decide what to do in certain situations, that is, handles executive functions. But unlike in artificial neural networks, there are countless chemical reactions involved, reactions which warp how the information is being processed. Being hungry, sleepy, tired, aroused, sick, or happy, and so on, and so forth, can make the same set of connections produce different outputs from very similar inputs. Ever agreed to help a friend with something until one day you got fed up with being constantly pestered for help, started a fight, and ended the friendship? Humans do that. Social animals can do that. Computers never could.
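To make that contrast concrete, here’s a minimal sketch of the learning loop I just described: the problem space and expected outcomes are fixed up front (I’m using the OR truth table purely for illustration), and a single artificial neuron weights its inputs and guesses its way to an acceptable solution.

```python
from math import exp
import random

# A single artificial neuron learning the OR function: the problem space
# (inputs) and expected outcomes (targets) are defined up front, and the
# network adjusts its weights until its guesses are acceptable.

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def train(samples, epochs=5000, lr=0.5, seed=42):
    rng = random.Random(seed)
    w1, w2, b = (rng.uniform(-1, 1) for _ in range(3))
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            grad = (target - out) * out * (1 - out)  # error times sigmoid slope
            w1 += lr * grad * x1                     # nudge the weights toward
            w2 += lr * grad * x2                     # the expected outcome
            b += lr * grad
    return w1, w2, b

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w1, w2, b = train(samples)

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + b))
```

Notice that identical inputs always yield identical outputs here; there is no internal chemistry to make the network answer differently because it’s hungry, sleepy, or fed up with the question.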
You see, your connectome doesn’t implement propositional calculus; it’s a constantly changing infrastructure for exchanging basic signals, deeply affected by training, injury, your overall health, your memories, and the complex flow of neurotransmitters between neurons. If you brought me a connectome, even for a tiny nematode, and told me to set up an artificial neural network that captures these relationships, I’m sure I could draw up something in a bit of custom code, but what exactly would the result be? How do I encode plasticity? How do we define each neuron’s statistical weight if we’re missing the chemical reactions affecting it? Is there a variation in the neurotransmitters we’d have to simulate as well, and if so, what would it be and to which neurotransmitters would it apply? It’s like trying to rebuild a city with only the road map, no buildings, people, cars, trucks, or businesses included, then expecting artificial traffic patterns to recreate all the dynamics of the city whose road map you digitized, with virtually no room for entropy, because entropy could easily break down the simulation over time. You would be running the neural network and training it at the same time, something it’s really not meant to do.
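Here’s what that “bit of custom code” runs into almost immediately. The sample connections below are made up for illustration (the neuron names are in the style of the C. elegans literature, but these are not rows from the actual dataset): a connectome tells you which neurons connect and roughly how strongly, and every other parameter a simulation needs has to be invented.

```python
# A connectome is a wiring diagram: (presynaptic, postsynaptic, synapse count).
# These three entries are illustrative, not taken from the real dataset.
connectome = [
    ("AVAL", "DA01", 5),
    ("AVAL", "DA02", 3),
    ("PLML", "AVAL", 2),
]

def to_network(connectome):
    """Turn a wiring diagram into network parameters, honestly marking
    everything the diagram does not actually contain."""
    weights = {}
    for pre, post, count in connectome:
        weights[(pre, post)] = {
            "weight": count,    # a guess: synapse count as a proxy for strength
            "sign": None,       # excitatory or inhibitory? depends on receptors
            "transmitter": None,  # glutamate, GABA, a neuropeptide...?
            "plasticity": None,   # how does this connection change over time?
        }
    return weights

network = to_network(connectome)
# Every single connection is missing the chemistry that decides what it does.
unknowns = sum(1 for v in network.values() if v["sign"] is None)
```

Each `None` is a value the road map simply doesn’t carry, which is exactly the gap between having the connectome and having a working emulation.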
The bottom line here is that synthetic minds, even ones capable of hot-swapping newly trained networks in place of existing ones, are not going to be the same as organic ones. What a great deal of transhumanists refuse to accept is that the substrate in which computing is done (and they will define what the mind does as computing) actually matters, because it allows information to flow at different rates and in different ways than another substrate would. We can put something from a connectome into a computer, but what comes out will not be what we put in; it will be something new, something different, because we fed just a part of the original into a machine and naively expected the code to make up for all the gaps. And that’s the best case scenario, with a nematode and its 302 neurons. Humans have 86 billion. Even if the majority of those neurons don’t need to be emulated, the point is that whatever problems you’ll have with a virtual nematode brain will be more than eight orders of magnitude worse in a virtual human one, going by neuron count alone, and added size and complexity create entirely new problems. In short, whole brain emulation as a means for digital immortality may work in comic books, but definitely not in the real world.
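The scale gap is quick to check from the two neuron counts cited above; the neuron-count ratio alone works out to roughly eight and a half orders of magnitude, before any of the added complexity is considered.

```python
from math import log10

nematode_neurons = 302          # the fully mapped C. elegans nervous system
human_neurons = 86 * 10**9      # the widely cited human brain estimate

ratio = human_neurons / nematode_neurons  # roughly 2.8 * 10**8
orders_of_magnitude = log10(ratio)        # roughly 8.45
```

And that ratio understates the problem, since the number of connections between neurons, not just the neurons themselves, is what an emulation would actually have to capture.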