why you’re probably intuitively wrong

November 13, 2009 — 15 Comments

Back in May, I wrote a post about the myriad problems with the concept of mind uploading, contrasting the differences between human brains and computers. As you can probably imagine, Singularitarians weren’t very happy with the argument, and the president of the Singularity Institute, Michael Vassar, tried to prove me wrong during one of our public debates. And recently, Singularity blogger and developer Jake Cannell threw in his thoughts about mind uploading with a thought experiment he says proves that mind uploading really is possible, and that your brain could either be turned into a machine or transferred over to one at some point in the future.

[Image: CG rendering of a neuron]

Believe it or not, I’m still not swayed, especially when considering that his thought experiment involves some pretty radical assumptions that would keep most designers and engineers up at night all by themselves. We’ll start with his premise and work our way down to the details, so settle in for a little computer nerd fight…

The gradual neuron replacement scenario is a thought experiment which intuitively shows why the functionalist-materialistic view of mind is correct.

The scenario in question is of course the idea of replacing every neuron in one’s brain with a nanobot designed to function just like a normal, everyday, natural neuron. Aside from the fact that this would require an incredible level of understanding of how our brain works and what every single neuron in it does, can anyone spot the problem with this statement? That’s right, it’s the intuitive part. Your intuition is a terrible judge of how correct a certain concept is. In fact, many things in science shown to be correct over centuries of experiments, observations and data are actually quite counterintuitive. Do you feel like you’re on a sphere spinning around its axis at over a thousand miles per hour? Does your intuition tell you that you’re hurtling through a mostly empty, vast cosmos? Probably not. Which is why you shouldn’t rely on it in science or technology.
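To put a number on just how counterintuitive that is, here’s a quick back-of-the-envelope calculation; the circumference and day length are standard reference figures.

```python
# Rough check of the "over a thousand miles per hour" figure: Earth's
# equatorial circumference divided by the length of one rotation.
equatorial_circumference_mi = 24_901   # miles, standard reference value
rotation_period_hr = 23.934            # one sidereal day, in hours

speed_mph = equatorial_circumference_mi / rotation_period_hr
print(f"Equatorial rotation speed: {speed_mph:,.0f} mph")  # about 1,040 mph
```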

If you really truly believe mind uploading is not possible…

Stop right there. I don’t “believe” that it’s not possible; rather, I’m well aware of the challenges involved, as well as where our current technological achievements stand. This is not something I randomly woke up and decided one day, but a conclusion deduced from my experience and education.

The nanotech neurons are functionally equivalent. They [will] connect to the same synapses of the original neuron, and they perform the same functional roles. Replace one neuron with a nanotech equivalent, and nothing changes – the rest of the brain doesn’t notice. Replaced one by one, your entire brain could be replaced with nanotech, but you would have the same information content, you would think the same thoughts, etc.

How do we know that? Has there ever been an experiment like this, one in which we’ve been able to conclusively look and decide that replacing your entire brain with nanobots wouldn’t change a thing? Since we’re imagining a brand new type of technology, it’s pretty safe to say that no, there hasn’t. And since we’re in imagination land, we can make our technology do anything we want, however we want it to. It’ll work perfectly in our minds, but what happens when those designs are put on paper and turned into tangible machinery? We can imagine all we want, but that’s not the challenge. The challenge is making this dream tech work in the real world.

Now imagine if these nanotech devices allowed you to interface with computing systems, transfer all of their internal state, and so on. Your body could die, but the nanotech devices could continue functioning, even be removed from the skull, reform your brain in some other device, and connect you to a virtual reality environment – ala the matrix.

Again, yes, that all sounds plausible. But this is sort of like saying that when we build warp drives, we’ll be free to roam the universe. The technology doesn’t exist yet, building it is a problematic endeavor, and even if we do build it, there are major operational challenges to overcome; the whole thing could collapse into a black hole on us if there’s a snag. We’re still in the dream world, not the reality we inhabit.

If [materialistic] functionalism is wrong, one of the following must be true: 1. at some point during the neuron replacement you lose [your conscious mind] (as stated this is impossible because the successful neuron replacement wouldn’t change brain function – so for this to be wrong you must choose a worthless definition of consciousness)

So wait, it’s impossible for the designers to make a mistake, or for defective cyber neurons to wreak chaos on the organic parts of your brain? It’s not like bugs or design flaws never happen. If they didn’t, IT would be a far, far smaller field than it is today. The guaranteed success of the replacement hinges on our dream world of perfect technology, and the argument is doubly rigged by telling us that any objections must rest on new and “worthless” definitions of consciousness. So I’m talking about bugs and gaps in the knowledge required to make this all work, while Mr. Cannell is floating in the clouds of theoretical views of consciousness. But such is almost always the case with Singularitarians.

2. you become a different, new conscious person (again impossible unless you use a worthless definition of consciousness and identity)

And yet again, for him to be wrong, we need to choose some stupid, worthless, meaningless way to define the amorphous and hotly debated concept of consciousness. What about neuron replacement mistakes acting a lot like the kind of brain damage that changes the personalities of some brain trauma victims? Does that carry no merit from either a medical or a technical standpoint?

3. the gradual neuron replacement is somehow not allowed by the laws of physics (not true from current theory). All 3 are untenable, and so functionalism is correct. Thus uploading is possible.

The last point is correct. We can replace neurons, and there’s no law of physics or biology that will stop us. But the statement that the two scenarios above are untenable verges on the ridiculous, since the justification for dismissing them is a nonexistent technology we have to imagine, and imagine working perfectly at that. On top of that, we have to be completely committed to the absolute statement that when we replace all the neurons in a human mind, we will get the same exact human mind at the end. In the real world, this simply doesn’t pass the smell test. Nothing here proves that mind uploading is physically possible and will really work when we try to do it by replacing all the neurons in our brains. Not only that, but this argument has the zeal of a religious proclamation in which faith in technology trumps the very real dangers and concerns of human error in putting it together, and renders his main point virtually impossible to prove by experiment.

  • ColonelFazackerley

    Right on! Plausibility is important at the start of a scientific explanation and possibility is important at the start of an engineering project. However, there is more to it. Singularitarians need to stop blathering and do some experiments.

    They have no idea what the specific challenges would be in creating artificial neurons and no possible way of estimating the time required to overcome the challenges.

  • http://www.mazepath.com/uncleal/ Uncle Al

    Truths need not be believable, they merely self-consistently exist. Lies must be believable. Consequently, lies are usually much more believable than the truth. Everything else is footnotes and Accounts Receivable.

    Knowledge, personal assets, and freedom of action are the foundations of terrorism. Washington is pledged to Homeland Severity eliminating all of them. Try boarding a plane with a flash drive.

  • http://reasonpowerpolicy.blogspot.com/ Robert Johnson

I’m totally oversimplifying it, I know, but if the hardware is the person, why aren’t identical twins the same person?

  • Greg Fish

    “… why aren’t identical twins the same person?”

Their brains are wired differently, and it’s that wiring we’re trying to capture in this concept. If you mess up the wiring, you’ll most probably do a whole lot of damage. To pretend that a totally new type of nanobot is going to work without a hitch and function like an actual neuron in a human brain is just unrealistic wishful thinking.
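    To make that concrete, here’s a minimal sketch of what “capturing the wiring” would even mean as data; the neuron IDs and synaptic weights are made up for illustration.

    ```python
    # A minimal sketch of "capturing the wiring": the connectome as a
    # weighted directed graph. Neuron IDs and synaptic weights below are
    # invented placeholders, not measurements.
    connectome = {
        "n001": {"n042": 0.8, "n107": -0.3},  # n001 excites n042, inhibits n107
        "n042": {"n107": 0.5},
        "n107": {"n001": 0.2},
    }

    # Twins share the genome that grows a graph like this, but which edges
    # exist and how strong they are is shaped by experience, so the two
    # graphs diverge from birth onward.
    for src, targets in connectome.items():
        for dst, weight in targets.items():
            print(f"{src} -> {dst} (weight {weight:+.1f})")
    ```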

    “Try boarding a plane with a flash drive.”

    I have. And with computers. And cameras. And even bluetooth devices. No problem. No one with the TSA even blinked in my direction. I have no idea what this, or politics, have to do with a post about neurology and computer science.

  • Pierce R. Butler

    Jake Cannell – If we had some ham, we could have ham and eggs, if we had some eggs.

  • http://wading-in.net/walkabout Just Al

    Wow! That original post from Jake Cannell is some serious pie-in-the-sky stuff! I wonder if it occurred to him at any time that every last bit of his premise relied on “imagine that this is so?”

    That’s about as useful as ontological arguments, and just as flawed. And worse, it relies entirely on the idea that the brain works as imagined, and that nanotechnology could even come close to functioning as neurons. Hell, your warp drive exposition at least relies on known physics – the nanobot idea hasn’t even gotten that far. This has nothing to do with theory – it’s strictly philosophy, if not outright fantasy. They might as well spend time arguing about angels on the point of a pin.

    Here’s the point I find most amusing, however: If we were to actually achieve the technology to understand the function of each and every neuron, why bother screwing around with nanobots? Plot that info directly into that external neural matrix. What possible point does replacing neurons with an exact (heh!) machine replica achieve? That’s like replacing sand grains on the beach with carbon-fiber Sandeeta(tm).

    A suggestion, to those who think that brain replacement/infinite neural lifespans are a great idea: you might be better off getting human brains working up to par first. No one needs a perpetual idiot.

  • Greg Fish

    “If we were to actually achieve the technology to understand the function of each and every neuron, why bother screwing around with nanobots?”

I’m actually going to jump to Cannell’s defense on this one. Trying to plot the exact function of a brain into an external matrix would only work in Ghost in the Shell, so you would need to get the actual flow of information and signals into the right format before trying to plot them over.

    Otherwise, how would you transfer the weak electrical hum of the brain over to a computer network?

  • Philip

There are only two sides of a much larger field of possibility being discussed here, and it’s unfair to pass judgment on the possibility of something as abstract as “uploading” on such a flimsy basis.

    For example:

    1. A non-nanoscale device could be constructed to function as a portion of the brain. Such a device could be connected to a much larger machine that performs its functions.

    2a. Given that the brain can loosely be divided into two sections: the “automatic” or hardwired section that controls things like metabolism or eyesight, and the flexible, learning section that governs things like memory and habit (the sum of all that you have learned), one could surmise that it is only this memory/habit section that truly defines who we are. The rest is just stimulus from our instinctual side. Artificial neurons would *not* need to be wired exactly. The brain’s memory relies on a system whereby the loss of a single neuron does not significantly degrade function. If the process occurred over enough time, the artificial neurons could wire themselves to mesh with our existing memories. Anything one was reminded of over the course of the process, one would retain.
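    A toy illustration of that redundancy, with invented numbers: store a value as a noisy average across many units and see how little the readout moves when a single unit drops out.

    ```python
    import random

    # Toy illustration of the redundancy described above: a value stored as
    # a noisy average across many units barely moves when one unit is lost
    # or swapped out. All numbers are invented for illustration.
    random.seed(42)
    target = 0.7
    units = [target + random.gauss(0, 0.05) for _ in range(1000)]

    readout_full = sum(units) / len(units)
    units.pop(0)                       # lose (or replace) a single unit
    readout_minus_one = sum(units) / len(units)

    print(f"with all units:   {readout_full:.4f}")
    print(f"one unit removed: {readout_minus_one:.4f}")  # nearly identical
    ```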

  • Greg Fish

“… it’s unfair to pass judgment on the possibility of something as abstract as ‘uploading’ on such a flimsy basis.”

    I totally agree which is why I don’t even try to rule on the actual possibility, only give an opinion on the quality of the thought experiment and the practical considerations as far as my experience with computer science is concerned.

    “… one could surmise that it is only this memory/habit section that truly defines who we are.”

Yes, one certainly could, but we need to remember that the division is conceptual. How the brain is actually wired, and where the information we want to deal with lives, is a very complex matter.

    But you do have a very good point, so please don’t mind my technical objections. I know they’re low level implementation concerns but they do matter to someone who wants to derive functional requirements for new software and hardware.

  • Nagarjunary

    The probability of perfect functional equivalence tends to 0 as the degrees of freedom grow in the system to be emulated.
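    As a toy calculation with an invented per-neuron fidelity figure, watch how fast the odds of a *perfect* overall copy collapse:

    ```python
    # Toy calculation: if each of n components is emulated perfectly with
    # probability p, the chance that *all* of them are is p**n, which decays
    # exponentially. The fidelity figure here is invented for illustration.
    p = 0.9999  # assumed probability that any one unit is emulated perfectly
    for n in (1_000, 100_000, 86_000_000_000):  # last: rough human neuron count
        print(f"n = {n:>14,}  P(all perfect) = {p ** n:.3e}")
    ```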

A heart can be marginally functionally emulated with an artificial heart, but the complexity of the original’s construction implies various knock-on effects (i.e., emergent effects) that will always create some kind of slippage between the original’s and the simulation’s functionality. Thus, an artificial heart imposes different constraints on the organism’s behavior than would a real heart (and even a real heart transplant imposes behavioral constraints). Perhaps an artificial heart may one day fully emulate the behavior of a real heart, but that possibility is highly unlikely.

Emulating the behaviors of a neuron, even to very high tolerances, would very likely introduce slippage as the complexity grows. Notwithstanding the engineering difficulties (how would the support functions of glial cells be replicated when dealing with artificial neurons?), even if such a proposition were tenable, the likelihood that it would cause unforeseeable changes to the consciousness itself would make attempting it border on the unethical. At the very best, it would be a consciousness similar to what we know as consciousness, but more likely a fragmented and constrained consciousness — and at worst, a warped consciousness akin to Frankenstein’s monster. Best to tread lightly.

  • Frank Grove

Your biggest argument seems to be that there will be mistakes made, and I have no doubt that will be the case. There might be a bug here or there, heads might explode, people might die. But exploration of the frontier of knowledge requires sacrifice, and I can assure you there are those who will gladly make that sacrifice. And the engineers will learn from their mistakes. But let’s get into the other side of your argument, that it’s doubtful the ‘exact’ same person comes out on the other end. I completely agree with you. However, are you the ‘exact’ same person that you were 1 second ago?

I think the idea of developing a functionally equivalent, ‘exact’ copy of the state of the human brain depends upon the existence of a static process to emulate. However, the brain depends on literally trillions of stochastic processes from which consciousness emerges, and none of these processes is overly important to the final expression of being that you and I describe as consciousness. You make the false assumption that the current state is the only possible description of that being, and ignore the temporal aspects of that consciousness. Our thoughts, feelings, memory and learning all occur within both time and space. As such, both the wiring of the cortical neurons and their axonal synaptic conduction delay are of prime importance to this learning and memory. And that delay is actually a series of electrochemical pathways through which information processing emerges, given enough complexity in the network. So while there will likely be errors at any one of these levels in a transfer, the brain has already evolved to maintain itself when these sorts of errors occur naturally.

Humans are essentially a complex network, in both our genetic and neuronal structure. This makes understanding how we function more difficult but, more importantly, gives our bodies and brains a robust structure. That’s bad when trying to fight cancer, but good when someone has a stroke. For this reason, I believe your complaint about bugs and errors is relatively misplaced, especially when you consider that the uploading process occurs over time. Since the human will certainly still be alive, the neuronal processes will continue unabated. As the individual neurons are replaced by their mechanical doppelgangers, the existing network will adapt in the needed ways to accept each new process within the whole. If ‘bugs’ were introduced in the wiring of the network, the brain’s own robustness would simply overcome them and rewire itself as necessary.

Since the transfer from biological to mechanical neurons will happen over a period of time, one can assume the existing neuron network will adapt with the new neurons, and any ‘errors’ introduced will be resolved once the neuron begins firing and communicating. This is because if the neuron’s response to the graded inputs is not accurately tuned, it can easily tune itself through the natural processes of LTP, or long-term potentiation. But I would rather believe that if we had nanobots with the capability of replicating the processes of the neuron, it would be simple enough for the nanobot to observe the behavior of the neuron given its presynaptic inputs, and mimic the essentially boolean reaction with enough accuracy to maintain the polychronous subnetworks which are likely the basis for the neuronal network’s extremely high memory and learning capacity.
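    A crude sketch of that self-tuning idea: a replacement unit watches the original’s input/output pairs and nudges its weights with a simple delta rule (a stand-in for LTP-like plasticity used only to show convergence, not a model of real neurons).

    ```python
    import random

    # Crude sketch of the self-tuning idea: a replacement unit observes the
    # original neuron's input/output pairs and nudges its weights with a
    # simple delta rule. A stand-in for LTP-like plasticity used only to
    # illustrate convergence, not a model of real neurons.
    random.seed(0)
    true_weights = [0.6, -0.4, 0.9]   # the biological neuron's "tuning"
    learned = [0.0, 0.0, 0.0]         # the replacement starts untuned
    lr = 0.1                          # learning rate

    for _ in range(2000):
        x = [random.uniform(-1, 1) for _ in true_weights]
        target = sum(w * xi for w, xi in zip(true_weights, x))
        output = sum(w * xi for w, xi in zip(learned, x))
        error = target - output
        learned = [w + lr * error * xi for w, xi in zip(learned, x)]

    print([round(w, 3) for w in learned])  # approaches true_weights over time
    ```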

You seem to have replaced one person’s pie-in-the-sky optimism with your own pie-in-the-sky pessimism. Both lack any basis in science or research. However, until you have some underlying understanding of how the brain works, you cannot simply piss away the entirety of neuroscience research. Sure, there is a great deal left to learn, but we have some fundamental understanding of the processes within the brain. And with even a small understanding, your arguments are not really valid.

  • Greg Fish

    “… are you the ‘exact’ same person that you were 1 second ago?”

No, of course not. Personality changes over time, and as we learn new things or get exposed to new ideas, we change our perceptions and outlook. This is why the claim that we can just replace the whole brain and come out with the “same exact” person is not a valid one to make.

    “You seem to have replaced one person’s pie in the sky optimism with your own pie in the sky pessimism.”

How so? Did I say that enhancing the brain with cybernetic implants would somehow be totally useless and impossible by any means? Not at all. I simply noted that if you haven’t done the scientific research, you don’t get to claim what will work and how on the basis of your imagination, then defend the idea of cyber neurons in the comments. Why argue with something I never wrote in the first place?

    Now on my side, I have to take issue with something you said…

    “… exploration of the frontier of knowledge requires sacrifice and I can assure you there are those that will gladly make that sacrifice.”

As a tech designer, I would not let someone sacrifice their life to QC my software. For me to kill another human being with a mistake solely to see if I had it right is simply too great a risk. It doesn’t matter if they want to give their lives willingly for the sake of progress. I simply refuse to kill for a breakthrough and would much rather find a far less dangerous way to test my designs.

  • http://www.acceleratingfuture.com/michael/blog/ Michael Anissimov

    I’m a Singularitarian, and I just wanted to say that I welcome criticism of the concept of mind uploading. I think you make many good points in the post above. I also wanted to say that I didn’t think there was anything wrong with your argument with Michael. I don’t see why real Singularitarians would object to thoughtful criticism of mind uploading.

  • http://www.enterthesingularity.com Jake Cannell

    gfish – I actually agree with your skepticism about the technological feasibility of uploading through nanobots – or more generally advanced nanobots of that level of capability being developed anytime ‘soon’. However, I do think the technological Singularity is a real possibility by mid-century, and this completely overturns previous conceptions of the timeline of future technologies, history in general, and perhaps even our notion of time itself. Anything that is possible will probably be discovered and developed eventually, and if we have a Singularity then the entirety of ‘eventually’ is compressed into ‘now’. So in this light there are technologies developed between now and the Singularity, and then there is everything else. I suspect really advanced nanobots come later – that technology seems far behind semiconductor tech.

The more likely route to uploading is probably destructive brain scanning and imaging, which requires improvements that are more incremental and economical than fundamental – i.e., we can already do it in some smaller-scale form today.

    The gradual nanobot replacement scenario is more philosophically pleasing, but it is essentially equivalent in end result.

My simplistic argument was showing that *if* physics permits this gradual neural replacement uploading – such that it preserves full functional brain equivalence, i.e., there is no essential change to the person’s thinking – then we must accept that physics trumps philosophy, and that if this is possible then there is no pure *philosophical* argument against uploading.

A philosopher might say that this can be reduced down to a tautological acceptance of functionalism, but so be it – that’s not worth arguing about. Uploading seems to be possible from physics, so it’s an engineering issue. But many people seem to have some lingering suspicion that it isn’t even possible for some deeper philosophical reasons.

This all comes back, of course, to the issue of AI in general. Physically, we have the brain as an example: we know these particular information processing systems can learn language and develop into minds, and the challenge is largely one of reverse engineering the brain. Philosophical doubts are reasonable only before this technological hurdle is reached. Afterwards, people will just accept that machines can be conscious for the same simple functional reasons that they accept that other human brains are.

In the same fashion, people will eventually accept that uploading is possible – that it really will be a continuation of your individual conscious experience – once they meet and talk to successful uploads and identify them as the same people they knew in the flesh – a personal Turing test, if you will. Perhaps some sects of people will insist that these uploads are not the same individuals if they don’t have the same physical neurons – that they are some sort of philosophical zombie – but I suspect this will be a minority position, like the belief that only certain people truly have souls.

  • Hrkis

The only similarity between identical twins is their DNA (which still isn’t entirely identical; it’s more like 99.999% identical due to mutations). Everything else, other than the code for protein synthesis, is environmental. DNA (at least to some point) doesn’t determine how the neurons get laid out in your brain, or what synapses you have. That is mostly your environment. One easy way to understand this: take identical twins, both born with light skin. One stays inside his whole life and barely gets any sunlight. The other goes tanning on the beach every day. Even though they have the same DNA, one will be tan and the other won’t. The same is true of the neurons in your brain. If you could replace the biological neurons in your brain with nanobots that sent the same signals to the same nanobots as your brain would do with neurons, you would essentially have the same person. Now the hard part is actually being able to do that, as each neuron has processing power roughly equivalent to that of the average laptop.