
why you’re probably intuitively wrong

Some engineers are convinced that neurons work the same way as circuits. Neuroscience begs to differ.
[image: neuron render]

Back in May, I wrote a post about the myriad problems with the concept of mind uploading, contrasting human brains with computers. As you can probably imagine, Singularitarians weren’t very happy with the argument, and the president of the Singularity Institute, Michael Vassar, tried to prove me wrong during one of our public debates. And recently, Singularity blogger and developer Jake Cannell threw in his thoughts about mind uploading with a thought experiment he says proves that mind uploading really is possible, and that your brain could either be turned into a machine or transferred over to one at some point in the future.

Believe it or not, I’m still not swayed, especially considering that his thought experiment rests on some pretty radical assumptions that would keep most designers and engineers up at night all by themselves. We’ll start with his premise and work our way down to the details, so settle in for a little computer nerd fight…

The gradual neuron replacement scenario is a thought experiment which intuitively shows why the functionalist-materialistic view of mind is correct.

The scenario in question is, of course, the idea of replacing every neuron in one’s brain with a nanobot designed to function just like a normal, everyday, natural neuron. Aside from the fact that this would require an incredible level of understanding of how our brain works and what every single neuron in it does, can anyone spot the problem with this statement? That’s right, it’s the intuitive part. Your intuition is a terrible judge of how correct a concept is. In fact, many things in science shown to be correct over centuries of experiments, observations and data are actually quite counterintuitive. Do you feel like you’re on a sphere spinning around its axis at over a thousand miles per hour? Does your intuition tell you that you’re hurtling through a mostly empty, vast cosmos? Probably not. Which is why you shouldn’t rely on it in science or technology.
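To put a rough number on that first question, here’s a quick back-of-the-envelope calculation, assuming you’re standing somewhere near the equator, showing where the “over a thousand miles per hour” figure comes from:

```python
# Back-of-the-envelope: how fast does a point near the equator move as the Earth spins?
# Approximate figures: equatorial circumference ~24,901 miles, one rotation ~24 hours.
equatorial_circumference_miles = 24_901
rotation_period_hours = 24

rotational_speed_mph = equatorial_circumference_miles / rotation_period_hours
print(f"~{rotational_speed_mph:,.0f} mph")  # roughly 1,038 mph
```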

If you really truly believe mind uploading is not possible…

Stop right there. I don’t “believe” that it’s not possible; rather, I’m well aware of the challenges involved as well as where our current technological achievements stand. This is not something I just randomly woke up and decided one day, but a conclusion I deduced from my experience and education.

The nanotech neurons are functionally equivalent. They [will] connect to the same synapses of the original neuron, and they perform the same functional roles. Replace one neuron with a nanotech equivalent, and nothing changes — the rest of the brain doesn’t notice. Replaced one by one, your entire brain could be replaced with nanotech, but you would have the same information content, you would think the same thoughts, etc.

How do we know that? Has there ever been an experiment like this, in which we’ve been able to conclusively determine that replacing your entire brain with nanobots wouldn’t change a thing? Since we’re imagining a brand new type of technology, it’s pretty safe to say that no, there hasn’t. And since we’re in imagination land, we can make our technology do anything we want, however we want it to. It’ll work perfectly in our minds, but what happens when those designs are put on paper and turned into tangible machinery? We can imagine all we want, but that’s not the challenge. The challenge is making this dream tech work in the real world.

Now imagine if these nanotech devices allowed you to interface with computing systems, transfer all of their internal state, and so on. Your body could die, but the nanotech devices could continue functioning, even be removed from the skull, reform your brain in some other device, and connect you to a virtual reality environment — ala the matrix.

Again, yes, that all sounds plausible. But this is sort of like saying that when we build warp drives, we’ll be free to roam the universe. The technology doesn’t exist yet, building it is a problematic endeavor, and even if we do build it, there are major operational challenges to overcome, and the whole thing could collapse into a black hole on us when there’s a snag. We’re still in the dream world, not the reality we inhabit.

If [materialistic] functionalism is wrong, one of the following must be true: 1. at some point during the neuron replacement you lose [your conscious mind] (as stated this is impossible because the successful neuron replacement wouldn’t change brain function — so for this to be wrong you must choose a worthless definition of consciousness)

So wait, it’s impossible for the designers to make a mistake, or for defective cyber neurons to wreak chaos on the organic parts of your brain? It’s not like bugs or design flaws never happen. If they didn’t, IT would be a far, far smaller field than it is today. The assumed success of the replacement hinges on our dream world of perfect technology, and the argument is doubly rigged by telling us that any problems would be caused by new and “worthless” definitions of consciousness. So I’m talking about bugs and gaps in the knowledge required to make this all work, and Mr. Cannell is wafting in the clouds of theoretical views of consciousness. But such is almost always the case with Singularitarians.
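Just to give a sense of the scale involved, here’s a purely hypothetical illustration. The defect rates below are invented for the sake of argument, not measured from any real technology, but with roughly 86 billion neurons in a human brain, even failure rates that would be considered miraculous in any real engineering project still leave you with plenty of malfunctioning replacements.

```python
# Hypothetical illustration only: how a tiny per-neuron defect rate adds up
# across the ~86 billion neurons of a human brain. The rates are invented
# for the sake of argument, not measured from any real technology.
NEURONS_IN_HUMAN_BRAIN = 86_000_000_000  # commonly cited rough estimate

for defect_rate in (1e-3, 1e-6, 1e-9):
    expected_defects = NEURONS_IN_HUMAN_BRAIN * defect_rate
    print(f"defect rate {defect_rate:.0e} -> ~{expected_defects:,.0f} faulty replacements")
```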

2. you become a different, new conscious person (again impossible unless you use a worthless definition of consciousness and identity)

And yet again, for him to be wrong, we need to choose some stupid, worthless, meaningless way to define the amorphous and hotly debated concept of consciousness. What about neuron replacement mistakes acting a lot like the kind of brain damage that changes the personalities of some brain trauma victims? Doesn’t that merit any consideration from either a medical or technical standpoint?

3. the gradual neuron replacement is somehow not allowed by the laws of physics (not true from current theory). All 3 are untenable, and so functionalism is correct. Thus uploading is possible.

The last point is correct. We can replace neurons, and there’s no law of physics or biology that will stop us. But the statement that the two scenarios above are untenable verges on the ridiculous, since the justification for dismissing them is a nonexistent technology we have to imagine, and imagine working perfectly. On top of that, we have to accept the absolute statement that when we replace all the neurons in a human mind, we will get the exact same human mind at the end. In the real world, this simply doesn’t pass the smell test. Nothing here proves that mind uploading is physically possible, or that it will really work when we try to do it by replacing all the neurons in our brains. Not only that, but this argument has the zeal of a religious proclamation in which faith in technology trumps the very real dangers and concerns of human error in putting it together, and it renders his main point virtually impossible to prove by experiment.

# tech // nanotech / neurology / technological singularity

