
michael vassar vs. weird things, round two

Michael Vassar is back for the second round of our debate about Ray Kurzweil and The Technological Singularity.

Here it is, the second episode of my exchange with Michael Vassar, the president of the Singularity Institute. If you caught the first edition, get ready to switch gears a bit and dive into theoretical computer science as we explore the ideas of mind uploading, simulating human brains with supercomputers and what that may mean for scientists while encountering a scenario that could make Descartes shudder in terror…

“Please Leave Your Brain Where It Is” expresses a fairly detailed argument that brains are unlike computers. This is obviously true. The brain doesn’t resemble a computer very closely, certainly not closely enough for such a resemblance to convince us that computers could think. Turing said so quite clearly in his seminal article “Computing Machinery and Intelligence,” and I addressed precisely this misconception in my recent article in Forbes online.

The post itself dealt with Ray Kurzweil’s idea of uploading human minds to a computer by mid-century, which he expressed in The Singularity Is Near with a description very reminiscent of the classic anime Ghost In The Shell. In order to upload a brain anywhere, you need a system that actually works like a brain so whatever you upload will actually function. Since computers don’t meet that criterion, an upload seems like an idea that’s completely unrealistic to implement.

In paragraphs four and five of “Ray Kurzweil’s Digital Pipe Dreams” you seem to say that Ray is promoting some process of sucking information out of a person while discarding all the rest of what they are. I can’t really make any sense out of such a proposal, but I am confident no one I know of is making it.

So is Ray alone in making this claim? Here’s Wikipedia’s summary of his aforementioned book, available for free, public reference. In the description of Ray’s vision for the 2030s, we find a mention of mind uploading, the exact idea that post and its follow-up tackle. Let’s remember that Kurzweil’s goal is to cheat death with cutting-edge technology, so simply copying his mind to a supercomputer wouldn’t get the job done. He’d need a full-blown brain-to-machine transfer.

People do propose simulating brains, which will of course require gathering a lot of information from them. The first two talks at the 2009 Singularity Summit will discuss technical details relating to brain simulations, but the important claim is simply that the brain is a physical system and it is possible for a computer to simulate any physical system. If a physical brain is interested in steak or in sex, as in your examples, a simulation of that brain which produces brain-like behavior will also be interested in steak or sex, or at least will transform inputs into outputs as if it were.
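To make the kind of simulation Vassar is describing concrete, here is a minimal sketch in Python of a leaky integrate-and-fire neuron, a standard textbook toy model of how a neuron’s membrane voltage responds to input current. This is purely illustrative: the function name and every parameter value are assumptions chosen for the example, not measurements from any real brain, and an actual brain simulation would involve billions of far more detailed units.

```python
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    Integrates dV/dt = (-(V - v_rest) + R * I) / tau one time step at a
    time; whenever V crosses the threshold, the neuron 'spikes' and the
    voltage is reset. Returns the list of spike times in seconds.
    """
    v = v_rest          # membrane voltage starts at rest (mV)
    spike_times = []
    for step, current in enumerate(input_current):
        # Euler step of the membrane equation: the voltage leaks back
        # toward rest while the input current pushes it upward.
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_threshold:
            spike_times.append(step * dt)
            v = v_reset  # fire and reset, like a real neuron
    return spike_times

# A constant input current for one simulated second makes the model
# neuron fire at a regular rate.
spikes = simulate_lif([2.0] * 1000)
```

The point of the sketch is Vassar’s: the neuron here is just a physical system described by an equation, and stepping that equation forward on a computer reproduces its input-to-output behavior without the computer resembling a neuron in any way.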

Ah, but it’s not that simple. You need stimuli and a way to virtually control those urges. In effect, you would be a puppet master running through the brain’s routine as understood by the developers who write the software by which it functions. But really, that’s beside the point when it comes to mind uploading, because the way a simulated brain will work will be very different from the way a human brain does, as we both agree.

In general, we currently lack a robust theory of consciousness. Most Singularitarians do think that a simulation that behaves exactly like them must be conscious, but the truth […] of this claim doesn’t have any bearing on the practical impact of simulated humans.

Actually, it does. If a simulated human brain is conscious, aware and capable of reasoning, anything you do to it must follow the same ethical guidelines that apply to any other person. If you were to run a test on a conscious brain in a computer and your test caused a critical system crash, you would have technically committed homicide. The laws and rights for human beings are based on our ability to reason and our sapience. If your creation has a consciousness and is aware of the environment around it, it should have the same legal rights as a human. The testing and experiments you go on to mention could be severely restricted by ethical guidelines, and rightfully so. However, I don’t see any reason why a simulated brain would be capable of consciousness, since it would simply model the chemical and electrical signals in our brains by solving formulas.

And this is where I have to raise another objection to using this simulated brain in a medical experiment or for clinical research. The brain would be built by software designers and developers, and thus it would be based on their understanding of the brain and how it works. But that understanding might be wrong, so a doctor trying to figure out something about the brain would of course be skeptical and would need to confirm that the brain truly works the way it does in the simulation. Even if everything works fine, the fact that it’s a simulation and not an actual brain would restrict the applicability of the research, just as a cosmological model has to be supported by astronomical observations before it becomes a full-blown theory.

# tech // computer science / computers / technological singularity
