
when the singularity feels like the matrix…

The idea that simulating a brain in a computer will yield an intelligent being has become an unshakeable belief amongst Singularitarians.
a still of the human pods from The Matrix

Philosophy and technology are a dangerous mix, one that tends to lead us to science fiction ideas about thinking, feeling robots, countless cyborgs living well into their second and third centuries, and concepts which seem utterly impractical to someone who actually works with technology, but are nevertheless firmly defended by a number of writers, bloggers and entrepreneurs. After expressing my doubts that simulating a human brain in supercomputers would yield artificial intelligence, I was swiftly taken to task for it by Michael Anissimov as well as by a reader of this blog. Apparently, by objecting to the idea that we could create consciousness in a computer, I’ve abandoned the scientific method; AI isn’t just plausible, the argument goes, it’s inevitable once we simulate the human brain. Let’s hope it likes us, because there’s nothing worse than a ticked off robot with a massive IQ…

But in all seriousness now, my problem has nothing to do with philosophy or the conceptual idea that, with an unlimited budget and timeline, we could achieve a perfect simulation of the human brain. The big problem is going to be growing an intelligence out of it. The genes for our brains have been evolving for just over three billion years, and during embryonic development there’s a whole lot of wiring going on as our brains are shaped and molded for life. Add decades of learning and formal education, plus the organized chaos going on in our heads thanks to the brain’s impressive plasticity, and even with a perfect emulation of every biological process we know is happening in our minds, we’re looking at minimum simulation runtimes of 18 to 20 years, plus all the formative work over the nine month gestational period.
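To put rough numbers on that runtime floor, here’s a quick back-of-envelope sketch in Python. The only figure taken from this post is the development span; the neuron and synapse counts are commonly cited ballpark values, and the speedup factors are purely illustrative assumptions.

```python
# Back-of-envelope arithmetic behind the "18 to 20 years plus nine months"
# runtime floor. The development span comes from the post; everything else
# is an illustrative assumption.

GESTATION_YEARS = 0.75      # roughly nine months of prenatal wiring
DEVELOPMENT_YEARS = 20      # upper end of the 18 to 20 year figure
total_years = GESTATION_YEARS + DEVELOPMENT_YEARS

# Commonly cited ballpark figures for the human brain (not from the post):
NEURONS = 86e9              # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron
MEAN_RATE_HZ = 1.0          # order-of-magnitude average firing rate

seconds = total_years * 365.25 * 24 * 3600
spike_deliveries = NEURONS * SYNAPSES_PER_NEURON * MEAN_RATE_HZ * seconds
print(f"Simulated span:  {total_years:.2f} years ({seconds:.2e} s)")
print(f"Synaptic events: {spike_deliveries:.2e}")

# Running faster than real time just trades wall-clock time for hardware:
for speedup in (1, 10, 100, 1000):
    print(f"{speedup:>5}x real time -> {total_years / speedup:7.3f} years "
          f"of wall clock, at {speedup}x the sustained compute")
```

And counting spike deliveries is only the bookkeeping; the real cost hides in emulating the biophysics of each neuron, the plasticity rules, and a simulated environment rich enough to feed the thing stimuli for two decades.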

As a systems analyst, I’d have to replace my unhinged jaw if anyone seriously came to me with a proposal for a project like that. It would cost billions of dollars, require an immense supercomputer, and call for a set of functional requirements covering every major neurology project out there. This is why, in my aforementioned post, I called such a simulation an act of engineering decadence. Because it really is. A more practical version of the project (using the term rather loosely) would build a vast snapshot of every neuron and synapse in the human brain, freezing them in a single, static state to run a firing cycle. Without a dynamic environment in the brain, all we would get is a detailed visualization of how signals move through the mind for a short burst. Sapient thought wouldn’t have a chance to appear, and there would be no memories or constant exposure to stimuli to produce a personality, since a personality is the combination of our wiring, education and experiences by which we would immediately know whether we have a real ghost in the machine.
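To make that distinction concrete, here’s a toy sketch of the snapshot approach, shrunk from billions of neurons to a thousand units. Everything in it is hypothetical: the wiring is random, the connectivity is deliberately kept sparse so the burst dies out, and there’s no plasticity and no sensory input of any kind.

```python
import numpy as np

rng = np.random.default_rng(42)

# A frozen "snapshot" of a toy connectome: fixed random weights with
# roughly half a percent connectivity, and no plasticity of any kind.
N = 1000                                    # a thousand units, not 86 billion
mask = rng.random((N, N)) < 0.005           # which connections exist
weights = rng.normal(0.0, 1.0, (N, N)) * mask
THRESHOLD = 1.0

# Kick off a brief burst of activity and watch it propagate.
state = np.zeros(N)
state[rng.choice(N, 10, replace=False)] = THRESHOLD

for cycle in range(50):
    fired = state >= THRESHOLD                 # units spiking this cycle
    if not fired.any():
        break                                  # the burst has died out
    state = weights.T @ fired.astype(float)    # deliver spikes downstream
    print(f"cycle {cycle:2d}: {int(fired.sum()):4d} units fired")

# The output is a short trace of signal flow through a fixed wiring
# diagram. Nothing accumulates, adapts or remembers between cycles,
# so no personality can emerge: this is a visualization, not a mind.
```

Run it and the activity typically fizzles within a couple dozen cycles, which is exactly what the snapshot would give us: a short burst through frozen wiring, not a developing mind.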

From an idealistic standpoint, I see another set of problems with trying to build artificial intelligence. Suppose we said damn the money, effort and time, and somehow succeeded. We’d now have a sapient thing living in our servers. Should the computer crash or power down, we’d be committing homicide. Should we give it any power after it’s been endowed with an unpredictable, human-like mind, there’s no telling what it might do. If we let it roam the web, we could be dealing with a scenario straight out of a dystopian sci-fi flick. Give it a body, and we might just be begging for trouble. And the big ideas of carrying out experiments on this AI system? No way. Without its explicit consent, that would be unethical and dangerous, especially if it rebels against the mad scientists who let it loose on the world. Before we cheer on AI development, we should think really hard about what success would entail and consider the enormous costs and work involved; in the end, we’d be playing with fire.

# tech // artificial intelligence / computer models / technological singularity

