looking for a ghost in the machine, redux

Philosopher Nick Bostrom is still barking up the same fundamentally wrong tree of emergent superintelligence.

Sometimes I envy philosophers. In what other discipline can one write extensive papers based on a random idea, just running with it for pages on end to see how far it will go? Whenever I have to describe an algorithm, there’s always someone nagging about showing time complexity, in Θ(g(x)) if possible, with a proof attached, especially when the function is recursive and you need to solve a recurrence relation. Yeah, the mathematical analysis of algorithms is really not my favorite subject area and I’m thrilled to hand it off to the mathematicians whenever possible. All most coders need to know is not to put a lot of nested loops into their algorithms and they’ll run just fine. But technophile philosopher Nick Bostrom, whose work we’ve encountered before, and mentioned many times since, can look into the future of computing and come up with the argument that all of us could well be living in a giant computer simulation being run by an incredibly advanced civilization, no tedious mathematical proofs attached. His paper is hardly what we’d call evidence-based, but it sounds impressive and pop-scientific enough to be enthralling, which is why so many news outlets ran with it when it first came out, and why it’s now casually mentioned in quite a few philosophical discussions about our existence.

Now, to be fair, his argument was misrepresented to say that he was advocating that we really do live in such an odd, virtual environment like the protagonists of The Matrix when he actually said no such thing. His point was merely that if a civilization lives long enough and has enough computers, as well as enough motivation to build and run such a simulation, it would run it. And if such a simulation were to exist, it would create so many conscious entities that it would be statistically more likely that you are a simulated life form rather than a real one, and that what we see as the universe is actually inside another universe, being run as a complex, digital version of the cosmos. I know, it’s like The Matrix meets Inception, though only the former was out in theaters when Bostrom was writing his paper. And he does use math to estimate roughly how much computing power would be needed to simulate consciousness, both to give his argument some weight and to show that we’re close to meeting the required processing speed, with ideas for how to far exceed the computational capability we’d need to eventually run the simulation in question, provided we’re around long enough to master nanoscale computing, into which we’re just now taking our very first, tentative steps by playing around with materials chilled to within a smidgen of absolute zero.

So how does Bostrom’s math measure up, and how accurate are his predictions for hypercomputers of the far future? Well, the nice thing about trying to estimate where computing will eventually go is that the laws of physics govern the ultimate speed limits for information processing. One study on the subject set an upper bound for future quantum computing systems at 100 exaflops, which is about 50,000 times faster than our fastest supercomputers today. However, this estimate is for a perfect, flawless machine which only exists on paper as a set of equations. In the real world, it would have to be throttled down to keep the flow of photons nice and steady. Too much overhead noise and the quantum states used to speed up a traditional computation begin to collapse and fall apart, resulting in a cascade that effectively crashes the machine mid-calculation. More realistically, we’d be talking about a 10 exaflop machine, plenty to meet what Bostrom cites as the benchmark required to simulate a conscious human entity: about 100 petaflops. Whether we could ever have this much processing power available on one chip could be a post in its own right, but suffice it to say that it’s very doubtful the final simulation machine would be something dainty with just a few CPUs. It would more likely be a monstrous server farm, drawing immense amounts of energy and requiring cooling to virtually absolute zero to function, but it doesn’t seem outwardly implausible to eventually build.
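To put those numbers side by side, here’s a quick back-of-envelope sketch using only the figures cited above; the variable names are mine, and the assumption that raw processing speed is the only constraint is, of course, exactly the assumption in question:

```python
# Order-of-magnitude comparison of the processing estimates above.
# All figures come from the rough numbers cited in the text.

ideal_quantum_limit = 100e18   # 100 exaflops: theoretical upper bound on paper
realistic_machine = 10e18      # 10 exaflops: throttled, real-world estimate
brain_benchmark = 100e15       # 100 petaflops: Bostrom's per-mind benchmark

# How many human-level minds the realistic machine could host at once,
# if (generously) flops were the only thing that mattered:
minds = realistic_machine / brain_benchmark
print(int(minds))  # → 100
```

So even the throttled-down machine would, on this accounting, run a hundred minds at once, which is why the hardware side of the argument isn’t where it falls apart.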

All right, so far, so good. But aren’t we forgetting something kind of important? We know that it might very well be possible for a computer of the future to process about as many instructions per second as our brains, but does that actually add up to anything? Bostrom’s baseline assumption is that consciousness is just a form of computing, and since computation is really just an abstract concept rather than something tied to any particular process within a programmable machine, it’s substrate-independent. In other words, replicate the very same neural communications in your brain within a cluster of CPUs, and the machine will become conscious, self-aware, and start thinking like a human. This is why he focuses so much on making sure that computers can match our brains in terms of processing speed. However, we really don’t know whether consciousness is substrate-independent. Keep in mind that our neurons don’t behave like logic gates, and much of the buzz in our heads is just that: background buzz. More likely, we would map all the brain’s connections, model the exact information exchanged between them, load all those petabytes into a massive hypercomputer which can run through 100 quadrillion instructions per second, hit the on button, and be rewarded with a stream of readouts showing how our virtual model is running, and nothing more. It would be a major accomplishment, but it wouldn’t awaken whatever ghosts Bostrom thinks reside in these machines. There’s probably no ghost to awaken.
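To make that last point concrete, here’s a deliberately tiny sketch of what such a simulation actually produces. It’s a toy leaky integrate-and-fire network, a standard textbook neuron model, not anything Bostrom specifies, and all the parameters are illustrative. Run it, and what you get is exactly what the paragraph above describes: a stream of readouts, and nothing else.

```python
import random

random.seed(42)  # reproducible noise

# Toy leaky integrate-and-fire network: 5 neurons, random sparse wiring.
# Parameters are illustrative, not biologically calibrated.
N, THRESHOLD, LEAK, STEPS = 5, 1.0, 0.9, 20
weights = [[random.uniform(0.0, 0.6) if i != j else 0.0 for j in range(N)]
           for i in range(N)]
voltage = [0.0] * N
spiked = [False] * N

for t in range(STEPS):
    new_v = []
    for i in range(N):
        # Membrane potential: leak, plus input from neurons that
        # spiked last step, plus background noise ("the buzz").
        v = voltage[i] * LEAK
        v += sum(weights[j][i] for j in range(N) if spiked[j])
        v += random.uniform(0.0, 0.3)
        new_v.append(v)
    spiked = [v >= THRESHOLD for v in new_v]          # fire at threshold...
    voltage = [0.0 if s else v for v, s in zip(new_v, spiked)]  # ...then reset
    # The network's entire observable "experience", from the outside:
    print(f"t={t:2d} spikes={''.join('|' if s else '.' for s in spiked)}")
```

Scale this up by eleven orders of magnitude and you have, in caricature, the hypercomputer scenario: a perfectly faithful trace of activity, with nothing in the printout that tells you whether anyone is home.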

See: Bostrom, N. (2003). Are You Living in a Computer Simulation? Philosophical Quarterly, 53(211), 243–255.

# tech // computer science / futurism / philosophy / singularitarians
