why you need to pick the right experts, redux
When string theorist and popular-science presenter Michio Kaku gave biology majors, and anyone familiar with the basics of evolution and natural selection, a mild case of facepalm by asserting that human evolution has stopped despite evidence to the contrary, I warned you that he was also not well versed in computing. Now, far be it from me to think that the editors at Big Think not only read this blog but took that off-the-cuff remark as a challenge, but this week they gave Kaku an absolutely absurd question about quantum computing, then let him ramble about the collapse of the world's economy when we reach the end of what we can do with silicon processors, and about how quantum computing will yield machines smarter than humans.
To be fair, the rambling itself was somewhat necessary because the question picked for him was inane at best and could have been answered with a single yes, followed by optional, but fully justified, swearing about how the person asking it could work with a computer every day of his life and not have a clue how the device works on even its most basic and abstract levels. But that hardly fills airtime, and it's considered "mean" in the professional Q&A realm, so Kaku had to talk about something. It's just too bad he was also clueless.
All right, let's dig out of this mess of technobabble one idea at a time, starting with Moore's Law and what will happen when it inevitably grinds to a halt. Yes, despite what Kurzweil and his disciples will tell you, Moore's observation, which started as a marketing gimmick for Intel and became an artificial standard by which all chip and hardware makers judged their efforts, only holds true for so long. Once transistors become small enough, it becomes impossible to keep the flow of electrons steady and the silicon starts to overheat. Basically, there's an upper speed limit for the kinds of electronics we typically use today, and you're actually witnessing the end of ever-accelerating processors. The last computer you bought was probably not a whole lot faster than what you had before; it just had more cores per processor, or multiple processors.
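Just to put a rough number on how little runway is left, here's a back-of-the-envelope sketch in Python. The starting process node, the doubling cadence, and the atomic cutoff are my own round-number assumptions, not figures from Kaku's segment:

```python
# Back-of-the-envelope: how long until transistor features hit atomic scale?
# Assumptions: a 32 nm process node as the starting point and transistor
# density doubling every two years, which shrinks linear feature size by
# a factor of sqrt(2) per doubling.
SILICON_ATOM_NM = 0.2  # rough diameter of a silicon atom

feature_nm = 32.0
years = 0
while feature_nm > SILICON_ATOM_NM:
    feature_nm /= 2 ** 0.5  # density doubles -> features shrink by sqrt(2)
    years += 2

print(f"~{years} years until features approach single atoms "
      f"({feature_nm:.2f} nm)")
```

Under those assumptions the shrinking stops in roughly three decades, give or take, which is exactly why nobody in the industry treats Moore's Law as a law of nature.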
And those multiple processors are really important in explaining why the end of Moore's Law doesn't herald a global recession, because even after we hit a possible upper bound of around 4 GHz per processor, we can always use all sorts of little tricks to either boost computing power or get more out of it. You could make dual quad-core processors the norm, much like quad cores are now a typical laptop configuration. With certain RAID configurations and a solid state drive, you could make your computer read and write data to storage much faster. Even if you can't buy a faster processor, you'll be able to get more of them, in configurations that can truly take advantage of all that computing grunt. And I haven't even mentioned multiple superscalar processor setups yet.
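To make the multi-core point concrete, here's a minimal Python sketch that spreads a CPU-bound job across however many cores the machine happens to have. The workload is a made-up stand-in, not any real rendering or number-crunching task:

```python
# Minimal sketch: exploit more cores instead of a faster clock.
from concurrent.futures import ProcessPoolExecutor
import os

def crunch(n):
    """Stand-in for one chunk of a CPU-bound job."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8  # eight equal chunks of work
    # One worker process per core; chunks run in parallel.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(crunch, chunks))
    print(f"{len(results)} chunks done across {os.cpu_count()} cores")
```

On a quad-core box this should finish the same total work roughly four times faster than a single-threaded loop, with no clock speed increase required.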
But here's the catch. All this computing power is more marketing hype than what you will really use. When I'm coding away, running several instances of Visual Studio 2010 and SQL clients (yes, I admit, I'm a .NET guy), my computer is using maybe 60% of its processing power and less than half of its memory running all these resource-hungry apps. Most people don't strain their computers to anywhere near that level because they use their browsers, some text and spreadsheet editors, and that's really about it. The only users who need major processing oomph are dedicated gamers and graphic artists working with visual effects and video, since the calculations for rendering the graphics involved can be immense. Of course for people like that, we now have server farms which render concurrently on multiple supercharged processors, and distributed networks which can spread the processing load among as many servers as you can hook up to them.
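You don't have to take my word for those utilization numbers, either. A few lines of Python using the third-party psutil library (installed with pip install psutil) will show how idle your own machine really is:

```python
# Quick check of how hard your machine is actually working.
import psutil

cpu = psutil.cpu_percent(interval=1)   # average CPU load over one second
mem = psutil.virtual_memory().percent  # share of RAM currently in use
print(f"CPU: {cpu:.0f}%  RAM: {mem:.0f}%")
```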
As for typical home users, the only speed that really counts nowadays is how fast you can upload and download files from a web server, and with the explosion in cloud and mobile computing, those will be the only speeds that matter within the next five to ten years. So when Kaku predicts that the entire computing industry will slowly grind to a halt and plunge the world into a recession within ten years, when the fastest possible silicon chip rolls off the production line, I'm left wondering whether he knows anything at all about computers or information technology; to make forecasts like that, he must lack even a cursory awareness of today's tech trends.
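If you're curious about the speed that actually matters to you, timing a download is all it takes. A rough sketch, with a placeholder URL standing in for whatever file you'd like to test against:

```python
# Rough throughput check: time a download, compute megabits per second.
import time
import urllib.request

URL = "https://example.com/testfile.bin"  # placeholder, not a real endpoint

start = time.time()
data = urllib.request.urlopen(URL).read()
elapsed = time.time() - start

mbps = (len(data) * 8 / 1_000_000) / elapsed
print(f"Downloaded {len(data)} bytes in {elapsed:.1f} s (~{mbps:.1f} Mbps)")
```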
So now we finally make our way to his offhand predictions about what quantum computers could do for us in the future. He gets the mechanics of how they could work right, since he is a physicist after all, and astutely notes that molecular computers are more likely to get here first because they don't suffer from the same sort of quantum instabilities we see in quantum computing experiments. But like the typical tech evangelist, he's quick to promise us that quantum computers will unlock the keys to AI and superhuman machines because they'll do their quantum magic and make the entire universe suddenly computable. They won't. That's not why they're being considered, and that's not what they'll be doing once they're up and running.
Quantum computing is meant to tackle problems which are either immensely difficult to solve with today's machines, or simply couldn't be solved in a practical amount of time. Basically, in computer science, efficient solutions are ones which can compute an answer to a question in polynomial time. If you have an algorithm which hits exponential time complexity or beyond, it could take far too long to solve the problem in practice. Quantum computers offer some interesting alternatives here: by taking advantage of quantum effects, you can reduce the time complexity of certain algorithms or find answers to more elaborate problems, as well as advance our ciphers for secure data access. Shor's algorithm, for example, could factor the huge numbers behind today's public-key encryption in polynomial time, something no known classical algorithm can do. Quantum computers would do a lot of very cool things, but they're not magic, and they can't help us with a problem we don't yet understand well enough to break down into an algorithm.
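To put rough numbers on what a quantum speedup actually buys, here's a toy Python comparison of a brute-force key search done classically (about 2^n tries for an n-bit key) versus with Grover's quantum search algorithm (about 2^(n/2) tries, a quadratic speedup). The operations-per-second figure is an arbitrary assumption:

```python
# Toy comparison: classical brute force vs. Grover's quadratic speedup.
OPS_PER_SECOND = 1e9  # assumed: one billion guesses per second

for bits in (40, 80, 128):
    classical = 2 ** bits      # ~2^n guesses classically
    grover = 2 ** (bits / 2)   # ~2^(n/2) with Grover's search
    print(f"{bits}-bit key: classical ~{classical / OPS_PER_SECOND:.2e} s, "
          f"Grover ~{grover / OPS_PER_SECOND:.2e} s")
```

Even a quadratic speedup leaves a 128-bit key search hopeless, which is why the real cryptographic worry is Shor's algorithm and the public-key schemes it would break outright, not some universal quantum magic.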