
is the future of computing digital or analog?

We've reached a phase in computer development when everything old is new again. And this approach can work. Up to a point.
[illustration: 3D CPU]

Considering that virtually every electronic device you use on a daily basis is either digital or has some sort of digital component, you’d think this is a ridiculous question. Obviously digital is better, and the debate has been settled since the 1960s, otherwise we wouldn’t be using digital computing right now, right? After all, digital processing is more reliable and precise since you don’t have to deal with background noise during your typical computations. You know exactly which byte sits at which memory address without a background hiss getting in the way every once in a while, so why would anyone be willing to just drop a byte or two here and there? Well, believe it or not, analog computing isn’t completely dead, and while we’re used to thinking in binary ones and zeroes, the truth is that they’re just an abstraction of an analog signal running through the circuits. The lowest few values of that signal are assigned a zero, the highest a one, and the rest fall into a typically generous margin of useless noise with which the CPU doesn’t need to concern itself.
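To make that abstraction concrete, here’s a minimal Python sketch of the thresholding idea; the voltage cutoffs and sample values are invented purely for illustration and don’t correspond to any real logic family.

```python
# Toy illustration: digital logic as an abstraction over an analog signal.
# Values below a low threshold read as 0, values above a high threshold
# read as 1, and everything in between falls into the noise margin the
# CPU simply ignores. The thresholds here are made up for illustration.

V_LOW = 0.8    # at or below this, the signal reads as a logical 0
V_HIGH = 2.0   # at or above this, the signal reads as a logical 1

def read_bit(voltage: float):
    """Interpret an analog level as a bit, or None inside the noise margin."""
    if voltage <= V_LOW:
        return 0
    if voltage >= V_HIGH:
        return 1
    return None  # indeterminate: lost to the noise margin

samples = [0.1, 0.5, 1.3, 2.7, 3.2]
print([read_bit(v) for v in samples])   # -> [0, 0, None, 1, 1]
```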

But while the CPU has loftier concerns than background noise, that noise is actually a big issue in chip design, because flipping a bit reliably depends on suppressing it, and that means you can’t just use a tiny amount of power for the flip. If you do, you risk introducing so much noise that every process running on the chip quickly gets corrupted, or you’ll be locked into a chip design with far less precision than is needed today, something that just won’t cut it for processing modern graphics or complex data. Basically, since the 1960s we’ve been trading energy efficiency for computing heft, and while that trade has worked fine for us so far, DARPA is wondering whether it’s worth continuing to build giant server farms to handle the huge data loads we impose on our machines today, especially in military intelligence. Their solution? Turning to something known in comp sci circles as PCMOS: chips a lot like the ones we have now, but designed to tolerate a certain percentage of error by running at lower power and relying on baked-in error correction.
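To get a feel for that trade-off, here’s a rough Python sketch that treats each bit read as a coin flip whose odds of going wrong grow as the supply voltage drops; the exponential error curve is a made-up stand-in, not the actual probability model from the PCMOS literature.

```python
import math
import random

# Toy model of the PCMOS trade-off: spend less energy per bit and noise
# corrupts more of your bits. The exponential error curve below is a
# made-up stand-in, not the model from the actual PCMOS papers.

def error_rate(voltage: float, noise: float = 0.1) -> float:
    """Probability that a single bit is read incorrectly at this voltage."""
    return 0.5 * math.exp(-voltage / noise)

def read_bits(bits, voltage):
    """Read a stream of bits through a noisy, low-power gate."""
    p = error_rate(voltage)
    return [b ^ (random.random() < p) for b in bits]

data = [1, 0, 1, 1, 0, 0, 1, 0] * 1000
for v in (1.0, 0.5, 0.3, 0.2):
    noisy = read_bits(data, v)
    flipped = sum(a != b for a, b in zip(data, noisy))
    print(f"supply {v:.1f} V -> {flipped / len(data):.2%} of bits flipped")
```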

Data does get lost when PCMOS is used for tasks like image and video processing, and there are very obvious blurs and pixelation once the acceptable error rate climbs beyond just 7% or so. The test images in the papers and around the web, showing an entity purported to be a human in a construction hat, though it would be entirely unsurprising if it asked to be taken to our leader, if anything demonstrate just how easy it is to seriously degrade performance by tolerating noise in CPUs. And for an entity like DARPA, blurry and patchy video and image streams are hardly useful, especially if they’re supposed to mark targets for drones, gather intelligence, and survey potential combat zones. This is why the agency wants to take things further and merge PCMOS with something a lot like IBM’s artificial neural network chips, proposing a self-organizing analog CPU which learns how to build its own auto-correction right into the path the electrons are supposed to take. If some data gets lost or garbled, that’s fine. Think of it as a TCP/IP structure in hardware: IP constantly loses data packets or shifts them out of order, sacrificing reliability for raw speed, while TCP straightens out the packets and recovers lost data, fixing what IP didn’t get quite right.
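The layering in that analogy can be sketched in a few lines of Python: an unreliable lower layer that occasionally flips bits, and a corrective layer on top that repairs most of the damage. Here the correction is plain triple redundancy with majority voting, chosen only for brevity; it isn’t the scheme DARPA or the PCMOS papers actually describe.

```python
import random

# A sketch of the layering in the TCP/IP analogy: a fast but lossy layer
# underneath (bits read through a noisy gate) and a corrective layer on
# top that repairs most of the damage. The correction here is simple
# triple redundancy with majority voting, used purely as an illustration.

def noisy_read(bit: int, p_error: float) -> int:
    """The unreliable lower layer: each read may flip the bit."""
    return bit ^ (random.random() < p_error)

def corrected_read(bit: int, p_error: float) -> int:
    """The corrective upper layer: read three times, take the majority vote."""
    votes = sum(noisy_read(bit, p_error) for _ in range(3))
    return 1 if votes >= 2 else 0

p = 0.07  # roughly where images start visibly falling apart
trials = 100_000
raw = sum(noisy_read(1, p) != 1 for _ in range(trials))
fixed = sum(corrected_read(1, p) != 1 for _ in range(trials))
print(f"raw error rate:      {raw / trials:.2%}")
print(f"after majority vote: {fixed / trials:.2%}")
```

The catch, of course, is that every redundant read spends some of the power the low-voltage gates were supposed to save, which is exactly the overhead question that follows.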

All right, so we could probably create a PCMOS chip with a hardwired ANN to better organize how it handles certain types of data. But how much overhead will the ANN add? If it’s dynamic, it will need a good deal of power while it re-trains itself. You could make self-correcting PCMOS chips with a dedicated mission, depending on their network’s training, and save power that way, but will the results come as fast or be as usable? How will the system perform over the long haul? I can keep a typical computer powered on for more than a month with no adverse effects. Can I do the same with a PCMOS-based machine? Would the errors accumulate faster than the garbage collector can deal with them, making it more prone to freezes and system crashes? Will we need a new operating system with a more powerful garbage collector? And perhaps DARPA may want to consider a recent study showing that in modern chips the Casimir force seems to be the most important factor in how electrons move through the logic gates, and that this force could actually be manipulated by how the gates are spaced and shaped. If we want to make our CPUs more efficient, we may want to rethink how to build existing logic gates without worrying about error correction and hardwired ANNs…

# tech // computer chips / computer science / computing
