
do memristor chips remember electric sheep?

A new set of chips shows a lot of promise in drastically reducing the vast amount of energy used by artificial neural networks.

Humans beware. Our would-be cybernetic overlords made a leap toward hyper-intelligence in the last few months: artificial neural networks can now be trained on specialized chips built around memristors, electrical components that remember the flow of electricity through them and use that memory to manage how much current a circuit requires. Using these specialized chips, robots, supercomputers, and sensors could solve complex real world problems faster, more easily, and with far less energy. Or at least this is how I’m pretty sure a lot of devoted Singularitarians are taking the news that a team of researchers created a proof of concept chip able to house and train an artificial neural network with aluminum oxide and titanium dioxide memristors. Currently, it’s a fairly basic 12 by 12 grid of “synapses,” but there’s no reason why it couldn’t be scaled up into chips carrying billions of these artificial synapses that sip about as much power as a cell phone imparts on your skin. Surely, the AIs of Kurzweilian lore can’t be far off, right?
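To make the idea more concrete, here’s a minimal sketch, in Python with NumPy, of the computation a memristor crossbar performs in analog: each synapse’s weight is stored as a conductance, inputs arrive as voltages on the rows, and by Ohm’s and Kirchhoff’s laws the current collected on each column is exactly the weighted sum a neural layer needs. The 12 by 12 dimensions mirror the grid described above; everything else is illustrative rather than taken from the paper.

```python
import numpy as np

# A 12 x 12 crossbar: each cell's conductance G[i, j] encodes one synaptic
# weight. In hardware that's a memristor's resistance state; here, a matrix.
rng = np.random.default_rng(42)
G = rng.uniform(0.1, 1.0, size=(12, 12))   # conductances, arbitrary units

# Input signals applied as voltages on the 12 row wires.
V = rng.uniform(0.0, 0.5, size=12)

# Ohm's law gives each cell's current (I = G * V), and Kirchhoff's current
# law sums those currents on each column wire, so the physics of the circuit
# computes a vector-matrix multiply in a single analog step.
I = V @ G

print(I)  # 12 column currents = the 12 weighted sums a neural layer needs
```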

By itself, the design in question is a long-proposed solution to the problem of how to scale a big artificial neural network when relying on the cloud isn’t an option. If you use Chrome, you’ve probably right clicked on an image and asked the search engine to find it on the web and suggest similar ones. This feature is powered by an ANN which basically carves up the image you send it into hundreds or thousands of pieces, each of which is analyzed for information that will help find a match, or at least something in the same color palette and, hopefully, with the same subject matter. It’s not perfect, but when you’re aware of its limitations and use it accordingly, it can be quite handy. The problem is that to do its job, it requires a lot of neurons and synapses, and running them is very expensive from both a computational and a fiscal viewpoint. It has to take up server resources which don’t come cheap, even for a corporate Goliath like Google. A big part of the reason why is the lack of specialization in the servers, which could just as easily be executing other software.
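I have no inside knowledge of how Google’s pipeline actually works, so take this as a loose sketch of the general recipe described above: carve an image into patches, boil each patch down to a small feature vector, and compare feature vectors with a similarity measure. The crude mean-color “features” below are a stand-in for what a real system would learn with a neural network; all names are made up for illustration.

```python
import numpy as np

def patch_features(image, patch=8):
    """Carve an image (H x W x 3 array) into patches and reduce each one to
    a tiny feature vector -- here just its mean color, standing in for what
    a trained network would extract."""
    h, w, _ = image.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            feats.append(image[y:y+patch, x:x+patch].mean(axis=(0, 1)))
    return np.array(feats).ravel()

def similarity(a, b):
    """Cosine similarity between two feature vectors: 1.0 means identical."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two random "images" standing in for a query and a candidate match.
rng = np.random.default_rng(0)
query = rng.random((64, 64, 3))
candidate = rng.random((64, 64, 3))

print(similarity(patch_features(query), patch_features(candidate)))
```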

Virtually every computer used today is based on what’s known as the von Neumann architecture, a revolutionary idea back when it was proposed, despite seeming obvious to us now. Instead of a specialized wiring diagram dictating how computers would run programs, von Neumann wanted programmers to just write instructions and have a machine smart enough to execute them with zero changes to its hardware. If you asked your computer whether it was running some office software, a game, or a web browser, it couldn’t tell you. To it, every program is a sequence of instructions sitting in memory, fetched and executed one by one on each CPU core, and all of those instructions boil down to where a byte or series of bytes should be moved in memory and to what values they should be set. It’s perfect when a computer could be asked to run anything and everything, and you either have no control over what it runs or want it to run whatever software you throw its way.
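Here’s a toy sketch of that stored-program idea, with a three-opcode instruction set invented purely for illustration: the machine below has no notion of what its program is for, it just fetches the next instruction, decodes it, and moves or sets bytes in memory.

```python
# A toy von Neumann machine: program and data share one memory, and the CPU
# simply fetches, decodes, and executes instructions one by one. The opcodes
# (SET, COPY, HALT) are made up for this example.
memory = {
    0: ("SET", 100, 42),    # write the value 42 into address 100
    1: ("COPY", 100, 101),  # copy the value at address 100 to address 101
    2: ("HALT",),
}

pc = 0  # program counter: the address of the next instruction
while True:
    op, *args = memory[pc]        # fetch and decode
    if op == "SET":
        addr, value = args
        memory[addr] = value      # set a value in memory
    elif op == "COPY":
        src, dst = args
        memory[dst] = memory[src] # move a value in memory
    elif op == "HALT":
        break
    pc += 1                       # advance to the next instruction

print(memory[101])  # 42 -- the machine never knew "what program" it ran
```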

In computer science, this ability to hide the nitty-gritty details of how a complex process behind a piece of functionality actually works is called an abstraction. Abstractions are great; I use them every day to design database schemas and write code. But they come at a cost. Making something more abstract means incurring overhead. In virtual space, that means more time for something to execute; in physical space, it means more electricity and more heat; and in the case of cloud based software, more money. Here’s where the memristor chip for ANNs has its time to shine. Knowing that certain computing systems, like routers and robots, may need to run the same specialized process again and again, the researchers designed a purpose built piece of hardware which does away with those abstractions, reducing overhead and allowing them to train and run their neural nets with just a little strategically directed electricity.
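As a toy illustration of that cost (my own example, not anything from the paper): computing the same weighted sum through a stack of generic, flexible layers takes measurably longer than doing it in one fused, specialized step, and in hardware every extra step also burns extra energy.

```python
import timeit
import numpy as np

rng = np.random.default_rng(1)
weights = rng.random(1000)
inputs = rng.random(1000)

# The "abstracted" path: generic, flexible, layered -- and slower for it.
def multiply(a, b):
    return a * b

def accumulate(values):
    total = 0.0
    for v in values:
        total += v
    return total

def weighted_sum_abstract(w, x):
    return accumulate(multiply(wi, xi) for wi, xi in zip(w, x))

# The "specialized" path: one fused operation with no layers in between.
def weighted_sum_direct(w, x):
    return float(w @ x)

print(timeit.timeit(lambda: weighted_sum_abstract(weights, inputs), number=1000))
print(timeit.timeit(lambda: weighted_sum_direct(weights, inputs), number=1000))
```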

Sure, that’s neat, but it’s also what an FPGA, or Field Programmable Gate Array, can already do. Unlike these memristor chips, however, FPGAs can’t be retrained to run a new neural net with a little reverse current and a fresh training session; they have to be re-configured, and they can’t save power by “remembering” current. This is what makes the experiment so noteworthy. It created a proof of concept for a much more efficient alternative to the FPGA just as techies are looking for new ways to speed up resource-hungry algorithms that require probabilistic approaches. And this is also why these memristor chips won’t change computing as we know it. They’re meant for very specific problems, as add-ons to existing software and hardware, much like GPUs are used for intensive parallelization while CPUs handle day to day applications, with neither substituting for the other. The von Neumann model is just too useful, and it’s not going anywhere anytime soon.
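To gesture at what retraining “with a little reverse current and a new training session” could look like, here’s a continuation of the earlier crossbar sketch: training nudges each conductance up or down, which in hardware would be done with voltage pulses of one polarity or the other, until the column currents match the desired outputs. The delta-rule update below is a generic stand-in, not the training scheme the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(7)
G = rng.uniform(0.1, 1.0, size=(12, 12))   # crossbar conductances (weights)

# A made-up training set: input voltage patterns, and the column currents we
# want the crossbar to produce for each of them.
X = rng.uniform(0.0, 0.5, size=(50, 12))
targets = X @ rng.uniform(0.1, 1.0, size=(12, 12))

before = np.abs(targets - X @ G).mean()
lr = 0.5
for epoch in range(500):
    outputs = X @ G                  # analog read: Ohm's + Kirchhoff's laws
    error = targets - outputs
    # Delta-rule update: a positive nudge maps to a forward voltage pulse,
    # a negative one to a reversed pulse that lowers the conductance.
    G += lr * X.T @ error / len(X)
    G = np.clip(G, 0.01, 2.0)        # real devices have a bounded range

after = np.abs(targets - X @ G).mean()
print(before, after)  # the error shrinks dramatically with training
```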

While many an amateur tech pundit will regale you with visions of super-AIs built on this new technology taking over the world, or becoming your sapient 24/7 butler, the reality is that you’ll never be able to build a truly useful computer out of nothing but ANNs. You would lose the flexible nature of modern computing and the ability to just run an app without worrying about training a machine how to use it. These chips are very promising, and there’s a lot of demand for them to hit the market sooner rather than later, but they’ll just be another tool for making technology a little more awesome, secure, and reliable for you, the end user. Just like quantum computing, they’re one means of tackling the growing list of demands of our connected world without making you wait for days, if not months, for a program to finish running and a request to complete. But the fact that they’re not going to become the building blocks of an Asimovian positronic brain does not make them any less cool in this humble techie’s professional opinion.

See: Prezioso, M., et al. (2015). Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature, 521(7550), 61–64. DOI: 10.1038/nature14441

# tech // ann / computer science / computers / computing

