
why computers of the future may have to sleep

Turns out artificial intelligence needs a nap every once in a while to stay accurate, and that may tell us something fundamental about our own minds.

We’re constantly told that unlike humans, machines don’t need rest. They’re relentless and will remain on task long after even the most impressive human mind or body falters from fatigue. If we overwork, we get sick and need a break, while a machine just needs a little maintenance or a software update once in a blue moon, if it’s built and programmed well. But researchers trying to perfect a type of artificial intelligence meant to allow computers to process real-world sensory data in real time found a flaw in this assumption. After being put through their paces for a little too long, the neural networks in their experiments became unstable, and the only way to get them back to normal was to make them sleep.

Here’s what happened. The neural networks in question are known as spiking neural networks, or SNNs, and are designed to mimic the behavior of biological neurons more closely than a traditional AI. Normally, a neuron in an artificial neural network receives a series of inputs, each with an associated weight. If the weighted sum of those inputs exceeds a threshold, the neuron “fires” and passes the data on to the next layer of the network or produces an output. So far so easy, right? Unfortunately, this approach alone is insufficient when dealing with subtle and complex real-world data, so neural networks designed to parse images and sounds often feature specialized layers to further filter and process inputs.
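For the curious, here’s what that textbook threshold neuron looks like in code. This is a minimal sketch, with the weights and threshold made up purely for illustration:

```python
import numpy as np

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs crosses the
    threshold; otherwise stay silent (return 0)."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

# Hypothetical three-input neuron that fires once its drive reaches 1.0
print(threshold_neuron([0.9, 0.3, 0.5], [0.8, 0.2, 0.4], 1.0))  # 0: weighted sum is 0.98
print(threshold_neuron([0.9, 0.9, 0.9], [0.8, 0.2, 0.4], 1.0))  # 1: weighted sum is 1.26
```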

Enter the SNN. Since biology has perfected capturing and understanding visual and auditory data, computer scientists have been eager to borrow from it, which led to a class of AI known as neuromorphic neural networks. SNNs are one such model, and their claim to fame is that instead of their neurons simply treating their inputs as weighted sums, they use differential equations to replicate a biological neuron’s behavior, specifically its action potential. It’s a powerful technique, but researchers found training SNNs to be a bit of a mess. As they try to absorb massive libraries of information during training, they start to “hallucinate,” unable to tell noise from signal in their training datasets.
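The exact equations vary from model to model, but a common workhorse in SNNs is the leaky integrate-and-fire neuron: the membrane voltage constantly leaks back toward a resting value, input current pushes it up, and crossing a threshold triggers a spike and a reset. Here’s a rough sketch, with all the constants chosen just for illustration:

```python
import numpy as np

def leaky_integrate_and_fire(current, dt=1.0, tau=20.0, v_rest=-65.0,
                             v_thresh=-50.0, v_reset=-70.0):
    """Simulate one spiking neuron over time: the membrane voltage leaks
    toward rest, input current pushes it up, and crossing the threshold
    emits a spike followed by a reset."""
    v = v_rest
    spikes = []
    for i in current:
        # Euler step of dV/dt = ((v_rest - v) + i) / tau
        v += dt * ((v_rest - v) + i) / tau
        if v >= v_thresh:
            spikes.append(1)  # action potential
            v = v_reset       # reset below rest, like a real neuron
        else:
            spikes.append(0)
    return spikes

# Constant input strong enough to make the neuron spike periodically
spike_train = leaky_integrate_and_fire(np.full(100, 20.0))
print(sum(spike_train), "spikes in 100 time steps")
```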

A team led by computer scientist Yijing Watkins encountered this problem when trying to work with a chip meant for biometric identification devices and decided to let the SNNs do what we do when we’re tired. Rather than feed them more training data, the team exposed the SNNs to the computer version of the signals our brains get when we enter deep, restorative sleep. In other words, quips about AI taking a nap weren’t poetic license. That’s literally what happened. After listening to some static, the networks returned to training refreshed, with far more stable and predictable results. Amazingly, this is all very similar to what happens in our brains when we get tired and after we rest.
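The team’s actual training code isn’t reproduced here, but the idea as described, periodically swapping real training data for static resembling the signals of slow-wave sleep, can be sketched roughly like this. The `network` object, its `train_step` method, and its `input_shape` attribute are hypothetical placeholders, and Gaussian noise is an assumption standing in for the “static”:

```python
import numpy as np

def train_with_naps(network, dataset, epochs, sleep_every=4, noise_steps=500,
                    rng=np.random.default_rng(0)):
    """Hypothetical training loop: after every few epochs of real data,
    feed the network nothing but Gaussian static, a stand-in for slow-wave
    sleep, so unstable weights can settle back down."""
    for epoch in range(epochs):
        for sample in dataset:
            network.train_step(sample)       # ordinary learning
        if (epoch + 1) % sleep_every == 0:
            for _ in range(noise_steps):     # the nap
                static = rng.normal(size=network.input_shape)
                network.train_step(static)
    return network
```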

In fact, sleep appears crucial for living things in general. Failing to get deep, restful sleep has been linked to neurodegenerative diseases like Alzheimer’s, and we absolutely need sleep to form long-term memories and learn new skills. This experiment with SNNs inadvertently mimicked what we think happens to our minds during sleep deprivation, which hints that we’re getting a little closer to building artificial brains that are more like ours, and learning more about our own cognition in the process. In the long term, this opens up more avenues for integrating AI with our own minds and bodies, and maybe even pushing that envelope further into the realm of what was science fiction only a few years ago.

Today, when we talk about AI and automation, it’s usually framed in the language of jobs and economics in which humans are pitted against machines by nefarious misanthropic billionaires. But that’s not why we’re developing these systems or why we thought of them in the first place. They were supposed to be an extension of our intelligence, creativity, and adaptability, and a means to learn more about ourselves in the process. Maybe at some point in the future, when we realize that our systems are supposed to serve us, not the other way around, we can talk to the machines embedded in our brains and bodies, ask them whether they dream of electric sheep and see if they can appreciate the joke.

# tech // artificial intelligence / neurology / sleep

