
mining biology for a communication model

How accurate are the comparisons between how our brains work and how computers calculate? A trio of researchers decided to find out.

Do biological cells compute? It’s a tricky question. From a purely technical standpoint, anything that takes an input of any kind and does something with it to produce an output is performing computation. We could argue about whether brain cells perform a fundamentally different kind of computation than a CPU, and depending on the level of abstraction, we could make a case either way. But the fact of the matter is that we can model any type of computation with a set of formulas, and that’s exactly what a trio of Swiss computer scientists decided to do, reasoning that if they started with a plausible computational model for cells, they could adapt it to low-powered sensor networks. It’s a seemingly out-there concept, but one with plenty of practical, real-world applications, such as building faster, more reliable wireless networks for mobile computers and smartphones. Why start with cells? Well, cells are not exactly high-powered machines, but they can come together to properly conduct signals and very efficiently solve what a computer scientist would classify as a problem, which makes them a good model.
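To make the "anything with an input and an output computes" idea concrete, here's a minimal sketch of a cell as a computing unit. The rule itself, passing a signal along once it crosses an activation threshold, is a hypothetical stand-in, not the paper's actual model:

```python
def cell(signal_strength, threshold=1):
    """A trivial 'computing' cell: take an input signal, apply a rule,
    produce an output. Returns the signal to conduct onward, or None
    if the signal is too weak to act on."""
    if signal_strength >= threshold:
        return signal_strength  # strong enough; conduct it to neighbors
    return None  # too weak; the cell stays quiet

print(cell(2))  # a strong enough signal is passed along: 2
print(cell(0))  # a weak signal dies here: None
```

Even this toy rule fits the technical definition of computation above: input in, transformation applied, output out.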

And what happens when you take this model and formalize all its processes? An artificial neural network emerges. I’ve mentioned artificial neural networks many times because they’re such a useful pattern in AI, but here we see a very different application of them, one that seems coincidental rather than deliberate. That makes sense because, after all, the researchers started by modeling observable interactions between living cells. After formalizing the conditions for their new little computing devices, what emerges is a formula describing how a signal propagates uniformly across the network until a node’s ability to count has been maxed out and it’s only useful for conducting the signal onwards. The basic formula splays itself out like this…
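The propagation rule just described can be sketched in a few lines: each node counts incoming signals only up to a small cap, and once its counter is maxed out, all it can do is relay the signal onward. The class names and the cap value here are illustrative choices, not taken from the paper:

```python
class Node:
    """A node that can count only up to `cap`; beyond that it just relays."""

    def __init__(self, cap=3):
        self.cap = cap
        self.count = 0

    def receive(self):
        """Count the signal if the counter isn't maxed out yet."""
        if self.count < self.cap:
            self.count += 1
        return self.count


def propagate(chain, signals):
    """Send each signal down the whole chain of nodes, one node at a time."""
    for _ in range(signals):
        for node in chain:
            node.receive()
    return [n.count for n in chain]


# Five signals enter a four-node chain; every counter saturates at the cap.
print(propagate([Node() for _ in range(4)], signals=5))  # [3, 3, 3, 3]
```

Once saturated, a node's only remaining role is conduction, which is exactly the behavior the formula below constrains.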

[figure: the model’s Turing-machine-style transition formula]

Yes, there’s a lot going on here, but at the same time it’s well captured by the formalism of a Turing machine. In fact, this is about as close as we can get to a simple, straightforward biological equivalent of a UTM. Each part of the formula represents either a node state, a set of possible values, or a value selected based on a signal or drawn from a set, with the boundaries of those selections defined by other formulas. Since this isn’t physics, it’s rather difficult to simply splay out the formula and work out which value stands for which phenomenon, so if you would like more detail, I suggest checking out the paper itself. For the purposes of a blog post, it’s enough to say that it specifies the constraints for signal propagation over a fairly simple network made up of tiny machines able to perform only the most rudimentary computation. When we consider the limits on how much each node can hold in the model, we get something akin to an artificial neuron’s threshold value, but unlike a traditional ANN, this one doesn’t weigh the inputs from the nodes; it simply conducts them onwards while the nodes handle a few very simple operations. And there’s another major difference that needs to be noted.
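The contrast with a traditional artificial neuron is worth spelling out in code: an ANN neuron weighs its inputs before applying a threshold, while the model described here simply conducts unweighted signals onward. Both functions are illustrative sketches, not code from the paper:

```python
def ann_neuron(inputs, weights, threshold=1.0):
    """Classic perceptron-style unit: fire only if the weighted sum
    of inputs clears the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0


def relay_node(inputs):
    """This model's node: no weights at all, it just passes the
    signals along unchanged for its neighbors to deal with."""
    return inputs


print(ann_neuron([1, 1, 0], [0.7, 0.4, 0.9]))  # 0.7 + 0.4 = 1.1 >= 1.0, fires: 1
print(relay_node([1, 1, 0]))                   # conducts inputs unchanged: [1, 1, 0]
```

The threshold lives in both models, but only the ANN modulates what crosses it; the biological model leaves the signal intact.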

Since the researchers are after biologically inspired networks, they assume the network is not your classic feedforward one, where signals follow a set path from input to output in lockstep. Their model can send signals asynchronously and accounts for signals of different strengths spreading across nodes, which perform their computations based on whichever signal reaches them first and tune out the rest. The nodes never know the source of a signal; they just know what to do with it. In short, the researchers are trying to add a mechanism to deal with noise and interference, much like the background buzz of our own brains. It’s not necessarily a redefinition of the ANN pattern in the making as far as I can tell, but treating inputs as a competition between incoming information could inspire new ways of training ANNs for machines with more complex sensors, which could introduce competing information and asynchronous processing into the mix. That isn’t necessarily what the researchers intended, but I think the model could certainly be used this way, and it wouldn’t be a big surprise if future work in this area moves in that direction. But the most immediate uses would certainly be in communications for commercial, military, and medical purposes.
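The asynchronous, competitive input idea can be sketched as a node committing to whichever signal reaches it first and tuning out the rest. The arrival times and signal strengths below are made-up illustrations, not data from the paper:

```python
def first_arrival(signals):
    """signals: list of (arrival_time, strength) pairs from competing
    sources. The node acts on whichever signal arrives first and
    ignores all later ones; it never learns where the signal came from."""
    if not signals:
        return None  # nothing arrived; nothing to compute on
    _, strength = min(signals, key=lambda s: s[0])  # earliest arrival wins
    return strength


# Three competing signals; note the strongest is NOT the first to arrive.
incoming = [(5, 0.9), (2, 0.4), (7, 1.0)]
print(first_arrival(incoming))  # the node commits to strength 0.4
```

This is the key departure from a feedforward ANN, where every input to a layer is presented at once: here, timing decides which information a node computes on at all.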

See: Emek, Y., Smula, J., & Wattenhofer, R. (2012). Stone Age Distributed Computing. arXiv: 1202.1186v1

# tech // algorithms / computer science / computing / turing machine
