For all their endurance and toughness, our vaunted Martian rovers suffer from a major handicap that makes a typical mission far less effective than we want it to be. In all their time on Mars, Spirit and Opportunity covered less than 20 miles combined. What's the current record for the longest distance covered in one day? A couple hundred meters. You can cover that in ten minutes at a leisurely pace. Granted, you're on Earth and have two feet selected by evolution for optimal locomotion, while the rovers are on Mars and have to be driven by remote control, with every rock, fissure, crevice, and sand trap in their way analyzed and accounted for before a move command is issued, since getting a rover stuck hundreds of millions of miles away is a serious problem. But isn't there anything we could do to make the robots smarter? Can we make them more proactive when they land so far away we can't control them in real time? Well, we could make them smarter, but that will cost us, both in money and resources, since they'll have to think and keep on thinking while they work…
Technically, we could do what a lot of cyberneticists do and design artificial neural networks for our rovers and probes, treating the various sensors as input neurons and the motors as output neurons. We simulate the environments they'll face and train the networks with backpropagation. Then, when the machine encounters certain combinations of sensory readings, its artificial neurons relay signals to the motors and it does what it should do in that situation. If we can interrupt ongoing processes to monitor new stimuli, we could even allow rovers to cope with unexpected dangers. Let's say we have a work mode and an alert mode. The work mode is endowed with the ability to pursue objects of interest, while the alert mode watches for stimuli indicating that something harmful may be coming. So when the work mode finds a rock to drill, another simultaneous thread opens and the alert mode starts scanning the environment. Should a wheel slip or the wind pick up, the alert goes out for the rover to stop and reevaluate its options. Sounds doable, right? And it is. But unfortunately, there's a catch, and that catch is the energy required to run all this processing and act on its results.
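To make the work-mode/alert-mode split concrete, here's a minimal sketch in Python. Everything in it is made up for illustration: the "drilling" is a loop of timed steps, the sensor feed is just a list of numbers, and the 0.9 hazard threshold is arbitrary. The point is only the structure: one thread pursues the object of interest while a second thread scans readings and raises a halt flag the worker checks on every step.

```python
import threading
import time


class Rover:
    """Toy rover with a work mode and an alert mode running in parallel."""

    def __init__(self):
        self.halt = threading.Event()  # set by the alert mode on danger
        self.log = []                  # record of what each mode did

    def work_mode(self, steps):
        """Pursue the object of interest ('drilling') until halted."""
        for i in range(steps):
            if self.halt.is_set():
                self.log.append("work: halted, reevaluating options")
                return
            self.log.append(f"work: drilling step {i}")
            time.sleep(0.01)
        self.log.append("work: drilling complete")

    def alert_mode(self, readings):
        """Scan simulated sensor readings; flag anything over threshold."""
        for r in readings:
            if r > 0.9:  # stand-in for wheel slip or a wind gust
                self.log.append(f"alert: hazard reading {r:.2f}, stopping")
                self.halt.set()
                return
            time.sleep(0.01)
        self.log.append("alert: no hazards seen")


rover = Rover()
readings = [0.1, 0.2, 0.95, 0.3]  # one simulated hazard partway through
work = threading.Thread(target=rover.work_mode, args=(10,))
alert = threading.Thread(target=rover.alert_mode, args=(readings,))
work.start()
alert.start()
work.join()
alert.join()
print("\n".join(rover.log))
```

Running it, the alert thread trips on the 0.95 reading and the work thread abandons its drilling loop partway through, which is exactly the interrupt-and-reevaluate behavior described above, minus the part where doing this for real costs you watts around the clock.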
Brainpower is expensive from an energy standpoint. There's a reason why our brains eat up a fifth of our total energy budget; their processes are very intensive and they run non-stop. Any intelligent machine will have to deal with a very similar trade-off and allocate enough memory and energy to interact with its environment in the absence of human instruction. That means either less energy for everything else, or a rover that has to carry a bigger power source. Spirit and Opportunity generated only about 140 watts at the peak of their operational capacity to power hardware built around a 20 MHz CPU and 128 MB of RAM. With this puny energy budget, forget about running anything that takes real processing oomph or supports multithreading. With a no-frills operating system and a lot of very creative programming, one could imagine running a robust artificial neural network on a device comparable to an early-generation smartphone, something with a 200 MHz CPU and somewhere around 256 MB of RAM. Running something like that nonstop can easily soak up a large share of the energy a Mars rover generates, and when you're on the same energy budget as a household light bulb, this kind of constant, intensive power consumption quickly becomes a very, very big deal.
Hold on though, you might object, why do we need a beefier CPU? Can't we just link several small ones for a boost in processing capacity? Or, come to think of it, why bother with more processing capacity at all? Well, a rover has certain calculations and checks it constantly needs to make, and you have to leave time for those to run. Likewise, you need to keep processing data from your sensors to feed the neural net in the background and handle the calculations it produces. Detecting threats in real time with what would have been a state-of-the-art system in the 1980s seems like a tall order, especially if you expect your rover to actually react to them rather than plow onward as the alarms go off in its robotic head, resigned to its fate, whatever it may be. On top of that, just running something like an artificial neural network alongside other functions carries overhead to keep the computations separate, much less actually having the neural net command the rest of the rover. Of course there could be something I'm missing here, and there may be a way to run an artificial neural network with such a light footprint that it could fit on a much leaner system than I outlined. But if bare-bones systems like those used for today's rovers could be made to run a complex cognitive routine and act on its decisions, it seems very unlikely that someone wouldn't already be doing just that.