[ weird things ]

so just how intelligent can we make our space probes, revisited

One of the biggest limitations on the computing power of our spacecraft is shielding them from radiation.

Once upon a time I wrote a post about the sacrifices in intelligence our rovers have to make to be able to travel to other worlds, and why these sacrifices are necessary. Basically, we can build very smart bots here on Earth because we can give them a big energy supply for faster, more complex, and more energy-demanding computation. On Mars, however, that big energy supply becomes a big liability since it has to take away from a rover’s ability to move or from its overall mission time. I’m still pretty confident in my earlier assessment, but some stories spreading around pop sci blogs made me realize there was an AI-hobbling factor I hadn’t addressed: cosmic rays. As rovers explore Mars, they’re bombarded with radiation that easily penetrates the red planet’s thin atmosphere. To give Curiosity the best possible tools to explore the Martian surface, it was given a very powerful setup, at least by spacecraft standards.

BAE Systems’ RAD750 chips provide it with blazing dual 200 MHz processors, 256 MB of DRAM, and an entire 2 GB of flash memory. Again, this is blazing only in the world of space travel since these are pretty much the specs for a low end smartphone, and even that probably has a dual core 1 GHz CPU. But the low end smartphone probably can’t withstand a massive radioactive bombardment without going haywire. The problem is the DRAM, the memory the computer uses to keep track of everything it needs to run. When a cosmic ray hits it, it goes through something called a bit flip. Ordinarily, for us, this is no big deal because the vast majority of the memory our devices use is taken up by some background process, usually one with enough temporary variables to absorb the hit before being cleared out of a register in a matter of nanoseconds. This means we either don’t care or don’t notice, and that’s just fine for those rare cases when a stray particle flips a bit or two. Hell, we lose entire packets when we send them around the internet with certain protocols, and that’s a lot more than a bit, but life goes on.
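To picture what a bit flip actually does to a byte sitting in memory, here’s a toy sketch in Python — not flight software, just the arithmetic of a single-event upset, with the function name my own invention:

```python
import random

def bit_flip(byte: int, bit: int = None) -> int:
    """Simulate a single-event upset: flip one bit in an 8-bit value."""
    if bit is None:
        bit = random.randrange(8)  # a cosmic ray hits a random bit
    return byte ^ (1 << bit)       # XOR toggles just that one bit

# A stray particle flips the most significant bit of a zeroed byte:
corrupted = bit_flip(0x00, bit=7)
print(f"0x{corrupted:02X}")  # prints 0x80
```

The same XOR undoes the damage if you flip the bit again, which is exactly why error-correcting memory only needs to locate the flipped bit to repair it.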

For rovers on other worlds, this is a much, much bigger issue. Not only are bit flips a lot more frequent since the rovers are being showered with energetic particles, there’s also a lot less margin for error since their setups are much leaner. Were a particle to flip the most significant bit in a small array of bytes telling the rover how to move, the consequences could be disastrous. The value 0x00 [00000000] could turn into 0x80 [10000000], and instead of telling the wheel motors to stop, the byte stream just gave them the command to apply 50% power to each wheel, driving the rover into a ditch, or right off a cliff. And this is why the RAD750 chip is built to suffer no more than a single bit flip per year, or about twice during the entire Curiosity mission. Were the scenario I just outlined to happen, the chip would auto-correct the stream to keep 0x00 as it was when assigned. Rovers go on their merry way, JPL is not living in fear of cosmic rays giving Curiosity a mind of its own, and we get great high-res pictures from the surface of another planet. Win, win, win, right?
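So how does a chip auto-correct a flipped bit? The RAD750’s actual error-correction scheme isn’t public, but the textbook principle behind single-bit correction is the Hamming code: store a few extra parity bits alongside each data word, and the pattern of parity mismatches (the syndrome) points straight at the position of the flipped bit. A toy Hamming(7,4) sketch in Python, assuming nothing about the real hardware:

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit codeword; parity bits sit at positions 1, 2, 4."""
    d = [(nibble >> i) & 1 for i in range(4)]       # the four data bits
    code = [0] * 8                                  # positions 1..7 used
    code[3], code[5], code[6], code[7] = d[0], d[1], d[2], d[3]
    code[1] = code[3] ^ code[5] ^ code[7]           # parity over positions {1,3,5,7}
    code[2] = code[3] ^ code[6] ^ code[7]           # parity over positions {2,3,6,7}
    code[4] = code[5] ^ code[6] ^ code[7]           # parity over positions {4,5,6,7}
    return sum(code[i] << (i - 1) for i in range(1, 8))

def hamming74_correct(word: int) -> int:
    """Locate and fix a single flipped bit, then return the 4 data bits."""
    code = [0] + [(word >> (i - 1)) & 1 for i in range(1, 8)]
    syndrome = (code[1] ^ code[3] ^ code[5] ^ code[7]) * 1 \
             + (code[2] ^ code[3] ^ code[6] ^ code[7]) * 2 \
             + (code[4] ^ code[5] ^ code[6] ^ code[7]) * 4
    if syndrome:                    # nonzero syndrome names the corrupted position
        code[syndrome] ^= 1
    return code[3] | (code[5] << 1) | (code[6] << 2) | (code[7] << 3)

word = hamming74_encode(0x0)          # store "all wheels stop"
hit = word ^ (1 << 6)                 # cosmic ray flips a bit in memory
assert hamming74_correct(hit) == 0x0  # ECC hands back the original value
```

Real ECC memory uses a wider variant of the same idea (typically correcting one bit and detecting two per 64-bit word), and the price is exactly the tradeoff described below: extra bits to store and extra checks to run on every single memory access.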

Yes, but the auto-correction and the radiation hardening necessitate some tradeoffs. They make the chip more expensive, make it consume a little more power, and slow down its CPU cycles, all of which could otherwise be used to make rovers smarter and more autonomous. Still, dumbing them down a little is a small sacrifice for making sure they’re a lot less likely to randomly drive off a cliff, unless you have the budget to build a much bigger robot, launch it on a much more powerful rocket, and devise a way for it to land safely tens if not hundreds of millions of miles from home. Don’t get me wrong, Curiosity’s dual cores and RTG will make it a lot smarter than previous rovers, but it’s hardly an e-Einstein, and unless we find a way to double or triple the size of our Martian rovers, or create artificial magnetospheres for our spacecraft, it’s going to be fairly close to the peak of the kind of intelligence we can get in an interplanetary robot for the next decade or so. Actually, considering that just testing and certifying a new radiation-hardened chip can take that long, that may be an optimistic assessment.

And this is why, ultimately, we have to go to other worlds ourselves if we want to do high impact science quickly and efficiently. Robots are safer, they’re cheaper, and they don’t ask for hazard pay, true. But ultimately, humans are going to be much better explorers than the rovers and probes they send. Not only do they have the necessary brainpower to deal with challenging alien environments without a 34 minute delay between actions, they also have the will and interest to try new things and fit in an experiment or two that can’t be crammed into a rover’s schedule but can teach us something new and exciting as well. And this is not to mention the medical benefits we’d reap from getting humans ready to walk on other worlds, and the possible wonders it could do for surgeries, physical therapy, and regenerative treatments as all these technologies and ideas are forced to come together, compete, and produce a roadmap that can be empirically tested and proven by a real mission…

# tech // computers / computing / curiosity
