
yes, you there, please step away from the theoretical computer science…

Military blogger invokes theoretical computer science in a post about testing code for fighter jets and gets the fundamentals wrong.

Danger Room’s editor at large Noah Shachtman generally writes interesting pieces that aren’t a chore to follow, but when taking up the problem of securing unmanned drones as more and more cyber weapon platforms are deployed, he ended up with a rather disjointed post that invokes computer science in a way that just doesn’t make sense. The short version of his argument is that securing any unmanned weapons system is impossible because of the Halting Problem, and that auditing what we can know about these systems is extremely expensive and time-consuming. I’ll give him the latter but definitely not the former, since he invokes the concept incorrectly and tries to tie it to a scenario where it doesn’t really apply. Here’s how he tries to explain it…

It’d be great if someone could simply write some sort of universal software checker that sniffs out any program’s potential flaws. One small problem: Such a checker can’t exist. As computer science pioneer Alan Turing showed in 1936, it’s impossible to write a program that can tell if another will run forever, given a particular input. That’s asking the checker to make a logical contradiction: Stop if you’re supposed to run for eternity.

The logic here simply does not follow. Turing was trying to determine whether there can be unsolvable problems in computer science, and he focused on whether one could tell if a program would run forever given unlimited execution time and resources. His proof shows that a universal checker leads to a contradiction: you can always build a program that asks the checker about itself and then does the exact opposite of whatever the checker predicts, so either answer the checker gives is wrong. But that’s a theoretical machine with infinite resources and time; certainly no program in the real world can really run forever or have as much memory as it wants, right? Exactly. It can’t, because it would eventually be killed by the operating system to prevent a crash, or crash the computer itself by hogging too many resources. That places a limit on the number of states the software can be in, and the system itself will only accept certain inputs. And this means it’s logical, and far more feasible, to focus on testing the software for what you know could happen on the system it calls home.
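To make that contradiction concrete, here’s a minimal Python sketch of the classic argument. The `halts` function is a purely hypothetical oracle used only for illustration; the whole point is that no real implementation of it can exist.

```python
# Sketch of the classic halting problem contradiction.
# "halts" is a hypothetical universal checker; it cannot actually be written.

def halts(program, input_data):
    """Hypothetical oracle: returns True if program(input_data) eventually halts."""
    raise NotImplementedError("no such universal checker can exist")

def contrarian(program):
    # Ask the checker what 'program' does when fed its own source,
    # then do the exact opposite.
    if halts(program, program):
        while True:      # checker said "halts", so loop forever
            pass
    else:
        return           # checker said "runs forever", so halt immediately

# Feeding contrarian to itself forces the contradiction: whatever
# halts(contrarian, contrarian) answers, contrarian does the opposite,
# so the checker can never be right about it.
```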

We can definitely do that when it comes to security, with frameworks like Metasploit and IMPACT that can throw an entire library’s worth of known and potential exploits at a program and see if it breaks or yields to attackers. New hacks get added to the frameworks as time goes on, and you can keep pounding away at your digital gates to see if anything gives. While Shachtman took a few minutes to read about the Halting Problem, he seems to have missed that there are a number of software checkers in existence, and they do a decent job of making sure software doesn’t have a lot of glaring vulnerabilities. They can’t check programs for correctness and completion, but that’s ok because we don’t need them to. Knowing whether a given input would cause a program to run forever or stop won’t tell you much about its vulnerabilities, especially since we know it’s not possible for a real world program to run forever or accept infinite inputs.
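For a sense of what that kind of bounded, real-world testing looks like in miniature, here’s a toy Python sketch. It is not anything from Metasploit or IMPACT themselves; the payload catalog and the `parse_message` target are made-up placeholders standing in for a real exploit library and a real component under test.

```python
# Toy version of bounded security testing: instead of deciding anything about
# all possible inputs, hammer the target with a finite catalog of known bad
# inputs and record what breaks. Payloads and target are hypothetical.

KNOWN_PAYLOADS = [
    b"A" * 4096,                 # oversized input / buffer overflow probe
    b"' OR '1'='1",              # classic injection string
    b"%s%s%s%s%n",               # format-string probe
    b"\x00\xff\xfe\xfd" * 64,    # malformed binary garbage
]

def parse_message(data: bytes) -> str:
    """Stand-in for the component under test."""
    if len(data) > 1024:
        raise ValueError("message too long")
    return data.decode("utf-8", errors="replace")

def pound_the_gates(target, payloads):
    failures = []
    for payload in payloads:
        try:
            target(payload)
        except Exception as exc:   # a crash here is a finding, not a verdict
            failures.append((payload[:16], type(exc).__name__))
    return failures

if __name__ == "__main__":
    for payload, error in pound_the_gates(parse_message, KNOWN_PAYLOADS):
        print(f"payload {payload!r}... raised {error}")
```

Nothing about this requires deciding anything for all inputs or all time; the payload list is finite, the target runs on a finite machine, and every run either crashes, misbehaves, or survives.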

And just as programs that run on real computers don’t have infinite resources or accept infinite inputs, real hackers can’t execute an infinite number of attacks. The concern that was deemed impossible to address by invoking nothing less than the Halting Problem actually does have a practical solution; theoretical computer science concepts are being mixed in with very mundane security issues that get tackled every day. I’m not sure Shachtman realizes that when he goes off into this theoretical realm, he’s talking about infinities and securing all software from all attacks for all time, not about a comprehensive model for testing unmanned combat systems against known and potential exploits identified by researchers and engineers. At the end of the day, the latter is what DARPA is trying to accomplish, and nothing in computing prevents them from making it happen. If it did, we wouldn’t have antivirus suites, spam filters, or exploit frameworks…

# tech // computer science / cybersecurity / halting problem / military
