
no, computers can’t be trusted with math’s future

A widely covered article about the future of math confuses its readers by equating the volume of proof and code with their quality, implying that computers will take over math as a discipline.
i heart math

Sometimes it’s hard to decide whether an article asking about the role of computers in research is simply clickbait meant to lure readers into disagreeing and boost views, or a legitimate question a writer is trying to investigate. In this case, an article on Wired about a future of math ever more focused on computer proofs and algorithms asks whether computers are steamrolling over human mathematicians because they can calculate so much so quickly, then answers itself with notes on how easily code can be buggy and how proofs of complex theorems can go wrong. Maybe the only curious note is that of an eccentric mathematician at Rutgers who credits his computers as co-authors on his papers, and his polar opposite, an academic who eschews programming to such an extent that he delegates problems requiring code to his students, figuring it’s not worth his time to learn the new technology. It’s a quirky study in contrasts, but little else.

But aside from the obvious answers and the problems with the initial questions, a few things jumped out at me. I’m not a mathematician by any stretch of the imagination; my software deals with the applied world. Nevertheless, I’m familiar with how to write code in general, and when the article discusses a mathematical proof that takes 50,000 lines of code, my first thought is how you could possibly need that much code to test one problem. The entire approach seems bizarre for what sounds like an application of graph theory that shouldn’t take more than a few functions to implement, especially in a higher-level language. And that’s not counting the 300 pages of the proof’s dissection, which again looks like tackling the problem with a flood of data rather than a basic understanding of the solution’s roots. In this case, the computer seemed to be aiding and abetting a throw-everything-and-the-kitchen-sink-at-it methodology, and that’s not good.
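To put some flesh on that claim, here’s a minimal, purely hypothetical sketch of the kind of graph theory check I have in mind, not the proof from the article. A complete backtracking test for whether a small graph can be colored with k colors fits comfortably in a couple of plain Python functions.

```python
# Hypothetical example: an exhaustive k-coloring check for small graphs.
# This is not the proof discussed in the article, just an illustration of
# how little code a basic graph theory check needs in a high-level language.

def has_conflict(edges, coloring):
    """True if some edge joins two already-colored vertices with the same color."""
    return any(u in coloring and v in coloring and coloring[u] == coloring[v]
               for u, v in edges)

def k_colorable(vertices, edges, k, coloring=None, i=0):
    """Backtracking search: can the graph be colored with k colors so that
    no edge joins two vertices of the same color?"""
    coloring = {} if coloring is None else coloring
    if i == len(vertices):
        return True                              # every vertex colored, no conflicts
    for color in range(k):
        coloring[vertices[i]] = color
        if not has_conflict(edges, coloring) and \
           k_colorable(vertices, edges, k, coloring, i + 1):
            return True
    del coloring[vertices[i]]                    # backtrack
    return False

# A triangle can be colored with three colors but not with two.
vertices = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("a", "c")]
print(k_colorable(vertices, edges, 2))           # False
print(k_colorable(vertices, edges, 3))           # True
```

Of course the real theorem involves far more than toy graphs, but the point stands: the core logic of a well-understood check stays compact, and code tends to balloon only when you start enumerating enormous case lists instead of distilling the underlying idea.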

When you use computers to generate vast reams of data in which a solution may be hiding, or just record what they spit out after running a program you designed, you might get the right answer. The catch is that you’re never going to be sure unless you can solve the problem itself, or come very close to the real answer and just need the computer to scale up your calculations and fill in most of the decimal places you know need to be there. After all, computers were designed to do repetitive, well-defined work that would take humans far too long, and in which missing an insignificant detail would quickly throw everything off by the end. They are not thinking machines, and they rely on a programmer who really knows what’s going on under the hood to be truly useful in academia. Otherwise, mathematics could end up with 300 pages and 50,000 lines of code for one paper and two pages of computer printouts for another. And both extremes would get us nowhere pretty fast without a human who knows how to tackle the real problem…
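As a small, hypothetical illustration of that narrower role: if you’ve already worked out on paper that e is the sum of 1/n!, the computer’s only job is the repetitive arithmetic, grinding out decimal places you already know have to be there.

```python
from decimal import Decimal, getcontext

# Sketch of the kind of work computers are genuinely good at: the human
# supplies the series (e = sum of 1/n!), the machine fills in the digits.
getcontext().prec = 55            # a few guard digits beyond the ~50 we want

def e_to(terms):
    total, factorial = Decimal(0), Decimal(1)
    for n in range(terms):
        if n > 0:
            factorial *= n        # build n! incrementally
        total += Decimal(1) / factorial
    return total

print(+e_to(60))                  # 2.71828182845904523536... to the set precision
```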

# tech // academia / computer science / computers / mathematics

