Sometimes the most difficult part of popular science isn’t the science, but the word preceding it: the expectation that whatever you’re writing should somehow fall within the realm of popular interest. Sure, this often results in omissions and sensationalism in headlines and conclusions, but the point is that at least people care, at least they read the articles and try to think about scientific problems. With some disciplines, though, it’s genuinely hard to make exciting something that ordinarily makes people snore, and computer science is usually one of those topics, despite what the occasional robot demonstration would have you believe. Behind the scenes of an acrobatic flight or a coordinated robot shuffle are millions of lines of code, all pruned by people who painstakingly analyze the time complexity of each algorithm and figure out which tasks get distributed to which processors, and how. This is why most robot sports are boring: the calculations take far too much time. And while the Singularitarians and transhumanists are busy looking for robotic souls and machine bodies, rooms full of the nerds they task with doing all the actual work worry about time complexity and algorithm design.
Hardly exciting, huh? Were you to look at a computer science paper, you’d see a flood of discrete math alongside long and complex pseudocode, periodically followed by proofs of its estimated time complexity, pseudocode that often tackles a problem the vast majority of programmers really don’t have because they can always just scale up their hardware and cut down on the graphical components of the output. No need to implement some esoteric or complex workaround to boost performance by up to 15% when installing a new server could do an even better job of speeding up the application and eliminate the need to write a lot of new automated tests for the newly implemented code. Doesn’t exactly sound like front page material for Popular Science, right? Nerds argue about the best way to boost the performance of applications running on a distributed network, click here for more details and to learn about asymptotic notation and proofs! Why do you think “tech site” has become a synonym for gadget reviews and news, or breathless coverage of how some self-appointed grand guru of social media made a social startup that’s totally social and incubating social-media-friendly social spinoffs? Say, did I use the word “social” enough times in that sentence? It’s a really, really hot keyword for tech news searches, and my editor, who was just briefed by Huffington herself, said I need to use more keywords in my posts.
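To put that flood of asymptotic notation in slightly more concrete terms, here’s a toy sketch, in Python (the post doesn’t specify a language, and none of these function names come from it), of the kind of tradeoff those proofs formalize: two functions that give the same answer to the same question, where one’s running time grows quadratically with the input and the other’s grows linearly.

```python
# Toy illustration of the tradeoffs asymptotic notation captures:
# two ways to check a list for duplicates. Both give the same answer;
# the difference is how the running time grows as the input grows.

def has_duplicate_quadratic(items):
    """Compare every pair of elements: simple, but O(n^2) time."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """Track what we've seen in a set: O(n) time, O(n) extra memory."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicate_quadratic([3, 1, 4, 1, 5]))  # True
print(has_duplicate_linear([2, 7, 1, 8]))        # False
```

On a list of a dozen items the difference is invisible, which is precisely why most working programmers shrug and scale up the hardware instead.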
Or here’s another topic from actual computer science affairs: should you retrieve data from a database using stored procedures or an ORM, an object-relational mapper? Stored procedures are usually faster to execute and give you more control over what gets brought back, since you write the SQL commands yourself and they’re compiled in the database engine you’re using. The problem is that on a big system you may find yourself writing lots and lots of stored procedures, and if you change database vendors, you might have to rewrite them all. ORM tools let you work with data from your code, but because that code is compiled in the application layer and has to go to the database, retrieve the data you need, then send it back, it imposes an overhead you could avoid with a simple stored procedure. You’re also locking yourself into a tool to work with your database rather than a very basic management client that lets you customize what you want to do. Some programmers love abstracting the database mechanics into an ORM and feel relieved that they won’t have to write nearly as much SQL in the future. Others like the control they get by specifying exactly what gets brought back and how, squeezing as much performance as they can out of existing tools without buying new hardware or more bandwidth to let the ORM’s processes do their job without the slightest risk of network congestion, which can still happen even with lazy loading, a feature that tries to limit an ORM tool’s overhead during database hits.
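For the curious, the two camps look something like this. A minimal sketch using Python’s built-in sqlite3 module: SQLite doesn’t actually support stored procedures, so the hand-written SQL path stands in for one here, and the three-line Customer class stands in for what a full ORM like SQLAlchemy or Hibernate generates for you. Table and column names are made up for the example.

```python
# Two ways to fetch the same row: write the SQL yourself, or let a
# mapping layer build it from objects. Uses an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (name) VALUES ('Ada'), ('Grace')")

# Camp 1: hand-written SQL. You control exactly what comes back and how.
rows = conn.execute(
    "SELECT id, name FROM customers WHERE name = ?", ("Ada",)
).fetchall()

# Camp 2: an ORM-style mapper. The SQL is generated behind the scenes,
# and results come back as objects instead of raw tuples.
class Customer:
    def __init__(self, id, name):
        self.id, self.name = id, name

def find_customers(connection, **filters):
    """Build the WHERE clause from keyword arguments, ORM-style."""
    clause = " AND ".join(f"{col} = ?" for col in filters)
    sql = f"SELECT id, name FROM customers WHERE {clause}"
    return [Customer(*row) for row in connection.execute(sql, tuple(filters.values()))]

print(rows[0])                                   # (1, 'Ada')
print(find_customers(conn, name="Ada")[0].name)  # Ada
```

The mapper is pleasant to call and vendor-neutral; the raw query is one round trip with nothing hidden. Multiply that difference by a few thousand queries and you have the argument.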
Exciting, huh? But those are the questions computer science tackles. Speed, performance, design, solving large and complex problems by applying certain proven patterns and creating variations of an existing paradigm, often a slight tweak to make things easier to debug or code. Technophiles with their hearts set on a glorious future where machines a million times smarter than humans run everything should sit down with a computer scientist one day, talk about something as simple as mapping, and let us know how close we are to the Nerd Rapture and the descent of the Great Machine after finding out how much work it takes to teach any machine the difference between right and left. Machines don’t just memorize it like we do; they have to perform an elaborate set of simple calculations and a little trigonometry every time they’re faced with a fork in the road. Giving a simple robot its bearings in a known environment takes hundreds of lines of code, code that’s parsed, scrutinized, and encoded in pages of asymptotic notation and pseudocode, then possibly never used because it relies on some system- or framework-specific trick to improve performance. And again, none of this is of any interest to the vast majority of popular science blog readers. It’s certainly of consequence, but the details just aren’t all that fun to discuss. And even amateur coders who would certainly enjoy it will find that the academic formalism they encounter, and the quirks of future workplaces which insist that something be done one way because that’s either the way they’ve always done it or because it’s “a cutting edge tool a company on the cutting edge like us has to use,” take a toll on how excited they’ll be about what they do.
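That “little trigonometry” at a fork in the road can be sketched in a few lines. A hypothetical example in Python, nothing like a real navigation stack: given where a robot is, which way it’s pointing, and where it wants to go, how far does it turn, and is that left or right? All names and conventions here are illustrative.

```python
# The bare-bones math a robot does to tell left from right at a fork.
# Convention assumed for this sketch: headings in degrees, measured
# counterclockwise from the positive x axis (east), like math class.
import math

def turn_to_target(x, y, heading_deg, target_x, target_y):
    """Return (signed turn in degrees, 'left' or 'right')."""
    # Absolute bearing from the robot's position to the target.
    bearing = math.degrees(math.atan2(target_y - y, target_x - x))
    # Difference from the current heading, normalized to (-180, 180]
    # so the robot never turns the long way around.
    turn = (bearing - heading_deg + 180) % 360 - 180
    return turn, ("left" if turn > 0 else "right")

# Facing east (0 degrees) at the origin, target due north: turn 90 left.
print(turn_to_target(0, 0, 0, 0, 5))   # (90.0, 'left')
# Same spot, target to the southeast: turn 45 right.
print(turn_to_target(0, 0, 0, 5, -5))  # (-45.0, 'right')
```

And that’s the easy part: a real robot still has to figure out where it is and which way it’s pointing before this function has anything to chew on, which is where those hundreds of lines of code come in.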