when academia is behind the real world
In most fields, bleeding-edge research is conducted in academic labs, not corporate bullpens. In computer science, that's often not the case.
Computer science is a bizarre field. Normally, academics doing research are way ahead of what’s being implemented for John and Jane Q. Public, since they have the most advanced technology and the funding to figure out how to make something work. In computer science, however, that’s not always a given, and industry often solves complicated problems you’d expect to be tackled by academics, because it doesn’t really have a choice. Users expect a certain set of functionality at a certain scale, and they’re not going to wait for a grant application to be written, denied, re-written, and approved, a lab to be set up for experiments, papers to be written, reviewed, and re-written, and the result finally licensed out to a large company to try to turn into a viable product. They want it in a year, max, and whatever problems need solving along the way, you’d better solve.
And this leads to some really obvious disconnects between the proverbial academic Ivory Tower and industry, because researchers and industry experts don’t talk. Over the years, I’ve noted a few of these disconnects, like academia’s failure to keep up with the evolution of object-oriented programming and its obstinate refusal to see industry as an appropriate destination for its students, not to mention its slowness in tackling the basic problems with being a researcher today. I could write a much more thorough list, but Google’s Matt Welsh already beat me to it in his thorough post on the subject. The biggest takeaway? Research labs can, and routinely do, produce cool stuff. However, academics seldom make a real, complete product, only a proof of concept, which means the code is all too often sloppy, and the solution they present may not be workable beyond the experiment they published in a paper.
Speaking of papers, comp sci academics often don’t seem to realize that just because there isn’t a paper on a problem doesn’t mean no one has solved it; it may just mean the team that did lacked the time or foresight to write one because they needed to get the finished product out the door. Overall, the message those in the industry want to convey is that a) academia needs to worry less about who invented or wrote up what when picking problems to solve, b) the industry has something to teach researchers about coding and scaling, and c) labs should not assume that a problem needs solving before talking to industry to see what has been tried in the field, what worked, what failed, and why. As of right now, we have too many nifty solutions looking for problems, and approaches that don’t scale to industry needs. Part of this is clearly a symptom of the academic blight that is the quantification of tenure, but the lack of communication between researchers and practitioners is just as big an issue, and it may be sending academia off to do needless work.
So what does this mean for users and programmers? For the layperson, it’s a sign to take news about “revolutionary approaches in computing” with a grain of salt. For programmers, it’s a sign that they should keep looking to giants like Google, Facebook, and Twitter for leadership in usability, scale, and high-level optimizations, and keep testing proofs of concept from academia. And for students, it’s a sign to greet professors’ claims that industry doesn’t innovate much and isn’t the place for revolutionary ideas or large-scale research, unlike academia, with a whole lot of skepticism. Because as those of us actually trying to implement machine learning and build systems with hundreds of millions, if not billions, of users have seen, that’s not really the case in the real world…