
why quantity can so often overtake quality

Administrators are drowning in papers and statistics, yet they keep incentivizing scientists to produce a tsunami of marginally useful publications.

For years now, doctors, scientists, and academics have been decrying the practice of applying metrics to virtually anything and everything, pumping out a steady stream of articles, posts, and even books explaining that trying to measure everything quantitatively either doesn’t work or produces a badly skewed picture of what’s going on in the real world. In this vein, here are two articles on seemingly unrelated topics, one describing how an obsession with metrics and tallying up grants is adversely impacting science, and the other describing how the very same obsession with metrics is ruining the tenure decision process by focusing on what’s measurable instead of evaluating what’s really important.

And both articles have a perfectly valid point. The size of a grant, the number of papers you’ve written, and the impact factors of the journals that published them won’t tell you how important your research is or why it matters to your institution and the world at large. Without that meaning, your work is useless.

We’ve already addressed why penny-pinching and playing it safe with science is a bad thing, so it would be redundant to go over it yet again. What we haven’t discussed is a problem with the articles themselves. In their eagerness to call out the worship of metrics over the quality of the science, or over the intangibles that make a good professor or a promising researcher, they ask serious questions and at times even try to provide answers, but they usually miss the mark by never exploring the psychology of metric-driven cultures and where that psychology and good science part ways.

At first blush, you might assume that scientists, so often driven by numbers, statistics, and evidence, would be happy to jump on a notion that quantifies their accomplishments in solid, inarguable figures. But that only works for incremental, well-defined, ongoing research. Curiosity-driven science, which tends to produce many of the most amazing innovations in human history, is often conducted in fits and starts, veering off in random directions along the way. To the scientists, that’s the way it should be. You try something new, and if you fail, you learn something in the process. In science, failure can be extremely beneficial. To a quant, each failure is a waste of money.

At the root of the problem is the fact that those who write the checks to fund experiments and new ideas need some sense of certainty about the odds of the research paying off. Numbers and metrics built around publications and citations in related studies give them a sense of solace, making them feel as if they’ve made a safer bet because a particular academic wrote 132 papers, or something that sounds equally impressive. It’s easier to try to quantify quality with counts of papers, citations, and impact factors than with the merits of the science itself and how influential it is. After all, an academic racking up hundreds of papers today may not be known for any of them, while some obscure researcher’s single project could change the world.
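Purely as an illustration of how much nuance a single number throws away, here is a minimal sketch of the h-index, one of the most common summaries of a publication record: a researcher has an h-index of h when h of their papers have at least h citations each. The citation counts below are invented for the example.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, purely for illustration:
# a prolific author of 132 lightly cited papers...
prolific = [4] * 132            # 132 papers, 4 citations each
# ...versus an obscure researcher with one landmark result.
obscure = [5000, 3, 2]          # one world-changing paper

print(h_index(prolific), h_index(obscure))  # prints: 4 2
```

By this yardstick the paper mill comes out ahead, 4 to 2, and the metric says nothing about which body of work actually mattered.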

By its nature, science isn’t designed to adhere to simplistic metrics, and that’s what scares the financial quants. Numbers, even big, meaningless ones that tell them nothing but superficial tidbits, are their safety nets. In their minds, if they pad their decisions with big numbers, they can defend those decisions later. And to win their approval, scientists quickly learn to play the game, emphasizing how many papers they write and how many talks they give rather than the scope and real-world applicability of their research, which would require complex technical evaluations of data the quants have no time to study.

But while the quants feel safe and scientists churn out minimum publishable units, the same work endlessly refined with just enough slight tweaks to warrant a new publication, everyone hides behind impressive numbers and metrics and avoids the arduous but necessary task of measuring the real-world impact of the research. The funders think they’re doing a very mature and responsible thing with their money. Instead of “wasting it” on long shots by tinkerers, they’re paying to continue existing research by accomplished scientists with long publication records.

But the reality is that they’re often accomplishing little more than encouraging risk aversion and outright mediocrity. Instead, their goal should be to let scientists, big and small, take risks and try new things. Rather than viewing every failure and bad idea as a waste of cash, they should treat it as a lesson to be learned for all future proposals. And while we absolutely need to weed out complete crackpottery and blatantly over-ambitious concepts that lack a solid scientific basis, we also need to reward scientists for taking bold steps in their fields and pursuing their curiosity. Otherwise, we’re spending a whole lot of money and getting little more than incremental refinement in return, refinement that can only take us so far…

# science // academic / economics / scientific method / scientific research

