Archives For scientific research

paper war

Social science gets a bad rap not only because it sometimes makes us confront some very ugly truths about human nature, but because its studies can be very difficult to reproduce, so much so that an undertaking dedicated to doing just that found it couldn’t get the same results as more than half of the papers it tried to replicate. But ironically enough, an effort to replicate the replication did not succeed either. For those having trouble following this, let’s recap: researchers trying to figure out how many social science papers can be reproduced didn’t conduct a study others were able to reproduce themselves. That’s a disaster on a meta level, but apparently, it’s more or less to be expected given the subject matter, measurement biases, and flaws involved. In a study challenging the supposedly abysmal replication rate reported by the effort known as the Replication Project, it quickly becomes evident that the guidelines by which the tested studies failed were simply too rigid, going so far as to neglect clearly stated uncertainty and error margins, and to perform some experiments using different methods than the papers they were trying to replicate.

Had the Replication Project simply followed the studies carefully and included the papers’ error bars when comparing the final results, it would have found over 70% of the replication attempts successful. That may still not sound great, with more than one in four experiments not really panning out a second time, but that’s the wrong way to think about it. Social sciences are trying to measure very complicated things, and they won’t get the same answer every time. There will be lots and lots of noise until we uncover a signal, and that’s really what science does. Where a quantification-minded curmudgeon sees failed replication attempts, a scientist sees failures that can serve as lessons in what not to do in a future experimental design. It would’ve been great to see the much desired 92% successful replication rate the Replication Project set as the benchmark, but that number reduced the complexity of doing bleeding edge science, which often needs to get it wrong before it gets it right, to the equivalent of answering questions on an unpleasantly thorough pop quiz. Add the facts that the project’s researchers refused to account for something as simple as error bars when rendering their final judgments, and that they would once in a while neglect to follow the designs they were testing, and it’s difficult to trust them.
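To make the error-bar point concrete, here’s a minimal sketch of the more forgiving criterion described above: counting a replication as consistent when its estimate lands inside the original paper’s stated error margin. The effect sizes, margins, and function names below are all invented for illustration, not taken from either project’s data.

```python
# Judge a replication "successful" if its effect estimate lands inside
# the original paper's reported error margin -- the more forgiving
# criterion discussed above. All numbers here are made-up examples.

def within_interval(replication_effect, original_effect, original_margin):
    """True if the replication estimate falls within the original
    study's stated interval (effect +/- margin)."""
    low = original_effect - original_margin
    high = original_effect + original_margin
    return low <= replication_effect <= high

# (original effect, original margin, replication effect) for fake studies
studies = [
    (0.40, 0.15, 0.32),   # replication weaker, but inside the error bars
    (0.25, 0.10, 0.12),   # outside the interval: a genuine failure
    (0.55, 0.20, 0.61),   # replication slightly stronger, still consistent
]

successes = sum(
    within_interval(rep, orig, margin) for orig, margin, rep in studies
)
rate = successes / len(studies)
print(f"{successes} of {len(studies)} consistent ({rate:.0%})")
```

The point isn’t the exact numbers, but that the verdict flips for any replication whose weaker result still sits comfortably within the original’s stated uncertainty.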

Where does this leave us? Well, there is a replication problem in the social sciences, so much so that studies claiming to be able to measure it are themselves flawed and difficult to replicate. There are constant debates about which study got it right and which didn’t, and we can choose to see this as a huge problem we have to tackle to save the discipline. Or we can remember that this back and forth about how well certain studies hold up over time and whose paper got it wrong and whose got it right is exactly what we want to see in a healthy scientific field. The last thing we want is researchers not calling out studies they see as flawed, because we’re trying to find out how people think and societies work, not hit an ideal replication benchmark. It’s part of that asinine, self-destructive trend of quantifying the quality out of modern scientific papers by measuring a bunch of simple, irrelevant, or tangential metrics to determine the worth of the research being done, and it really needs to stop. Look, we definitely want lots of papers we can replicate at the end of the day. But far more important than that, we want to see that researchers are giving it their best, most honest, most thorough try, and if they fail to prove something or we can’t easily replicate their findings, that could be even more important than a positive, repeatable result.

rage and fury

While my play-along-at-home AI project hit a little snag, as I’m still experimenting with the newest cross-platform version of a framework which might not be ready for prime time just yet, why don’t we take a look at the huge controversy surrounding the open journal PLOS ONE and why, no matter how it all happened, the fact that it did is alarming? If you don’t know, the journal has been savaged on social media by scientists for publishing a Chinese study on the dexterity of the human hand with an explicit reference to God in the abstract. Some reactions have been so over the top that an ardent creationist watching from the sidelines could collect the outraged quotes and use them in a presentation on how scientists get reflexively incensed when anyone brings up God because they’re all evil atheists who can’t even bear to hear him invoked. But at its core, the scientists’ outrage has less to do with the content of the paper than with how badly broken the peer review mechanism is in a world of publish-every-day-or-perish, when the tenure committee that decides your fate scoffs at anything less than 100 papers in journals…

For what it’s worth, the paper’s writers say that they flubbed their translation into English and a reference to “the Creator” was really supposed to say evolutionary nature. I’m not sure that’s true because while on the surface China is an atheistic country, there are plenty of Christians who live there, and the rest of the paper’s English seems perfectly fine and proper. The capitalized reference seems too deliberate to be there by mistake, almost as if someone deliberately snuck it in and the team is now covering for this investigator by faking sudden bouts of broken English in a paper that doesn’t actually suffer from any. Obviously, there’s no prohibition against a scientist being religious and conducting exemplary research. Francis Collins is a devout Evangelical whose free time was spent preaching Templeton’s gospel of accommodationism, but his work with the Human Genome Project is critical to modern biology. Ken Miller is a devoted Catholic, but he’s tirelessly kept creationism out of classrooms in the Midwest and separates his personal beliefs from his scientific work and advocacy. And that’s what all scientists of faith try to do: maintain a separation between religion and work in the public eye. When they fail, an editor should be there to review the paper and point that out before publishing it for public consumption.

So that’s what the fuming on social media is all about: the lack of editorial oversight. Scientists who wanted to submit their research to PLOS ONE, or already have, are now worried that it will be considered a junk journal and their time and effort publishing there will be wasted. Not only that, but they’re worried about the quality of papers they cited from the journal as well, since an editorial failure during peer review means that outright fraud can go undetected and require huge professional risks by other scientists to uncover. Peer review is supposed to keep a junk paper out of a good journal by pointing out every design flaw, obvious bias, instance of cherry-picking, and inconsistency that signals fraud or incompetence, and it’s the only mechanism that exists to do so before publication, so any signs that the reviewers and editors are asleep at the wheel, or only going through the motions, are incredibly alarming to scientists. Yet, at the same time, I can sort of understand why this kind of thing happens. Reviewers are the gatekeepers of what qualifies as scientific literature and their job is to give scientists hell. But they’re not paid for it, their work for journals is not appreciated very much, and despite their crucial role in the scientific process, the fact of the matter is that they’re volunteers doing a thankless task out of a sense of duty.

While the popular perception of a scientist is that research is a cushy gig, the reality is that the majority of scientists are overworked, underpaid, and expected to hand over their time for free in the service of highly profitable journals that charge an arm and a leg to publish scientists’ own content. No person, no matter how passionate or excited about his or her work, is going to be extremely motivated and exceedingly thorough under these circumstances. Until we start properly appreciating reviewers for their work and rewarding them for it, and until colleges finally realize that it’s dangerous and ridiculous to encourage scientists to write ten papers where one good one would’ve sufficed, mistakes like PLOS ONE’s are just going to keep happening, with the review taking place on social media rather than being done by reviewers and editors, as it should be. We can’t expect quality from peer review in the future if we’re not willing to make the task logistically reasonable and professionally appreciated, much like we shouldn’t expect to walk into any used car dealership and drive off in a brand new Ferrari for the price of an old Kia. Like with so many things in life, you get what you pay for when someone has to work for your benefit.

head in sand

Here’s an extremely uncomfortable truth no one currently running for office in the U.S., or even remotely considering doing so, ever wants to publicly admit. There are a lot of voters who really, really don’t like experts, scientists, or anyone well educated in anything other than medicine. In their eyes, any sign of intellectualism is not something to cheer or aspire to; to them, it’s nothing more than pretension from someone they’re convinced thinks he or she is better than them and feels entitled to tell them what to do. At the same time, they’re extremely paranoid that something valuable or important will be taken away from them and given to all the undeserving moochers on lower socioeconomic rungs, convinced that the American poor have already been living it up with free spending money, free food, and free world-class medical care for decades. So when a politician decides to cozy up to this constituency, his best bet is to start witch hunts against their most nightmarish moochers: government-funded scientists.

In his tenure as the chairman of the House Science and Technology Committee, a haven for a disturbing number of peddlers of anti-scientific twaddle, congressman Lamar Smith decided to do exactly that with open-ended fishing expeditions into every possible aspect of scientists’ research, questing to find some grand conspiracy to publicly squash for his science-averse, paranoid base’s delight. In his investigation of climate scientists working for NOAA, he specified absolutely no instances of misconduct he thought had occurred, only asking for ever more raw data to be provided to him, even though the data and the methods used to analyze it have been on the web for years, provided by NOAA to anyone even slightly curious. But data is not what Smith is really after, because he has no interest in the actual science. He and his donors are upset that updated data for atmospheric warming, gathered from additional sources after years of looking over more and more observation stations, eliminated the “pause” to which denialists cling. Since the only possibility in their minds is that the data is faked, they want evidence of fakery.

Really, there’s no other way to put it. Smith wants the private communications between the scientists funded by NOAA so he can create another Climategate, which denialists are still convinced was an actual scandal despite the scientists being cleared of any wrongdoing. If he doesn’t find something badly worded when taken out of context, or something politically incorrect, he will take something he doesn’t understand (which is likely most of what climatologists discuss, and which he is being paid by oil and gas lobbies to continue not understanding) way out of context and manufacture a scandal out of that. When the chairman of the science committee that decides on funding for countless basic research projects his nation needs to maintain the top spot in scientific innovation thinks his job is to harass scientists he doesn’t like, because his donors’ business may be adversely impacted by their findings, until some pretense to interrogate them comes up, no matter how flimsy, we have a very serious problem. While all abuses of power are bad, abuses by partisan dullards have a certain awfulness about them, as they ridicule what they seem to utterly lack the capacity to understand in the first place.

math prodigy

According to overenthusiastic hacks at Wired, scientists have recently developed a way to scan your brain to predict just how intelligent you are or how good you’ll be at certain tasks. This sounds like the beginning of a dystopian nightmare rather than an actual field of research, one that ends with mandatory brain scans for everyone to “facilitate an appropriate job function,” administered in some dark, gray lab by medical paper pushers. But it only sounds like this because the writer was more interested in page views than in the actual study, which really has nothing to do with one’s intelligence and instead tested whether you could identify someone by scanning how that person’s brain is wired. Rather than trying to develop an IQ test in a box, the researchers put to the test the theory that your brain’s wiring is so unique that a map of it could identify you every bit as well as a fingerprint. Not surprisingly, they found that a high quality fMRI scan of your brain at work performing some standard tests can definitely be used to identify you.

All right, that’s all well and good; after all, an fMRI scan basically gives you insight into a unique mind at work, and no two people’s brains will work the same way. But where exactly does this whole thing about measuring intelligence come into play? Well, the concept of fluid intelligence, mentioned only three times in the study, was brought up as an additional avenue of research in light of the findings. It revolves around the idea that strong connections between certain parts of the brain will make you notably better at making inferences to solve new problems. Unlike its counterpart, crystallized intelligence (called Gc in neuroscience), fluid intelligence (or Gf) is not what you know, but how well you see patterns and come up with ideas. Most IQ tests today are heavily focused on Gf because it’s seen as a better measure of intelligence, and the paper’s elaboration on what exactly the fingerprinting study had to do with predicting Gf was an extended citation of a study from 2012 which found a link between the lateral prefrontal cortex’s wiring to the rest of the brain and performance on standardized tests designed to measure Gf in 94 people.

Here’s the catch though. Even though how well your lateral prefrontal cortex talks to the rest of your brain does account for some differences in intelligence, much like your brain size, it really only explains 5% of those differences. Current theory holds that because your prefrontal cortex functions as your command and control center, what Freud described as the ego, a strong link between it and several other important parts of the brain will keep you on task and allow you to problem-solve more efficiently. Like a general commanding his troops, it makes sure that every other relevant part of your mind is fully engaged with the mission. But even if that theory is right and your prefrontal cortex is well wired in a larger than median brain, close to 90% of what you would score on an IQ test can come down to level of education and other factors that generally make household income and education a better predictor of IQ scores than biology. Although in many ways that’s not that accurate either, because style of learning and culture also play a role. All we can conclude is that the interplay between Gf, Gc, and education is very complex.

We should also take note of one study of popular theories of biological contributors to Gf which spanned 44,600 people and found no evidence that a combination of fMRI maps has predictive power when it comes to IQ points. In other words, we have a lot of ideas that seem plausible as to the biological origins of intelligence, but because our brains are very plastic, we are not all on a level playing field when it comes to the amount and quality of education we receive, and even our longest-running efforts at accurate Gc assessments have shown that we’re really bad at it, so studies that claim predictive powers when it comes to our IQs using brain scans of 100 college students or fewer are extremely likely to be overselling their results. Not only that, but even the studies that do actively oversell still claim to explain only a tiny fraction of the score differences, because they recognize how small and homogeneous their data sets really are. Not only do we not have an fMRI-based test for intelligence, we’re not even sure one is possible. But those facts bring in far, far fewer page views than invoking Kafkaesque sci-fi lore in a pop sci post…

eye of providence scroll

For as long as there have been conspiracy theories, there have been explanations for why the vast community of people who hang on conspiracy theorists’ every word exists. Some might just be paranoid in general. Others may be exercising their hatred or suspicion of a particular group of people, be it an ethnic group or a political affiliation. Others might just want to sound smarter and more incisive than everyone else. Others still seek money and attention in their pursuit of a stable career of preaching to the tinfoil choir. But that doesn’t answer the really big question about the constant popularity of conspiracy theories throughout the ages. Is there something specific about how believers are wired that makes them more prone to believe? Is subscribing to 9/11 Trutherism, or fearing Agenda 21, or looking for alien ancestry in one’s blood actually a case of a brain generally seeing patterns in randomness, with conspiracy theories just an outlet waiting to tap into this condition? Swiss and French researchers recently decided to try to answer that question by experimenting on college students and the public.

First, they evaluated whether their test subjects would detect patterns in truly random coin flips and doctored ones, with and without priming them. Then, they asked political questions to measure the degree of conspiratorial thinking and the level of belief in popular theories such as the notion that the Moon landing was faked or that 9/11 was an inside job of some sort. Obviously, they found that the more conspiratorial a view of politics the subjects took, the more likely they were to be Moon hoaxers and 9/11 Truthers, but paradoxically, that bore absolutely no relation to whether they claimed to see human interference in random patterns of coin flips or identified sequences a researcher manipulated, priming or no priming. In other words, in everyday, low level tasks, the mind of a conspiracy theorist doesn’t see more patterns in randomness. As the researchers put it themselves, for a group of people who like to say that nothing happens by accident, they sure don’t think twice about whether something apolitical and mundane has been randomly arranged.
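As an aside, the stimulus side of such an experiment is easy to mimic at home: truly random flips routinely contain streaks that look deliberate, while hand-doctored sequences tend to alternate too neatly. A quick sketch of that contrast (my own illustration, not the researchers’ actual materials):

```python
# Truly random coin flips routinely produce streaks that look "arranged."
# This sketch (my own illustration, not the study's code) compares the
# longest run of identical outcomes in a random vs. a doctored sequence.
import random

def longest_run(seq):
    """Length of the longest streak of identical flips in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)
random_flips = [random.choice("HT") for _ in range(50)]

# A "doctored" sequence alternating too regularly -- real randomness
# almost never looks this tidy.
doctored = list("HTHTHTHTHT" * 5)

print("random longest run:", longest_run(random_flips))
print("doctored longest run:", longest_run(doctored))  # always 1
```

Fifty honest flips will usually contain a run of four or more heads or tails in a row, which is exactly the kind of thing people point to as evidence of tampering.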

What does this finding mean in the grand scheme of things? Well, for one, it means that there’s really no one type of person just wired for conspiratorial thinking, or whose brain wiring plays an important role in subscribing to conspiracy theories. Instead, it’s more likely that all these theories are extreme manifestations of certain political beliefs or personal fears and dislikes, so the best predictor of being part of the tinfoil crowd is political affiliation. That’s not terribly surprising if we consider that most climate change denialists who fear the implementation of some sinister version of Agenda 21 they imagine exists are on the far right, while those terrified of anything involving global vaccination or commercial agreements are on the far left. And while there are a few popular conspiracy theories that overlap, because people are complex and can hold many, many views even if they are contradictory, you can separate most of the common theories into ones favored by conservatives and ones favored by liberals. As for what biology is involved in that, well, that’s been a minefield of controversy and statistical maelstroms for a long time…

dingy lab

About a month ago, health and science journalist Christie Aschwanden took on the tough job of explaining why, despite a recent rash of peer review scandals, science isn’t broken by showing how media hype and researchers’ need to keep on publishing make it seem as if today’s study investigating something of general interest will be contradicted by tomorrow’s, if not shown as a complete and utter fraud. It’s nothing you really haven’t heard before if you follow a steady diet of popular science and tech blogs, although her prescription for dealing with whiplash inducing headlines from the world of science is very different from that of most science bloggers. As she puts it, we should simply expect that what we see isn’t necessarily the whole story and carefully consider that the scientists who found a positive result were out to prove something and might be wrong not because they’re clueless or manipulative, but because they’re only human.

Now, while this is all true, it’s extremely difficult not to notice that in today’s academic climate of obscenely overpaid college bureaucrats pushing scientists to publish countless papers just to be considered for a chance to keep working in their scientific fields after their early 40s, there’s incessant pressure to churn out a lot of low quality papers, then promote them as significant so anyone will cite them. Even if you published a very vague, tenuous hypothesis-fishing expedition just to pad your CV and hit the right number to keep the funding to your lab going, there’s plenty of pressure to drum up media attention from writers guaranteed to oversell it, because if you don’t promote it, it will get lost in a flood of similar papers and no one will cite it, meaning that an extra publication won’t help you as much when the tenure committee decides your fate because its low quality will be evident from the complete lack of attention and citations. Long gone are the days of scientists routinely taking time to let ideas mature into significant papers, and that’s awful.

Instead of realizing that science is a creative process which needs time and plenty of slack as it often bumps into dead ends in search of important knowledge, colleges have commoditized the whole endeavor into a publication factory and judge researchers on how well they meet quotas rather than the overall impact their ideas have on the world around them. Sure, they measure whether the papers have been cited, but as we’ve seen, it’s an easily gamed metric. In fact, every single measure of a scientist’s success today can be manipulated, so good scientists have to publish a lot of junk just to stay employed, and bad scientists can churn out fraudulent, meaningless work to remain budgetary parasites on their institutions. Quantity has won over quality, and being the generally very intelligent people that they are, scientists have adapted. Science is not broken in the sense that we can no longer trust it to correct itself and discover new things. But it has been broken in the way it’s practiced day to day, and it will not be fixed until we go back to the days when the scope and ambition of the research were what mattered, rather than the number of papers.

relativity formulas

In a quote often credited to Albert Einstein, the famous scientist quips that if you can’t explain a concept to a six year old, you clearly don’t understand it yourself. Now, it may take a very bright six year old to truly comprehend certain concepts, but the larger point is perfectly valid and can be easily proven by analyzing the tactics of many snake oil salespeople hiding behind buzzword salads to obscure the fact that they’re just making things up on the spot. If you truly understand something, you should be able to come up with a very straightforward way to summarize it, as was done here in a brilliant display of exactly this concept. But sadly, scientists are really bad at straightforward titles for their most important units of work, their papers. Countless math, physics, computer science, and biology papers have paragraph-length titles so thick with jargon that they look as if they were written in another language entirely. And that carries a steep price, as a recent study analyzing citations of 140,000 scientific papers over six years shows.

You see, publishing a paper is important, but it’s just half the work. The second crucial part of a scientist’s job is to get that paper cited by others in the field. The more prominent the journal, the more chances for citations, and the more citations, the more important the research is seen to be, which means speaking gigs and potential applications for fame and profit. But as it turns out, it’s not just the journal and the work itself that matter. Shorter titles are objectively better and yield more citations, because scientists looking at long, complicated titles get confused and won’t cite the research, unsure whether anything in it actually applies to them. Quality of the work aside, the very fact that other experts can’t tell what you’re going on and on about is bad for science, leading to even more people doing the same work from scratch. To truly advance, science needs to build on previous work, and if the existing work seems like an odd fragment of alien gibberish at first glance, no one will review it further. So next time you write a scientific paper, keep its title short, sweet, and to the point. Or no one will read it, much less cite it as important to the field.

experimental plant

Several years ago, scientists at the sustainable farming research center Rothamsted decided to splice a gene from peppermint into wheat to help ward off aphid infestations. You see, when hungry adult aphids decide it’s time for a snack, the essential oil given off by peppermint mimics a danger signal for the insects. Imagine trying to bite into your sandwich just as a fire alarm goes off over your head with no end in sight. That’s exactly what happens to aphids, and the thought was that this ability could be spliced into wheat to reduce pesticide use while increasing yield. It should also be noted that Rothamsted is a non-profit, the research initiative was its own, and no commercial venture was involved in any way, shape or form. Sadly, the test crops failed to live up to expectations and deter aphids with the pheromone they produced, EβF. Another big, important note here is that despite the scary name, this is a naturally occurring pheromone you will find in the peppermint oil recommended by virtually every organic grower out there.

Of course, noting the minor nature of the genetic modification involved, the total lack of a profit motive on the part of a highly respected research facility, the sustainability-driven thinking which motivated the experiment, and the fact that the desired aphid repellent was derived from a very well known, natural source, anti-GMO activists decided that they wanted to destroy test crops in the more mature stages of the research anyway, because GMOs are bad. No, wait, that was just the excuse. Scientists planting GMO crops? They obviously want to kill people to put money in Monsanto’s pockets with evil Frankenfoods. With the experiment failing, the activists are probably celebrating that all those farmers trying to protect their wheat lost a potential means of doing so, and that they won’t have to drive to the research plots in the middle of the night to set everything on fire. The group which planned to carry out this vandalism, like many other anti-GMO organizations, lacks any solid or scientifically valid reason to fear these crops, and was acting based solely on its paranoia.

Indeed, anti-GMO activism is basically the climate change denial of the left. It revolves around a fear of change and bases itself on fear-mongering and repeating one debunked assertion after another ad nauseam, with no interest in debate and even less in actually getting educated about the topic at hand. While anti-GMO zealots rush to condemn any Big Ag study showing no identifiable issues with GMO consumption with any criticism they can manage, real or imagined, no study ever being good enough, they cling to horrifically bad papers created by scientists specifically trying to pander to their fears, scientists who threaten to preemptively sue any critics who might ruin the launch parties for their anti-GMO polemics. Had Big Ag scientists done anything remotely like that, the very same people singing praises to Séralini would have demanded their heads on the chopping block. Hell, they only need to know that scientists work in the industry to declare them part of a genocidal New World Order conspiracy. But you see, because these activists are driven by fear and paranoia, to them it’s fine to sabotage the very safety experiments they demanded, ensuring that scientists can’t do their research, while praising junk pseudoscience meant to bilk them.

paper crowd

Amazon’s Mechanical Turk lets you assign menial, yet attention-intensive tasks to actual human beings, despite the name’s ambiguity, and those humans want to be paid consistently and fairly for their efforts. This is why in March of last year, they launched the Dynamo platform, which allows them to warn each other about bad clients who were stingy or unreasonable. The brainchild of Stanford PhD student Niloufar Salehi, who wanted to study digital labor rights, it came about in large part because many of those stingy, unfair clients were academics. With small budgets for surveys and for preparing complex machine learning algorithms, researchers were often paying an insultingly token sum to the workers they recruited, something Dynamo’s rules and guidelines for ethical academic requests argue hurts the quality of their research by limiting their labor pool to the truly desperate and ill-qualified.

It’s hard to know what’s worse: the fact that we give so little funding to researchers that they have to rely on strangers willing to work for scraps, or that academics are fine with the notion of paying the equivalent of prison odd-job wages to their remote assistants. Part of the problem is that the issues are interdependent. Many academics can’t afford to pay more and still meet their targets for sufficient survey responses or machine learning algorithms’ training set sizes. The Turkers most qualified for the job can’t afford to accept less than 10 cents a minute, which doesn’t sound like much until you realize that 15,000 units of work taking half an hour each come out to $45,000 or so, a hefty chunk of many grad students’ budgets. Something’s gotta give, and without more money from universities and states, which is highly unlikely, academics will either keep underpaying the crowds they recruit, or end up doing less ambitious research, if not less research in general…
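The back-of-the-envelope math behind that $45,000 figure, using the numbers above (the script and variable names are mine, just restating the post’s figures):

```python
# Cost of a crowdsourced study at the 10-cents-a-minute floor discussed
# above: 15,000 tasks of half an hour each. Rates kept in whole cents to
# avoid floating-point rounding surprises.
RATE_CENTS_PER_MINUTE = 10    # the minimum qualified Turkers can accept
MINUTES_PER_TASK = 30
TASKS = 15_000

cost_per_task = RATE_CENTS_PER_MINUTE * MINUTES_PER_TASK / 100  # $3.00 per unit
total = cost_per_task * TASKS

print(f"${total:,.0f}")  # $45,000 -- matching the figure in the text
```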

rainbow flag splash

Last year, a study conducted by poli sci grad student Michael LaCour showed that a simple conversation with a canvasser who talked to people about marriage equality and then identified as gay was enough to sway minds toward the acceptance of same-sex marriage. This was an odd result because people don’t tend to change their views on things like homosexuality after a brief conversation with a stranger, no matter how polite the stranger is. However, the data in the paper was very convincing, and it may have been entirely possible that the people surveyed hadn’t thought much about marriage equality and, upon meeting a gay person who didn’t fit the toxic stereotype propagated by the far right, wanted to seem supportive to meet social expectations, or might’ve even been swayed off the fence toward equality. After all, the data was there, and it looked so convincing and perfect. In fact, it looked a little too perfect, particularly when it came to just how many people seemed open to talking to strangers who randomly showed up at their doors, and how inhumanly consistent their voiced opinions had been over time. It was just… off.

When doing a social science experiment, the biggest stumbling block is the response rate, and how small it usually is. Back in my undergrad days, I remember freezing my tail end off in the middle of an Ohio winter trying to gather responses for a survey on urban development, and collecting just ten useful responses in three hours. But LaCour, unlike me, was armed with money and able to pay up to $100 for each respondent’s time, so he claimed to enroll 10,000 or so people with a 12% response rate. That’s a problem because his budget would have had to be over $1 million, far more than he had, and a 12% rate on the first try simply doesn’t happen; attempts to replicate the study yielded less than a 1% response rate even when money was involved. Slowly but surely, as another researcher and his suspicious colleagues looked deeper, signs of fraud mounted until the conclusion was inescapable. The data was a sham. It looked so fantastically stable and sound because no study was actually done.
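The budget red flag can be made concrete with the figures quoted above; all numbers are the post's, and the contacted-households estimate is a rough sketch assuming payment went to respondents only:

```python
# Sanity-checking the claimed study against its implied budget,
# using the figures quoted in the text.
PAY_PER_RESPONDENT = 100   # dollars, the top rate LaCour claimed to pay
RESPONDENTS = 10_000       # enrolled participants
RESPONSE_RATE = 0.12       # the claimed 12% response rate

implied_budget = PAY_PER_RESPONDENT * RESPONDENTS   # $1,000,000
people_contacted = RESPONDENTS / RESPONSE_RATE      # ~83,333 doors knocked

print(f"implied budget: ${implied_budget:,}")
print(f"people contacted: {people_contacted:,.0f}")
```

A seven-figure payout and over 80,000 households contacted are the kind of logistics no grad student can quietly fund, which is exactly what made the numbers suspicious.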

New York Magazine has the details on how exactly the study came undone, and some parts of the story, held up in the comments as proof of universities’ supposed grand Marxist-homosexual conspiracy to turn education into anti-capitalist and pro-gay propaganda, as one is bound to expect, actually shine a light on why it took so long for the fraud to be discovered. It’s easy to just declare that researchers didn’t look at the study too closely because they wanted it to be true, that finding empirical proof that sitting a homophobe down with a well-dressed and successful gay person for half an hour would solve social ills was so tempting, no one wanted to question it. Easy, but wrong. If you’ve ever spent time with academics or tried to become one in grad school, you’d know that the reason it took exceptional tenacity to track down and expose LaCour’s fraud is that scientists, by and large, are no longer paid to check, review, and replicate others’ work. Their incentive is to generate new papers and secure grants to pay for their labs and administrators’ often outrageous salaries, and that’s it.

Scientists have always lived by the paradigm of “publish or perish”: as long as you publish a constant stream of quality work in good journals, your career continues; once you stop, you are no longer relevant or necessary, and should quit. But nowadays, the pressure to publish to get tenure and secure grants is so strong that the number of papers on which you have a byline more or less seals your future. Forget doing five or six good papers a year; no one really cares how good they were unless they’re Nobel Prize worthy. You’re now expected to have a hundred publications or more when you’re being considered for tenure. Quality has lost to quantity. It’s one of the big reasons why I decided not to pursue a PhD despite having the grades and more than enough desire to do research. When my only incentives would be to churn out volume and try to hit up DARPA or the USAF for grant money against another 800 voices as loud and every bit as desperate to keep their jobs as mine, how could I possibly focus on quality and do bigger, more ambitious projects based on my own work and the current literature?

And this is not limited to engineering and the hard sciences; social science has the same problems. Peer review is done on a volunteer basis, papers can coast through without any critical oversight, fraud can go unnoticed and fester for years, and all academic administrators want is to keep pushing scientists to churn out more papers at a faster and faster rate. Scientists are moving so quickly they’re breaking things, and should they decide to slow down and fix one of the things that’s been broken, they get denied tenure and tossed aside. Likewise, those who bring in attention and money, and whose research gets into top-tier journals no matter how, get a lot of political pull, and fact-checking their research not only interferes with the designated job of cranking out new papers in bulk, it also draws ire from the star scientists in question and their benefactors in the administration, which can cost the fact-checkers their careers. You could not build a better environment to bury fraud than today’s research institutions unless you started to normalize bribes and political lobbyists commissioning studies to back their agendas.

So scientists didn’t check LaCour’s work not because they wanted to root for gay marriage with all their hearts after being brainwashed by some radical leftist cabal in the 1960s; they didn’t check his work because their employers give them every possible incentive not to, unless they stumble into it while working on the same exact questions, which is what happened when Broockman found the evidence of fraud. And what makes this case so very, very harmful is that I doubt LaCour is such a staunch supporter of gay rights that he committed his fraud in the name of marriage and social equality. He just wanted to secure his job and did it by any means he thought necessary. Did he give any thought to how his dishonesty impacts the world outside of academia? Unlikely. How one’s work affects the people outside one’s ivory tower is very important, especially nowadays, when an alarming majority of those exposed to scientists’ work see them as odd, not quite human creatures isolated from everyday reality, and will fault them en masse for their colleagues’ shortcomings or dishonesty.

Now, scientists are well aware of the problem I’ve been detailing, and there is a lot of talk about some sort of post-publication peer review, or even making peer review compensated work with the express purpose of weeding out bad papers and fraud, not just something done by volunteers in their spare time. But that’s like trying to cure cancer by treating just the metastatic tumors rather than with aggressive resection and chemotherapy. Instead of measuring the volume of papers a scientist has published, we need to develop metrics for quality. How many labs found the same results? How much new research sprang from these findings, based not only on the direct citation count, but on citations of the research which cites the original work? We need to reward not the ability to write a lot of papers, but ambition, scale, and accuracy. When scientists know that a big project and a lot of follow-up work confirming their results is the only way to get tenure, they will be very hesitant to pull off brazen frauds, since thorough peer review would now be one of scientists’ most important tasks, rather than an afterthought in the hunt for more bylines…
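The quality metric sketched above, counting not just direct citations but citations of the work that cites the original, amounts to a breadth-first walk over a citation graph. Here is a minimal sketch of that idea; the `citation_reach` function, the toy graph, and the two-level cutoff are all hypothetical illustrations, not any real bibliometric standard:

```python
from collections import deque

def citation_reach(cites, paper, max_depth=2):
    """Count distinct papers that cite `paper` directly or through a
    chain of citations up to `max_depth` links long.
    `cites` maps each paper to the set of papers that cite it."""
    seen = {paper}
    reached = set()
    queue = deque([(paper, 0)])
    while queue:
        current, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't follow citation chains past the cutoff
        for citer in cites.get(current, ()):
            if citer not in seen:
                seen.add(citer)
                reached.add(citer)
                queue.append((citer, depth + 1))
    return len(reached)

# Hypothetical toy graph: B and C cite A; D cites B; E cites D.
graph = {"A": {"B", "C"}, "B": {"D"}, "D": {"E"}}
print(citation_reach(graph, "A"))  # direct (B, C) plus second-order (D) = 3
```

A metric like this rewards work that other research actually builds on, rather than raw publication volume, which is the shift the paragraph argues for.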