Last year, a study conducted by political science grad student Michael LaCour showed that a simple conversation with a canvasser who talked to people about marriage equality and then identified as gay was enough to sway minds toward accepting same-sex marriage. This was an odd result, because people don’t tend to change their views on things like homosexuality after a brief conversation with a stranger, no matter how polite the stranger is. However, the data in the paper was very convincing, and it seemed entirely possible that the people surveyed hadn’t given marriage equality much thought and, after meeting a gay person who didn’t fit the toxic stereotype propagated by the far right, wanted to seem supportive to meet social expectations, or had even been swayed off the fence toward equality. After all, the data was there, and it looked so convincing and perfect. In fact, it looked a little too perfect, particularly when it came to just how many people seemed open to talking to strangers who randomly showed up at their doors, and how inhumanly consistent their voiced opinions had been over time. It was just… off.
When doing a social science experiment, the biggest stumbling block is the response rate, which is usually small. Back in my undergrad days, I remember freezing my tail end off in the middle of an Ohio winter trying to gather responses for a survey on urban development, and collecting just ten useful ones in three hours. But LaCour, unlike me, was supposedly armed with money and able to pay up to $100 for each respondent’s time, which let him enroll some 10,000 people at a 12% response rate. That’s a problem on two counts: his budget would have had to top $1 million, far more than he had, and a 12% rate on the first try simply doesn’t happen. Attempts to replicate the study yielded less than a 1% response rate even with money involved. Slowly but surely, as another researcher and his suspicious colleagues dug deeper, signs of fraud mounted until the conclusion was inescapable. The data was a sham. It looked so fantastically stable and sound because no study was actually done.
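The arithmetic behind that red flag is simple enough to sketch. All the figures below are the ones quoted above; the variable names are mine:

```python
# Back-of-the-envelope check on the claimed study, using the article's figures.
respondents = 10_000     # people LaCour claimed to have enrolled
incentive = 100          # dollars reportedly offered per respondent
response_rate = 0.12     # claimed response rate

minimum_budget = respondents * incentive          # incentives alone, no staff or overhead
contacts_needed = respondents / response_rate     # doors canvassers would have to knock on

print(f"Incentives alone: ${minimum_budget:,}")       # $1,000,000
print(f"Doors knocked on: {contacts_needed:,.0f}")    # 83,333
```

Even this lower bound, which ignores canvasser pay and overhead, already exceeds any budget a grad student could plausibly have had.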
New York Magazine has the details on how exactly the study came undone, and some parts of the story, held up in the comments as proof of universities’ supposed grand Marxist-homosexual conspiracy to turn education into anti-capitalist and pro-gay propaganda, as one is bound to expect, actually shine a light on why it took so long for the fraud to be discovered. It’s easy to declare that researchers didn’t look at the study too closely because they wanted it to be true; that empirical proof that sitting a homophobe down with a well dressed, successful gay person for half an hour would solve social ills was so tempting that no one wanted to question it. Easy, but wrong. If you’ve ever spent time with academics, or tried to become one in grad school, you know that it took exceptional tenacity to track down and expose LaCour’s fraud because scientists, by and large, are no longer paid to check, review, and replicate others’ work. Their incentive is to generate new papers and secure grants to pay for their labs and administrators’ often outrageous salaries, and that’s it.
Scientists have always lived by the paradigm of “publish or perish”: publish a constant stream of quality work in good journals and your career continues; stop, and you’re no longer relevant or necessary and should quit. But nowadays the pressure to publish to get tenure and secure grants is so strong that the number of papers on which you have a byline more or less seals your future. Forget doing five or six good papers a year; no one really cares how good they were unless they’re Nobel Prize worthy. You’re now expected to have a hundred publications or more when you’re being considered for tenure. Quality has lost to quantity. It’s one of the big reasons I decided not to pursue a PhD despite having the grades and more than enough desire to do research. When my only incentives would be to churn out volume and hit up DARPA or the USAF for grant money against 800 other voices just as loud and every bit as desperate to keep their jobs, how could I possibly focus on quality and do bigger, more ambitious projects based on my own work and the current literature?
And this is not limited to engineering and the hard sciences; social science has the same problems. Peer review is done on a volunteer basis, papers can coast through without any critical oversight, fraud can go unnoticed and fester for years, and all academic administrators want to do is keep pushing scientists to churn out more papers at a faster and faster rate. Scientists are moving so quickly they’re breaking things, and should they decide to slow down and fix one of the things that’s been broken, they get denied tenure and tossed aside. Likewise, those who bring in attention and money, and whose research gets into top tier journals no matter how, gain a lot of political pull. Fact checking their research not only interferes with the designated job of cranking out new papers in bulk, it also draws ire from the star scientists in question and their benefactors in the administration, which can cost the fact checkers their careers. You could not build a better environment for burying fraud than today’s research institutions, short of normalizing bribes and political lobbyists commissioning studies to back their agendas.
So scientists didn’t leave LaCour’s work unchecked because they were rooting for gay marriage with all their hearts after being brainwashed by some radical leftist cabal in the 1960s; they didn’t check his work because their employers give them every possible incentive not to, unless they stumble into it while working on the same exact questions, which is what happened in David Broockman’s case when he found evidence of fraud. And what makes this case so very, very harmful is that I doubt LaCour is such a staunch supporter of gay rights that he committed his fraud in the name of marriage and social equality. He just wanted to secure his job and did it by any means he thought necessary. Did he give any thought to how his dishonesty impacts the world outside of academia? Unlikely. How one’s work affects the people outside one’s ivory tower is very important, especially nowadays, when an alarming majority of those exposed to scientists’ work see them as odd, not quite human creatures isolated from everyday reality, and will fault them en masse for their colleagues’ shortcomings or dishonesty.
Now, scientists are well aware of the problem I’ve been detailing, and there is a lot of talk about some sort of post-publication peer review, or even making peer review compensated work with the express purpose of weeding out bad papers and fraud, rather than something volunteers do in their spare time. But that’s like trying to cure cancer by treating just the metastatic tumors instead of with aggressive resection and chemotherapy. Instead of measuring the volume of papers a scientist has published, we need to develop metrics for quality. How many labs found the same results? How much new research sprang from the findings, based not only on direct citation counts but on citations of the research that cites the original work? We need to reward not the ability to write a lot of papers, but ambition, scale, and accuracy. When scientists know that a big project, and a lot of follow up work confirming their results, is the only way to get tenure, they will be very hesitant to pull off brazen frauds, since thorough peer review would then be one of their most important tasks rather than an afterthought in the hunt for more bylines…
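To make that second-order citation idea concrete, here is a minimal toy sketch. The citation graph, the paper names, and the function are all hypothetical, purely for illustration; a real metric would need a full citation database and far more nuance:

```python
# Toy depth-two impact metric: count direct citations plus citations of the
# papers that cite you, so derivative work that itself spawns research counts.
citations = {                     # paper -> papers that cite it (hypothetical data)
    "original": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E"],
    "C": [], "D": [], "E": [],
}

def second_order_impact(paper, graph):
    """Direct citations plus second-order citations of `paper`."""
    direct = graph.get(paper, [])
    indirect = [c for d in direct for c in graph.get(d, [])]
    return len(direct) + len(indirect)

print(second_order_impact("original", citations))  # 2 direct + 3 indirect = 5
```

A fraudulent paper that no follow-up study can confirm would accumulate little under a metric like this, since replications and derivative work are exactly what it rewards.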