Archives For science

paper war

Social science gets a bad rap not only because it sometimes makes us confront some very ugly truths about human nature, but because its studies can be very difficult to reproduce, so much so that a project undertaking to do just that found it couldn’t get the same results as more than half of the papers it tried to replicate. But ironically enough, an effort to replicate the replication did not succeed either. For those having trouble following this, let’s recap. Researchers trying to figure out how many social science papers can be reproduced didn’t conduct a study others were able to reproduce themselves. That’s a disaster on a meta level, but apparently, it’s more or less to be expected given the subject matter, measurement biases, and flaws involved. In a study challenging the supposedly abysmal replication rate reported by what’s known as the Replication Project, it quickly becomes evident that the guidelines by which the tested studies failed were simply too rigid, going so far as to neglect clearly stated uncertainty and error margins, and to perform some experiments using different methods than the papers they were trying to replicate.

Had the Replication Project simply followed the studies carefully and included the papers’ error bars when comparing the final results, it would have found over 70% of the replication attempts successful. That still doesn’t sound great, with more than one in four experiments not really panning out a second time, but that’s the wrong way to think about it. Social sciences are trying to measure very complicated things and they won’t get the same answer every time. There will be lots and lots of noise until we uncover a signal, and that’s really what science does. Where a quantification-minded curmudgeon sees failed replication attempts, a scientist sees failures that can serve as lessons in what not to do in a future experimental design. It would’ve been great to see the much desired 92% successful replication rate the Replication Project set as the benchmark, but that number reduced the complexity of doing bleeding edge science, which often needs to get it wrong before it gets it right, to the equivalent of answering questions on an unpleasantly thorough pop quiz. Add the facts that the project’s researchers refused to account for something as simple as error bars when rendering their final judgments, and that they would once in a while neglect to follow the designs they were testing, and it’s difficult to trust them.
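To make the error-bar point concrete, here’s a minimal sketch of judging a replication by whether the two studies’ 95% confidence intervals overlap, rather than demanding identical point estimates. The effect sizes and standard errors below are made-up illustrations, not numbers from any of the papers involved:

```python
# Minimal sketch: judge a replication by overlapping 95% confidence
# intervals rather than demanding an identical point estimate.
# The effect sizes and standard errors below are invented examples.

def interval(effect, se, z=1.96):
    """95% confidence interval for an effect estimate."""
    return (effect - z * se, effect + z * se)

def consistent(original, replication):
    """True when the two studies' intervals overlap at all."""
    lo1, hi1 = interval(*original)
    lo2, hi2 = interval(*replication)
    return lo1 <= hi2 and lo2 <= hi1

# (effect size, standard error) pairs -- hypothetical numbers
original = (0.40, 0.10)      # original paper's reported effect
replication = (0.22, 0.12)   # replication's weaker effect

print(consistent(original, replication))  # overlapping intervals -> True
```

Under this looser but arguably fairer criterion, a replication that finds a weaker version of the same effect still counts as a success, which is exactly the difference between the Replication Project’s scoring and its critics’.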

Where does this leave us? Well, there is a replication problem in the social sciences, so much so that studies claiming to be able to measure it are themselves flawed and difficult to replicate. There are constant debates about which study got it right and which didn’t, and we can choose to see this as a huge problem we have to tackle to save the discipline. Or we can remember that this back and forth on how well certain studies hold up over time, and whose paper got it wrong and whose got it right, is exactly what we want to see in a healthy scientific field. The last thing we want is researchers not calling out studies they see as flawed, because we’re trying to find out how people think and societies work, not hit an ideal replication benchmark. It’s part of that asinine, self-destructive trend of quantifying the quality out of modern scientific papers by measuring a bunch of simple, irrelevant, or tangential metrics to determine the worth of the research being done, and it really needs to stop. Look, we definitely want lots of papers we can replicate at the end of the day. But far more important than that, we want to see that researchers are giving it their best, most honest, most thorough try, and if they fail to prove something, or we can’t easily replicate their findings, that could be even more important than a positive, repeatable result.

tornado

Nowadays, when severe weather strikes, the news immediately starts asking if global warming was responsible for the event they just covered, which is generally the wrong question to ask in the first place. Global warming itself is not going to trigger a particular storm system; rather, it’s going to meddle with storms’ frequency and severity depending on your regional climate, because the world is a very big and complicated place, and a worldwide temperature rise of one degree will affect different places on Earth in different ways. This is what allows deniers to say that one glacier melting slower than another, or changing shape, means global warming isn’t happening, as if they should all melt at the same rate, ignoring that glaciers’ shape, location, and composition play a huge role in how they will behave. So what should we expect to happen when we look at an event in one region of the world defined by a very particular kind of storm system: tornado outbreaks in a country where there are entire seasons during which they’re very likely to happen? There’s bound to be an uptick in how many tornadoes happen and how powerful they get, right?

Just like everything else in science, the answer isn’t quite cut and dried. While the typical number of outbreaks held roughly steady at 20 per year over the last 60 years, the average number of tornadoes per outbreak rose by 50%, as, interestingly enough, did the variance per outbreak. In short, we can’t find a change in the number of outbreaks, and the ones that spawn fewer tornadoes grew less intense over more than half a century, while the more intense ones have gotten really extreme, with far more tornadoes. Rather than increasing in number in a straight line, there are now extreme swings in how many tornadoes are born from storm system to storm system. It’s an interesting result, though not a completely bizarre one. After all, tornadoes require a precise sequence of events to happen, and North America is one of the few places where warm, moist air from the sea and cold, dry air from the Arctic can collide across vast swaths of land, forming the powerful supercells that can spawn them. So if global warming is having any effect on them whatsoever, making tornado outbreaks more inconsistent as more energy is dumped into typical regional weather patterns over decades is definitely not out of the question.
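That pattern, a flat outbreak count hiding a rising mean and a rising variance in tornadoes per outbreak, is easy to illustrate with invented numbers (the figures below are a toy example, not the NOAA data):

```python
# Toy illustration of the pattern described above: the mean and the
# variance of tornadoes per outbreak both climb even while the number
# of outbreaks stays flat. All numbers here are invented.
from statistics import mean, pvariance

early_era = [6, 7, 8, 7, 6, 8, 7, 9, 6, 8]        # tornadoes per outbreak, steady
late_era = [4, 14, 5, 16, 6, 15, 5, 17, 7, 19]    # same outbreak count, wild swings

print(mean(early_era), pvariance(early_era))  # modest mean, small variance
print(mean(late_era), pvariance(late_era))    # ~50% higher mean, much larger variance
```

The point of separating the two statistics is that an average alone would hide the story: the weaker outbreaks got weaker and the stronger ones got much stronger, which shows up as variance, not just as a higher mean.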

Since the research is limited to NOAA reports for the United States, it’s prudent to ask about an uptick in tornadoes in Canada, which also has a Tornado Alley, because a border isn’t going to suddenly stop a storm system to fill out customs forms and turn it away for lacking government-issued identification to enter another nation. But there’s a bit of a controversy about whether that’s happening, because while on paper there are more tornadoes, scientists are hedging their bets by noting that they’re often happening in less populated regions and are only now being spotted more often and detected more accurately, so they’re not sure what the baseline was over the years. If those areas were much more heavily populated and more active, like they are in the U.S., there would be better tracking and a more definitive answer. And all this brings us back to our original question of whether global warming is fueling tornadoes. The answer seems to be that it’s too early to tell, but over the last 60 years, the more violent swings in tornado outbreaks seem to point to it as a very plausible culprit. As always, the more data we have, the more complete our picture, but the first impression is that when weather turns violent, excess heat in our atmosphere can make an already bad storm even more extreme…

rage and fury

While my play-along-at-home AI project hit a little snag as I’m still experimenting with the newest cross-platform version of a framework which might not be ready for prime time just yet, why don’t we take a look at the huge controversy surrounding the open journal PLOS ONE and why, no matter how it all happened, the fact that it did is alarming? If you don’t know, the journal has been savaged on social media by scientists for publishing a Chinese study on the dexterity of the human hand with an explicit reference to God in the abstract. Some reactions have been so over the top that an ardent creationist watching from the sidelines could collect the outraged quotes and use them in a presentation on how scientists get reflexively incensed when anyone brings up God because they’re all evil atheists who can’t even bear to hear him invoked. But at its core, the scientists’ outrage has less to do with the content of the paper than with how badly broken the peer review mechanism is in a world of publish-every-day-or-perish, when the tenure committee that decides your fate scoffs at anything less than 100 papers in journals…

For what it’s worth, the paper’s writers say that they flubbed their translation into English and a reference to “the Creator” was really supposed to say evolutionary nature. I’m not sure that’s true, because while on the surface China is an atheistic country, there are plenty of Christians who live there, and the rest of the paper’s English seems perfectly fine and proper. The capitalized reference seems too deliberate to be there by mistake, almost as if someone deliberately snuck it in and the team is now covering for this investigator by faking sudden bouts of broken English in a paper that doesn’t actually suffer from any. Obviously, there’s no prohibition against a scientist being religious and conducting exemplary research. Francis Collins is a devout Evangelical whose free time was spent preaching Templeton’s gospel of accommodationism, but his work with the Human Genome Project is critical in modern biology. Ken Miller is a devoted Catholic, but he’s tirelessly kept creationism out of classrooms in the Midwest and separates his personal beliefs from his scientific work and advocacy. And that’s what all scientists of faith try to do: maintain a separation between religion and work in the public eye, and when they fail, an editor should be there to review the paper and point that out before publishing it for public consumption.

So that’s what the fuming on social media is all about: the lack of editorial oversight. Scientists who wanted to submit their research to PLOS ONE, or already have, are now worried that it will be considered a junk journal, and that their time and effort publishing there will be wasted. Not only that, but they’re worried about the quality of papers they cited from the journal as well, since an editorial failure during peer review means that outright fraud can go undetected and require other scientists to take huge professional risks to uncover. Since peer review is supposed to keep a junk paper out of a good journal by pointing out every design flaw, obvious bias, instance of cherry-picking, and inconsistency that signals fraud or incompetence, and it’s the only mechanism that exists to do so before publication, any signs that the reviewers and editors are asleep at the wheel, or only going through the motions, are incredibly alarming to scientists. Yet, at the same time, I can sort of understand why this kind of thing happens. Reviewers are the gatekeepers of what qualifies as scientific literature and their job is to give scientists hell. But they’re not paid for it, their work for journals is not appreciated very much, and despite their crucial role in the scientific process, the fact of the matter is that they’re volunteers doing a thankless task out of a sense of duty.

While the popular perception of a scientist is that research is a cushy gig, the reality is that the majority of scientists are overworked, underpaid, and expected to hand out their time for free in the service of highly profitable journals that charge an arm and a leg to publish scientists’ own content. Any person, no matter how passionate or excited about his or her work, is not going to be extremely motivated and exceedingly thorough under these circumstances. Until we start properly appreciating reviewers for their work and rewarding them for it, and until colleges finally realize that it’s dangerous and ridiculous to encourage scientists to write ten papers where one good one would’ve sufficed, mistakes like PLOS ONE’s are just going to keep happening, with the real review taking place on social media rather than being done by reviewers and editors, as it should’ve been. We can’t expect quality from peer review in the future if we’re not willing to make the task logistically reasonable and professionally appreciated, much like we shouldn’t expect to walk into any used car dealership and drive off in a brand new Ferrari for the price of an old Kia. Like with so many things in life, you get what you pay for when someone has to work for your benefit.

hot magnetar

Fast radio bursts, or FRBs, are quickly becoming one of the most interesting things out there in deep space, and the more we study them, the stranger the questions they raise. In less than a year, the media declared them to be alien broadcasts and then, a few days later, just random flukes, while actual scientists confirmed not only that they’re very real, but that they’re coming from as far as six billion light years away and shed light on matters of cosmological significance. But for all the new and ever more detailed observations, we still have little clue what’s causing them, and my favorite theory, involving some really extreme physics, might turn out to be flawed according to a new paper which finally has some hard data about the objects generating the FRBs. You see, any theory involving a cataclysmic event emitting one of these bursts means that the signal can come from a particular location only once, because the object that created it was destroyed, but apparently, that’s not what we’re seeing. In fact, the same object can generate multiple, intermittent FRBs, meaning that despite their energy, their source is still very much there.

After studying a single burst called FRB 121102, astronomers around the world saw that it was repeating. There was no regular pattern, but it definitely recurred ten times according to what’s known as the dispersion measure: disruptions in the signal caused by its path through the dust and gas of space on its way to us, which, as recently mentioned, confirmed that we are able to weigh the universe correctly. Armed with the knowledge that the signal is repeating, the team’s focus then shifted to identifying what could create such powerful bursts and live to do it again, and then nine more times. Well, the researchers found the burst while doing a survey of pulsars, still active neutron stars belching death beams and radio signals as they cool and settle down, and one particular type of neutron star seems to fit the bill as an FRB progenitor: a magnetar. It’s a neutron star with a magnetic field so powerful that it could brick your electronics and erase the data on your credit cards from 120,000 miles away. The most powerful magnets ever built have less than a hundred millionth of that strength, and the planet’s magnetic field is a quintillionth of it. And when magnetars undergo a quake, we can feel it from 50,000 light years away.
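As a quick aside on how the dispersion measure works: the thin plasma between us and the burst delays lower radio frequencies more than higher ones by a well-known amount per unit of DM, so the lag across the receiver’s band tells you how much material the signal crossed. A sketch using the standard cold-plasma delay formula, with FRB 121102’s widely reported DM of roughly 557 pc cm⁻³:

```python
# Dispersion delay sketch: lower radio frequencies arrive later, and
# the size of the lag encodes how much plasma the burst crossed.
# Uses the standard dispersion constant, ~4.149 ms GHz^2 per pc cm^-3.
K_DM = 4.149  # ms GHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm, freq_ghz):
    """Arrival delay at freq_ghz relative to an infinitely high frequency."""
    return K_DM * dm / freq_ghz ** 2

dm = 557.0  # FRB 121102's dispersion measure, roughly, in pc cm^-3
print(dispersion_delay_ms(dm, 1.4))  # delay at 1.4 GHz, ~1179 ms
```

Because the DM of ten bursts from the same patch of sky came out essentially identical, astronomers could be confident they were seeing one repeating source rather than ten coincidental ones.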

Ultimately, the team thinks that FRBs are magnetic aftershocks of these magnetar quakes. The energy from the quake itself is too small for us to easily detect, but the powerful magnetic fields are disrupted enough to emit a scream across time and space when they reconnect. Consider that neutron stars are like an incredibly tightly packed coil, with the mass of our sun crammed into a sphere 15 to 20 miles across, surface temperatures in the millions of degrees, and an internal one soaring to over 1.8 billion at the core. The units of measurement don’t even matter at this point because the numbers are just so huge. A quake that causes just a millimeter crack in the crust registers as a magnitude 23 on the Richter scale. The biggest possible natural earthquake can’t exceed a 9.2, and the scale itself is logarithmic, meaning that an almost invisible motion of a magnetar’s surface can easily unleash ten trillion times the energy our planet can muster at its worst. It seems this stellar monster can definitely produce a burst that seems apocalyptic, then turn around and do it again with ease. As awesome as neutron star collapse theories of FRBs were, distant, quaking magnetars seem to be a much more solid candidate for their origins.
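Since Richter magnitudes are logarithmic in ground-motion amplitude, the gap between those two numbers is easy to put into rough figures. A back-of-the-envelope sketch (each whole magnitude step is a tenfold increase in shaking; the released energy actually climbs even faster, at roughly a factor of 10^1.5 per step):

```python
# Back-of-the-envelope comparison of a magnitude 23 magnetar quake
# with the biggest possible earthquake. Richter magnitudes are log10
# of ground-motion amplitude, so each step of 1.0 means 10x the shaking.
magnetar_quake = 23.0
biggest_earthquake = 9.2

amplitude_ratio = 10 ** (magnetar_quake - biggest_earthquake)
print(f"{amplitude_ratio:.1e}")  # ~6.3e13, tens of trillions of times the motion
```

That tens-of-trillions figure is why the exact units stop mattering: a crack you couldn’t see with the naked eye registers on a scale where our whole planet tops out fourteen magnitudes lower.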

global warming sea rise

In his ongoing fight against the cognitive dissonance of working to downsize the very entity that pays his salary, and against NOAA’s research on global warming, Rep. Lamar Smith wants government scientists to turn over more and more papers that will supposedly prove they’re faking their research for political gain, refusing to accept even the remote possibility that maybe there is at least a sliver of a fragment of a chance that they’re being completely honest and accurate. But this is what happens when a politician has every incentive to cry conspiracy and refuse to give experts the benefit of the doubt. Still, the first rule of a good public fishing expedition is to have at least a modicum of plausible deniability when called out on it, and rather than back off even a little bit, in an effort to keep appeasing his most zealous anti-science constituents, Smith filed a list of keywords he wants NOAA to use when searching for documents he wants to see as part of his “investigation,” a list which blatantly shows him digging really hard for out-of-context “proof” of a conspiracy theory repeated almost daily in the echo chambers of conservative radio shows.

Basically, anything mentioning President Obama, the talks in Paris, ocean buoys, or the UN is to be turned over for quote-mining to create the next e-mail scandal, because to Smith’s rabidly denialist supporters, the previous one was a smoking gun ignored by the media, when in reality the press obsessed over those e-mails for months and barely covered the hearings in which they were shown to be nothing more than fodder for cherry-picked, incriminating-sounding quotes. If anyone wanted to give Smith the benefit of the doubt, his latest demands are proof that he is only interested in partisan spectacle and conspiracy-mongering. Just like rabid creationists who cling to the same old, long debunked canards, global warming denialists will continue to regurgitate the same old cherry-picked charts, cite the same non-scandals of their own invention, and pretend that they haven’t been shown wrong by everyone and their grandma twice already. To them, the issue of global warming is not a scientific one; it’s a plot by a sinister global elite, the New World Order, to strip them of their freedoms and property. Politicians like Smith are either cynically exploiting these hysterical fears, or falling for them. And I’m really not sure which is worse…

counting the days

Nowadays, not only is there an app for whatever you want to do, but it can count how many times or how intensely you do it. It’s all part of the marketing pitch for the idea of The Quantified Self: an easy to follow, real-time analysis of your habits and patterns which should ideally help you be a better you, with seemingly objective progress tracking showing how well you’re doing. However, does quantifying everything you do mean that you’re improving your stats at the expense of the happiness of doing the tasks being measured? Jordan Etkin from Duke University thinks it may, after experimenting on 105 students and seeing them report getting less joy out of doing simple tasks that were being measured than out of just doing them. We know that enjoyment depends on an individual’s balance of external and intrinsic motivation to do something, and the experiments set out to measure just how much knowing that one is being quantified affects the intrinsic rewards of doing something, so we’d know how to quantify the things worth measuring without destroying people’s motivation for performing the task in the first place.

From a simple coding standpoint, it’s easy to record a data point to a persistent store. You can even do it behind the scenes in a way that doesn’t detract from an app’s typical functionality. But what are you going to do with that data? Why is it useful? If you can’t think of a reason why you should store it, the correct approach is to ignore it. In much the same way, Etkin measured the number of fish shapes colored by students, or the number of steps they took, while paying the same token sum to the measured experimental group and to the free-to-do-whatever control group performing the same tasks. In effect, she placed an additional burden on one group with quantification, because the group coloring shapes had to click off every finished one, while the walking group had to check their pedometers. Normal, even fun activities turned into, well, work. More shapes got colored and more steps were taken, but enjoyment scores were lower. A follow-up experiment measuring reading in a work-oriented way and just for fun saw the same pattern. When quantified, more done equals less enjoyed.
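That behind-the-scenes recording really can be done in a handful of lines. A minimal sketch, assuming a hypothetical step-tracking app that quietly logs measurements to a local SQLite file:

```python
# Minimal sketch of quietly persisting a data point, assuming a
# hypothetical step-tracking app that logs events to local SQLite.
import sqlite3
import time

def record_event(db_path, metric, value):
    """Append one timestamped measurement without interrupting the app."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS events "
            "(ts REAL, metric TEXT, value REAL)"
        )
        conn.execute(
            "INSERT INTO events VALUES (?, ?, ?)",
            (time.time(), metric, value),
        )

record_event("tracker.db", "steps", 42)  # one quiet write, no UI involved
```

Which is exactly the point: capturing the data point is the trivial part. Deciding why you’re capturing it, and what it will cost the person being measured, is the hard part Etkin’s study gets at.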

In some ways, this is common sense. Take something intrinsically fun and turn it into something measured, analyzed, and dissected, and it’s a lot less appealing. This is why people with really, really serious cooking chops and talent may never want to become professional chefs: their outlet for stress now becomes work and is tied to paying the bills. But if you give them the proper external incentive, like total creative freedom over the menu, or a high enough salary to quit their current jobs at a profit, their perspective may change. What Etkin did was confirm the need for a motivation to measure something, because if students who read more, or colored more, or walked more steps got bigger payments, the process would be a lot more fun since they’d get to look forward to being rewarded for the additional effort of measuring and logging data. Same as people trying to lose weight logging their calories and exercise, or factory workers moving as quickly as possible to crank out more widgets while getting paid on a per widget basis.

So don’t buy that FitBit because you’re curious about how many steps you take, buy it because you want to take 20% more steps than you usually do for a week and then reward yourself with something you wanted to buy when you hit that goal. And if you’re a manager and want to see an increase in your employees’ productivity, don’t just measure them and reprimand them if the numbers don’t hit your goal, give them something to look forward to, like a company lunch, or a night out, or adding free snacks to your office kitchen. Otherwise science shows that you’re not going to get much out of them, and considering the research on why people hate their jobs and want to quit, you’d actually be giving them a good reason to start calling recruiters and plotting their escape from your cube farm. Sure, Etkin’s study seems fairly obvious at first blush, but it’s downright maddening how many people don’t actually understand how to effectively quantify a task, especially in the workplace. Programmers are being asked all the time to track this or that simply because we could capture a data point. Maybe after reading about Etkin’s work, people making these requests will think twice about why they’re measuring what they are…

creepy mannequins

Every psychology class mentions Stanley Milgram’s famous experiment to determine the limits of how far people can be pushed to execute horrific orders, and it’s since set the standard for today’s experiments measuring what it takes to awaken the inner sociopath in an otherwise normally functioning brain. We already know that enough money will make you reconsider the natural human aversion to harming others, especially if you don’t actually have to see the pain you inflict firsthand. But what actually goes on in the brains of those who are following orders or inducements to hurt someone? Are they suffering some internal crisis when they harm others, are they simply pushing the button with no sense of agency of their own, or is something more complicated going on? To find out, European researchers repeated Milgram’s experiment with several important modern twists. They added buttons, a tone that sounded when a button was pressed, and electrodes to read the electrical activity inside the participants’ brains while they were doing their part.

Now, Milgram’s inspiration for his research was the excuses of Nazis at Nuremberg who defended themselves by saying that they were simply following orders, so his tests focused on how orders are delivered and the subsequent reactions, making verbal commands a key part of the setup. In this follow-up, how orders were delivered didn’t matter, just the fact that an order was issued, so the researchers played a tone after participants pressed a button they were told to press. If the subjects were making conscious decisions and sticking to them, previous research said, the tone would seem to come notably faster after they pressed the buttons than if they were simply doing something on auto-pilot. We’re not sure why this happens, but accidental events seem to be processed more slowly than intentional ones, which is why gauging the subjects’ subjective sense of how quickly the tone came after they performed the requested or voluntary actions was a crucial part of the experiment. Some were free to choose whether to apply a small electric “shock” to an anonymous victim, take away £20 from him or her, or just press a button that did nothing, serving as the control group. Others were simply told which buttons to push by the researchers.

What they found was quite interesting. First and foremost, the group told what to do reported a longer time between pressing the button and hearing the tone, exactly as expected. This meant that taking orders made them feel less in control of their actions, their brains evaluating what just happened as an involuntary action despite requiring their agency to be carried out. Secondly, a thorough analysis of their EEG patterns showed that they processed their decisions significantly less than the control group, judging by activity known as event-related potential, or ERP, used to determine the cognitive load of an action in response to a stimulus. In other words, ordering someone to perform a task makes them feel as if they’re not actually the ones doing it, and they give the task and its consequences less thought. Revealingly, the topographical maps of the neural activity show the areas where you’d find the prefrontal cortex, the seat of decision-making, as the most activated in both groups, but a lot dimmer for the experimental participants, supporting this notion. As scary as it sounds, it seems that our brains might just be wired to follow orders with less thought and care than we give our own choices. Why? We’ll need more studies to find out, but I’d bet it has to do with us evolving as a social species rather than loners.

head in sand

Here’s an extremely uncomfortable truth no one currently running for office in the U.S., or even remotely considering doing so, ever wants to publicly admit. There are a lot of voters who really, really don’t like experts, scientists, or anyone well educated in anything other than medicine. In their eyes, any sign of intellectualism is not something to cheer or aspire to; to them it’s nothing more than pretension from someone they’re convinced thinks he or she is better than them and feels entitled to tell them what to do. At the same time, they’re extremely paranoid that they will have something valuable or important taken away from them and given to all the undeserving moochers on lower socioeconomic rungs than theirs, convinced that the American poor have already been living it up with free spending money, free food, and free world-class medical care for decades. So when a politician decides to cozy up to this constituency, his best bet is to start witch hunts against their most nightmarish moochers: government-funded scientists.

In his tenure as the chairman of the House Science and Technology Committee, a haven for a disturbing number of peddlers of anti-scientific twaddle, congressman Lamar Smith decided to do exactly that with open-ended fishing expeditions into every possible aspect of scientists’ research, on a quest to find grand conspiracies to publicly squash for the delight of his science-averse, paranoid base. In his investigation of climate scientists working for NOAA, he specified absolutely no instances of misconduct he thinks occurred, asking only for ever more raw data to be provided to him, even though the data and the methods used to analyze it have been on the web for years, provided by NOAA to anyone even slightly curious. But data is not what Smith is really after, because he has no interest in the actual science. He and his donors are upset that updated data for atmospheric warming, gathered from additional sources after years of reviewing more and more observation stations, eliminated the “pause” to which denialists cling. Since the only possibility in their minds is that the data is faked, they want evidence of fakery.

Really, there’s no other way to put it. Smith wants the private communications between the scientists funded by NOAA so he can create another Climategate, which denialists are still convinced was an actual scandal despite the scientists being cleared of any wrongdoing. And if he doesn’t find something badly worded when taken out of context, or something politically incorrect, he will take something he doesn’t understand (which is likely most of what climatologists discuss, and which he is being paid by oil and gas lobbies to continue not understanding), yank it way out of context, and manufacture a scandal out of that. When the chairman of the science committee, which decides on funding for countless basic research projects his nation needs to maintain the top spot for scientific innovation in the world, thinks his job is to harass scientists he doesn’t like because their findings may adversely impact his donors’ businesses, waiting for some pretense to interrogate them to come up, no matter how flimsy, we have a very serious problem. While all abuses of power are bad, abuses by partisan dullards have a certain awfulness about them, as they ridicule what they seem to utterly lack the capacity to understand in the first place.

math prodigy

According to overenthusiastic hacks at Wired, scientists have recently developed a way to scan your brain and predict just how intelligent you are or how good you’ll be at certain tasks. That sounds like the beginning of a dystopian nightmare rather than an actual field of research, one that ends with mandatory brain scans for everyone to “facilitate an appropriate job function” in some dark, gray lab run by medical paper pushers. But it only sounds like this because the writer was more interested in page views than the actual study, which really has nothing to do with one’s intelligence and actually tested whether you could identify someone by scanning how that person’s brain is wired. Rather than trying to develop an IQ test in a box, the researchers put to the test the theory that your brain’s wiring is so unique that a map of it could identify you every bit as well as a fingerprint. Not surprisingly, they found that a high quality fMRI scan of your brain at work performing some standard tests can definitely be used to identify you.

All right, that’s all fine and well; after all, the fMRI scan is basically giving you insight into unique personalities, and no two people’s brains will work the same way. But where exactly does this whole thing about measuring intelligence come into play? Well, the concept of fluid intelligence, mentioned only three times in the study, was brought up as an additional avenue of research in light of the findings, and it revolves around the idea that strong connections between certain parts of the brain will make you notably better at making inferences to solve new problems. Unlike its counterpart, crystallized intelligence (called Gc in neuroscience), fluid intelligence (or Gf) is not what you know, but how well you see patterns and come up with ideas. Most IQ tests today are heavily focused on Gf because it’s seen as a better measure of intelligence, and the elaboration on what exactly the fingerprinting study had to do with predicting Gf was an extended citation of a study from 2012, which found a link between the lateral prefrontal cortex’s wiring to the rest of the brain and performance on standardized tests designed to measure Gf in 94 people.

Here’s the catch though. Even though how well your lateral prefrontal cortex talks to the rest of your brain does account for some differences in intelligence, much like your brain size, it only explains about 5% of those differences. Current theory holds that because your prefrontal cortex functions as your command and control center, what Freud described as the ego, a strong link between it and several other important parts of the brain keeps you on task and lets you solve problems more efficiently. Like a general commanding his troops, it makes sure that every other relevant part of your mind is fully engaged with the mission. But even if that theory is right and your prefrontal cortex is well wired in a larger than median brain, close to 90% of your score on an IQ test can come down to education and other environmental factors, which generally make household income and schooling a better predictor of IQ scores than biology. Although in many ways that’s not terribly accurate either, because learning style and culture also play a role. All we can really conclude is that the interplay between Gf, Gc, and education is very complex.
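"Explains 5% of the differences" is statistics-speak for variance explained, and a quick back-of-the-envelope calculation, assuming the standard IQ scale of mean 100 and standard deviation 15, shows just how weak a predictor at that level really is:

```python
import math

# "Explains 5% of the differences" means r^2 = 0.05,
# which corresponds to a correlation of:
r = math.sqrt(0.05)
print(f"r = {r:.2f}")  # about 0.22

# On the usual IQ scale (sd = 15), a predictor with r^2 = 0.05
# leaves almost all of the spread between people unexplained:
sd_total = 15.0
sd_unexplained = sd_total * math.sqrt(1 - 0.05)
print(f"residual sd = {sd_unexplained:.1f} IQ points")  # about 14.6
```

In other words, even knowing someone's prefrontal wiring perfectly, your predictions of their IQ score would still be off by nearly the full 15-point spread you'd expect from guessing blind.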

We should also take note of a study of popular theories of biological contributors to Gf, which spanned 44,600 people and found no evidence that any combination of fMRI maps has predictive power when it comes to IQ points. In other words, we have a lot of plausible-sounding ideas about the biological origins of intelligence, but because our brains are very plastic, because we are not all on a level playing field when it comes to the amount and quality of education we receive, and because even our longest-running efforts at accurate Gc assessment have shown that we’re really bad at it, studies claiming to predict our IQs using brain scans of 100 college students or fewer are very likely overselling their results. Not only that, but even the studies that do oversell still claim to explain only a tiny fraction of the score differences, because they recognize how small and homogeneous their data sets really are. Not only do we not have an fMRI-based test for intelligence, we’re not even sure one is possible. But those facts bring in far, far fewer page views than invoking Kafkaesque sci-fi lore in a pop sci post…


For as long as there have been conspiracy theories, there have been explanations for why the vast community of people who hang on conspiracy theorists’ every word exists. Some believers might just be paranoid in general. Others may be exercising their hatred or suspicion of a particular group of people, be it an ethnic group or a political affiliation. Others might just want to sound smarter and more incisive than everyone else. Others still seek money and attention, pursuing a stable career of preaching to the tinfoil choir. But none of that answers the really big question about the constant popularity of conspiracy theories throughout the ages. Is there something specific about how believers are wired that makes them more prone to believe? Is subscribing to 9/11 Trutherism, or fearing Agenda 21, or looking for alien ancestry in one’s blood actually a case of a brain that generally sees patterns in randomness, with conspiracy theories just an outlet waiting to tap into that condition? Swiss and French researchers recently decided to try to answer that question by experimenting on college students and members of the public.

First, they evaluated whether their test subjects would detect patterns in truly random coin flips and in doctored ones, with and without priming. Then they asked political questions to measure the degree of conspiratorial thinking and the level of belief in popular theories, such as the notion that the Moon landing was faked or that 9/11 was an inside job of some sort. Obviously, they found that the more conspiratorial a view of politics the subjects took, the more likely they were to be Moon hoaxers and 9/11 Truthers, but paradoxically, that had absolutely no bearing on whether they claimed to see human interference in random patterns of coin flips, or could identify sequences a researcher had manipulated, priming or no priming. In other words, in everyday, low level tasks, the mind of a conspiracy theorist doesn’t see more patterns in randomness. As the researchers themselves put it, for a group of people who like to say that nothing happens by accident, they sure don’t think twice about whether something apolitical and mundane has been randomly arranged.
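For a sense of what "doctored" flips look like, here's a toy simulation, not the researchers' actual stimuli. Fake randomness, whether produced by a human or simulated as below, tends to alternate heads and tails more often than chance would, so genuinely random sequences contain longer streaks. The repeat probability of 0.3 is an arbitrary assumption for illustration.

```python
import random

def longest_run(seq):
    # Length of the longest streak of identical consecutive flips.
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def fair_flips(n, rng):
    return [rng.choice("HT") for _ in range(n)]

def doctored_flips(n, rng, p_repeat=0.3):
    # "Human-like" fake randomness: repeats the last outcome only 30% of
    # the time, so it alternates more often than a fair coin (50%) would.
    seq = [rng.choice("HT")]
    for _ in range(n - 1):
        if rng.random() < p_repeat:
            seq.append(seq[-1])
        else:
            seq.append("H" if seq[-1] == "T" else "T")
    return seq

rng = random.Random(42)
n, trials = 50, 500
avg_fair = sum(longest_run(fair_flips(n, rng)) for _ in range(trials)) / trials
avg_fake = sum(longest_run(doctored_flips(n, rng)) for _ in range(trials)) / trials
print(f"avg longest run, fair: {avg_fair:.1f}, doctored: {avg_fake:.1f}")
```

A fair coin reliably produces noticeably longer streaks than the over-alternating fakes, which is exactly the kind of statistical tell subjects were implicitly being asked to pick up on.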

What does this finding mean in the grand scheme of things? Well, for one, it means that there’s really no single type of person wired for conspiratorial thinking, or whose brain wiring plays an important role in subscribing to conspiracy theories. Instead, it’s more likely that all these theories are extreme manifestations of certain political beliefs or personal fears and dislikes, so the best predictor of being part of the tinfoil crowd is political affiliation. That’s not too terribly surprising when we consider that most climate change denialists who fear the implementation of some sinister, imagined version of Agenda 21 are on the far right, while those terrified of anything involving global vaccination or commercial agreements are on the far left. And while a few popular conspiracy theories overlap, because people are complex and can hold many, many views even when they’re contradictory, you can separate most of the common theories into ones favored by conservatives and ones favored by liberals. As for what biology is involved in that, well, that’s been a minefield of controversy and statistical maelstroms for a long time…