Archives For science

babel fish

When you find yourself in a debate with a partisan ideologue who claims that all higher education is simply anti-American socialist brainwashing, he will often bring up that Noam Chomsky is one of the most cited scholars in the world despite the left wing radical conspiracy theories he adamantly supports in his books. However, the reason Chomsky is cited so often has zilch to do with his politics and everything to do with his study of language, particularly his theory of a universal grammar. According to his work, all human languages share common patterns which we could use to create universal translators and pinpoint the semantic details of each word in a proper context. This idea is particularly popular in computer science, especially in a number of AI experiments, because it could give us algorithms for symbol grounding, a fancy term for deciding exactly what a word is supposed to represent in a given situation. This is one of the fundamental leaps needed for machines to truly understand what humans say.

Of course, as with any theory with the word universal in the title, there’s plenty of criticism about how universal it actually is, and some of it has escalated into a full blown feud among linguists. Critics of the theory have gone as far as to say that universal grammar is whatever Chomsky wants it to be when it’s being debated, which in academia is actually a pretty vicious burn. But that’s rather expected, since a theory that claims to apply to every language on the planet can be challenged with a single example that fails to conform to it, no matter how obscure. Considering that we have to account not only for modern languages, but for the evolution of all known languages to make the theory airtight, there’s still a lot to flesh out in Chomsky’s defining work. Working with all modern languages is hard enough, but working with historical ones is even more challenging because a majority of modern human history was not recorded, and the majority of what has been is pretty sparse. I’d wager that 95% of all languages ever created are likely lost to time.

Even worse than that is knowing our languages change so much that their historical origins can be totally obscured with enough time. While the first anatomically modern humans evolved in North Africa some 100,000 years ago, a comparative analysis of today’s language patterns just doesn’t show any founder effect, meaning that if one of our first ancestors stumbled into a time machine and traveled to today, she would not be able to understand a single sound out of our mouths without instruction from us. Research like this has led many linguists to believe that language is shaped by culture and history more than by the raw wiring of our brains, as per the universal grammar theory. Others disagree, producing papers such as the recent MIT study of logical patterns in 37 languages, which showed that all of the languages prefer very similar rules when it comes to their grammatical style, meaning that the underlying logic had to be the same, even when comparing Ancient Greek to modern languages as different as English and Chinese.

By analyzing how closely related concepts cluster in sentences across all the languages chosen for the project, researchers found that all of them prefer to keep related concepts close to each other in what they considered a proper, grammatically correct sentence. To use the example in the study, in the sentence “John threw the trash out,” the domestic hero of our story was tied to his action and the villainous refuse was tied to where it was thrown. These concepts weren’t on opposite sides of the sentence or at a random distance from each other. This is what’s known as dependency length minimization, or DLM, in linguist-speak. One of the few undisputed rules of universal grammar is that in every language, the core concepts’ DLM should be lower than a random baseline, and this study pretty solidly showed that it was. In fact, every language seemed to have an extremely similar DLM measure to the others, seemingly proving one of the key rules of universal grammar. So where exactly does that leave the theory’s critics?
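To make the DLM measurement concrete, here’s a minimal sketch in Python. The dependency parse of the study’s example sentence is my own illustrative guess, not taken from the paper, and the shuffled-order baseline is a simplified stand-in for the study’s random baselines:

```python
import random

def total_dependency_length(heads):
    """Sum of linear distances between each word and its head.
    `heads` maps a dependent word's position to its head's position."""
    return sum(abs(dep - head) for dep, head in heads.items())

# Toy parse of "John threw the trash out" (0-indexed word positions).
# John->threw, the->trash, trash->threw, out->threw; "threw" is the root.
heads = {0: 1, 2: 3, 3: 1, 4: 1}

observed = total_dependency_length(heads)  # 1 + 1 + 2 + 3 = 7

def random_baseline(heads, n_words=5, trials=10_000, seed=42):
    """Average dependency length when word order is shuffled at random
    while the dependency tree itself stays the same."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        order = list(range(n_words))
        rng.shuffle(order)
        pos = {word: i for i, word in enumerate(order)}
        total += sum(abs(pos[d] - pos[h]) for d, h in heads.items())
    return total / trials

print(observed, random_baseline(heads))  # observed is 7, baseline lands near 8
```

Even in this toy example, the grammatical word order keeps related words a bit closer together than chance would; the study ran this sort of comparison over large parsed corpora in all 37 languages.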

Well, as said before, calling any theory universal is fraught with problems and leaves it open to the most minor, nit-picking criticism, and we all know of exactly one society based around pure logic, and that’s the Vulcans from Star Trek. To dispute the theory, linguists had to go out of their way to tribes so vaguely aware of the modern world that we may as well be from another planet to them, and look for the smallest cultural inconsistencies that conflict with the current interpretation of a theory they say is somewhat vague. They could certainly produce a language that eschews the rules of universal grammar in favor of tradition and religion, and maybe Chomsky could just tone his theory’s presumptuous name down a bit and accept that his work can’t apply to every single language humans have ever used or will invent in the future. But in the end, universal grammar does appear extremely useful, and shows that logic plays the most important part in all languages’ initial structures. We might not be able to use the theory to build perfect universal translators, but we could come quite close since the required patterns exist as predicted.

[ illustration of a Babel Fish by John Matrz ]

futurama takeoff

Far be it from me to claim psychic powers, because those aren’t real, but the moment the news of weird results coming from experiments to test the EmDrive came to my attention, I knew that one day I’d have to write a post about it. Not sure whether to jump on the bandwagon and simply join the chorus of voices explaining that it was impossible, I waited for proper experiments to show that the minuscule thrust being recorded in earlier tests was within the margins of error, a little bit of interesting noise but nothing beyond that, and prove my premonition wrong. But as odd as it sounds, the EmDrive is still being tested, still showing faint signs of life, and getting a whole lot of press claiming we’re on the verge of building a warp drive. And so, it’s time to quit stalling, roll up my sleeves, and explain why the EmDrive can show us some interesting physics in weird environments, but simply would not work as a viable spacecraft engine as it was planned.

Getting right to the point, the biggest concern with the EmDrive is that it’s yet another version of a reactionless drive proposed by those who thought they spied something that isn’t there when looking at general relativity and tortured complex equations until they seemed to say what they wanted them to say. But such devices are impossible because they violate fundamental laws of physics we know to be true after centuries of observation and study. Objects at rest stay at rest until energy is added to the system and causes other objects to act on them. That’s what we’re taught in our very first physics class as one of the fundamental laws governing motion. When a device like the EmDrive comes along, it asks us to throw out this law and believe that whatever is going on inside the object can act as an external force large enough to make it move without actually adding energy to the mix. How that happens is usually peppered with tortured ret-cons of general relativity and buzzwords about group motion, frequencies, and reference frames.

Basically, think of piloting a spacecraft with an EmDrive as trying to make a sailboat in a vacuum go simply by blowing into the sails. Sure, it’ll react a little at first as you introduce the initial tidbit of new energy, but in a closed system, the air you blow out of your lungs will simply dissipate as the system reaches equilibrium, and all motion will stop fairly quickly. The same goes for the EmDrive. It seems that bouncing microwaves do produce some odd effects as they collide in the resonant chamber, but in a closed system, in which it has never actually been tested, by the way, this too will dissipate and reach equilibrium, so even the infinitesimal thrust currently being detected will be gone. Tellingly, the experiments on the versions of the EmDrive that seemed the most promising deviate in principle from the original design by including a nozzle to expel photons from the chamber as the reaction takes place, while the original was just supposed to propel craft by resonating away with no propellant or thruster, like an alien warp drive in a sci-fi movie.
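For a sense of scale, momentum conservation puts a hard cap on what any thruster that expels photons can manage: thrust is at most beam power divided by the speed of light. A quick back-of-the-envelope sketch, where the 1 kW figure is purely illustrative and not from any EmDrive test:

```python
C = 299_792_458.0  # speed of light in m/s

def photon_thrust(power_watts):
    """Ideal photon-rocket thrust: each photon carries momentum E/c,
    so a perfectly collimated beam of power P pushes back with P/c newtons."""
    return power_watts / C

# Even a full kilowatt of perfectly expelled microwaves buys only micronewtons:
thrust = photon_thrust(1_000.0)
print(f"{thrust * 1e6:.2f} uN")  # about 3.34 micronewtons
```

That’s roughly the weight of a grain of sand, which is why even a working photon thruster would be a long way from a practical engine, never mind a warp drive.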

In the end, we’re left with pop sci blogs and news telling us the EmDrive works while citing a few possibly intriguing experiments with a very inefficient Q-thruster design that departs from a core principle of the EmDrive’s planned implementation. That design isn’t breaking a fundamental law of physics; it’s trying to resonate well known, but still rather poorly understood quantum particles that pop in and out of existence from the fabric of space and time. It’s a cool concept and not out of the realm of plausibility, but it’s very unclear whether it could actually be used as a real spacecraft engine, and it’s not the reactionless drive we’re being told it is by pretty much all of the media. That’s what a small skunkworks lab at NASA actually tested just to see if the concept was plausible, not the “impossible drive that violates the laws of physics,” and while it might not really go anywhere and seems rather buggy and hard to definitively verify today, it’s still a pretty interesting way to see if we can actually do anything with zero point energy.

pop culture aliens

If you don’t remember Chandra Wickramasinghe, here’s a quick refresher. Back in the day, the scientist worked with Fred Hoyle, the brilliant astronomer whose really poorly supported notions about the origins of life inspired many a creationist, and led him and a few of his colleagues on a hunt for evidence of panspermia, the idea that life originated somewhere in deep space and, as our planet was finally settling down after its turbulent infancy, landed here and evolved into all the species we know, and numerous ones we don’t. On the face of it, it’s not an inherently bad, or even wrong idea. It has actually been around since Darwin started wondering about the very same questions, and despite being occasionally criticized, it’s still popular in astrobiology. There does appear to be plenty of interesting evidence in favor of at least some building blocks of life coming from space, especially from asteroids and comets. This is why finding complex organic structures in the carbon layer of 67P wasn’t a surprise at all. In fact, it was widely expected.

Yet according to Wickramasinghe, it’s proof that comet 67P is actually teeming with life and that the scientific community at large needs to step up and announce that we’ve found aliens. However, despite how generously he’s treated by The Guardian’s staff writer, he’s not a top scientist, and his claim to expertise in astrobiology comes from declaring pretty much every newsworthy event in any way related to viral and microbial life undeniable proof of aliens. He’s done this with mad cow disease, polio outbreaks, SARS, and AIDS, and one of his fans recently declared that Ebola could have come from outer space. His proof of all this? Pretty much none. The few papers he published to at least clear up how he thought life actually got its start, and how it can travel across billions upon billions of light years so easily, are absolutely inane, and appeared in a vanity journal which was basically mocked into shutting down after failing to include a single entry of real scientific merit. Hey, personally, I’m a huge fan of the panspermia hypothesis myself, but even with my very generous approach to reviewing astrobiology papers, what Wickramasinghe produced was absurd.

But of course, as all cranks eventually do, Wickramasinghe cried conspiracy after his work was battered by other scientists, declaring that astrobiology was a discipline under assault from a conservative geocentric cabal made up of old scientists hell-bent on shutting down research on possible alien life forms in the wild. This came as a surprise to the flourishing researchers who had been studying extremophiles and theoretical alien biochemistry, and discovering more proof of organic molecules and water floating in space. You see, astrobiology is doing great and keeps advancing every day. Wickramasinghe, on the other hand, is not doing well, because he doesn’t actually conduct any rigorous scientific experiments while desperately aspiring to be the person who goes into the history books as the scientist who discovered alien life. His constant attempts to stay in the media spotlight with his out-of-left-field proclamations and conspiracy theories are the typical self-serving machinations of a vain elder past his prime, jealous that someone else is going to do what he aspired to accomplish. Honestly, it’s a sad way to end one’s career, chasing after those doing the real work with outlandish soundbites and wallowing in self-pity.

black hole accretion disk

Falling into a black hole is a confusing and complicated business, rife with paradoxes and weird quantum effects to reconcile. About a month ago, we looked at black holes’ interactions with the outside world when something falls into them, and today, we’re going to look into the other side of the fall. Conventional wisdom holds that inside a black hole, gravity increases exponentially until time, space, and energy as we know them completely break down at the singularity. Notice I’m not talking about matter at all, because at such tremendous gravitational forces, and with searing temperatures in the trillions of degrees, matter simply can’t exist anymore. Movies imagine the singularity as some sort of mysterious portal where anything can happen, while in reality, we’re clueless about what it looks like, or even whether it really exists. We don’t even know if anything makes it down to the singularity in the first place. But what we do know is that somewhere, whatever is swallowed by the black hole should persist in some weird quantum state, because we don’t see any evidence of black holes violating the first law of thermodynamics. Enter the fuzzball.

Quantum fuzzballs aren’t really objects or boundary layers as we know them. Instead, they’re a tangle of quarks and gluons made up of the matter that gave rise to the black hole and what it’s been eating over its lifetime. They don’t have singularities, just loops of raw energy trapped by the immense gravitational forces exerted on them. On the one hand, thinking of a black hole as just a hyper-dense fuzzball eliminates the anomalies and paradoxes inherent in descriptions of singularities, but on the other, simply making a problem go away with equations doesn’t mean it was solved. And that’s the real problem with quantum fuzzballs. They appear as exotic math in general relativity being extended deep into a realm where its predictive powers begin to fail, so while it’s entirely possible that we’ve identified the direction we need to explore and what we’d expect were we to look into a black hole, it’s equally likely that the classic idea of their anatomy still holds. Unless we drop something into one of those gravitational zombies nearby, we won’t know if the current toy models of what lies inside them are right. All we have is conjecture.

experimental plant

Several years ago, scientists at the sustainable farming research center Rothamsted decided to splice a gene from peppermint into wheat to help ward off aphid infestations. You see, when hungry adult aphids decide it’s time for a snack, the essential oil given off by peppermint mimics a danger signal for the insects. Imagine trying to bite into your sandwich just as a fire alarm goes off over your head with no end in sight. That’s exactly what happens to aphids, and the thought was that this ability could be spliced into wheat to reduce pesticide use while increasing yield. It should also be noted that Rothamsted is a non-profit, the research initiative was its own, and no commercial venture was involved in any way, shape or form. Sadly, the test crops failed to live up to expectations: the pheromone they produced, EβF, did not deter the aphids. Another big, important note here is that despite the scary name, this is a naturally occurring pheromone you will find in the peppermint oil recommended by virtually every organic grower out there.

Of course, despite the minor nature of the genetic modification involved, the total lack of a profit motive on the part of a highly respected research facility, the sustainability-driven thinking which motivated the experiment, and the fact that the desired aphid repellent was derived from a very well known, natural source, anti-GMO activists decided that they wanted to destroy test crops in more mature stages of the research anyway because GMOs are bad. No, wait, that was just the excuse. Scientists planting GMO plants? They obviously want to kill people to put money in Monsanto’s pockets with evil Frankenfoods. With the experiment failing, the activists are probably celebrating that all those farmers trying to protect their wheat lost a potential means of doing so, and that they won’t have to drive to the research plots in the middle of the night to set everything on fire. The group which planned to carry out this vandalism, like many other anti-GMO organizations, lacks any solid or scientifically valid reason to fear these crops, and was acting based solely on its paranoia.

Indeed, anti-GMO activism is basically the climate change denial of the left. It revolves around a fear of change and bases itself on fear-mongering, repeating one debunked assertion after another ad nauseam, with no interest in debate and even less in actually getting educated about the topic at hand. While anti-GMO zealots rush to condemn any Big Ag study showing no identifiable issues with GMO consumption on any criticism they can manage, real or imagined, with no study ever being good enough, they cling to horrifically bad papers created by scientists specifically trying to pander to their fears, scientists who threaten to proactively sue any critics who might ruin the launch party for their anti-GMO polemics. Had Big Ag scientists done anything remotely like that, the very same people singing praises to Séralini would have demanded their heads on the chopping block. Hell, they only need to know that researchers work in the industry to declare them part of a genocidal New World Order conspiracy. But you see, because these activists are driven by fear and paranoia, to them it’s ok to sabotage the very safety experiments they demanded, ensuring that scientists can’t do their research, while praising junk pseudoscience meant to bilk them.

alpha centauri bb

Carbon is a great element for kick-starting life thanks to its uncanny ability to form reactive, but still stable molecules perfect for creating proteins, amino acids, and even the backbone of DNA and RNA, or their functional equivalents. And yet, according to those who argue that we exist because the universe is somehow fine-tuned for us, or that life is a random, one in a trillion chance, carbon shouldn’t even be here. You see, when the first stars started fusing hydrogen into helium-4 deep in their searing cores, the resulting helium atoms should have combined into beryllium-8, which decays so quickly that there should have been virtually no chance for another helium atom to combine with it to form carbon-12, the isotope which accounts for 98.9% of all carbon in the known universe and makes life possible. According to astronomer Fred Hoyle, whose misuse of the anthropic principle has been used to justify many an anti-evolutionary screed, since carbon based life exists, there must be a mechanism by which this beryllium bottleneck is resolved, and the clue to this mechanism must lie in the conditions under which a star fuses helium.

You see, when atoms fuse into a new element, the newly formed nucleus has to be at one of its natural, stable energy levels, otherwise the combination of the protons’ and neutrons’ energies, as well as the energy of their kinetic motion, will prevent the fusion. Hoyle’s insight was that carbon-12 must have an energy level in resonance with the process by which a beryllium-8 and a helium-4 nucleus combine, letting the reaction happen quickly enough to beat beryllium-8’s decay and leaving the result at the natural energy level of a stable carbon-12 nucleus. Imagine rolling magnetic spheres down a hill, and as these magnets roll, they collide. Some will hit each other with just enough energy to keep rolling as a single unit and absorb new spheres they run into, others combine, then break apart, or just roll on their own. The angle, the force of impact, and the speeds and masses of the spheres all have to be right for them to join, and when they do, they’ll have to stay that way long enough to settle down. This is quantum resonance in a nutshell, and it’s what made carbon-12 possible.
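Hoyle’s prediction is now known as the triple-alpha process, and the resonance he demanded, an excited state of carbon-12 about 7.65 MeV above the ground state, was later found in the lab. Schematically:

```latex
\begin{align*}
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} &\rightleftharpoons {}^{8}\mathrm{Be}
  && \text{(beryllium-8 decays in } \sim 10^{-16}\,\mathrm{s}) \\
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} &\rightarrow {}^{12}\mathrm{C}^{*}
  && \text{(the resonant Hoyle state, } \approx 7.65\,\mathrm{MeV}) \\
{}^{12}\mathrm{C}^{*} &\rightarrow {}^{12}\mathrm{C} + \gamma\gamma
  && \text{(a small fraction relaxes to stable carbon)}
\end{align*}
```

Most Hoyle-state nuclei promptly fall back apart into helium; the small fraction that radiates down to the ground state is where the universe’s carbon comes from.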

But while this is all well and good, especially for us carbon based lifeforms, where does Hoyle’s discovery leave us in regards to the question of whether the universe was fine-tuned for life? If we assume that only carbon based life is possible, and that the only life that could exist is what exists today, the argument makes sense. However, those assumptions don’t hold. Even if there were no quantum resonance between helium-4, beryllium-8, and carbon-12 in the earliest stars from which the first atoms of organic molecules were spawned, the first stars were massive, and it’s a reasonable guess that when they went supernova, they would have created carbon, silicon, and metals like aluminium and titanium. All four elements can be useful in creating molecules which can form the chemical backbones of living organisms. In fact, it’s entirely possible that we could one day find alien life based on silicon, and that in some corner of the galaxy there are microbes with genomes wound around a titanium scaffold. Life does not have to exist as we know it, and only as we know it. We didn’t have to exist either; it’s just lucky for us that we did.

When creationists try to come up with the probability that life exactly the way we understand it, or have at least observed it to exist, came out the way it has, against all other probabilities, they are bound to get ridiculous odds against us being here. But what they’re really doing is calculating the probability of a reaction for reaction, mutation for mutation, event for event, repeat of the entire history of life on Earth, all 4 billion years of it, based on the self-absorbed and faulty assumption that because we’re here, there must be a reason why that’s the case. The idea that there was no real predisposition towards modern humans evolving in North Africa, or that life could exist even without abundant carbon-12 to help bind its molecules, is just something they cannot accept, because the notion that our universe created us by accident, and that we can be gone in the blink of a cosmic eye to be replaced by something unlike ourselves in every way, is just too scary for them. They simply don’t know how to deal with not being somehow special, or with nature not really caring whether they exist or not, just as it hasn’t cared for at least 13.8 billion years…

paper crowd

Amazon’s Mechanical Turk lets you assign menial, yet attention-intensive tasks to actual human beings, despite the name’s ambiguity, and those humans want to be paid consistently and fairly for their efforts. This is why in March of last year, they launched the Dynamo platform, which allows them to warn each other about bad clients who were stingy or unreasonable. The brainchild of Stanford PhD student Niloufar Salehi, who wanted to study digital labor rights, it came about in large part because many of those stingy, unfair clients were academics. With small budgets for surveys and for preparing complex machine learning algorithms, researchers were often paying an insultingly token sum to the workers they recruited, something Dynamo’s rules and guidelines for ethical academic requests argue hurts the quality of their research by limiting their labor pool to the truly desperate and ill-qualified.

It’s hard to know what’s worse: the fact that we give so little funding to researchers that they have to rely on strangers willing to work for scraps, or that academics are fine with the notion of paying the equivalent of prison odd job wages to their remote assistants. Part of the problem is that the issues are interdependent. Many academics can’t afford to pay more and still meet their targets for sufficient survey responses or machine learning algorithms’ training set sizes. Turkers most qualified for the job can’t afford to accept less than 10 cents a minute, which doesn’t sound like much until you realize that 15,000 units of work taking half an hour each come out to $45,000 or so, a hefty chunk of many grad students’ budgets. Something’s gotta give, and without more money from universities and states, which is highly unlikely, academics will either keep underpaying the crowds they recruit, or end up doing less ambitious research, if not less research in general…
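The budget math above, spelled out; the figures are the ones cited in the paragraph, and the variable names are mine:

```python
rate_per_minute = 0.10   # dollars, the floor the most qualified Turkers can accept
minutes_per_task = 30
tasks = 15_000

cost_per_task = rate_per_minute * minutes_per_task  # $3 per half-hour unit of work
total = cost_per_task * tasks
print(f"${total:,.0f}")  # $45,000
```

At 15,000 half-hour units, even that 10-cent floor adds up to a sum few grant budgets can absorb, which is exactly the squeeze described above.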

happy alarm

In a quote frequently attributed to John Lennon, a boy was asked what he wanted to be when he grew up, and he replied that he wanted to be happy. He was then told that he did not understand the question, to which he retorted that the person asking him didn’t understand life. And he’s right: we all want to be happy. That’s especially true at work, where most of us will spend nearly a third of our waking hours and deal with countless stresses big and small on a daily basis, seemingly for nothing more than a paycheck. Work should be interesting and give us some sense of worth and purpose, but 70% of all workers are apathetic about, or outright hate, their jobs, which clearly means whatever your bosses are doing to make you happy simply isn’t working. Though I’m making a big assumption that your bosses are even trying to make you happy, much less care that you exist, or worry about whether you like the job they have you doing. And that, objectively, is perhaps the most worrisome part of it all…

You see, social scientists and doctors have long since figured out what makes you happy, why it’s in the interest of every company’s bottom line to keep employees happy, and how your perpetual case of the Mondays could be eliminated, or at least severely reduced. Most American workers, as we can see from the statistics, are dealing with the stress of being at a job they dislike, which increases their levels of cortisol, a stress hormone that hardens arteries and increases the odds of having a heart attack. If they’re not there yet, the prolonged stress also causes a host of very unpleasant issues like irregular sleep, disordered eating, anxiety, and depression. In fact, close to a quarter of the American workforce is depressed, which is estimated to cost over $23 billion per year in lost productivity. We also know exactly why people hate their jobs, and contrary to what many business owners think, it has nothing to do with employees being greedy and lazy; it’s usually terrible management policy and the feeling of being utterly disposable and irrelevant.

People who are unemployed for a year or more are almost as likely to be depressed as working stiffs, and their odds of being diagnosed with depression go up by nearly 2% every time they double their time out of work. So while a bad job can make people miserable, not having one is every bit as bad, if not worse. And these are just the numbers for one year of unemployment, so what lies beyond that could be far scarier, since every trend shows mental health suffers without work or purpose, and physical health quickly deteriorates as well. This leaves us stuck in an odd dilemma. We know that people need to, and want to, work, and we know full well that when they hate their jobs, their performance lags, as does their health, forming a vicious cycle of bad work and disengagement contributing to poor health, worse work, and more disaffection on the job. It seems obvious that something should be done to address this, yet for the last 15 years, there has been no change in the stats. Why? The short answer? Terrible management.

One of this blog’s earliest posts explored experiments in which scientists confirmed that often, a group chooses a leader based on little more than bravado, overlooking the results. In follow-up experiments, we even saw mathematical evidence that companies would be better off randomly assigning their managers instead of promoting them the way they do now. Managers also tend to think they’re a lot better than they actually are; in reality, half the workforce has put in a two week notice specifically because of their bosses, and despite often giving themselves very high praise, managers are almost as disengaged as their employees, with 65% of them simply going through the motions of another day. Go back to the most frequent reasons why people are not happy at work. Half of them are about being micromanaged, left in the dark, and treated like a disposable widget rather than a person. They’re primed to see themselves as less valuable, if not useless, and we know that negative priming leads to terrible performance. Tell people they should just feel lucky you don’t fire them, and you’ve effectively set them up for failure.

Think about your own worst bosses. They never hesitated to tell you that you were wrong, or to look down on you, or to watch over your shoulder because they had no trust in you, turning any inevitable slip-up or small error, even if you immediately caught and corrected it, into some new justification for watching you like a hawk, right? Or, if not, did they simply never talk to you about anything, merely drop off more work and expect it to be done silently? Combine those daily putdowns with a constant threat of being outsourced simply to save a dollar, being shoved into an open office where you have no personal space or privacy and face constant distractions, on top of a lack of any career progression path in sight, and tell me that’s a job even those who live to work would find engaging. As many organizations grow, managers dissociate from the people they manage, seeing them as little more than numbers on a spreadsheet, because that’s what they are in their daily list of things to do. This breeds disengagement, which breeds frustration, which causes talented employees to flee for greener pastures.

Keeping one’s employees happy should not be the subject of one of those HBR think pieces that makes your executive team “ooh” and “ahh” in a meeting where you run through PowerPoint slides showing how much money you’re losing to turnover, depression, and bad management. It should be the top priority of middle managers and supervisors, because happy employees work harder, show loyalty and dedication, and help recruit more good talent. Yes, spending on benefits like catered lunches, or gym memberships, or better healthcare, or easy access to daycare, or flexible time off policies sounds exorbitant, I know, and many businesses can’t afford all of that. But showing employees that you care and that you listen to them, and treating them with respect, pays off as the engaged employees become more productive and dedicated. In a knowledge economy, there’s no excuse for the employee-employer relationship to be much like one between a master and an indentured servant. It should be a business partnership with benefits for both parties extending well beyond “here’s your paycheck, now get to work.” The science says so, and besides, when you’re a manager, isn’t keeping employees motivated and productive your top priority?

rainbow flag splash

Last year, a study conducted by political science grad student Michael LaCour showed that a simple conversation with a canvasser who talked to people about marriage equality and then identified as gay was enough to sway minds towards accepting same sex marriage. This was an odd result because people don’t tend to change their views on things like homosexuality after a brief conversation with a stranger, no matter how polite the stranger is. However, the data in the paper was very convincing, and it seemed entirely possible that the people surveyed had never given marriage equality much thought, and after meeting a gay person who didn’t fit the toxic stereotype propagated by the far right, wanted to seem supportive to meet social expectations, or might even have been swayed off the fence towards equality. After all, the data was there, and it looked so convincing and perfect. In fact, it looked a little too perfect, particularly when it came to just how many people seemed open to talking to strangers who randomly showed up at their doors, and how inhumanly consistent their voiced opinions had been over time. It was just… off.

When doing a social sciences experiment, the biggest stumbling block is the response rate and how small it usually is. Back in my undergrad days, I remember freezing my tail end off trying to gather responses for a survey on urban development in the middle of an Ohio winter, and collecting just ten useful responses in three hours. Unlike me, LaCour was armed with money and able to pay up to $100 for each respondent’s time, so he was supposedly able to enroll 10,000 or so people with a 12% response rate. Which is a problem, because his budget would have had to top $1 million, far more than he actually had, and a 12% rate on the first try simply does not happen. Attempts to replicate the study yielded less than a 1% response rate even when money was involved. Slowly but surely, as another researcher and his suspicious colleagues looked deeper, signs of fraud mounted until the conclusion was inescapable. The data was a sham. Its stability and integrity looked so fantastically sound because no study was actually done.
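The arithmetic that sinks the claimed numbers is simple enough to sketch out. Using only the figures quoted above, with the $100 treated as an upper bound per respondent, a back-of-the-envelope check looks like this:

```python
# Back-of-the-envelope check of the survey numbers quoted in the story.
# The $100 figure is the upper bound on per-respondent payment, so the
# payout computed here is a ceiling, not an exact budget.
enrolled = 10_000        # respondents LaCour claimed to enroll
max_incentive = 100      # dollars per respondent, upper bound
response_rate = 0.12     # claimed first-pass response rate

max_payout = enrolled * max_incentive          # incentive money alone
contacted = enrolled / response_rate           # people who had to be approached

print(f"Maximum incentive payout: ${max_payout:,}")
print(f"People contacted to hit that enrollment: {contacted:,.0f}")
```

Incentives alone hit the $1 million mark before a single staffer is paid, and even at the claimed 12% rate, more than 83,000 doors would have had to be knocked on, which is exactly the kind of sanity check the raw data quietly failed.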

New York Magazine has the details on how exactly the study came undone, and some parts of the story, held up in the comments as proof of universities’ supposed grand Marxist-homosexual conspiracy to turn education into anti-capitalist and pro-gay propaganda, as one is bound to expect, actually shine a light on why it took so long for the fraud to be discovered. It’s easy to just declare that researchers didn’t look at the study too closely because they wanted it to be true, that empirical proof that sitting a homophobe down with a well dressed and successful gay person for half an hour would solve social ills was so tempting to accept, no one wanted to question it. Easy, but wrong. If you’ve ever spent time with academics or tried to become one in grad school, you’d know that the reason it took exceptional tenacity to track down and expose LaCour’s fraud is that scientists, by and large, are no longer paid to check, review, and replicate others’ work. Their incentive is to generate new papers and secure grants to pay for their labs and administrators’ often outrageous salaries, and that’s it.

Scientists have always lived by the paradigm of “publish or perish”: as long as you publish a constant stream of quality work in good journals, your career continues, and once you stop, you are no longer relevant or necessary and should quit. But nowadays, the pressure to publish to get tenure and secure grants is so strong that the number of papers on which you have a byline more or less seals your future. Forget doing five or six good papers a year; no one really cares how good they are unless they’re Nobel Prize worthy. You’re now expected to have a hundred publications or more when you’re being considered for tenure. Quality has lost to quantity. It’s one of the big reasons why I decided not to pursue a PhD despite having the grades and more than enough desire to do research. When my only incentives would be to churn out volume and hit up DARPA or the USAF for grant money against another 800 voices as loud and every bit as desperate to keep their jobs as mine, how could I possibly focus on quality and do bigger, more ambitious projects based on my own work and current literature?

And this is not limited to engineering and the hard sciences; social science suffers from the same problems. Peer review is done on a volunteer basis, papers can coast through without any critical oversight, fraud can go unnoticed and fester for years, and all academic administrators want to do is keep pushing scientists to churn out more papers at a faster and faster rate. Scientists are moving so quickly that they’re breaking things, and should they decide to slow down and fix one of the things that’s been broken, they get denied tenure and tossed aside. Likewise, those who bring in attention and money, and whose research gets into top tier journals no matter how, get a lot of political pull, and fact-checking their research not only interferes with the designated job of cranking out new papers in bulk, it also draws ire from the star scientists in question and their benefactors in the administration, which can cost the fact-checkers their careers. You could not build a better environment to bury fraud than today’s research institutions unless you started to normalize bribes and political lobbyists commissioning studies to back their agendas.

So scientists didn’t check LaCour’s work, not because they were rooting for gay marriage with all their hearts after being brainwashed by some radical leftist cabal in the 1960s, but because their employers give them every possible incentive not to, unless they stumble into it while working on the same exact questions, which is what happened when Broockman found the evidence of fraud. And what makes this case so very, very harmful is that I doubt LaCour is such a staunch supporter of gay rights that he committed his fraud in the name of marriage and social equality. He just wanted to secure his job and did it by any means he thought necessary. Did he give any thought to how his dishonesty affects the world outside of academia? Unlikely. How one’s work affects the people outside one’s ivory tower is very important, especially nowadays, when an alarming majority of those exposed to scientists’ work sees them as odd, not quite human creatures isolated from everyday reality, and will fault them en masse for their colleagues’ shortcomings or dishonesty.

Now, scientists are well aware of the problem I’ve been detailing, and there is a lot of talk about some sort of post-publication peer review, or even making peer review compensated work with the express purpose of weeding out bad papers and fraud, not just something done by volunteers in their spare time. But that’s like trying to cure cancer by treating just the metastatic tumors rather than with aggressive resection and chemotherapy. Instead of measuring the volume of papers a scientist has published, we need to develop metrics for quality. How many labs found the same results? How much new research sprang from those findings, based not only on direct citation counts, but on citations of the research which cites the original work? We need to reward not the ability to write a lot of papers, but ambition, scale, and accuracy. When scientists know that a big project and a lot of follow-up work confirming their results is the only way to get tenure, they will be very hesitant to pull off brazen frauds, since thorough peer review would now be one of their most important tasks rather than an afterthought in the hunt for more bylines…
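To make that last idea concrete, here is a toy sketch of what a depth-weighted citation metric might look like: direct citations count fully, while citations of the papers that cite the original count at a discount. The citation graph, the weight, and the function names are all invented for illustration, not any real bibliometric standard.

```python
# Toy sketch of a depth-weighted impact metric: credit direct citations
# fully and second-order citations (papers citing the citing papers) at
# a discount. All data here is hypothetical.
from collections import defaultdict

# citing paper -> set of papers it cites (invented example graph)
cites = {
    "B": {"A"},
    "C": {"A"},
    "D": {"B"},
    "E": {"B", "C"},
}

def cited_by(graph):
    """Invert the citation graph: paper -> set of papers citing it."""
    rev = defaultdict(set)
    for src, targets in graph.items():
        for t in targets:
            rev[t].add(src)
    return rev

def impact(paper, graph, weight=0.5):
    """Direct citations plus discounted second-order citations."""
    rev = cited_by(graph)
    direct = rev[paper]
    second = set()
    for p in direct:
        second |= rev[p]
    second -= direct | {paper}          # don't double-count
    return len(direct) + weight * len(second)

print(impact("A", cites))  # 2 direct (B, C) + 0.5 * 2 second-order (D, E) = 3.0
```

A real metric would obviously need to handle citation rings, self-citation, and field-size effects, but even this crude version rewards work that other research actually builds on, rather than a raw byline count.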

axion model

Not that long ago, I wrote an open letter to the Standard Model, the theoretical, in the scientific sense of the word, framework that describes the structure and behavior of the particles that make up the universe as we know it. While this letter acknowledged many of its successes, especially the confirmation of the Higgs boson, it referred to the need for it to somehow be broken for the world of physics to move forward, citing hints of something that lies beyond it. Considering that it was a pretty vague reference, I thought it would be a good idea to revisit it and elaborate on why we need something beyond the Standard Model to explain the universe. Yes, part of the problem has to do with the transition between quantum and classical states, which we are still trying to understand, but the bigger problem is the vast chasm between the masses of the particles covered by the model and the mass scale at which gravity takes over from the quantum world, responsible for the cosmos as we know it on a macro scale.

So why is the Higgs some 20 orders of magnitude too light to help explain the gap between the behavior of quantum particles and the odd gravitational entities we’re pretty sure make up the fabric of space and time? Well, the answer is that we really don’t know. There are a few ideas, and one in vogue right now gives new life to a nearly 40 year old hypothesis about a particle known as the axion. The thought is that these low mass particles with no charge nudged the mass of the Higgs into what it is today during the period of extremely rapid inflation right after the Big Bang, creating the gap we see now, rather than the Higgs simply coming to exist at its current mass of 125 GeV, never having gained or lost those five vanity giga-electron-volts the health and fitness magazines for subatomic particles are obsessed with. A field of axions could slightly warp space and time, making all sorts of subtle changes that cumulatively have a big effect on the universe, which also makes them great candidates for dark matter.

All right, so people have been predicting the existence of axions for decades, and they seem to fill in so many blank spots in cosmology so well that they might be the next big thing in all of physics. But do they actually exist? Well, they might. We think some may have been spotted in anomalous X-ray emissions from the sun, though not every expert agrees, and there are a few experiments hunting for stronger evidence of them. Should we find unequivocal proof that they exist just as the equations predict they should, with the right mass and charge, one could argue it would be a discovery even bigger than that of the Higgs, because it would solve three massive problems in cosmology and quantum mechanics in one swoop. But until we do, we’re stuck with the alarming thought that after the LHC ramps up to full power, it may not show us a new particle or any evidence of new physics, and that future colliders will never have the oomph to cover the enormous void between Standard Model and gravitational particles. And this is why it would be so great if we detect axions, or if the LHC manages to break particle physics as we know it…