Archives For science

head in sand

Here’s an extremely uncomfortable truth that no one currently running for office in the U.S., or even remotely considering doing so, ever wants to publicly admit. There are a lot of voters who really, really don’t like experts, scientists, or anyone well educated in anything other than medicine. In their eyes, any sign of intellectualism is not something to cheer or aspire to; to them it’s nothing more than pretension from someone they’re convinced thinks he or she is better than them and feels entitled to tell them what to do. At the same time, they’re extremely paranoid that something valuable or important will be taken away from them and given to all the undeserving moochers on lower socioeconomic rungs than theirs, convinced that the American poor have already been living it up with free spending money, free food, and free world-class medical care for decades. So when a politician decides to cozy up to this constituency, his best bet is to start witch hunts against their most nightmarish moochers: government-funded scientists.

In his tenure as the chairman of the House Science, Space, and Technology Committee, a haven for a disturbing number of peddlers of anti-scientific twaddle, congressman Lamar Smith decided to do exactly that with his open-ended fishing expeditions into every possible aspect of scientists’ research, in his quest to find some grand conspiracy to publicly squash for the delight of his science-averse, paranoid base. In his investigation of climate scientists working for NOAA, he specified absolutely no instances of misconduct he thinks occurred, only asked for ever more raw data to be provided to him, even though the data and the methods used to analyze it have been on the web for years, provided by NOAA to anyone even slightly curious. But data is not what Smith is really after, because he has no interest in the actual science. He and his donors are upset that updated data for atmospheric warming, gathered from additional sources after years of looking over more and more observation stations, eliminated the “pause” to which denialists cling. Since the only possibility in their minds is that the data is faked, they want evidence of fakery.

Really, there’s no other way to put it. Smith wants the private communications between the scientists funded by NOAA so he can create another Climategate, which denialists are still convinced was an actual scandal despite the scientists being cleared of any wrongdoing, and if he doesn’t find something badly worded when taken out of context, or something politically incorrect, he will take something he doesn’t understand, which is likely most of the things being discussed by climatologists and which he is being paid by oil and gas lobbies to continue not understanding, way out of context and manufacture a scandal out of that. When the chairman of the science committee which decides on funding for countless basic research projects his nation needs to maintain the top spot for scientific innovation in the world thinks his job is to harass scientists he doesn’t like, because his donors’ business may be adversely impacted by their findings, until some pretense to interrogate them comes up, no matter how flimsy, we have a very serious problem. While all abuses of power are bad, abuses by partisan dullards have a certain awfulness about them, as they ridicule what they seem to utterly lack the capacity to understand in the first place.

math prodigy

According to overenthusiastic hacks at Wired, scientists have recently developed a way to scan your brain to predict just how intelligent you are or how good you’ll be at certain tasks. True, this sounds like the beginning of a dystopian nightmare rather than an actual field of research, one that ends with mandatory brain scans for everyone to “facilitate an appropriate job function” in some dark, gray lab in front of medical paper pushers. But it only sounds like this because the writer is more interested in page views than the actual study, which really has nothing to do with one’s intelligence but actually tested whether you could identify someone by scanning how this person’s brain is wired. Rather than trying to develop an IQ test in a box, the researchers put to the test the theory that your brain wiring is so unique that a map of it could identify you every bit as well as a fingerprint. Not surprisingly, they found that a high quality fMRI scan of your brain at work performing some standard tests can definitely be used to identify you.

All right, that’s all well and good; after all, the fMRI scan is basically giving you insight into unique personalities, and no two people’s brains will work the same way. But where exactly does this whole thing about measuring intelligence come into play? Well, the concept of fluid intelligence, mentioned only three times in the study, was brought up as an additional avenue of research in light of the findings, and revolves around the idea that a strong connection between certain parts of the brain will make you notably better at making inferences to solve new problems. Unlike its counterpart, crystallized intelligence (called Gc in neuroscience), fluid intelligence (or Gf) is not what you know, but how well you see patterns and come up with ideas. Most IQ tests today are heavily focused on Gf because it’s seen as a better measure of intelligence, and the elaboration on what exactly the fingerprinting study had to do with predicting Gf was an extended citation of a study from 2012 which found a link between the lateral prefrontal cortex’s wiring to the rest of the brain and performance on standardized tests designed to measure Gf in 94 people.

Here’s the catch though. Even though how well your lateral prefrontal cortex talks to the rest of your brain does account for some differences in intelligence, much like your brain size, it really only explains 5% of these differences. Current theory holds that because your prefrontal cortex functions as your command and control center, what Freud described as the ego, a strong link between it and several other important parts of the brain will keep you on task and allow you to problem-solve more efficiently. Like a general commanding his troops, it makes sure that every other relevant part of your mind is fully engaged with the mission. But even if that theory is right and your prefrontal cortex is well wired in a larger than median brain, close to 90% of what you would score on an IQ test can come down to level of education and other factors that generally make household income and education a better predictor of IQ scores than biology, although in many ways even that isn’t terribly accurate, because learning style and culture also play a role. All we can conclude is that the interplay between Gf, Gc, and education is very complex.
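To put that figure in perspective (assuming, as is standard in these studies, that “explains 5% of the differences” refers to the share of variance explained), here is a minimal back-of-the-envelope conversion from variance explained to the more familiar correlation coefficient:

\[
r = \sqrt{R^2} = \sqrt{0.05} \approx 0.22
\]

In other words, even taking the 2012 result at face value, prefrontal wiring tracks Gf scores only about as strongly as any number of weak, noisy predictors would, which is why most of the remaining differences have to be chalked up to education and the other factors mentioned above.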

We should also take note of one study of popular theories of biological contributors to Gf which spanned 44,600 people and found no evidence that a combination of fMRI maps has predictive power when it comes to IQ points. In other words, we have a lot of ideas that seem plausible as to the biological origins of intelligence, but because our brains are very plastic, because we are not all on a level playing field when it comes to the amount and quality of education we receive, and because even our longest-running efforts at accurate Gc assessments have shown that we’re really bad at it, studies that claim predictive power over our IQs using brain scans of 100 college students or fewer are extremely likely to be overselling their results. Not only that, but even when the studies do actively oversell, they still claim to explain only a tiny fraction of the score differences because they recognize how small and homogeneous their data sets really are. Not only do we not have an fMRI-based test for intelligence, we’re not even sure one is possible. But those facts bring in far, far fewer page views than invoking Kafkaesque sci-fi lore in a pop sci post…

eye of providence scroll

For as long as there have been conspiracy theories, there have been explanations for why the vast community of people who hang on conspiracy theorists’ every word exists. Some might just be paranoid in general. Others may be exercising their hatred or suspicion of a particular group of people, be it an ethnic group or a political affiliation. Others might just want to sound as if they’re smarter and more incisive than everyone else. Others still seek money and attention in their pursuit of a stable career of preaching to the tinfoil choir. But that doesn’t answer the really big question about the constant popularity of conspiracy theories throughout the ages. Is there something specific about how the believers are wired that makes them more prone to believe? Is subscribing to 9/11 Trutherism, or fearing Agenda 21, or looking for alien ancestry in one’s blood actually a case of a brain that generally sees patterns in randomness, with conspiracy theories just an outlet waiting to tap into this condition? Swiss and French researchers recently decided to try and answer that question by experimenting on college students and the public.

First, they evaluated whether their test subjects would detect patterns in truly random coin flips and doctored ones, with and without priming them. Then they asked political questions to measure the degree of conspiratorial thinking and the level of belief in popular theories, such as the notion that the Moon landing was faked or that 9/11 was an inside job of some sort. Obviously, they found that the more conspiratorial a view of politics the subjects took, the more likely they were to be Moon hoaxers and 9/11 Truthers, but paradoxically, that had absolutely no bearing on whether they claimed to see human interference in random patterns of coin flips, or identified sequences a researcher manipulated, priming or no priming. In other words, in everyday, low level tasks, the mind of a conspiracy theorist doesn’t see more patterns in randomness. As the researchers put it themselves, for a group of people who like to say that nothing happens by accident, they sure don’t think twice about whether something apolitical and mundane has been randomly arranged.
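To make that setup a little more concrete, here is a purely illustrative sketch of the kind of stimuli such an experiment could use. It assumes that “doctored” means sequences with artificially suppressed streaks, which is one common way to make coin flips look “more random” than true chance actually does; it is not taken from the researchers’ actual materials.

```python
import random

def fair_flips(n=20):
    """A truly random sequence of heads (H) and tails (T)."""
    return "".join(random.choice("HT") for _ in range(n))

def doctored_flips(n=20, max_run=2):
    """A manipulated sequence: never allow a streak longer than max_run,
    which tends to look 'more random' to people than real randomness."""
    seq = []
    for _ in range(n):
        options = ["H", "T"]
        if len(seq) >= max_run and len(set(seq[-max_run:])) == 1:
            options.remove(seq[-1])  # force a break in the streak
        seq.append(random.choice(options))
    return "".join(seq)

def longest_run(seq):
    """Length of the longest streak of identical flips."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

fair, fake = fair_flips(), doctored_flips()
print(fair, "longest run:", longest_run(fair))
print(fake, "longest run:", longest_run(fake))
```

A genuinely random 20-flip sequence will contain a streak of four or more identical results more often than not, which is exactly the kind of clumping people tend to read as deliberate.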

What does this finding mean in the grand scheme of things? Well, for one, it means that there’s really no one type of person just wired for conspiratorial thinking, or whose brain wiring plays an important role in subscribing to conspiracy theories. Instead, it’s more likely that all these theories are extreme manifestations of certain political beliefs or personal fears and dislikes, so the best predictor of being part of the tinfoil crowd is political affiliation. It’s not too terribly surprising if we consider that most climate change denialists who fear some sort of implementation of the sinister version of Agenda 21 they imagine exists are on the far right, while those terrified of anything involving global vaccination or commercial agreements are on the far left. And while there are a few popular conspiracy theories that overlap, because people are complex and can hold many, many views even if they are contradictory, you can separate most of the common theories into ones favored by conservatives and ones favored by liberals. And as for what biology is involved in that, well, that’s been a minefield of controversy and statistical maelstroms for a long time…


Whenever I write a post about why you can’t just plug a human brain or a map of it into a future computer and expect to get a working mind as a result, two criticisms inevitably get sent to my inbox and via social media. The first says that I’m simply not giving enough credit to a future computer science lab, because the complexity of a task hasn’t stopped us before and it certainly won’t stop us again. The second points to a computer simulation, such as the recent successful attempt to recreate a second of human brain activity, and says it’s proof that all we need is just a little more computing oomph before we can create a digital replica of the human brain. The first criticism is a red herring because it claims that laying out how many proponents of this idea are severely underestimating the size and scope of the problem is the equivalent of saying that it’s simply too hard to do, while the actual argument is that brains don’t work like computers, and making computers work more like brains can only get you so far. The second criticism, however, deserves a more in-depth explanation because it’s based on a very hard-to-spot mistake…

You see, we can simulate how neurons work fairly accurately based on what we know about all the chemical reactions and electrical pulses in their immediate environment. We can even link a lot of them together and see how they’ll react to virtual environments to test our theories of the basic mechanics of the human brain and generate new questions to answer in the lab. But this isn’t the same thing as emulating the human brain. If you read carefully, the one second model didn’t actually consider how the brain is structured or wired. It was a brute force test to see just how much power it should take for a typical modern computer architecture to model the human brain. And even if we provide a detailed connectome map, we’ll just have a simulated snapshot frozen in time, giving us mathematical descriptions of how electrical pulses travel. We could use that to identify interesting features and network topologies, but we can’t run it forward, change it in response to new stimuli at random, and expect that a virtual mind resembling that of the test subject whose brain was used would suddenly come to life and communicate with us.
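For a sense of what “simulating how neurons work” looks like in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the standard simplified models used in this kind of research. The parameters are illustrative textbook values, not anything taken from the one-second simulation discussed above.

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Integrate membrane voltage over time; emit a spike at threshold.
    Units are the usual textbook ones: ms, mV, nA, and megaohms."""
    v = v_rest
    spikes, voltages = [], []
    for step, i_ext in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        v += (-(v - v_rest) + resistance * i_ext) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spikes

# Feed the model a constant 2 nA current for 100 ms and count the spikes.
current = np.full(1000, 2.0)
voltage_trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 100 ms of simulated time")
```

Chain billions of these together with the right wiring diagram and you still only have a description of spikes traveling through a network, which is exactly the distinction between simulating neural activity and emulating a mind.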

dingy lab

About a month ago, health and science journalist Christie Aschwanden took on the tough job of explaining why, despite a recent rash of peer review scandals, science isn’t broken, by showing how media hype and researchers’ need to keep on publishing make it seem as if today’s study investigating something of general interest will be contradicted by tomorrow’s, if not shown to be a complete and utter fraud. It’s nothing you really haven’t heard before if you follow a steady diet of popular science and tech blogs, although her prescription for dealing with whiplash-inducing headlines from the world of science is very different from that of most science bloggers. As she puts it, we should simply expect that what we see isn’t necessarily the whole story and carefully consider that the scientists who found a positive result were out to prove something and might be wrong not because they’re clueless or manipulative, but because they’re only human.

Now, while this is all true, it’s extremely difficult not to notice that in today’s academic climate, in which obscenely overpaid college bureaucrats push scientists to publish countless papers just to be considered for a chance to keep working in their fields after their early 40s, there’s incessant pressure to churn out a lot of low quality papers, then promote them as significant so anyone will cite them. Even if you published a very vague, tenuous hypothesis-fishing expedition just to pad your CV and hit the right number to keep the funding to your lab going, there’s plenty of pressure to drum up media attention from writers guaranteed to oversell it, because if you don’t promote it, it will get lost in a flood of similar papers and no one will cite it, meaning that an extra publication won’t help you as much when the tenure committee decides your fate, because its low quality will be evident from the complete lack of attention and citations. Long gone are the days of scientists routinely taking time to let ideas mature into significant papers, and that’s awful.

Instead of realizing that science is a creative process which needs time and plenty of slack as it often bumps into dead ends in search of important knowledge, colleges have commoditized the whole endeavor into a publication factory and judge researchers on how well they’re meeting quotas rather than on the overall impact their ideas have on the world around them. Sure, they measure whether the papers have been cited, but as we’ve seen, it’s an easily gamed metric. In fact, every single measure of a scientist’s success today can be manipulated, so good scientists have to publish a lot of junk just to stay employed, and bad scientists can churn out fraudulent, meaningless work to remain budgetary parasites on their institutions. Quantity has won over quality, and being the generally very intelligent people that they are, scientists have adapted. Science is not broken in the sense that we can no longer trust it to correct itself and discover new things. But it has been broken in the way it’s practiced day to day, and it will not be fixed until we go back to the days when the scope and ambition of the research are what matter, rather than the number of papers.

icelandic lake

As the jokes about global warming go, since humans like warm weather, what’s so bad about a little melting permafrost and new beachfront properties after the seas rise? Well, aside from the aftermath of ocean acidification and its impact on the marine life we eat, as well as the rising costs of adapting to the swelling tides and of replacing the infrastructure that will be damaged by thaws in previously solid permafrost layers, there’s also the threat of disease. And not just any old chest cold or flu we’re used to, but viruses tens of thousands of years old which were menacing our cave-dwelling ancestors before ending up in suspended animation. While so far only mild or benign viruses have been found in permafrost samples, the researchers worry that there are good reasons to suspect various strains of plague or even smallpox are hiding under snow and ice, and will thaw back to life to infect a population which considers them long gone, with a bare minimum of natural immunity to their full ravages, and plenty of perfectly viable hosts.

Now, I know, this sounds like the opening act of a low budget sci-fi movie in which some terrifying ancient virus, shown in the prologue annihilating an entire civilization, Atlantis perhaps, thaws when permafrost frozen since the last Ice Age is disturbed by a construction crew, with dire consequences, and it’s up to an aspiring underwear model of a scientist, called in by a chiseled president who may be the scientist’s old friend, to hunt down the antibody-producing McGuffin in some exotic part of the world, which fails to work, and then improvise a cure at the last possible minute as his kid or love interest is about to die of the disease. If you’re reading this post from a Hollywood studio office, drop me a line, let’s do lunch. But I digress. As unlikely as this scenario is, the odds of an old human-infecting bugaboo for which we may not have effective medication on hand being stirred to life as the world warms are not zero, and we may want to start looking back into the viruses’ past to identify and design possible treatments ahead of time. If we don’t, millions might suffer.

Just consider what would happen should an ancient strain of smallpox return. Before worldwide vaccination campaigns, it was the greatest killer of our humble little species for 10,000 years, a culprit behind a third of all blindness and the main contributor to child mortality, and while we fought it off over the last century, it still managed to kill as many as 300 million of us. Before vaccines, the virus traveled across the Atlantic with Europeans, wiping out 90% of Native Americans while the first New World colonies were being established. Today we do have antiviral treatments we think should be able to subdue advanced cases, and post-infection vaccinations would help patients recover, but this assumes that we’d be fighting the product of countless generations of coexistence with humans. A thawed strain could be so radically different by comparison that it may as well be from another planet, which could make it benign to us, or even deadlier. And as we continue warming the planet with wild abandon, we might live to experience this in real life…

broken causality

Countless poems, essays, and novels have ruminated on the inexorable forward march of time, how it slowly but surely grinds even the mightiest empires to dust and has an equal fate in store for the wealthiest of the wealthy and the poorest of the poor. But that only seems to apply if you are larger than a subatomic particle. If you’re an electron or a photon, time seems to be a very fungible thing that doesn’t always flow as one would expect and regularly ignores a pillar of the fabric of space and time: the fundamental limits imposed on the exchange of information by the speed of light. But some scientists were hoping they could bring the quantum world to heel with better designed experiments, arguing that because we have never observed single photons in an entangled system changing state faster than the speed of light would allow, only clouds of them analyzed with advanced statistical methods, perhaps the noise was drowning out the signals.

Well, Dutch scientists, with the help of several colleagues in France, decided to test quantum entanglement using stable, heavy electrons entangled with photons, so they could observe how the systems changed on stable particles without worrying about decoherence. After managing to successfully entangle the system 245 times, they collected enough data to plug into a formula known as Bell’s inequality, designed to determine if there are hidden variables at work in an experiment involving quantum systems. The result? No hidden variables could have been present, while the spooky action of instantly changing quantum systems was reliably observed every time. It’s one of the most thorough and complete tests of quantum causality ever undertaken, and there have been a few murmurings of a potential Nobel Prize for the work. However, the paper is still under peer review and, with the widespread attention to it, is bound to be scrutinized for flaws.
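For reference, experiments like this one are usually scored with the CHSH form of Bell’s inequality; the sketch below is the textbook version of that test, not the exact statistic or numbers reported in the paper. For measurement settings \(a, a'\) on one particle and \(b, b'\) on the other, with \(E\) denoting the correlation between outcomes:

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{for any local hidden variable theory,}
\]

while quantum mechanics allows values of \(|S|\) up to \(2\sqrt{2} \approx 2.83\). Reliably measuring \(|S|\) above 2, with the detection and locality loopholes closed, is what rules out the hidden variables mentioned above.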

What does this mean for us? Well, it shows that we’re right about weird physics on a subatomic level happening exactly as counter-intuitively and inexplicably as we thought. But it also tells us that we can’t narrow things down to a simple conclusion, and it hints that some of the laws of physics might be scale variant, i.e. different depending on the size and scope of the objects they affect, and a scale-variant universe is going to make coming up with a unified theory of everything way more difficult than it already is, because we now need to understand why it works that way. But again, this is science at its finest. We’re not trying to come up with one definitive answer to everything just by running enough experiments or watching the world around us for a long time, we’re just trying to expand how much we know to expand our horizons, finding answers and raising new questions which may be answered centuries down the road. Sometimes just knowing what you don’t know can be a big step forward, because you now at least know where to start looking for an answer to a particularly nagging or difficult problem and where you will hit a dead end.


Imagine a problem with seemingly countless solutions, a paradox that’s paradoxically solved by completely unrelated mechanisms, some of which violate the rules of physics as we know them, while others raise more questions than they provide answers. That paradox is what happens to an object unfortunate enough to fall into a black hole. Last time we talked about this puzzle, we reviewed why the very concept of something falling into a black hole is an insanely complicated problem which plays havoc with what we think we know about quantum mechanics. Currently, a leading theory posits that tiny wormholes allow the scrambled particles of the doomed object to maintain some sort of presence in this universe without violating the laws of physics. But not content with someone else’s theories, and knowing full well that his last finding about black holes made them necessary in the first place, as explained by the linked post, Stephen Hawking now claims to have found a new solution to the paradox and will be publishing a paper shortly.

While we don’t know the exact wording of the paper, we know enough about his solution to say that he has not really found a satisfactory answer to the paradox. Why? His answer rests on the extremely hard-to-test notion that objects falling into a black hole are smeared across the edge of the event horizon and emit just enough photons for us to reconstruct holographic projections of what they once were. Unfortunately, the projection would be more scrambled than the Playboy channel on really old TVs, so anyone trying to figure out what the object was probably won’t be able to do it. But it will be something at least, which is all that thermodynamics needs to balance out the equations and make it seem that the paradox has been solved. Except it really hasn’t, because we haven’t the slightest idea of how to test this hypothesis. It still violates the monogamy of entanglement, and because the photons we’re supposed to see are meant to be scrambled into an unidentifiable flash of high speed, high energy particles, good luck proving the original source of the information.

Unless we physically travel to a black hole and drop a powerful probe into it, we would only have guesses and complex equations we couldn’t rule out with practical observations. Sadly, a probe launched today would take 55.3 million years to get to the nearest one, which means any practical experiments are absolutely out of the question. Creating micro black holes, both as an experiment for laboratory study and as a potential relativistic power source, would take energy we can’t really generate right now, rendering experiments in controlled conditions impossible for a long time. And that means we’re very unlikely to get closer to solving the black hole information paradox for the foreseeable future, unless by some lucky coincidence we see something out in deep space able to shed light on the fate of whatever falls into a black hole’s physics-shattering maw, regardless of what the papers tell you or the stature of the scientist making the claim…
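As a rough sanity check on that 55.3 million year figure (assuming it refers to a probe coasting at Voyager-like speeds of roughly 17 km/s toward a stellar-mass black hole about 3,000 light years away, on the order of the nearest candidates known at the time):

\[
t \approx \frac{3{,}000 \,\text{ly} \times 9.46 \times 10^{12} \,\text{km/ly}}{17 \,\text{km/s}} \approx 1.7 \times 10^{15} \,\text{s} \approx 53 \text{ million years,}
\]

which lands in the same ballpark, and the point stands either way: on any remotely chemical-rocket timescale, in situ experiments are off the table.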


Just like the most common advice to men and women is not to sleep with crazy, someone should’ve told Chipotle not to pander to the anti-GMO crowd, the left’s version of ardent climate change denialists, who don’t even want to let scientists conduct safety studies on modified crops, much less admit that they’re safe. It was clearly a move to keep cash flowing from a younger, lefty demographic, and while the junk science in press releases and store signs proclaimed the chain GMO-free, the reality was that much of the feed used to raise the animals which would become its supposedly pure and natural tacos and burritos was actually heavily genetically modified. Thanks to that technicality, there is now a high profile lawsuit accusing Chipotle of false advertising. While its food is not genetically modified, just as claimed, the ingredients that once used to move and make noises ate feed that was; therefore, the chain is still tainted, and consumers who were told otherwise in the chain’s G-M-Over It ads were misled and falsely trusted the company with their health.

Much in the same way devout Jews who keep kosher wouldn’t want to use a dairy spoon to eat beef stew, the people slavishly devoted to tracing all the world’s ills to GMOs and Monsanto are not going to be happy that something modified in a lab may have come in contact with what’s on their plates, and the chain is going to have to double down on its claims to keep its new customers. But to fight off the suit, it’s actually using real science, stating that eating something genetically modified does not mean your genes will be modified in turn, and any claim otherwise is nonsense. And that’s a true statement. But then why exactly should GMOs be off the menu? What exact danger does a genetically modified meal pose to diners who won’t be absorbing its DNA, and which had to run through a gamut of tests to rule out any proteins identified as allergens or possible toxins? Oh, right, the danger that the scientifically ignorant, told by businesses hawking a lot of overpriced “natural” and “organic” stuff to fear GMO cooties, will run out the door.

Don’t feel bad for Chipotle, because it’s getting its proper comeuppance for marketing to a vocal and dogmatic ideology after smelling easy money. Nothing a national chain that’s trying to feed millions of people a day does will ever be pure enough for paranoid zealots, and keeping up this facade will only lose it time and money over the long term. It can expect more nuisance suits like this, high profile coverage of those suits, and many more laughing pundits like me who won’t hesitate to point out that it brought this on itself. Gordon Gekko was right to a certain extent when he proclaimed that greed is good in the world of business. But there’s a big and important corollary to that. Clever, calculated greed which seeks out new markets in which to sell needed, wanted, and useful products is terrific. A knee-jerk, follow-the-crowd money grab aimed at a demographic better known for histrionics, hyperbole, and over-sized wallets is usually bound to backfire the second you fail to be as fanatical and dogmatic as they are, which is not a matter of if, but when. You’ll get boycotted, and your greedy ways will give the media a lot of mileage…

human heart

When it comes to preserving donated organs for transplantation, the last several decades gave doctors only one choice for keeping them alive long enough to be useful: chill them and transport them to the recipients as quickly as possible to avoid spoilage. But a new generation of technology, built with a much better understanding of organ structure and function, is giving us a new option. Say goodbye to coolers and hello to sterile biospheres where organs are kept warm, fed, and supplied with a private circulatory system until they’re ready to be transplanted. All of the surgeries done using warm, functioning organs have been successes thus far, and the companies who make these organ-preserving devices are already eyeing improvements for sustaining organs with the nutrient and temperature settings the donor organs need for their unique conditions, sizes, and shapes, instead of a general treatment for their organ type. Think of it as the donated organ getting first class transportation to its new home. But that’s making some people feel a bit uneasy…

According to reactions covered by MIT’s Technology Review, and repeated elsewhere, organs being restored to full function may be blurring the line between life and death, and not waiting a proper period of time means that instead of harvesting the organs of a deceased patient, doctors are actually killing someone by taking his or her organs so others can live. In some respects, we do expect that sort of triage in hospital settings because, after all, there’s only so much even the best medical techniques and devices can do to help patients, and if doctors know that all efforts will be in vain, it only makes sense to save time, money, and resources, and give others a shot with the organs they need, something always in short supply. Wait too long to harvest the heart, liver, and kidneys, and they’ll start to die, putting the would-be recipient at risk of life-threatening complications or outright transplant failure. However, if you don’t wait long enough, are you just helping death do its job and killing a doomed patient while her family watches? The fuzzier and fuzzier line between life and death makes this a very complicated legal and ethical matter.

But even considering this complex matter, the objections against refined organ harvesting miss something very important. Doctors are not taking patients who could make a full recovery into the operating room, extracting vital organs, putting them in these bio-domes, and sending them out to people in need of a transplant. These organs come from those who are dead, or who would die as soon as the life support systems are shut off, with no possibility of recovery. Revive a heart which stopped after a patient died of circulatory disease, and the patient will die again. Support organs inside the body of someone who is brain dead, or so severely brain damaged that recovery just can’t happen, and all you’re doing is delaying the inevitable. It takes a lot more than a beating heart or a working liver to actually live, and these new preservation devices are not giving doctors an incentive to let someone die, much less speed up a patient’s death. They’re giving us a very necessary bridge toward the artificial or stem-cell-grown organs we are still trying to create, as thousands die of organ failure we could fix if only we could get them the organs they need…