Archives For science


Whenever I write a post about why you can’t just plug a human brain, or a map of it, into a future computer and expect to get a working mind as a result, two criticisms inevitably arrive in my inbox and on social media. The first says that I’m simply not giving enough credit to a future computer science lab because the complexity of a task hasn’t stopped us before and it certainly won’t stop us again. The second points to a computer simulation, such as the recent successful attempt to recreate a second of human brain activity, and says it’s proof that all we need is just a little more computing oomph before we can create a digital replica of the human brain. The first criticism is a red herring because it treats laying out how severely many proponents of this idea underestimate the size and scope of the problem as the equivalent of saying that it’s simply too hard to do, while the actual argument is that brains don’t work like computers, and making computers work more like brains can only get you so far. The second criticism, however, deserves a more in-depth explanation because it’s based on a very hard-to-spot mistake…

You see, we can simulate how neurons work fairly accurately based on what we know about all the chemical reactions and electrical pulses in their immediate environment. We can even link a lot of them together and see how they react to virtual environments, testing our theories of the basic mechanics of the human brain and generating new questions to answer in the lab. But this isn’t the same thing as emulating the human brain. If you read carefully, the one-second model didn’t actually consider how the brain is structured or wired. It was a brute force test to see just how much power it would take for a typical modern computer architecture to model the human brain. And even if we provide a detailed connectome map, we’ll just have a simulated snapshot frozen in time, giving us mathematical descriptions of how electrical pulses travel. We could use that to identify interesting features and network topologies, but we can’t run it forward, change it in response to new stimuli, and expect that a virtual mind resembling that of the test subject whose brain was used would suddenly come to life and communicate with us.
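To give a sense of what “simulating how neurons work” means in practice, here’s a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest textbook models. It’s far cruder than what an actual neuroscience lab would run, and the parameter values below are made up for illustration, but it captures the idea of modeling electrical pulses with equations:

```python
# Minimal leaky integrate-and-fire neuron: a toy illustration of equation-driven
# neuron simulation, not a model of any real brain. All parameters are arbitrary.
def simulate_lif(current, steps=1000, dt=0.1, tau=10.0,
                 v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0):
    """Return the list of time steps at which the neuron 'fires'."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Membrane potential decays toward rest while integrating input current
        dv = (-(v - v_rest) + current) / tau
        v += dv * dt
        if v >= v_thresh:       # threshold crossed: record a spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

print(len(simulate_lif(20.0)) > 0)   # a steady current produces regular spiking
print(simulate_lif(0.0))             # no input, no spikes: []
```

Linking millions of these, as the one-second simulation did, tells you about computing requirements and bulk dynamics, but nothing about the wiring that makes a particular brain a particular mind.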

dingy lab

About a month ago, health and science journalist Christie Aschwanden took on the tough job of explaining why, despite a recent rash of peer review scandals, science isn’t broken, showing how media hype and researchers’ need to keep publishing make it seem as if today’s study investigating something of general interest will be contradicted by tomorrow’s, if not exposed as a complete and utter fraud. It’s nothing you haven’t really heard before if you follow a steady diet of popular science and tech blogs, although her prescription for dealing with whiplash-inducing headlines from the world of science is very different from that of most science bloggers. As she puts it, we should simply expect that what we see isn’t necessarily the whole story and carefully consider that the scientists who found a positive result were out to prove something and might be wrong not because they’re clueless or manipulative, but because they’re only human.

Now, while this is all true, it’s extremely difficult not to notice that in today’s academic climate, with obscenely overpaid college bureaucrats pushing scientists to publish countless papers just to be considered for a chance to keep working in their fields after their early 40s, there’s incessant pressure to churn out a lot of low quality papers, then promote them as significant so anyone will cite them. Even if you published a vague, tenuous hypothesis-fishing expedition just to pad your CV and hit the right number to keep your lab’s funding going, there’s plenty of pressure to drum up media attention from writers guaranteed to oversell it, because if you don’t promote it, it will get lost in a flood of similar papers and no one will cite it, meaning that the extra publication won’t help you as much when the tenure committee decides your fate, because its low quality will be evident from the complete lack of attention and citations. Long gone are the days of scientists routinely taking time to let ideas mature into significant papers, and that’s awful.

Instead of recognizing that science is a creative process which needs time and plenty of slack as it often bumps into dead ends in its search for important knowledge, colleges have commoditized the whole endeavor into a publication factory and judge researchers on how well they’re meeting quotas rather than the overall impact their ideas have on the world around them. Sure, they measure whether the papers have been cited, but as we’ve seen, that’s an easily gamed metric. In fact, every single measure of a scientist’s success today can be manipulated, so good scientists have to publish a lot of junk just to stay employed, and bad scientists can churn out fraudulent, meaningless work to remain budgetary parasites on their institutions. Quantity has won over quality, and being the generally very intelligent people that they are, scientists have adapted. Science is not broken in the sense that we can no longer trust it to correct itself and discover new things. But it has been broken in the way it’s practiced day to day, and it will not be fixed until we go back to the days when the scope and ambition of the research were what mattered, rather than the number of papers.

icelandic lake

As the jokes about global warming go, since humans like warm weather, what’s so bad about a little melting permafrost and new beachfront properties after the seas rise? Well, aside from the aftermath of ocean acidification and its impact on the marine life we eat, the rising costs of adapting to the swelling tides, and replacing the infrastructure that will be damaged by thaws in the previously solid permafrost layers, there’s also the threat of disease. And not just any old chest cold or flu we’re used to, but viruses tens of thousands of years old which were menacing our cave-dwelling ancestors before ending up in suspended animation. While so far only mild or benign viruses have been found in permafrost samples, researchers worry there are good reasons to suspect various strains of plague, or even smallpox, are hiding under the snow and ice, and will thaw back to life to infect a population which considers them long gone, with a bare minimum of natural immunity to their full ravages, and plenty of perfectly viable hosts.

Now, I know, this sounds like the opening act of a low budget sci-fi movie in which some terrifying ancient virus, shown in the prologue annihilating an entire civilization, Atlantis perhaps, thaws when permafrost frozen since the last Ice Age is disturbed by a construction crew, with dire consequences. It’s then up to an aspiring underwear model of a scientist, called in by a chiseled president who may be the scientist’s old friend, to hunt down the antibody-producing McGuffin in some exotic part of the world, watch it fail to work, and improvise a cure at the last possible minute as his kid or love interest is about to die of the disease. If you’re reading this post from a Hollywood studio office, drop me a line, let’s do lunch. But I digress. As unlikely as this scenario is, the odds of an old human-infecting bugaboo for which we may not have effective medication on hand being stirred back to life as the world warms are not zero, and we may want to start looking into these viruses’ past to identify and design possible treatments ahead of time. If we don’t, millions might suffer.

Just consider what would happen should an ancient strain of smallpox return. Before worldwide vaccination campaigns, it was the greatest killer of our humble little species for 10,000 years, a culprit behind a third of all blindness, the main contributor to child mortality, and even as we fought it off over the last century, it still managed to kill as many as 300 million of us. Before vaccines, the virus traveled across the Atlantic with Europeans, wiping out 90% of Native Americans while the first New World colonies were being established. Today we do have antiviral treatments we think should be able to subdue advanced cases, and post-infection vaccinations would help patients recover, but this assumes that we’d be fighting the product of countless generations of coexistence with humans. A thawed strain could be so radically different by comparison, it may as well be from another planet, which could make it benign to us, or even deadlier. And as we continue warming the planet with wild abandon, we might live to experience this in real life…

broken causality

Countless poems, essays, and novels have ruminated on the inexorable forward march of time, how it slowly but surely grinds even the mightiest empires to dust and has an equal fate in store for the wealthiest of the wealthy and the poorest of the poor. But that only seems to apply if you are larger than a subatomic particle. If you’re an electron or a photon, time seems to be a very fungible thing that doesn’t always flow as one would expect and regularly ignores a pillar of the fabric of space and time: the fundamental limits imposed on the exchange of information by the speed of light. But some scientists were hoping they could bring the quantum world to heel with better designed experiments, arguing that because we have never directly observed single photons in an entangled system changing state faster than the speed of light would allow, only inferred it from clouds of them using advanced statistical methods, perhaps the noise had drowned out the signals.

Well, Dutch scientists, with the help of several colleagues in France, decided to test quantum entanglement using stable, heavy electrons entangled with photons, so they could observe how the systems changed on stable particles without worrying about decoherence. After managing to successfully entangle the system 245 times, they collected enough data to plug into a formula known as Bell’s inequality, designed to determine whether there are hidden variables in an experiment involving quantum systems. The result? No hidden variables could have been present, while the spooky action of instantly changing quantum systems was reliably observed every time. It’s one of the most thorough and complete tests of quantum causality ever undertaken, and there have been a few murmurings of a potential Nobel Prize for the work. However, the paper is still under peer review, and given the widespread attention to it, it is bound to be scrutinized for flaws.
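For the curious, Bell’s inequality in its most commonly tested CHSH form can be sketched in a few lines. The code below assumes the textbook quantum prediction E(a, b) = −cos(a − b) for the correlation between measurements at detector angles a and b; it illustrates the math behind the test, not the actual Delft experiment:

```python
import math

# CHSH form of Bell's inequality: S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')|
# must stay <= 2 if local hidden variables explain the outcomes. Quantum
# mechanics predicts E(a, b) = -cos(a - b) for entangled spin pairs, which
# can push S up to 2*sqrt(2). The angles used below are the standard optimal ones.
def correlation(a, b):
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    return abs(correlation(a, b) - correlation(a, b2)
               + correlation(a2, b) + correlation(a2, b2))

s = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(round(s, 3))  # 2.828, i.e. 2*sqrt(2), violating the classical bound of 2
```

A real experiment estimates each E(a, b) from coincidence counts rather than a formula, which is why closing statistical and detection loopholes, as the Delft team did, matters so much.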

What does this mean for us? Well, it shows that we’re right about weird physics on a subatomic level happening exactly as counter-intuitively and inexplicably as we thought. But it also tells us that we can’t draw one simple conclusion from it, and hints that some of the laws of physics might be scale-variant, i.e. different depending on the size and scope of the objects they affect, and a scale-variant universe is going to make coming up with a unified theory of everything even more difficult than it already is, because we now need to understand why it works that way. But again, this is science at its finest. We’re not trying to come up with one definitive answer to everything just by running enough experiments or watching the world around us for a long time, we’re trying to expand how much we know and expand our horizons, finding answers and raising new questions which may be answered centuries down the road. Sometimes just knowing what you don’t know can be a big step forward, because you now at least know where to start looking for an answer to a particularly nagging or difficult problem and where you will hit a dead end.


Imagine a problem with seemingly countless solutions, a paradox that’s paradoxically solved by completely unrelated mechanisms, some of which violate the rules of physics as we know them, while others raise more questions than they provide answers. That paradox is what happens to an object unfortunate enough to fall into a black hole. Last time we talked about this puzzle, we reviewed why the very concept of something falling into a black hole is an insanely complicated problem which plays havoc with what we think we know about quantum mechanics. Currently, a leading theory posits that tiny wormholes allow the scrambled particles of the doomed object to maintain some sort of presence in this universe without violating the laws of physics. But not content with someone else’s theories, and knowing full well that his own famous finding about black holes made such solutions necessary in the first place, as explained by the linked post, Stephen Hawking now claims to have found a new solution to the paradox and will be publishing a paper shortly.

While we don’t know the exact wording of the paper, we know enough about his solution to say that he has not really found a satisfactory answer to the paradox. Why? His answer rests on an extremely hard to test notion that objects falling into a black hole are smeared across the edge of the event horizon and emit just enough photons for us to reconstruct holographic projections of what they once were. Unfortunately, the result would be more scrambled than the Playboy channel on really old TVs, so anyone trying to figure out what the object was probably won’t be able to do it. But it would be something at least, which is all that thermodynamics needs to balance out the equations and make it seem that the paradox has been solved. Except it really hasn’t, because we haven’t the slightest idea of how to test this hypothesis. It still violates the monogamy of entanglement, and because the photons we’re supposed to see are meant to be scrambled into an unidentifiable flash of high speed, high energy particles, good luck proving the original source of the information.

Unless we physically travel to a black hole and drop a powerful probe into it, we would only have guesses and complex equations we couldn’t rule out with practical observations. Sadly, a probe launched today would take 55.3 million years to get to the nearest one, which means any practical experiments are absolutely out of the question. Creating micro black holes, as both an experiment for laboratory study and a potential relativistic power source, would take energy we can’t really generate right now, rendering experiments in controlled conditions impossible for a long time. And that means we’re very unlikely to get closer to solving the black hole information paradox for the foreseeable future, unless by some lucky coincidence we see something out in deep space able to shed light on the fate of whatever falls into a black hole’s physics-shattering maw, regardless of what the papers tell you or the stature of the scientist making the claim…


Just like the most common advice to men and women is not to sleep with crazy, someone should’ve told Chipotle not to get into business with crazy when it decided to pander to the anti-GMO crowd, the left’s version of ardent climate change denialists, who don’t even want to let scientists conduct safety studies on modified crops, much less admit that they’re safe. It was clearly a move to keep cash flowing from a younger, lefty demographic, and while the junk science in press releases and store signs proclaimed the chain GMO-free, the reality was that much of the feed used to raise the animals which would become its supposedly pure and natural tacos and burritos was heavily genetically modified. Thanks to that technicality, there is now a high profile lawsuit accusing Chipotle of false advertising. While its food is not genetically modified, just as claimed, the ingredients that once used to move and make noises ate feed that was, therefore the chain is still tainted, and consumers who were told otherwise in the chain’s G-M-Over It ads were misled and falsely trusted the company with their health.

Much in the same way devout kosher Jews wouldn’t want to use a dairy spoon to eat beef stew, the people slavishly devoted to tracing all the world’s ills to GMOs and Monsanto are not going to be happy that something modified in a lab may have come in contact with what’s on their plates, and the chain is going to have to double down on its claims to keep its new customers. But to fight off the suit, Chipotle is actually using real science, stating that eating something modified does not mean your genes will be modified in turn, and that any claim otherwise is nonsense. And that’s a true statement. But then why exactly should GMOs be off the menu? What exact danger does a genetically modified meal pose to diners who won’t be absorbing its DNA, and which had to run through a gamut of tests to rule out any of the several million proteins identified as allergens or possible toxins? Oh, right, the danger of the scientifically ignorant, egged on by businesses hawking a lot of overpriced “natural” and “organic” stuff, running out the door in fear of GMO cooties.

Don’t feel bad for Chipotle, because it’s getting its proper comeuppance for marketing to a vocal and dogmatic ideology after smelling easy money. Nothing a national chain that’s trying to feed millions of people a day does will ever be pure enough for paranoid zealots, and keeping up this facade will only lose it time and money over the long term. It can expect more nuisance suits like this, high profile coverage of those suits, and many more laughing pundits like me who won’t hesitate to point out that it brought this on itself. Gordon Gekko was right to a very certain extent when he proclaimed that greed is good in the world of business. But there’s a big and important corollary to that. Clever, calculated greed which seeks out new markets in which to sell needed, wanted, and useful products is terrific. A knee-jerk, follow-the-crowd money grab in a demographic better known for histrionics, hyperbole, and over-sized wallets is usually bound to backfire the second you fail to be as fanatical and dogmatic as they are, which is not a matter of if, but when. You’ll get boycotted, and your greedy ways will give the media a lot of mileage…

human heart

When it comes to preserving donated organs for transplantation, the last several decades gave doctors only one way to keep them alive long enough to be useful: chill them and transport them to the recipients as quickly as possible to avoid spoilage. But a new generation of technology, built with a much better understanding of organ structure and function, is giving us a new option. Say goodbye to coolers and hello to sterile biospheres where organs are kept warm, fed, and supplied with a private circulatory system until they’re ready to be transplanted. All of the surgeries done using warm, functioning organs have been successes thus far, and the companies that make these organ-preserving devices are already eyeing improvements to sustain organs with the nutrient and temperature settings the donor organs need for their unique conditions, sizes, and shapes, instead of a general treatment for their organ type. Think of it as the donated organ getting first class transportation to its new home. But that’s making some people feel a bit uneasy…

According to reactions covered by MIT’s Technology Review, and repeated elsewhere, organs being restored to full function may be blurring the line between life and death, and not waiting a proper period of time means that instead of harvesting the organs of a deceased patient, doctors are actually killing someone by taking his or her organs so others can live. In some respects, we do expect that sort of triage in hospital settings because, after all, there’s only so much even the best medical techniques and devices can do to help patients, and if doctors know that all efforts will be in vain, it only makes sense to save time, money, and resources, and give others a shot with the organs they need, something always in short supply. Wait too long to harvest the heart, liver, and kidneys, and they’ll start to die, putting the would-be recipient at risk of life-threatening complications or outright transplant failure. However, if you don’t wait long enough, are you just helping death do its job, killing a doomed patient while her family watches? The increasingly fuzzy line between life and death makes this a very complicated legal and ethical matter.

But even considering this complex matter, the objections against refined organ harvesting miss something very important. Doctors are not taking patients who can make a full recovery into the operating room, extracting vital organs, putting them in these bio-domes, and sending them out to people in need of a transplant. These organs come from those who are dead or would die as soon as the life support systems are shut off, with no possibility of recovery. Revive a heart which stopped after a patient died of circulatory disease and the patient will die again. Support organs inside the body of someone who is brain dead, or so severely brain damaged that recovery just can’t happen, and all you’re doing is delaying the inevitable. It takes a lot more than a beating heart or working liver to actually live, and these new preservation devices are not giving doctors an incentive to let someone die, much less speed up a patient’s death. They’re giving us a very necessary bridge toward the artificial or stem-cell-grown organs we are still trying to create, as thousands die of organ failure we could fix if only we could get them the organs they need…

death with rose

Starting a skeptical blog is exactly like starting any other blog. No committee requests to review your posts and approve the skeptical label, no regular audits of your content are held by JREF or any other skeptical group, and the only third party classification of skepticism you’ll get would come from DMOZ, which would select a category under which to post a link to your blog so web crawlers for major search engines can quickly and easily index it. But at the same time, when you find blogs that use the s-word in their titles and tags, there’s a certain kind of content you expect from the posts and podcasts. You’ll be looking for references to scientific works, a critical take on personal testimony and anecdotal evidence, and a distinct lack of conspiracy theories. Just imagine your surprise, then, when a blog called Skeptico rushes to defend a doctor who claimed to have proof of a picturesque afterlife after a bout with meningitis from the “liberal atheist media” following a less than flattering exposé of him and his troubled background in Esquire. Seems odd, right?

Yes, to be fair, the article seemed very clear about where it was going even before it started to officially challenge Dr. Eben Alexander’s story, which, while very typical among those who went through near death experiences, was very much the kind of agenda-first journalism I decried a few weeks ago. But that said, while the Tinder story blatantly ignored science that sabotaged a point it wanted to make, and its writer employed all manner of semantic games to wave it away, the tale about Alexander is unflattering but factual. He had the training and skills to be a really great surgeon, but he made mistakes and tried to cover his tracks when caught by patients who were harmed by his inattention to detail. It’s very unlikely, at least to me, that he spun his tale of seeing the afterlife out of whole cloth, but it does seem likely that he fine-tuned it to make sure it would fly off the shelves and get him maximum exposure. These are not tricks unknown to the market for books and public appearances by those claiming firsthand accounts of the afterlife.

And if we turn to Skeptico for a look under the hood, we’ll find not so much a skeptical blog that looks into near death experiences as ardent supporters of these stories, whose goal isn’t to find a scientific explanation for visions during NDEs, but to come up with a scientific word salad to support the idea of the afterlife. They are not skeptics but believers with an axe to grind against atheists and skeptical scientists, and their entire proof of malfeasance in the story ran by Esquire is a conspiracy theory in which the writer is carrying out orders from a dark cabal of atheists, liberals, and doctors threatened by Alexander’s story and desperate to take an accomplished neurosurgeon down a few notches. Throughout the transcript we never do learn exactly what was being lied about, or see evidence that quotes were misappropriated; we are simply assured that it happened because, well, Mrs. Alexander says so. And if you keep looking around the site, you’ll find a dozen more hypercritical posts aimed at Dr. Alexander’s skeptics.

Look, I get it. Airtight evidence of an afterlife, even a religiously ambiguous one, would make all the injustices, problems, and suffering of our existence much easier to bear. Knowing that your death would reunite you with lost loved ones and favorite pets would make a terminal diagnosis feel like a bit less of a burden. Humans, understanding their own mortality, have been picturing some sort of life after death since the first shamans and cave paintings, desperately hoping that this is not all there is to existence. But the fact of the matter is that we don’t have NDEs so thoroughly researched and inexplicable that we can cite them in peer reviewed literature and replicate them. If we did, religious snake oil salesmen wouldn’t be chasing people who suffered one to write stories about visiting the other side and speaking authoritatively about what we will encounter once we shed our mortal coil to an audience desperately eager for reassurance. The people who run and frequent Skeptico are part experiencers, part anxious believers, and part victims of a lucrative market for the ultimate reassuring story. But they’re not skeptics.

relativity formulas

In a quote often credited to Albert Einstein, the famous scientist quips that if you can’t explain a concept to a six year old, you clearly don’t understand it yourself. Now, it may take a very bright six year old to truly comprehend certain concepts, but the larger point is perfectly valid and can be easily proven by analyzing the tactics of many snake oil salespeople hiding behind buzzword salads to obscure the fact that they’re just making things up on the spot. If you truly understand something, you should be able to come up with a very straightforward way to summarize it, as was done here in a brilliant display of exactly this kind of thinking. But sadly, scientists are really bad at straightforward titles for their most important units of work, their papers. Countless math, physics, computer science, and biology papers have paragraph-length titles so thick with jargon that they look as if they were written in another language entirely. And that carries a steep price, as a recent study analyzing citations of 140,000 scientific papers over six years shows.

You see, publishing a paper is important, but it’s just half the work. The second crucial part of a scientist’s job is to get that paper cited by others in the field. The more prominent the journal, the more chances for citations, and the more citations, the more important the research is seen to be, which means speaking gigs and potential applications for fame and profit. But as it turns out, it’s not just the journal and the work itself that matter. Shorter titles are objectively better and yield more citations, because scientists looking at long, complicated titles get confused and won’t cite the research, unsure if anything in it actually applies to them. Quality of the work aside, the very fact that other experts can’t tell what you’re going on and on about is bad for science, leading to even more people redoing the same work from scratch. To truly advance, science needs to build on previous work, and if the existing work seems like an odd fragment of alien gibberish at first glance, no one will review it further. So next time you write a scientific paper, keep its title short, sweet, and to the point. Or no one will read it, much less cite it as important to the field.

babel fish

When you find yourself in a debate with a partisan ideologue who claims that all higher education is simply anti-American socialist brainwashing, he will often bring up that Noam Chomsky is one of the most cited scholars in the world despite the penchant for left wing radical conspiracies he adamantly supports in his books. However, the reason why Chomsky is cited so often has zilch to do with his politics and everything to do with his study of language, particularly his theory of a universal grammar. According to his work, all human languages share common patterns which we could use to create universal translators and pinpoint the semantic details of each word with proper context. This idea is particularly popular in computer science, especially in a number of AI experiments, because it could give us algorithms for symbol grounding, a fancy term for deciding exactly what a word is supposed to represent in a given situation. This is one of the fundamental leaps needed for machines to truly understand what humans say.

Of course, as with any theory with the word universal in the title, there’s plenty of criticism about how universal it actually is, and some of it has escalated into a full blown feud among linguists. Critics of the theory have gone as far as to say that universal grammar is whatever Chomsky wants it to be when it’s being debated, which in academia is actually a pretty vicious burn. But that’s rather expected, since a theory that claims to apply to every language on the planet can be challenged with a single example that fails to conform to it, no matter how obscure. Considering that we have to account not only for modern languages, but for the evolution of all known languages to make the theory airtight, there’s still a lot to flesh out in Chomsky’s defining work. Working with all modern languages is hard enough, but working with historical ones is even more challenging because the majority of human history was not recorded, and the majority of what was recorded is pretty sparse. I’d wager that 95% of all languages ever created are likely to be lost to time.

Even worse, we know our languages change so much that their historical origins can be totally obscured with enough time. While the first anatomically modern humans evolved in Africa some 100,000 years ago, a comparative analysis of today’s language patterns just doesn’t show any founder effect, meaning that if one of our first ancestors stumbled into a time machine and traveled to today, she would not be able to understand a single sound out of our mouths without instruction from us. Research like this has led many linguists to believe that language is shaped by culture and history more than by the raw wiring of our brains, as per the universal grammar theory. Others disagree, producing papers such as the recent MIT study of logical patterns in 37 languages, which shows that all of the languages prefer very similar rules when it comes to their grammatical style, meaning that the underlying logic had to be the same, even when comparing Ancient Greek to modern languages as different as English and Chinese.

By analyzing how closely related concepts cluster in sentences across all the languages chosen for the project, researchers found that all of them prefer to keep related concepts close to each other in what they considered a proper, grammatically correct sentence. To use the example in the study, in the sentence “John threw the trash out,” the domestic hero of our story was tied to his action and the villainous refuse was tied to where it was thrown. These concepts weren’t on opposite sides of the sentence or at a random distance from each other. This is what’s known as dependency length minimization, or DLM, in linguist-speak. One of the few undisputed rules of universal grammar is that in every language, the core concepts’ DLM should be lower than a random baseline, and this study pretty solidly showed that it was. In fact, every language had an extremely similar DLM measure to the others, seemingly proving one of the key rules of universal grammar. So where exactly does that leave the theory’s critics?
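To make the DLM idea concrete, here’s a toy sketch of how dependency length can be computed for the study’s example sentence. The hand-made dependency parse and the brute-force random baseline below are my own illustration, not the researchers’ actual method:

```python
import itertools

# Dependency length for one sentence: the sum of distances between each word
# and its head in the dependency parse. The parse below for "John threw the
# trash out" is a hand-made toy example, not output from a real parser.
words = ["John", "threw", "the", "trash", "out"]
# (dependent index, head index) pairs; "threw" (index 1) is the root.
deps = [(0, 1), (3, 1), (2, 3), (4, 1)]

def dep_length(order):
    """Total dependency length given a word ordering (list of word indices)."""
    pos = {w: i for i, w in enumerate(order)}
    return sum(abs(pos[d] - pos[h]) for d, h in deps)

actual = dep_length([0, 1, 2, 3, 4])
# Random baseline: average dependency length over all 120 possible orderings.
baseline = sum(dep_length(list(p)) for p in itertools.permutations(range(5)))
baseline /= 120
print(actual, baseline)  # 7 8.0 — the real word order beats the scrambled average
```

The study’s claim is essentially that this pattern, actual dependency length coming in under the scrambled-order baseline, holds across every language tested.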

Well, as said before, calling any theory universal is fraught with problems and leaves it open to the most minor, nit-picking criticism, and we all know of exactly one society based around logic, and that’s the Vulcans from Star Trek. To dispute the theory, linguists had to go out of their way to visit tribes so vaguely aware of the modern world, we may as well be from another planet to them, and look for the smallest cultural inconsistencies that conflict with the current interpretation of a theory they say is somewhat vague. Certainly they could produce a language that eschews the rules of universal grammar in favor of tradition and religion, and maybe Chomsky should just tone his theory’s presumptuous name down a bit and accept that his work can’t apply to every single language humans have ever used or will invent in the future. But in the end, universal grammar does appear extremely useful and shows that logic plays the most important part in all languages’ initial structures. We might not be able to use the theory to build perfect universal translators, but we could come quite close, since the required patterns exist as predicted.

[ illustration of a Babel Fish by John Matrz ]