Archives For popular science


When an expansive article on GMOs became the lead story in Elle Magazine, it wasn’t exactly a shocker that the story got its science wrong and horribly abused quotes to create a controversy where one didn’t exist. In fact, it’s par for the course when GMOs are mentioned in publications not known for their scientific reporting. Just like conservative political outlets go out of their way to deny global warming and denigrate the scientists involved in climate modeling, generally left-leaning lifestyle magazines do whatever they can to cast some doubt on the viability of GMOs in a noxious mix of conspiracy-mongering and double standards. No matter how many tests looking for potential allergens or toxins are done over decades, the anti-GMO pundits declare that there aren’t enough studies of the modified crops’ safety, and that this surely means Monsanto turned millions of people into its unwitting guinea pigs for the sake of profit.

Meanwhile, even a single experiment which claims to find some sort of a problem with GMOs, no matter how horribly it was done or how much the researchers who conducted it threaten reporters who want a second opinion or ask questions, has to be held up as the definitive proof that we’re all being slowly poisoned by greedy tycoons. The reality is quite different, of course. GMOs are actually strictly regulated, unlike organic food, since each new protein or genetic modification is treated as a food additive and has to be cleared by an independent panel of experts and by the FDA to ever hit the market. By contrast, anything described as "natural" and used in organic food does not have to be subjected to any studies, thanks to the codification of the naturalistic fallacy into law and despite the fact that nature can be very, very deadly. However, it’s not all regulations, good science, and securing the food supply. GMO makers use and abuse the patent system to milk a hefty profit from every stage of their products’ lifecycles and bilk farmers.

But don’t expect a discussion about the patent system and biology in Elle because the story isn’t so much about GMOs as about the author and her quest to rid herself of allergies, transitioning into a standard storyline of a woman in search of truth. Though by truth what I really mean is an exploratory trip into the land of conspiracy theories, because that’s what the readers want. It’s a story written for the magazine’s target demographic, which is why it’s first person and focuses on vague, scary-sounding concerns to keep readers hooked. And this is why the admonition given to this article after a fact check sounds a bit silly, to put it mildly, as it laments the science abuse and rampant misquotes deployed to create a controversy for the sake of eyeballs…

It represents a major setback for science journalism, and for consumers who rely on hugely popular lifestyle publications to make their way through complicated issues. Is GMO corn causing allergies or other disorders? Are GMOs a threat? Elle perpetuates a “controversy” that just doesn’t exist in the mainstream science or medical communities. Worse, it fans the flames of doubt and distrust that fuel unilateral opposition to a sophisticated technology that could improve global food security.

Here’s the thing. If people are getting their science information from the same magazines which tell them what shoes are in this season, or what celebrity is working on what new movie, we have much bigger problems than are being highlighted here. Why would anyone think that relying on the latest edition of Vanity Fair, or Esquire, or, yes, Elle, for the latest and greatest in important, everyday science is a good idea? Certainly, one doesn’t expect fashion tips and celebrity gossip in their edition of National Geographic. Likewise, why would people rely on fashion magazines to navigate important policy debates? The really scary thing is that despite most people singing all manner of praises to science and a STEM education in popular surveys, they by and large do not care about the science that actually gets done or why, and even worse, don’t want to care. And considering that, is it any wonder that publications catering to people who only say they care about being scientifically literate focus on creating controversy, peddling conspiracies, and moving copies to charge advertisers more? The Elle story is just one symptom of a much bigger issue…

[ photo illustration of news kiosk in Zurich via Wikimedia Commons ]


[ image: exposed brain ]

Psychology has occasionally been called "the study of college undergraduates" and while that would usually be a joke in the psych department, a few writers are raising red flags that it’s too common a practice and might be affecting the quality of the science. The study they chose to highlight? A survey trying to make the link between someone’s first sexual experience and what sexual activity follows, based on 319 heterosexual college students who started having sex only about two years prior to the study and were asked to describe their intimate activities with some very positive and some very negative adjectives from a prescribed list. While the critics ask why the population was so homogeneous and the responses were so limited, this actually makes a lot of sense. If you’re not sure of your hypothesis, you want to have the most uniform samples you can find and distill inherently qualitative feedback into a more quantitative form. From there on, you can test if the theory holds for more sexually experienced and diverse populations. So why are science writers harping on a perfectly legitimate, well done hypothesis-fishing study?

Probably because it’s recent and it found that the students’ first sexual experience tended to be indicative of how they’d describe their future ones. And when limited to the population studied, it does make sense. Many of them are still relatively wet behind the ears and having finally had a real sexual encounter, they’re wondering what others will be like and comparing it to their first as they get more and more experience because it’s usually one of their few points of reference. At the same time, however, as the first experience fades into memory, new highlights come to take its place and a terrible first time gets forgotten in favor of the last mind-blowing experience and that might go on to color future encounters. We could also wonder about couples who lost their virginity to each other and haven’t had sex with anyone else. So why didn’t the researchers take cases like this into account? Well, they’d be outside the scope of the study, which basically just points out the obvious that yes, there’s a mental link between what you thought of your first time and your future preferences and expectations, as it applies to the sample population.

And that last phrase is really the crux of the matter because while human sexuality is so diverse and complex that questions about it could easily fuel centuries of studies and experiments, the pool of people willing to be studied is limited and the external factors they’d bring into the study make it complicated to tease out complex and minute differences that might hint at something more, something that merits further research. College undergraduates are easy to recruit, easy to find close to the researchers’ labs, and fairly easy to homogenize, so they make for a simple, convenient set of test subjects in pilot studies. They’re a classic go-to convenience sample, and if you want to study special populations, you’ll go and study those special populations when you have the resources to do so. It’s just not fair to expect a narrow study to account for everything and use it as a springboard to pontificate on the limited utility of convenience sampling in basic psychology published for the public. And here the media has to take some heat as well.

How many pop sci writers just copy and paste the press release? How many of them wrote clickbait headlines that sound as if an exhaustive study settled the question of just how special your first time is to you and what role it plays in your sex life? And how many of them, trying their best to be contrarians, put words in the researchers’ mouths and criticized them for making claims not actually made by the study? My guess? Quite a few. In fact, the three other studies critically reviewed in the referenced critique were papers the media uncritically hyped into the viral stories they became. We can certainly argue about how much psychologists are relying on convenience samples of white, college educated students in the West, and what this does to the field as a whole. However, if the initial studies seem to be suffering from a bad sample or are way too limited to be applied outside of a very narrow socioeconomic group, the media klaxon is making the problem a hundred times worse. For writers to then wag their finger at the scientists, saying "tsk, tsk on your sampling techniques" without acknowledging that their colleagues have been running away with inconclusive and narrow studies for years is very disingenuous.


[ image: circuit boards ]

A few years ago, when theoretical physicist Michio Kaku took on the future of computing in his thankfully short-lived Big Think series, I pointed out the many things he got wrong. Most of them weren’t pedantic little issues either; they were a fundamental misunderstanding of not only the existing computing arsenal deployed outside academia, but the business of technology itself. So when the Future Tense blog put up a post from highly decorated computer expert Sethuraman Panchanathan purporting to answer the question of what comes after computer chips, a serious and detailed answer should’ve been expected. And there was one. Only it wasn’t a reply to the question that was asked. It was a breezy overview of brain-machine interfaces. Just like Kaku’s venture into the future of computing in response to a question clearly asked by someone whose grasp of computing is sketchy at best, Panchanathan’s answer was a detour around what should’ve been done instead: an explanation of why the question was not even wrong.

Every computing technology not based on living things, a somewhat esoteric topic in the theory of computation we once covered, will rely on some form of a computer chip. It’s currently one of the most efficient ways we’ve found of working with binary data and it’s very unlikely that we will be abandoning integrated circuitry and compact chips anytime soon. We might fiddle around with how they work on the inside, making them probabilistic, or building them out of exotic materials, or even modifying them to read quantum fluctuations as well as electron pulses, but there isn’t a completely new approach to computing that’s poised to completely replace the good old chip in the foreseeable future. Everything Panchanathan mentions is based on integrating the signals from neurons with running currents through computer chips. Even cognitive computing for future AI models relies on computer chips. And why shouldn’t it? The chips give us lots of bang for our buck, so asking "what comes after them" doesn’t make a whole lot of sense.

If computer chips weren’t keeping up with our computing demands and could not be modified to do so due to some law of physics or chemistry standing in the way, this question would be pretty logical, just like asking how we’ll store data when our typical spinning disk hard drives can’t read or write fast enough to keep up with data center demands and create unacceptable lag. But in the case of aging hard drive technology, we have good answers like RAID configurations and a new generation of solid state drives because these are real problems for which we had to find real solutions. But computer chips aren’t a future bottleneck. In fact, they’re the very engine of a modern computer and we’d have to heavily add on to the theory of computing to even consider devices that don’t function like computer chips or whose job couldn’t be done by them. Honestly, I’m at a complete loss as to what these devices could be and how they could work. Probably the most novel idea I found was using chemical reactions to create logic gates, but even that tries to improve a computer chip’s function and design, not outright replace it as the question implies.

Maybe we’re going a little too far with this. Maybe the person asking the questions really wanted to know about designs that will replace today’s CMOS chips, not challenge computation as most of us in the field know it. Then he could’ve talked about boron-enriched diamond, graphene, or graphene-molybdenum disulfide chips rather than future applications of computer chips in what are quite exciting areas of computer science all by themselves. But that’s the problem with a bad question by someone who doesn’t know the topic. We don’t know what’s really being asked and can’t give a proper answer. The fact that it originally came from a popular science and tech discussion, though, makes answering it a political proposition. If instead of an answer you explain that the entire premise is wrong, you risk coming across as patronizing and as making the topic way too complex for those whose expertise is not in your field. That may be why Panchanathan took a shot at it, though I really wish he’d tried to educate the person asking the question instead…


[ image: national ignition facility ]

Generally, I don’t like making two posts linking to the same source back to back, but in the case of the egregious wall of sophomoric bile vomited by Charles Seife, I’m going to let myself make a rare exception. As written here many times before, making fusion energy a viable power source is hard. Really, really hard. It involves very complicated high energy physics that we’re only now starting to understand. When the first ideas for commercial fusion plants were just germinating, we didn’t have the technology or the knowledge base to accurately map out the challenges and as a result, as the machines, computers, and research advance, we’re only now starting to get a more accurate picture of what it would take to make industrial fusion work. But if you listen to the fact-free rants of Seife, the only people supporting the idea of viable fusion are cranks, nutjobs, or naive futurists divorced from reality, and every research project from ITER to the NIF is run by idiots who have no idea what they’re doing and exist only to waste taxpayer money.

While I’d love to tackle scientific arguments as to why this is the case, Seife presents exactly no factual reasoning behind his obnoxious and snide dismissals. The only science we get is in his critique of cold fusion — which, of course, lured LENR cranks to the comments — before which he presents Martin Fleischmann of Fleischmann and Pons fame as a leading fusion researcher whose zeal for fusion fueled the rest of the field, a field apparently populated by idiots and cranks who convince gullible politicians to waste billions on their pipe dreams. This is like naming a random cancer quack who achieved notoriety with a failed experiment and then arguing that all oncology and basic cancer research is being done by ignoramuses just like him. Not only is this a childish and incredibly ignorant thing to do, but it should’ve alerted Slate’s editors to tell Seife that his column wasn’t going to be published unless he could actually get his facts together rather than fume about money and politics and call every researcher in the field incompetent in what reads like an insult comic’s act on amateur night with the punchlines left out of the final product.

If Seife wants to call all of fusion research crap, it’s certainly his right to do so. But as he does, it becomes apparent that his entire argument boils down to "if you can’t make this work right now, you’re all a waste of space and this whole idea is impossible." I suppose this is an easier stance to take than figuring out that fusion research has been funded with a fraction of a fraction of the pittance that governments force themselves to give to basic science, or actually studying how all of the proposed confinement and ignition methods work, as well as why milestones are delayed as energy levels go up and reaction times increase. Why bother with any of that when you could just act like a political talk show pundit? Nature doesn’t give a damn about your dreams, hopes, guidelines, or budgets. Basic research like fusion has a solid theory behind it and no amount of foaming at the mouth about time and money is going to make the theory any less solid. Likewise, no amount of unwarranted insults is going to make scientists discover things any faster. If a pop science writer doesn’t understand that, he doesn’t understand how science works.


[ image: power equation ]

When you’re writing about science, two things tend to happen. One: you’ll attract self-appointed experts in the comments section who’ll tell you that the world will soon know that all modern science is wrong if people only read their brilliant new theories. Two: you’ll have to wade through a lot of jargon from your sources and distill it for readers who might have absolutely no idea what the research you’re covering or trying to review means. But it can be really, really difficult to accurately explain everything involved and more often than not, a lot of writers will just gloss over the terminology and stick to the very basics. And that doesn’t seem to sit well with some science writers, one of whom took to Nature to tell his colleagues not to fear jargon in their articles and to encourage them to boldly use it. Unfortunately, his advice fails on two very important levels and, if followed, would make for some very dry and difficult-to-follow articles.

Jargon has its uses. Experts don’t want to spend a few minutes or a few pages identifying every concept they use, so they encapsulate it in a specialized term to save time and effort that would otherwise be unnecessarily wasted. However, those who aren’t experts won’t understand the full implications of what’s meant by the jargon, and without a thorough explanation, crucial parts of a story can get lost. By using jargon, you’re in effect propagating misunderstandings between the scientists and the public, or worse, glossing over your own inability to explain what’s going on in a particular experiment or paper. In fact, cranks and frauds often rely on a preponderance of jargon to dazzle their audiences into attentive submission because we all too often equate big words and complicated terminology with expertise.

There’s a good reason why physics legend Richard Feynman once supposedly said “If you can’t explain it to a six year old, you don’t really understand it.” The more you have to explain, the deeper you have to go, and the more ground you have to cover in your explanations, the more likely you are to find weaknesses in your ideas and raise questions about their validity, and considering that science thrives on criticism, that’s a great thing. None of this means that a popular science article should turn into a textbook on whatever topic is being reviewed, of course; otherwise, a report about a new gene sequencing technique would have to cover at least a year of undergraduate biology and stretch for hundreds of pages. But there should be a way to sufficiently explain what gene sequencing is and how it works in practice using everyday terms that would interest enough readers to keep reading, and perhaps prompt a few of them to do their own research and start learning the jargon.

By throwing jargon at your readers you’re losing their attention because they don’t understand what you mean and either lose interest or get frustrated that it took them an hour to figure out that you were using a very technical term for a very specific type of neuron found only in one place in the eye, for example. And of course there’s also quite a bit of potential for abuse should flinging a lot of jargon become commonplace in popular science writing. Authors who need to turn in something by a deadline but are unsure of what it is they’re actually trying to report or why it matters can just liberally sprinkle a lot of technobabble onto a page and call it done. Their story is in on time, but the net benefit to the readers is pretty much nil. So perhaps it’s a good idea that science writers try to avoid using a lot of jargon. It keeps everyone a little more honest about what they do and don’t know, and gives readers the opportunity to follow a story that doesn’t talk over their heads or drown them in a stream of technobabble.


Jonah Lehrer writes about popular neuroscience. He’s not a scientist and he did have a moment in which he penned a bizarre article about science moving too slowly for his tastes, but he certainly knows how to read scientific studies and support his arguments with vast tracts of peer-reviewed information, which is generally the key to being a good science writer. But not everyone was impressed with his last effort in describing how creativity works in the human mind. Psychologist Christopher Chabris decided to pound on his book so hard that Lehrer felt compelled to defend himself, triggering a growling back and forth on the web. Usually, if you write a bad book, you’ll just have to live with it, and defending said bad book could make you look rather bad to the public at large, but the problem is that Lehrer didn’t write a bad book. Because the book is about his area of expertise, Chabris feels that it’s his duty to be nitpicky and demanding, and takes his critiques to a completely unreasonable point. Had he written the book, it seems that for every page describing the finding of any particular study there would be no fewer than ten pages of caveats, questions, critiques, and gotchas, and another five devoted to summarizing every replication effort and how it did. Sounds like a fun read, huh?

Really, I absolutely get it: much of our knowledge about the human mind and how it works is provisional, a best guess from data that’s still only scratching the surface of what there is to discover. Hell, we’re still talking about why we sleep and wondering whether it supports neural scaling, a fascinating phenomenon described in detail by the Neuroskeptic in his guest post for a major pop sci magazine, and one that seems to have an interesting implication or two for AI researchers out there focused on artificial neural networks. Having done a few research projects in the AI realm, you really develop an appreciation for the sheer amount of things we do not understand yet see in front of our eyes every day. But at the same time, we do know a good deal and we’re making strides towards finding out much, much more. Interesting work is done every day to unlock the brain’s mysteries, work with very practical applications in medicine, life extension, and social sciences. To either just overlook fascinating or eye-catching ideas because they’re provisional, or drown them out by going on and on about replication and supporting and detracting literature, makes for an absolutely unreadable story for those who are just interested in getting an overall idea of how the mind seems to work. We’re not trying to train new neuroscientists with popular science books and blog posts, we’re just trying to educate the curious.

I know, I know, I can also be a really nitpicky buzz kill, especially when it comes to the Singularity crowd, but all my ridicule is directed at egregious and fundamental mistakes and misunderstandings rather than trying with all my might to turn a mass publication into a proper scientific dissertation. Have you ever read a dissertation or a thesis? They’re usually peppered with enough jargon, diagrams, figures, tables, and schematics to send the head of anyone who is not a grad student or a post-doc in the field spinning, since they’re written not as popular tomes but for trained experts in the subject area. It’s bizarre that Chabris is applying a graduate school standard to a popular work, obsessing over every minor point he finds in Lehrer’s book and demanding pages upon pages of exhaustive summaries of replication efforts. After all, do readers need to know how many other scientists conducted similar research and came up with similar results, or about every disagreement between five teams over an extremely technical point or the statistical significance of a particular observed effect? No, not at all. All they need to know is how the experiment was done, what the results were, what those results mean, and whether this is a departure from what we thought we knew before and if so by how much. That’s already a lot of information to process for a curious layperson. Drowning them in minutiae simply annoys them.

Usually this is when some scientists cough, sputter, and say "what do you mean ‘minutiae?’ I’ve spent much of my life studying all this ‘minutiae’ and wrote paper after paper about it! Of course it’s important!" And it is. To the other experts who study related minutiae and combine their work into a comprehensive picture of the field. Just to use what I know as an example, there are computer scientists who devote all their time to the ins and outs of parallel processing, studying the best and most efficient algorithms for allocating tasks, spawning threads, and synchronizing the results. For extremely complex tasks, I will read their work to figure out if I can get away with using a specialized parallel processing library or if I have to write extra code to tweak my threads to boost performance, or dynamically figure out when sequential execution is faster or if my system will really need to parallelize. You, as a user, don’t need to know or care about any of that. All you need to know is that we’re able to take multiple requests from you and do them side by side to get the information back to you faster, so you’re aware that you can ask your IT team whether they could speed up a slow enterprise application that way. This process of keeping complex information irrelevant to you behind the scenes even has a name as a computing principle: encapsulation. This is basically what science writers do. They encapsulate the science. Want to learn more? You can always take a college class or two and see where that leads you…
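To make that concrete, here’s a minimal Python sketch of the idea. All the names in it are hypothetical examples invented for this post, not code from any real enterprise system: the caller gets one simple function, and whether the work happens sequentially or across parallel threads stays hidden behind it.

```python
# A minimal sketch of encapsulated parallelism: callers use get_records()
# and never see the thread pool behind it. fetch_record and get_records
# are hypothetical stand-ins, not a real API.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_record(request_id: int) -> str:
    """Stand-in for a slow, I/O-bound call, e.g. a database query."""
    time.sleep(0.1)  # simulate network or disk latency
    return f"record {request_id}"

def get_records(request_ids: list[int]) -> list[str]:
    """The encapsulated interface: callers never see the thread pool."""
    # For a handful of requests, thread overhead may outweigh the gain,
    # so fall back to plain sequential execution.
    if len(request_ids) < 4:
        return [fetch_record(r) for r in request_ids]
    with ThreadPoolExecutor(max_workers=8) as pool:
        # Run the fetches side by side and collect results in order.
        return list(pool.map(fetch_record, request_ids))

print(get_records([1, 2, 3, 4, 5, 6]))
```

Whether the threaded path actually wins depends on the workload, which is exactly the sort of trade-off the specialists’ papers quantify and the rest of us just consume.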


Science is apparently failing us. Rather than discovering new realms of possibility, it’s been reduced to using observational tools and computer models to take apart every individual process down to its most basic levels, after which scientists simply assume they can draw conclusions based on what molecules are involved, not how the entire system actually fits together, because doing so would be too hard and expensive. At least that’s the dismal view of the scientific process to which we’re treated by Jonah Lehrer’s feature piece in last month’s Wired Magazine, starting with the saga of a failed cholesterol drug designed to boost HDL. The drug had the intended effect but came with an unexpected increase in heart failure and potential heart attacks, which meant a punitive $21 billion drop in market value for Pfizer on top of the $1 billion sunk into research. Rather than take this failure to mean that something fiendishly complex was not yet known and has to be worked out in the lab, Lehrer uses it as a jump-off point to indict all scientists for focusing on the basics to such a fault that they lose the forest for the trees, and surmises they adopt such narrow perspectives due to their mental limitations.

As any science writer worth his salt would, Lehrer tries to underpin his assertion with a study, in this case one on how people tend to craft narratives based on visual cues, which concluded that because humans look for cues that let them build causal relationships between events and objects, we can get the story wrong. Well, yes, we certainly can, but how this supports the notion that scientists have now engaged in oversimplification isn’t exactly clear. Granted, the age of the polymath is over and scientific fields are so fiendishly complex that you’ll end up specializing in a branch of a branch for your entire research career and only the very rare few will get to explore beyond that. However, that doesn’t mean that no scientist will ever integrate any of the domain specific knowledge at higher levels and investigate how entire systems work. To use an example from my area, there isn’t all that much left to be mined from fine-tuning artificial neural networks because we’ve had the math for a number of them since the 1970s. The goal is to make them grow and interact in large networks where discrete components become something more than just the sum of their parts, much the same way as astronomers studying stars and galaxies help feed models created by cosmologists.

Getting down to the basics is important because we need to know how each node in the system works before we can reassemble the whole thing and start affecting it with full knowledge of how every individual node may react to the changes. And just like Lehrer points out, that’s not an easy task. If you identify 20 components in a particular system, you could be looking at as many as 400 ways they may interact in just a preliminary sweep (20 × 20 ordered pairings), and testing all those interactions will take a lot of time and money. To acknowledge this fact and then strongly imply that scientists are just skipping this investigative step because it’s so expensive and time-consuming is not even wrong. And it’s even more outlandish to treat it as a failure that scientists are now working with immense, complex, dynamic networks stretching from the realm of molecules to entire ecosystems just because a discovery takes longer and tends to be less profound than, say, the laws of gravity, or evolution, or genetic drift, since we’re now tackling a level of detail that would’ve been incomprehensible to any scientist working even a century ago. Science at its heart is about trial and error, and as we test more and more complex hypotheses, we’re bound to see the failure rate go up while a success opens the doors to more profound ideas and tools than ever before. If we’re always terrified of being wrong, how will we ever find what actually works?

[ illustration by Schuhle Lewis ]


Nowadays, thanks to media policies and the abuse of internet podiums, we have a crank problem. Since so many of these cranks get lavished with attention and praise for saying what people want to hear, offering very simple solutions to very complex problems despite the fact that their oversimplifications will never work, we’re probably not going to be served well by encouraging them to keep at it as Margaret Wertheim does in her ode to the proud and arrogant know-nothing. According to her, it’s perfectly fine that a trailer park owner who had less than a semester of physics decided that he knew more than enough to crack cosmology once and for all while the rest of the physics world is still struggling with some fundamental questions. She’s also more than happy that over 2,000 similarly minded cranks created their own alternative symposium where they present a series of their latest random meanderings as serious alternatives to the work of tens of thousands of highly educated experts, and pats them on the head as hard-working mavericks. She couldn’t be more wrong.

You’re probably familiar with the now classic statistic that it takes about 10,000 hours of practice to become a genuine expert at something, and depending on the topic, it may take much more. However, the quality of this practice is also very important. If you spend 10,000 hours solving math problems incorrectly, you’re not going to become an expert mathematician by the end of your exercise, despite fancying yourself as such. But that is exactly what one of the primary characters in Wertheim’s story, the trailer park cosmologist Jim Carter, did with his education. The reason why Wertheim writes about him so prominently is that she’s trying to sell her book, in which Carter is the David taking on the Scientific Goliath. After taking a look at his utterly backwards description of gravity, I doubt the man has ever bothered to read the simplest, most layman-friendly primers on general relativity. And yet he considers himself capable of tackling the biggest questions in physics. Now, were Wertheim not interested in promoting Carter’s crackpottery for financial gain, she could’ve used him as an example of the arrogance of ignorance. But she is, feeding us pseudoscientific canards such as…

They are unanimous in the view that mainstream physics has been hijacked by a kind of priestly caste who speak a secret language – in other words, mathematics – that is incomprehensible to most human beings… In their militantly egalitarian opposition to what they see as a physics elite, [Natural Philosophy Alliance] members mirror the stances of Martin Luther. Luther was rebelling against the abstractions of the Latin-writing Catholic priesthood and one of his most revolutionary moves was to translate the Bible into vernacular German. Just as Luther declared that all people could read the book of God for themselves, the NPA today asserts that all of us ought to be able to read the book of nature for ourselves.

Hmm, not sure I’ve seen someone say "math is hard, screw it" in such grandiose terms and then defend this notion in the historical context of a religious schism. Luther was a believer whose opinions were based on a personal ideology and worldview, so his split with the Catholics can be viewed as one worldview coming into conflict with another. Science is based on reams and reams of evidence, and having random cranks whine at length about how complex this evidence is and how it must be just a way of keeping their brilliance out of the ivory towers so the scientists can keep all the Nobel Prizes to themselves is not a conflict of worldviews. It’s a case of sour grapes, and a textbook one at that. Put off by the amount of effort it takes to be a real physicist but desperately craving to understand how the universe works, they decided that the mathematics involved in the interpretation of the evidence and observations must be wrong anyway and the physicists are pretending that their data is somehow meaningful, just like the proverbial fox decided that the grapes out of its reach must’ve been rotten, otherwise they’d be low enough to pick. That’s not how science works. Sometimes it’s very hard, and it requires a lot of effort and study. Real scientists know their limitations and work hard to understand the fields they chose. Cranks substitute knowledge with conviction and volume, then go preach their gospel.

[ illustration from a promotional poster for Dr. Steel ]


A recently trumpeted paper on astrobiology did some very interesting modeling in a search for places on Mars where some very tough terrestrial microorganisms could survive, and came to a very surprising conclusion. It appears that some 3.2% of the red planet could be habitable by volume, which would make it more friendly to life than our seemingly idyllic world, a world which has been populated with countless living things for billions of years. Now, considering that the Martian surface is as inhospitable to life as it gets because it’s constantly bathed in radiation potent enough to kill even the most radiation-resistant creatures we know to exist, all of this habitable alien real estate is underground, where the deadly rays can’t reach and the temperature and pressure are just right for liquid water to flow through porous rock. Good news, right? If we just dig enough, a future robot, or better yet, a human astrobiologist, should be able to find honest-to-goodness little aliens.

Yes, little green germs aren’t exactly the little green men of classic science fiction, but hey, at least they’d be real extraterrestrial organisms and we’d know for a fact that we’re not alone in the universe. If life could arise on two planets in the same solar system and might be swimming under miles of ice on a moon that looks like a better and better candidate for alien habitation every day, certainly the entire universe is teeming with all sorts of living things, right? Hold that thought. One of the big caveats of using these models as a definitive guide for alien hunting is the lack of detail. In their zeal to report a sensational story, most pop sci outlets just repeated the great statistic and used it as a tie-in to Curiosity’s upcoming mission to track down where exactly Martian microbes would settle into a nice colony to call home. But the simulations merely looked at how far down into the red planet’s caves and rocks we could go and still find possible traces of liquid water. The question of an active, frequently stirred and replenished nutrient base for life to function, despite being the second main prerequisite for habitability, was only briefly mentioned in the paper’s disclaimers as a subject for future research.

Of course it’s perfectly fine for a scientific paper to focus on just one narrow question and leave tangents for a team interested in building on its work. It’s only frustrating when a premise is obviously flimsy or just out of left field and all the important details are waved off as something for others to refine. But in this case, the pop sci news circuit neglected to mention that the authors only set out to see how far Martian rovers could keep on following the water, as per NASA’s strategy for finding life on the red planet. The results were reported as one big, definitive model showing that Mars is actually more habitable than Earth by volume, while all the paper really says is that under the Martian surface, liquid water should be quite plentiful if we extrapolate some models of our own subterranean conditions and ecology to our diminutive, red, desert cousin in the inner solar system, and it does a fairly thorough job of establishing the reasoning behind this conclusion. The leap from where we could find water on Mars to declaring that the typically monolithic block known as "scientists" estimates that the caverns of Mars hold three times as much habitable territory by volume as Earth was simply a sensationalistic exaggeration. We don’t know how truly hospitable to life Mars really is.

But all that said, Mars is a very promising target for extraterrestrial microbes, and the curtain of radiation which makes life nearly impossible on its surface will actually aid in our search for them. As noted in the reference, leaving our equipment to soak up the powerful UV rays for a few hours would sterilize it, and any biota found in caverns or after digging several dozen feet into the red soil is then extremely likely to be native rather than forward contamination from our own world. And yes, that means we absolutely should go there and devote as many resources as possible to make walking on Mars a reality. Of course the R&D involved won’t only benefit astrobiologists, since the necessary reactors, self-sustaining habitats, and treatments to combat the damage caused by constant exposure to radiation could generate tens of billions in revenues and profits for all of the companies involved in putting together the mission’s toolkits if they channel them into mass market products ranging from medical devices to infrastructure. Actually, come to think of it, maybe one of the best things we’d be able to do for the world’s fragile economy is to go on a hunt for some little green germs and test all the pop sci news friendly astrobiology papers like this one on the actual surface of another planet. We’ve tried just about everything else at this point and it doesn’t seem to be working, so why not think outside the box for a bit?

See: Jones, E., Lineweaver, C., Clarke, J. (2011). An extensive phase space for the potential Martian biosphere. Astrobiology. DOI: 10.1089/ast.2011.0660


Sometimes the most difficult part of popular science isn’t the science, but the word preceding it, the indication that what you’re writing should somehow be in the realm of popular interest. Sure, this often results in omissions and sensationalism in headlines and conclusions, but the point is that at least people do care, at least they’re reading the articles and trying to think about scientific problems. But with some disciplines, it’s rather difficult to make something exciting when it ordinarily makes people snore, and computer science is usually one of these topics, despite what the occasional robot demonstration would have you believe. Behind the scenes of an acrobatic flight or a coordinated robot shuffle are millions of lines of code, all pruned by people who have to painstakingly analyze the time complexity of each algorithm, what tasks are being distributed to what processors, and how. This is why most robot sports are boring. The calculations take way too much time, and while the Singularitarians and transhumanists are busy looking for robotic souls and machine bodies, rooms of the nerds they task with doing all the actual work worry about time complexity and algorithm design.

Hardly exciting, huh? Were you to look at a computer science paper, you’d see a flood of discrete math in long and complex pseudocode, periodically followed by proofs of its estimated time complexity, pseudocode that often tackles a problem the vast majority of programmers really don’t have because they can always just scale up their hardware and cut down on the graphic components of the output. No need to implement some esoteric or complex workaround to boost performance by up to 15% when installing a new server could do an even better job of speeding up the application and eliminate the need to write a lot of new automated tests for the newly implemented code. Doesn’t exactly sound like front page material for Popular Science, right? Nerds argue about the best way to boost performance of applications running on a distributed network, click here for more details and to learn about asymptotic notation and proofs! Why do you think tech sites are a euphemism for gadget reviews and news, or breathless coverage of how some self-appointed grand guru of social media made a social startup that’s totally social and incubating social media friendly social spinoffs? Say, did I use the word “social” enough times in that sentence? It’s a really, really hot keyword for tech news searches, and my editor, who was just briefed by Huffington herself, said I need to use more keywords in my posts.
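For the curious, here’s a toy illustration of the kind of trade-off that asymptotic notation captures, written in Python rather than pseudocode. Both functions are hypothetical examples made up for this post; they answer the same question, but one does quadratically more work as the input grows.

```python
# Two ways to answer "does this list contain duplicates?". A paper would
# prove these bounds formally instead of asserting them in comments.

def has_duplicates_quadratic(items: list) -> bool:
    """Compare every pair: roughly n * (n - 1) / 2 checks, i.e. O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items: list) -> bool:
    """One pass with a hash set: O(n) time at the cost of O(n) memory."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicates_quadratic([3, 1, 4, 1, 5]))  # True
print(has_duplicates_linear([2, 7, 1, 8]))        # False
```

Whether the difference is worth rewriting code over, or whether you just buy a bigger server, is exactly the judgment call described above.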

Or here’s another topic from actual computer science affairs: using stored procedures to retrieve data from a database versus an ORM, an object-relational mapper. Stored procedures are usually faster to execute and give you more control over what gets brought back, since you write the SQL commands and they’re compiled in the database engine you’re using. Problem is that if you have a big system, you may find yourself writing lots and lots of stored procedures, and if you change database vendors, you might have to rewrite them all. ORM tools let you work with data from your code, but because the code is compiled in the application layer and goes to a database, retrieves the data you need, then sends it back, it imposes an overhead you could avoid with just a simple stored procedure. You’re also locking yourself into a tool to work with your database rather than a very basic management client which lets you customize what you want to do. Some programmers love abstracting the database mechanics into an ORM and feel relieved that they don’t have to write nearly as much SQL script in the future. Others like the control they get by specifying exactly what gets brought back and how, squeezing as much performance as they can out of existing tools without buying new hardware or more bandwidth to let the processes of the ORM do their job without the slightest risk of network congestion, which can still happen even with lazy loading, a feature that tries to limit the ORM tools’ overhead during database hits.
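To make the contrast concrete, here’s a hedged Python sketch of both routes, assuming a MySQL-style driver (pymysql) and SQLAlchemy as the ORM. The table, the stored procedure, and the connection details are all hypothetical stand-ins invented for this example.

```python
# --- Stored procedure route: the SQL lives in the database engine ---
import pymysql

conn = pymysql.connect(host="dbhost", user="app", password="secret",
                       database="sales")
with conn.cursor() as cur:
    # get_customers_by_region is a hypothetical procedure written and
    # compiled in the database; switching vendors means rewriting it there.
    cur.callproc("get_customers_by_region", ("west",))
    rows = cur.fetchall()

# --- ORM route: the query is composed in the application layer ---
from sqlalchemy import create_engine, select, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"  # hypothetical table
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    region = Column(String(50))

engine = create_engine("mysql+pymysql://app:secret@dbhost/sales")
with Session(engine) as session:
    # No hand-written SQL, but the ORM builds and ships the query at
    # runtime, which is the overhead described above.
    customers = session.execute(
        select(Customer).where(Customer.region == "west")
    ).scalars().all()
```

Same data either way; the argument is over where the SQL lives and who pays the overhead.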

Exciting, huh? But those are questions computer science tackles. Speed, performance, design, solving large and complex problems through an application of certain proven patterns, and creating variations of an existing paradigm, often a slight tweak to make things easier to debug or code. Technophiles with their hearts set on the glorious future where machines a million times smarter than humans run everything should sit down with a computer scientist one day and talk about something as simple as mapping, and let us know how close we are to the Nerd Rapture and the descent of the Great Machine after finding out how much it takes to teach any machine the difference between right and left. Machines don’t just memorize it like we do; they have to perform an elaborate set of simple calculations and a little trigonometry every time they’re faced with a fork in the road, as in the sketch below. To give a simple robot its bearings in a known environment takes hundreds of lines of code, code that’s parsed, scrutinized, and encoded in pages of asymptotic notation and pseudocode, then possibly never used since it relies on some system- or framework-specific trick to improve performance. And again, none of this is of any interest to the vast majority of popular science blog readers. It’s certainly of consequence, but the details just aren’t all that fun to discuss, and even for those who are amateur coders and would certainly enjoy it, the academic formalism they’ll encounter and the quirks of their future workplaces, which insist that something is to be done one way because that’s either the way they’ve always done it or because it’s “a cutting edge tool a company on the cutting edge like us has to use,” take a toll on how excited they’ll be about what they do.
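Here’s a stripped-down, hypothetical sketch of that left-versus-right arithmetic in Python: given a robot’s position and heading plus a target point, a little atan2 trigonometry decides which way to turn. A real navigation stack layers far more on top of this, but this is the core of it.

```python
# Minimal "which way do I turn?" math for a robot in a known environment.
# All function names here are made up for illustration.
import math

def turn_direction(x: float, y: float, heading: float,
                   target_x: float, target_y: float) -> str:
    """heading is in radians, measured counterclockwise from the +x axis."""
    # Angle from the robot to the target in the world frame.
    angle_to_target = math.atan2(target_y - y, target_x - x)
    # Difference from the current heading, normalized to (-pi, pi].
    delta = math.atan2(math.sin(angle_to_target - heading),
                       math.cos(angle_to_target - heading))
    if abs(delta) < 1e-6:
        return "straight ahead"
    return "left" if delta > 0 else "right"

# Facing along +x from the origin, a target at (1, 1) is to the left.
print(turn_direction(0, 0, 0.0, 1, 1))   # left
print(turn_direction(0, 0, 0.0, 1, -1))  # right
```

And that’s before obstacle avoidance, sensor noise, or any of the performance tricks the formal papers spend their pages proving bounds for.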
