the challenge of popular science writing
No good deed goes unpunished, including writing a science book that will get critiqued by scientists who aren't shy about getting pedantic.
Jonah Lehrer writes about popular neuroscience. He’s not a scientist, and he did once pen a bizarre article complaining that science moves too slowly for his tastes, but he certainly knows how to read scientific studies and support his arguments with vast tracts of peer-reviewed research, which is generally the key to being a good science writer. But not everyone was impressed with his latest effort at describing how creativity works in the human mind. Psychologist Christopher Chabris pounded on the book so hard that Lehrer felt compelled to defend himself, triggering a growling back-and-forth on the web.
Usually, if you write a bad book, you just have to live with it, and defending said bad book can make you look rather bad to the public at large. The problem is that Lehrer didn’t write a bad book. Because the book covers his area of expertise, Chabris feels it’s his duty to be nitpicky and demanding, and he takes his critiques to a completely unreasonable extreme. Had he written the book, it seems that for every page describing the findings of any particular study there would be no fewer than ten pages of caveats, questions, critiques, and gotchas, and another five devoted to summarizing every replication effort and how it turned out. Sounds like a fun read, huh?
Really, I absolutely get it: much of our knowledge about the human mind and how it works is provisional, a best guess from data that’s still only scratching the surface of what there is to discover. Hell, we’re still debating why we sleep and wondering whether it supports neural scaling, a fascinating phenomenon described in detail by the Neuroskeptic in his guest post for a major pop sci magazine, and one that seems to have an interesting implication or two for AI researchers focused on artificial neural networks. Having done a few research projects in the AI realm, you really develop an appreciation for the sheer number of things we don’t yet understand but see in front of our eyes every day.
But at the same time, we do know a good deal, and we’re making strides towards finding out much, much more. Interesting work is done every day to unlock the brain’s mysteries, work with very practical applications in medicine, life extension, and the social sciences. To overlook fascinating or eye-catching ideas just because they’re provisional, or to drown them out by going on and on about replication and the supporting and detracting literature, makes for an unreadable story for those who are just interested in getting an overall idea of how the mind seems to work. We’re not trying to train new neuroscientists with popular science books and blog posts; we’re just trying to educate the curious.
I know, I know, I can also be a really nitpicky buzzkill, especially when it comes to the Singularity crowd, but my ridicule is directed at egregious and fundamental mistakes and misunderstandings, not at trying with all my might to turn a mass publication into a proper scientific dissertation. Have you ever read a dissertation or a thesis? They’re usually peppered with enough jargon, diagrams, figures, tables, and schematics to set the head of anyone who isn’t a grad student or post-doc in the field spinning, because they’re written not as popular tomes but for trained experts in the subject area. It’s bizarre that Chabris is applying a graduate school standard to a popular work, obsessing over every minor point he finds in Lehrer’s book and demanding pages upon pages of exhaustive summaries of replication efforts.
After all, do readers really need to know exactly how many other scientists conducted similar research and came up with similar results, or about every disagreement between five teams over an extremely technical point or the statistical significance of a particular observed effect? No, not at all. All they need to know is how the experiment was done, what the results were, what those results mean, and whether this is a departure from what we thought we knew before, and if so, by how much. That’s already a lot of information to process for a curious layperson. Drowning them in minutiae annoys them.
Usually this is when some scientists cough, sputter, and say “what do you mean ‘minutiae?’ I’ve spent much of my life studying all these ‘minutiae’ and wrote paper after paper about them! Of course they’re important!” And they are. To the other experts who study related minutiae and combine their work into a comprehensive picture of the field. Just to use what I know as an example, there are computer scientists who devote all their time to the ins and outs of parallel processing, studying the best and most efficient algorithms for allocating tasks, spawning threads, and synchronizing the results.
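To give a flavor of what that task-splitting and result-synchronization looks like in practice, here’s a minimal sketch in Python using the standard library’s `concurrent.futures`; the `word_count` function and the sample documents are just placeholders standing in for real per-task work:

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    # A stand-in for a real, more expensive per-task computation.
    return len(text.split())

documents = [
    "the quick brown fox",
    "hello world",
    "one two three four five",
]

# Spawn a small pool of worker threads, hand each document to a worker,
# and let map() collect the results back in their original order.
with ThreadPoolExecutor(max_workers=3) as pool:
    counts = list(pool.map(word_count, documents))

print(counts)  # [4, 2, 5]
```

The hard research questions hide behind that innocuous `map()` call: how many workers to spawn, how to hand out tasks, and how to stitch the answers back together in order.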
For extremely complex tasks, I will read their work to figure out whether I can get away with using a specialized parallel processing library or whether I have to write extra code to tweak my threads for performance, or to dynamically decide when sequential execution is faster and when my system really needs to parallelize. You, as a user, don’t need to know or care about any of that. All you need to know is that we can take multiple requests from you and handle them side by side to get the information back to you faster, so that you can ask your IT team whether a slow enterprise application could be sped up that way. This practice of keeping complex, irrelevant details behind the scenes even has a computing principle named after it: encapsulation. And that is basically what science writers do. They encapsulate the science. Want to learn more? You can always take a college class or two and see where that leads you…
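That encapsulation idea, and the sequential-versus-parallel decision mentioned above, can be sketched in a few lines of Python. This is only an illustration under assumed conditions: the `PARALLEL_THRESHOLD` value is a made-up cutoff, and `process_all` is a hypothetical helper, not any real library’s API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical cutoff: below it, the overhead of spawning threads
# is assumed to outweigh any side-by-side speedup.
PARALLEL_THRESHOLD = 1000

def process_all(items, work):
    """Encapsulates the sequential-vs-parallel decision.

    Callers just get their results back in order; they never see
    whether threads were involved.
    """
    items = list(items)
    if len(items) < PARALLEL_THRESHOLD:
        # Small batch: plain sequential loop.
        return [work(item) for item in items]
    # Large batch: fan the work out across a thread pool.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(work, items))

# The caller neither knows nor cares which path actually ran.
squares = process_all(range(10), lambda x: x * x)
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The point of the design is exactly the one in the paragraph above: the messy expertise lives inside the function, and the outside world sees only a simple “give me my results” interface.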