Why scientific success is hard to measure
Science is apparently failing us. Rather than discovering new realms of possibility, it’s been reduced to using observational tools and computer models to take apart every individual process down to its most basic levels, after which scientists simply assume they can draw conclusions based on what molecules are involved rather than how the entire system actually fits together, because doing so would be too hard and expensive. At least that’s the dismal view of the scientific process we were treated to in Jonah Lehrer’s feature piece in Wired last month, which opens with the saga of a failed cholesterol drug designed to boost HDL. The drug had the intended effect, but it came with an unexpected increase in heart failure and potential heart attacks, which meant a punitive $21 billion drop in market value for Pfizer on top of the $1 billion sunk into research. Rather than take this failure to mean that something fiendishly complex was not yet known and had to be worked out in the lab, Lehrer uses it as a jumping-off point to indict scientists in general for focusing on the basics to such a fault that they lose the forest for the trees, and surmises that they adopt such narrow perspectives due to their mental limitations.
Like any science writer worth his salt, Lehrer tries to underpin his assertion with a study, in this case one on how people tend to craft narratives based on visual cues, concluding that because humans look for cues that will let them build causal relationships between events and objects, we can get the story wrong. Well, yes, we certainly can, but how this supports the notion that scientists are now engaged in oversimplification isn’t exactly clear. Granted, the age of the polymath is over and scientific fields are so fiendishly complex that you’ll end up specializing in a branch of a branch for your entire research career, and only the very rare few will get to explore beyond that. However, that doesn’t mean that no scientist will ever integrate any of that domain-specific knowledge at higher levels and investigate how entire systems work. To use an example from my area, there isn’t all that much left to be mined from fine-tuning individual artificial neural networks because we’ve had the math for a number of them since the 1970s. The goal now is to combine them into large networks where discrete components grow and interact to become something more than just the sum of their parts, much the same way that astronomers studying stars and galaxies help feed models created by cosmologists.
Getting down to the basics is important because we need to know how each node in a system works before we can reassemble the whole thing and start affecting it with full knowledge of how every individual node may react to the changes. And just as Lehrer points out, that’s not an easy task. If you identify 20 components in a particular system, you could be looking at as many as 400 ways they may interact in just a preliminary sweep, and testing all those interactions will take a lot of time and money. To acknowledge this fact and then strongly imply that scientists are simply skipping this investigative step because it’s so expensive and time-consuming is not even wrong. And it’s even more outlandish to call it a failure that scientists are now working with immense, complex, dynamic networks stretching from the realm of molecules to entire ecosystems, just because a discovery takes longer and tends to be less profound than, say, the laws of gravity, evolution, or genetic drift, when we’re now tackling a level of detail that would’ve been incomprehensible to any scientist working even a century ago. Science at its heart is about trial and error, and as we test more and more complex hypotheses, we’re bound to see the failure rate go up, while each success opens the doors to more profound ideas and tools than ever before. If we’re always terrified of being wrong, how will we ever find what actually works?
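The 400-interaction figure above is simple combinatorics, and it's worth seeing how fast it grows. A minimal sketch, assuming the count refers to ordered pairs of components with self-interactions included (one plausible reading of the arithmetic; the function name is mine):

```python
# Rough sketch of how pairwise interaction counts scale with system size.
# Counting ordered pairs (A acting on B is distinct from B acting on A),
# and letting a component act on itself, n components yield n * n candidates.

def pairwise_interactions(n: int) -> int:
    """Number of ordered pairwise interactions among n components."""
    return n * n

for n in (5, 10, 20, 40):
    print(n, pairwise_interactions(n))
# 20 components -> 400 candidate interactions to screen, as in the article;
# doubling the component count quadruples the screening work.
```

And this only counts pairwise effects; three-way and higher-order interactions blow the number up far faster, which is exactly why a "preliminary sweep" of a 20-component system is already expensive.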