Archives For neuroscience


According to overenthusiastic hacks at Wired, scientists have recently developed a way to scan your brain and predict just how intelligent you are or how good you’ll be at certain tasks. That sounds less like an actual field of research and more like the beginning of a dystopian nightmare, one that ends with mandatory brain scans for everyone to “facilitate an appropriate job function” in some dark, gray lab run by medical paper pushers. But it only sounds that way because the writer was more interested in page views than in the actual study, which really has nothing to do with intelligence and instead tested whether you could identify someone by scanning how that person’s brain is wired. Rather than trying to develop an IQ test in a box, the researchers put to the test the theory that your brain’s wiring is so unique that a map of it could identify you every bit as well as a fingerprint. Not surprisingly, they found that a high quality fMRI scan of your brain at work on some standard tasks can definitely be used to identify you.
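
To put the fingerprinting idea in concrete terms, here’s a minimal sketch of how that kind of matching could work, assuming each scan is boiled down to a functional connectivity matrix and subjects are matched by simple correlation. The function names and the matching rule are illustrative stand-ins, not the study’s actual pipeline.

```python
import numpy as np

def connectivity_matrix(timeseries):
    """timeseries: (n_regions, n_timepoints) array of one subject's regional fMRI signals."""
    return np.corrcoef(timeseries)          # region-by-region correlation = the "fingerprint"

def identify(new_scan, database):
    """database maps subject_id -> connectivity matrix saved from an earlier session."""
    new_fc = connectivity_matrix(new_scan)
    iu = np.triu_indices_from(new_fc, k=1)  # compare only the unique region pairs
    scores = {sid: np.corrcoef(new_fc[iu], old_fc[iu])[0, 1]
              for sid, old_fc in database.items()}
    return max(scores, key=scores.get)      # best-matching stored fingerprint wins
```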

All right, that’s all well and good; after all, the fMRI scan is basically giving you insight into a unique personality, and no two people’s brains will work exactly the same way. But where does measuring intelligence come into play? Well, the concept of fluid intelligence, mentioned only three times in the study, was brought up as an additional avenue of research in light of the findings. It revolves around the idea that strong connections between certain parts of the brain will make you notably better at making inferences to solve new problems. Unlike its counterpart, crystallized intelligence (called Gc in neuroscience), fluid intelligence (or Gf) is not what you know, but how well you see patterns and come up with ideas. Most IQ tests today are heavily focused on Gf because it’s seen as a better measure of intelligence, and the study’s entire elaboration on what the fingerprinting had to do with predicting Gf was an extended citation of a 2012 study which found a link between how the lateral prefrontal cortex is wired to the rest of the brain and performance on standardized tests designed to measure Gf in 94 people.

Here’s the catch though. Even though how well your lateral prefrontal cortex talks to the rest of your brain does account for some differences in intelligence, much like your brain size, it really only explains about 5% of those differences. Current theory holds that because your prefrontal cortex functions as your command and control center, what Freud described as the ego, a strong link between it and several other important parts of the brain will keep you on task and allow you to solve problems more efficiently. Like a general commanding his troops, it makes sure that every other relevant part of your mind is fully engaged with the mission. But even if that theory is right and your prefrontal cortex is well wired in a larger than average brain, close to 90% of what you would score on an IQ test can come down to your level of education and other environmental factors, which generally make household income and education a better predictor of IQ scores than biology. Even that isn’t especially accurate, because learning style and culture also play a role. All we can conclude is that the interplay between Gf, Gc, and education is very complex.
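
For a rough sense of what “explains 5% of these differences” means in practice, here are my own back-of-the-envelope numbers, assuming the usual variance-explained reading and the standard deviation of 15 IQ points:

```latex
R^2 = 0.05 \;\Rightarrow\; r = \sqrt{0.05} \approx 0.22,
\qquad
\sigma_{\text{unexplained}} = 15\sqrt{1 - 0.05} \approx 14.6 \text{ IQ points}
```

In other words, even if the wiring measure is real, knowing it barely narrows down where any individual’s score will land.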

We should also take note of one study of popular theories of biological contributors to Gf, spanning 44,600 people, which found no evidence that a combination of fMRI maps has predictive power when it comes to IQ points. In other words, we have a lot of plausible-sounding ideas about the biological origins of intelligence, but our brains are very plastic, we are not all on a level playing field when it comes to the amount and quality of education we receive, and even our longest-running efforts at accurate Gc assessments have shown that we’re really bad at it. So studies that claim to predict our IQs from brain scans of 100 college students or fewer are extremely likely to be overselling their results. And even the studies that do oversell still claim to explain only a tiny fraction of the score differences, because they recognize how small and homogeneous their data sets really are. Not only do we not have an fMRI based test for intelligence, we’re not even sure one is possible. But those facts bring in far, far fewer page views than invoking Kafkaesque sci-fi lore in a pop sci post…


Whenever I write a post about why you can’t just plug a human brain, or a map of it, into a future computer and expect to get a working mind as a result, two criticisms inevitably arrive in my inbox and via social media. The first says that I’m simply not giving enough credit to a future computer science lab, because the complexity of a task hasn’t stopped us before and it certainly won’t stop us again. The second points to a computer simulation, such as the recent successful attempt to recreate a second of human brain activity, and says it’s proof that all we need is a little more computing oomph before we can create a digital replica of the human brain. The first criticism is a red herring because it treats laying out how badly many proponents of this idea underestimate the size and scope of the problem as the equivalent of saying that it’s simply too hard to do, while the actual argument is that brains don’t work like computers, and making computers work more like brains can only get you so far. The second criticism, however, deserves a more in-depth explanation because it’s based on a very hard to spot mistake…

You see, we can simulate how neurons work fairly accurately based on what we know about all the chemical reactions and electrical pulses in their immediate environment. We can even link a lot of them together and see how they react to virtual environments, both to test our theories of the basic mechanics of the human brain and to generate new questions to answer in the lab. But this isn’t the same thing as emulating the human brain. If you read carefully, the one second model didn’t actually consider how the brain is structured or wired. It was a brute force test to see just how much power it would take for a typical modern computer architecture to model the human brain. And even if we provide a detailed connectome map, we’ll just have a simulated snapshot frozen in time, giving us mathematical descriptions of how electrical pulses travel. We could use that to identify interesting features and network topologies, but we can’t run it forward, let it change in response to new stimuli, and expect that a virtual mind resembling that of the test subject whose brain was mapped would suddenly come to life and communicate with us.
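
Just to show what “simulating how neurons work” looks like at its simplest, here’s a toy leaky integrate-and-fire model, a standard textbook abstraction rather than the model used in the one second run; the parameters are purely illustrative.

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r=10.0):
    """Leaky integrate-and-fire: integrate a membrane voltage, spike at threshold, reset."""
    v, spike_times = v_rest, []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r * i_in) / tau   # leak toward rest plus the driving input
        v += dv * dt
        if v >= v_thresh:                       # threshold crossing counts as a spike
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant drive produces a regular spike train from this simple model
print(len(simulate_lif(np.full(2000, 2.0))), "spikes in 200 ms of simulated time")
```

Researchers wire many thousands of units like this together to probe network behavior, which is a far cry from recreating any specific person’s mind.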


For those who are convinced that one day we can upload our minds to a computer and achieve the artificial immortality of Ultron in the finest traditions of comic book science, there are a number of planned experiments which claim to have the potential to digitally reanimate brains from very thorough maps of neuron connections. They’re based on Ray Kurzweil’s theory of the mind: we are simply the sum total of the neural network in our brains, and if we can capture it, we can build a viable digital analog that should think, act, and sound like us. Basically, the general plot of last year’s Johnny Depp flop Transcendence wasn’t built around something a room of studio writers dreamed up over a very productive lunch, but around a very real idea which some people are taking seriously enough to use it to plan the fate of their bodies and minds after death. Those who are dying are now finding some comfort in the idea that, should any of these experiments succeed, they can be brought back to life and reunited with the loved ones they’re leaving behind.

In both industry and academia, it can be really easy to forget that the bleeding edge technology you study and promote can have a very real effect on very real people’s lives. Cancer patients, those with debilitating injuries that will drastically shorten their lives, and people whose genetics conspired to make their bodies fail them are starting to make decisions based on the promises spread by the media on behalf of self-styled tech prophets. For years, I’ve been writing posts and articles explaining exactly why many of these promises are poorly formed ideas that lack the requisite understanding of the problems they claim to know how to solve. And that is still very much the case, as neuroscientist Michael Hendricks felt compelled to detail for MIT Technology Review in response to the New York Times feature on whole brain emulation. His argument is a solid one, based on an actual attempt to emulate the brain of an organism we understand inside and out and have mapped from its skin down to the individual codon: the humble nematode worm.

Essentially, Hendricks says that to digitally emulate the brain of a nematode, we need to realize that its mind still runs on thousands of constant, ongoing chemical reactions in addition to the flows of electrical pulses through its neurons. We don’t know how to model them or the exact effect they have on the worm’s cognition, so even with the entire, immaculately accurate connectome at hand, we’re still missing a great deal of the information needed to start emulating its brain. But why should we need all the information, you ask? Can’t we just build a proper artificial neural network reflecting the nematode connectome and fire it up? After all, if we know how information navigates its brain and what all the neurons do, couldn’t we have something up and running? To Hendricks’ argument that the structure of the brain itself is only a part of what makes individuals who they are and how they work, allow me to add that this is simply not how a digital neural network is supposed to function, despite how often it’s compared to our neurons.

Artificial neural networks are mechanisms for learning an unfamiliar task by implementing a mathematical formula in the language of propositional logic. In essence, you define the problem space and the expected outcomes, then allow the network to weigh the inputs and guess its way to an acceptable solution. You could say that’s how our brains work too, but you’d be wrong. There are parts of our brain that deal with high level logic, like the prefrontal cortex, which helps you make decisions about what to do in certain situations, that is, it handles executive functions. But unlike in artificial neural networks, there are countless chemical reactions involved, reactions which warp how the information is being processed. Being hungry, sleepy, tired, aroused, sick, happy, and so on, and so forth, can make the same set of connections produce different outputs from very similar inputs. Ever been happy to help a friend with something until one day you got fed up with being constantly pestered for help, started a fight, and ended the friendship? Humans do that. Social animals can do that. Computers never could.
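
Here’s what that “define the outcomes and let the network guess its way there” loop looks like in its most stripped-down form, a single logistic unit learning the OR function; the data and the learning rate are just illustrative.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the problem space
y = np.array([0, 1, 1, 1], dtype=float)                       # the expected outcomes

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0                                 # random starting weights

for _ in range(2000):
    guess = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # weigh the inputs, squash into a guess
    error = guess - y                            # how far each guess is from the target
    w -= 0.1 * X.T @ error                       # nudge the weights toward better answers
    b -= 0.1 * error.sum()

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b))), 2))  # approaches [0, 1, 1, 1]
```

Note what’s missing: nothing in that loop gets hungry, tired, or fed up, and the same inputs always produce the same outputs once training stops.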

You see, your connectome doesn’t implement propositional calculus; it’s a constantly changing infrastructure for exchanging signals, deeply affected by training, injury, your overall health, your memories, and the complex flow of neurotransmitters between neurons. If you brought me a connectome, even for a tiny nematode, and told me to set up an artificial neural network that captures those relationships, I’m sure it would be possible to draw up something in a bit of custom code, but what exactly would the result be? How do I encode plasticity? How do we define each neuron’s statistical weight if we’re missing the chemical reactions affecting it? Is there a variation in neurotransmitters we’d have to simulate as well, and if so, what would it be and to which neurotransmitters would it apply? It’s like trying to rebuild a city from only its road map, with no buildings, people, cars, trucks, or businesses included, then expecting artificial traffic patterns to recreate all the dynamics of the city whose road map you digitized, with pretty much no room for entropy because entropy could easily break down the simulation over time. You would be both running the neural network and training it at the same time, something it’s really not meant to do.
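
To make those open questions concrete, here’s a skeleton of what you’d actually be staring at if all you had was the wiring diagram. Every name below is an illustrative placeholder, not real C. elegans data, and the point is precisely that the crucial pieces come up empty.

```python
import numpy as np

N_NEURONS = 302                                        # the nematode's full complement
wiring = np.zeros((N_NEURONS, N_NEURONS), dtype=bool)  # the connectome: who touches whom

# What the map does NOT contain, but a running model needs:
weights = None           # how strongly each connection drives its target
plasticity_rule = None   # how those strengths change with activity over time
chemical_state = None    # neuromodulator levels that warp the same wiring's behavior

def step(activity):
    """Advance the hypothetical network by one tick of simulated time."""
    if weights is None or plasticity_rule is None or chemical_state is None:
        raise ValueError("the road map alone can't generate the traffic")
    effective = plasticity_rule(weights * wiring, chemical_state)  # the part we can't fill in
    return effective @ activity
```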

The bottom line here is that synthetic minds, even once they’re capable of hot-swapping newly trained networks in place of existing ones, are not going to be the same as organic ones. What a great deal of transhumanists refuse to accept is that the substrate in which computing is done (and they will define what the mind does as computing) actually matters quite a bit, because information flows through it at different rates and in different ways than it would through another substrate. We can put something from a connectome into a computer, but what comes out will not be what we put in. It will be something new, something different, because we fed the machine just a part of the original and naively expected the code to make up for all the gaps. And that’s the best case scenario, with a nematode and its 302 neurons. Humans have 86 billion. Even if we don’t need the majority of those neurons to be emulated, the point is that whatever problems you’ll have with a virtual nematode brain will be many orders of magnitude worse in a virtual human one, as added size and complexity create new problems. In short, whole brain emulation as a means to digital immortality may work in comic books, but definitely not in the real world.
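
For a sense of that scale gap, here are the ballpark figures (commonly cited ranges, not numbers from the studies discussed above):

```latex
\frac{8.6\times10^{10}\ \text{human neurons}}{302\ \text{nematode neurons}} \approx 2.8\times10^{8},
\qquad
\frac{\sim 10^{14}\text{–}10^{15}\ \text{human synapses}}{\sim 7.5\times10^{3}\ \text{nematode connections}} \approx 10^{10}\text{–}10^{11}
```

And that’s before accounting for the fact that the number of possible interactions grows much faster than the number of parts.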


A long time ago, I shared one of my favorite jokes about philosophers. It went like this. Once, the president of a large and prestigious university was asked which of his staff were the most expensive to fund. “Physicists and computer scientists,” he replied without hesitation, “they always want some brand new machine that costs a fortune to build and operate, not like mathematicians who only need paper, pencils, and erasers. Or better yet, my philosophers. Those guys don’t even need the erasers!” Yes, yes, I know, I’m a philosophical philistine; I’ve been told this so many times that I should start some sort of contest. But my lack of reverence for the discipline is not helped by philosophers who decide to speak up for their occupation in an age when big data and powerful new tools for scientific experimentation are proposing answers to ever more complex real world questions. Case in point: a column by Raymond Tallis declaring that physics is so broken that it needs metaphysics to pull itself back together and produce real results.

Physics is a discipline near and dear to my heart because certain subsets of it can be applied to cutting edge hardware, and as someone whose primary focus is distributed computing, the area of computer science which gives us all our massive web applications, cloud storage, and parallel processing, I find a lot of value in keeping up with the relevant underlying science. Maybe there’s already an inherent bias here when my mind starts to wonder how metaphysics will help someone build a quantum cloud or radically increase hard drive density, but the bigger problem is that Tallis doesn’t seem to have any command of the scientific issues he declares to be in dire need of graybeards in tweed suits pondering the grand mechanics of existence with little more than the p’s and q’s of propositional logic. For example, take his description of why physics has chased itself into a corner with quantum mechanics…

A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent which can be remedied only by vastly multiplying worlds.

As science bloggers love to say, this isn’t even wrong. Tallis and Wallace have mixed up three very different concepts into a grab bag of confusion. Quantum mechanics can do very, very odd things that seem to defy the normal flow of time, but nothing in it says we can’t know the general topology of a quantum system. The oft cited and abused Uncertainty Principle is based on the fact that the fundamental building blocks of the universe behave as both waves and particles, and each aspect comes with its own complementary set of measurements. If you treat the blocks as particles, you can pin down particle-like properties such as position. If you treat them as waves, you can pin down wave-like properties such as momentum. The catch is that you can’t nail both down with arbitrary precision at the same time, because the more tightly you measure one, the fuzzier the other becomes. However, what you can do is create a wave packet, which gives you a good, rough approximation of how the block behaves in both respects. In other words, measurement of quantum systems is very possible.
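
For reference, the trade-off at the heart of all this is usually written as the bound below, and a Gaussian wave packet is the minimum-uncertainty state that comes closest to saturating it, which is exactly that “good, rough approximation of both” just mentioned.

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```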

All right, so that covers the Uncertainty Principle mixup; what about the other two concepts? The biggest problem in physics today is the lack of unification between the noisy quantum mechanics of the subatomic scale and the ordered patterns of general relativity. String theory and the very popular but nearly impossible to test many worlds interpretation try to explain the effects of the basic forces that shape the universe on all scales in terms of extra dimensions or leaks from other universes. So when Tallis complains that after some 40 years we still don’t know which one is right, then piles his misunderstanding of quantum mechanics on top of Wallace’s seeming inability to tell the difference between multiverses and string theory, he ends up with the mess above. We get a paradox where there isn’t one and scope creep from particle physics into cosmology. Not quite a ringing endorsement of philosophy in physics so far. And then Tallis makes it worse…

The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).

Again, a grab bag of not even wrong is supposed to sell us on the idea that a philosopher could help where our tools are pushed to their limits. Considering that Tallis dismisses the entire idea that neuroscience as a discipline has any merit, it’s no wonder he proclaims that we don’t have any clue what consciousness is from a biological perspective. The fact is that we have lots of clues. Certain patterns of brain activity are strongly associated with a person being aware of his or her environment, being able to interact meaningfully, and being able to store and recall information as needed. It’s hardly the full picture of course, but it’s a lot more than Tallis thinks it is. His bizarre claim that scientists consider some nerve impulses to be conscious while the majority are said not to be is downright asinine. Just about every paper on the study of the conscious mind in a peer reviewed, high quality journal refers to consciousness as a product of the entire brain.

The rest of his argument is just a meaningless, vitalist word salad. If brain activity is irrelevant to consciousness, why do healthy living people show certain patterns while those who suffered massive brain injuries show different ones depending on the site of the injury? Why do all those basic brain wave patterns repeat again and again in test after test? Just for the fun of seeing themselves on an EEG machine’s output? And what does it mean that it’s a surprising fact that we can perceive matter around us? Once again, hardly a serious testament to the usefulness of philosophers in science, because so far all we’ve gotten are meaningless questions accusing scientists of being unable to solve problems that aren’t problems, a couple of buzzwords used incorrectly, and bits and pieces of different theories haphazardly cobbled into an overreaching statement that initially sounds well researched but means pretty much nothing. And that’s when Tallis isn’t outright dismissing the science without explaining what’s wrong with it…

Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication.

Here we get a double whammy of Tallis getting the science wrong and deciding that he doesn’t like the existing ideas because they don’t pass his smell test. He’s combining competing ideas to declare them inconsistent within a unified framework, seemingly unaware that the hypotheses he’s ridiculing aren’t meant to be complementary by design. Yes, we don’t know how the universe was created; all we have is evidence of the Big Bang, and we want to know exactly what banged and how. This is why we have competing ideas about quantum fluctuations, virtual particles, branes, and all sorts of other mathematical constructs created in a giant brainstorm, waiting to be tested for any hint of a real application to observable phenomena. Pop sci magazines might declare that math proved that a stray quantum particle caused the Big Bang, or that we were all vomited out by some giant black hole, or that we’re living in the event horizon of one, but in reality, that math is just one idea among many. So yes, Tallis is right about the confusion under the algebra, but he’s wrong about why it exists.

And here’s the bottom line. If the philosopher trying to make the case for his profession’s inclusion into the realms of physics and neuroscience doesn’t understand what the problems are, what the fields do, and how the fields work, why would we even want to hear how he could help? If you read his entire column, he never does explain how, but really, after all his whoppers and not even wrongs, do you care? Philosophers are useful when you want to define a process or wrap your head around where to start your research on a complex topic, like how to create an artificial intelligence. But past that, hard numbers and experiments are required to figure out the truth; otherwise, all we have are debates about semantics which at some point may well turn into questions of what it means to exist in the first place. Not that this last part isn’t a debate worth having, but it doesn’t add much to a field where we can actually measure and calculate a real answer to a real question and apply what we learn to dig even deeper.


According to a widely reported paper by accomplished molecular geneticist Jerry Crabtree, the human species is getting ever less intelligent because our society removed the selective pressures that nurture intelligence and weed out mutations that can make us dumber. This is not a new idea by any means; in fact, it’s been a science fiction trope for many years and got its own movie to remind us of the gloom and doom that awaits us if we don’t hit the books: Idiocracy. Crabtree’s addition to it revolves around some 5,000 genes he identified as playing a role in intelligence by analyzing the genetic roots of certain types of mental retardation. He then posits that because we tend to live in large, generally supportive communities, we don’t have to be very smart to reach reproductive age and have plenty of offspring. Should mutations that make us duller rear their ugly heads over the next few thousand years, there’s no selective pressure to weed them out because the now dumber future humans will still be able to survive and reproduce.

Evolution does have its downsides, true, but Crabtree ignores two major issues with his idea of humanity’s evolutionary trajectory. The first is that he overlooks beneficial mutations and the fact that just two or three negative mutations won’t necessarily stunt our brains. Geneticists who reviewed his paper and decided to comment say that Crabtree’s gloom and doom just isn’t warranted by the evidence he presents, and that his statistical analysis leaves a lot to be desired. The second big issue, one that I haven’t yet seen addressed, is that Crabtree doesn’t seem to have any working definition of intelligence. These are not the days of eugenicists deluding themselves about their genetic superiority to all life on Earth, and most scientifically literate people know that survival of the fittest wasn’t Darwin’s description of natural selection but a catchphrase created by Herbert Spencer. Natural selection is the survival of the good enough in a particular environment, so we could well argue that as long as we’re smart enough to survive and reproduce, we’re fine.

This means that Crabtree’s description of us as the intellectual inferiors of our ancient ancestors is at best irrelevant and at worst pointless. However, it’s also very telling because it fits so well with the typical assessment of modern societies by eugenicists. They look at the great names in history, both scientific and creative, and wonder where our geniuses are. But they forget that we do have plenty of modern polymaths and brilliant scientists, and that in Newton’s day the typical person was illiterate, had no idea that there was such a thing as gravity or optics, and really couldn’t be bothered to give a damn about either. Besides, how do we define genius anyway? With an IQ test? We know those only measure certain pattern recognition and logic skills, and anyone can learn to score highly on them with enough practice. You can practice-test your way to becoming the next Mensa member so you can talk about being in Mensa and how high your IQ scores were, which in my experience tend to be the predominant activities of Mensa members. But then, they are members of an organization created to guide us dullards to a better tomorrow after all…

But if IQ scores are a woefully incomplete measure of intelligence, what isn’t? That depends on who’s doing the measuring and by what metric. One of the most commonly cited factoids from those in agreement with Crabtree is how much time is being spent on Facebook and watching reality TV instead of reading the classics and inventing warp drives or whatnot. But is what we usually call book smarts necessary for survival? What we consider trivial knowledge for children today was once the realm of brilliant, highly educated nobles. Wouldn’t that make us smarter than our ancestors, since we’ve been able to parse the knowledge they accumulated to find the most useful and important theories and ideas, disseminate them to billions, and build things they couldn’t have even imagined in their day? How would Aristotle react to a computer? What would Hannibal think of a GPS? Would the deleterious genetic changes Crabtree sees as an unwelcome probability hamper our ability to run a society, and if so, how?

Without knowing how he views intelligence and how he measures it, all we have is an ominous warning, one that single-mindedly focuses on potential negatives rather than entertaining potential positives alongside them, and draws conclusions about their impact on a concept too nebulous and ill-defined to support such conclusions. In fact, the jury is still out on how much of intelligence is nature and how much is nurture, especially when we consider a number of failed experiments with designer babies who were supposed to be born geniuses. We can look at families of people considered to be very intelligent and note that they tend to have smart kids. But are the kids smart because their parents are smart, or because they’re driven to learn by parents who greatly value academics? We don’t know, but to evolution, all that matters is that these kids secure a mate and reproduce. To look for selection’s role beyond that seems more like an exercise in confirmation bias than a scientific investigation into the origins of human intelligence. That research is much more complex and elaborate than gene counting…


Ray Kurzweil, the tech prophet reporters love to quote when it comes to our coming immortality courtesy of incredible machines supposedly being invented as we speak, despite his rather sketchy track record of predicting long term tech trends, has a new book laying out the blueprint for reverse-engineering the human mind. You see, in Kurzweilian theories, being able to map out the human brain means that we’ll be able to create a digital version of it, doing away with the neurons and replacing them with their digital equivalents while preserving your unique sense of self. His new ideas are definitely a step in the right direction and much improved from his original notions of mind uploading, the ones that triggered many a back and forth with the Singularity Institute’s fellows and fans on this blog. Unfortunately, as reviewers astutely note, his conception of how a brain works on a macro scale is still simplistic to a glaring fault, so instead of a theory of how an artificial mind based on our brains should work, he presents vague, hopeful overviews.

Here’s the problem. Using fMRI, we can identify which parts of the brain seem to be involved in a particular process. If we see a certain cortex light up every time we test a very specific skill in every test subject, it’s probably a safe bet that this cortex has something to do with the skill in question. However, we can’t really say with 100% certainty that this cortex is responsible for the skill, because it doesn’t work in a vacuum. There are some 86 billion neurons in the brain and at any given time, 99% of them are doing something. It would seem bizarre to take the sort of skin-deep look that fMRI can offer and draw sweeping conclusions without taking the constantly buzzing brain cells around an active area into account. How involved are they? How deep does a particular thought process go? What other nodes are involved? How much of that activity is noise and how much is signal? We’re just not sure. Neurons are so numerous and so active that tracing the entire connectome is a daunting task, especially when we consider that every connectome is unique, albeit with very general similarities across species.
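
To see why “lights up” only gets you correlation, here’s the logic of that kind of analysis in its most naive form, with entirely synthetic data; real studies use proper statistical models and corrections, but the underlying inference is the same flavor.

```python
import numpy as np

rng = np.random.default_rng(1)
task = np.tile([1] * 10 + [0] * 10, 6).astype(float)   # on/off task blocks over the scan
signals = rng.normal(size=(100, task.size))            # 100 regions of background activity
signals[7] += 2.0 * task                               # one region partly follows the task

# Correlate each region's signal with the task timing and flag the ones clearing a threshold
corr = np.array([np.corrcoef(region, task)[0, 1] for region in signals])
print("regions that 'light up':", np.where(np.abs(corr) > 0.5)[0])
```

Region 7 gets flagged, but nothing in that map tells you what the other 99 regions were contributing, which is exactly the gap glossed over below.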

We know enough to point to areas we think play key roles, but we also know that those areas can and do overlap, which means we don’t necessarily have the full picture of how the brain carries out complex processes. That doesn’t give Kurzweil pause, though, as he boldly explains how a computer would handle some sort of classification or behavioral task and argues that since the brain can be separated into sections, it should behave in much the same way. And since a brain and a computer could tackle the problem in a similar manner, he continues, we could swap out a certain part of the brain and replace it with a computer analog. This is how you would tend to go about doing something so complex in a sci-fi movie based on speculative articles about the inner workings of the brain, but certainly not how you’d actually do it in the real world, where brains are messy structures that evolved to be good at cognition, not compartmentalized machines with discrete problem-solving functions for each module. Just because they’ve been presented that way on a regular basis over the last few years doesn’t mean they are.

Reverse-engineering the brain would be an amazing feat, and there’s certainly a lot of excellent neuroscience being done. But if anything, this new research shows how complex the mind really is and how erroneous it is to simply assume that an fMRI blotch tells us the whole story. Those who actually do the research and study cognition certainly understand the caveats in the basic maps of brain function used today, but a lot of popular, high profile neuroscience writers simply go for broke with bold, categorical statements about which part of the brain does what and how we could manipulate or even improve it, citing just a few still speculative studies in support. Kurzweil is no different. Backed by papers which describe something he can use in support of his view of the human brain as just an imperfect analog computer defined by the genome, he gives his readers the impression that we know a lot more than we really do and can take steps beyond those we can realistically take. But then again, keep in mind that Kurzweil’s goal is to make it to the year 2045, when he believes computers will make humans immortal, and at 64, he’s certainly very acutely aware of his own mortality and needs to stay optimistic about his future…


Here’s a fun fact for you. If you zap someone with a powerful enough magnetic field, you can change that person’s behavior, and not always for the better. In fact, you can even zap someone into a state of cold, callous sociopathy if you know where to aim, at least for a short while. Yes, the effects do wear off, but it seems perfectly plausible that the same effect could be harnessed and prolonged by a chemical cocktail, and we’ve long known that behavior can be altered with the right tools. So of course conspiracy theorists around the world were wondering whether sinister military officers or politicians with little concern for their fellow humans would start injecting some people with a psychopath-killer-in-a-syringe serum and setting them loose on a battlefield to do unspeakable evil, acting as shock troops before or during an invasion. The answer is twofold. In theory, yes, they could. In practice, the results would vary widely and could easily backfire, and we already have plenty of sociopaths available for building a small army of shock troops. Just ask the Pakistani ISI if you’re curious, and while you’re at it, ask how well it’s worked out for them…

Basically, the issue here is that there are limits to how much you can change someone’s behavior, as well as for how long. In the article above, the subject feels less empathetic and less inhibited, but his psychopathy only extends to taking more risks in a video game and pocketing an uncollected tip, which he promptly pays back after returning to normal. His comparison point is a special forces soldier who had extensive training and whose skills were honed in real wars. This doesn’t tell us much, because military training is a major variable that’s overlooked in such stories. How likely is our non-military test subject to injure or kill someone in a real fight? Probably not very, and here is why. If you ever take a martial arts class, you’ll spend the first few weeks apologizing whenever you do manage to land a punch on your sparring partner, while the instructors yell at you for going far too easy on your blows and tackles. You’ll shy away from jabs, and your natural instinct will be to flinch or fall back when attacked, not to calmly stand your ground. Humans are social creatures, and they tend to be averse to hurting each other in the vast majority of cases.

True, we can be induced into hurting others with money or threats, and we do know how to train someone not to shy away from fights and to overcome the natural aversion to real violence. But the experimental subject in question appears to have never had any combat training or martial arts background. He may be less averse to getting into a fight because his impulse control was radically lowered, but chances are that he’ll run for it if he picks a fight with someone who can hold his own, or when he realizes that he’s about to get hurt. Likewise, he’s unlikely to punch as hard or as accurately as someone who’s had some real training. All in all, he may be a major menace to unwatched tips in a bar and in Grand Theft Auto, but he’s most probably not a threat to flesh and blood humans. His former special forces friend? Absolutely, but that friend seems to have no need to be zapped into an emotionally detached state and has his impulses pretty well under control. On top of that, were we to just zap or drug a random person into psychopathic malice, there’s simply no telling whether he would turn on his friends and handlers, a chance no evil, self-respecting mastermind of the New World Order would want to take.

And that brings us back to the very real problem of an abundance of psychopaths willing to do a dirty job for someone willing to pay. Just look at what happened in Afghanistan during and soon after the Soviet occupation. The mujahedeen trained to fight a guerrilla war against the Red Army, as well as to become proxy shock troops for the ISI in a potential war with India, were not given drugs or magnetic bursts to the brain. They were recruited based on their religious convictions, trained to channel their loathing for the occupying infidels into violence, and let loose on Soviet troops. No artificial inducement or neural intervention was needed. Today they quite regularly turn on their former handlers, kill people who displease them with near impunity and absolutely zero hesitation or moral qualms, and have generally proved to be a far bigger threat and liability than an asymmetric military asset. Considering how dangerous real psychopaths are, why create an entire army of them with experimental chemicals or magnetic beams? If indiscriminate murder is your goal, fully automated robots are the easier way to go, not average people or soldiers just out of basic training with their impulse control drugged and zapped out of existence…


Skeptics and vocal atheists across the web fumed when Newsweek published a cover story proclaiming the afterlife to be real based on the firsthand account of a neurosurgeon who nearly lost his bout with meningitis. His tale is hardly different from ones we’ve heard many times before from a wide variety of patients who had one foot in the grave and were revived: lush greenery and white fluffy clouds leading to a wonderful and peaceful place, a companion of some sort for what looked like a guided tour of Heaven, all the pieces are there. Such consistency is used by the faithful to argue that there must be an afterlife. How else could the stories be so consistent and feature the same elements? If the patients were simply hallucinating as their brains were slowly but surely shutting down, wouldn’t their experiences be radically different? And aren’t a number of them extremely difficult to explain with what we know about how the brain functions?

It’s not as if people can sense when they’re about to die and are constantly bombarded with a description of how they should ascend to Heaven for eternal peace and rest. Wait a minute, wait a minute… They can and they are. So wouldn’t it make sense that so many near death accounts of an ascension to an afterlife follow the same pattern because the patients who remember their alleged journey to the great beyond are told day in, day out how that pattern should go? Most of the tales we get come from the Western world and have a very heavy Judeo-Christian influence coloring them. There’s also a rather odd prevalence of ascents to Heaven in these accounts, and cases of people describing torment or something like Hell, while certainly not unheard of in the literature, are exceedingly rare. This either means that much of humanity is good and can look forward to a blissful afterlife, or that most people experience a natural high before death that leaves them peaceful and at ease, dreaming of Heaven, while others still feel pain and see Hell.

And this is where Occam’s Razor has to come into play. The second explanation, while not very comforting or marketable to believers still wrestling with doubts about an afterlife, makes the fewest and most probable assumptions, and is therefore more likely to be true in the absence of a stronger case for a genuine Heaven. We tend to choose the afterlife version of the story since we’re all fundamentally scared of death, and no amount of arguing about why death is natural, or how it simply has to happen and there’s nothing we can do about it, makes this fear any less. The stories give us hope that we won’t simply cease to exist one day. But whereas believers are satisfied by anecdotal tales, skeptics feel that we deserve more than just hope being spoon-fed to us. If an afterlife exists, we want to know for sure. We want empirical data. And that’s why trying to sell a story that tickles those who already believe, or want to believe in the worst of ways, is so rage-inducing to so many skeptics. We need truth and facts to deal with the real world, not truths that people want to hear and facts they can discard at will when they don’t match their fantasy.

And the treatment Baroness Susan Greenfield deserves is mockery. You may remember some of Greenfield’s greatest hits of self-important doltishness, such as declaring that Hawking’s unapologetic pronouncement doubting the existence of a deity is just as radical as the Taliban’s imposition of their religion by violence and terror, and claiming that the internet is dangerous for kids and young adults since it rewires their brains, going so far as to claim that web surfing can cause autism. We can call the Baroness many things, but if we wanted to be honest, none of them should imply expertise or intelligence. It doesn’t take a neuroscientist to see the problem with her claim that kids’ brains being rewired as they browse the web must be dangerous. After all, if you managed to stay awake in your freshman biology class in high school, you’ll nod along with this snarky little snippet from The Guardian’s post about Greenfield’s technophobic nonsense…

Partially respected neuroscientist Dr. Dean Burnett has called for an outright ban on this post, amid fears that it could cause untold damage to younger, impressionable people. “If people read this blogpost, they run the risk of remembering it for more than a few seconds. This means they have formed long-term memories, which are supported by synaptic changes. Ergo, reading this online blog has caused physical changes in the brain. And that’s bad, right? The brain undergoing physical changes is essentially what supports our ability to learn pretty much anything, which is crucial for our survival, but this must be a bad … because it involves the internet.”

Just like anti-vaccine activists who rebel at the notion of inoculating children with antigens to trigger an immune response that prepares the body to fight a real pathogen, yet sing the praises of exposing their kids to mumps and chicken pox for “natural immunity” (despite the fact that the chicken pox you had as a kid can turn into shingles when you’re an adult), technophobes like Greenfield shudder in horror when those newfangled computer thingies, rather than a book, are the vehicle for a wiring change in a kid’s brain. But rather than just come out and say that it’s new, that they don’t understand it, and hence don’t like it, they’re busy creating doomsday scenarios in which we’re all turning into idiots and e-books will destroy the world. Mocking their bad science and questionable logic when it crosses into absurdity isn’t just a fun thing to do, it’s basically our duty as non-Luddites, and I’m happy to see that The Guardian took on this task.