

A long time ago, I shared one of my favorite jokes about philosophers. It went like this. Once, the president of a large and prestigious university was asked which of his staff were the most expensive to fund. "Physicists and computer scientists," he replied without hesitation, "they always want some brand new machine that costs a fortune to build and operate, not like mathematicians, who only need paper, pencils, and erasers. Or better yet, my philosophers. Those guys don’t even need the erasers!" Yes, yes, I know, I’m a philosophical philistine; I’ve been told this so many times that I should start some sort of contest. But my lack of reverence for the discipline is not helped by philosophers who decide to speak up for their occupation in an age when big data and powerful new tools for scientific experimentation are proposing answers to ever more complex real-world questions. Case in point: a column by Raymond Tallis declaring that physics is so broken that it needs metaphysics to pull itself back together and produce real results.

Physics is a discipline near and dear to my heart because certain subsets of it can be applied to cutting-edge hardware, and as someone whose primary focus is distributed computing, the area of computer science that gives us all our massive web applications, cloud storage, and parallel processing, I find a lot of value in keeping up with the relevant underlying science. Maybe there’s already an inherent bias here when my mind starts to wonder how metaphysics will help someone build a quantum cloud or radically increase hard drive density, but the bigger problem is that Tallis doesn’t seem to have any command of the scientific issues he declares to be in dire need of graybeards in tweed suits pondering the grand mechanics of existence with little more than the p’s and q’s of propositional logic. For example, take his description of why physics has chased itself into a corner with quantum mechanics…

A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent which can be remedied only by vastly multiplying worlds.

As science bloggers love to say, this isn’t even wrong. Tallis and Wallace have mixed up three very different concepts into a grab bag of confusion. Quantum mechanics can do very, very odd things that seem to defy the normal flow of time, but nothing says we can’t know the general topology of a quantum system. The oft-cited and abused Uncertainty Principle is based on the fact that certain fundamental building blocks of the universe can function as both a wave and a particle, and each state has its own set of measurements. If you treat the blocks as particles, you can measure the properties of the particle state. If you treat them as waves, you can only measure the properties of the waves. The catch is that you can’t get both at the exact same time because you have to choose which state you measure. However, what you can do is create a wave packet, which gives you a good, rough approximation of how the block behaves in both states. In other words, measurement of quantum systems is very possible.
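For the record, here’s the actual tradeoff the principle describes, in standard textbook notation rather than anything from Tallis’ column. The spreads of position and momentum in any quantum state obey

\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2}

and a Gaussian wave packet is precisely the state that saturates this bound: squeeze \sigma_x tighter and \sigma_p has to grow so the product never dips below \hbar / 2. Nothing in that inequality forbids measurement; it only puts a price on how sharply the two complementary properties can be pinned down at once.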

All right, so this covers the Uncertainty Principle mixup; what about the other two concepts? The biggest problem in physics today is the lack of unification between the noisy quantum mechanics of the subatomic scale and the ordered patterns of general relativity. String theory and the very popular but nearly impossible to test many worlds theory try to explain the effects of the basic forces that shape the universe on all scales in terms of different dimensions or leaks from other universes. So when Tallis says that it’s been 40 years and we still don’t know which one is right, then piles his misunderstanding of quantum mechanics on top of Wallace’s seeming inability to tell the difference between multiverses and string theory, he ends up with the mess above. We get a paradox where there isn’t one, and scope creep from particle physics into cosmology. Not quite a ringing endorsement of philosophy in physics so far. And then Tallis makes it worse…

The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).

Again, a grab bag of not even wrong is supposed to sell us on the idea that a philosopher could help where our tools are pushed to their limits. Considering that Tallis dismisses the entire idea that neuroscience as a discipline has any merit, it’s no wonder he proclaims that we don’t have any clue what consciousness is from a biological perspective. The fact is that we have lots of clues. Certain patterns of brain activity are strongly associated with a person being aware of his or her environment, being able to interact meaningfully, and being able to store and recall information as needed. It’s hardly the full picture, of course, but it’s a lot more than Tallis thinks it is. His bizarre claim that scientists consider some nerve impulses to be conscious while the majority are said not to be is downright asinine. Just about every paper on the study of the conscious mind in a peer-reviewed, high-quality journal refers to consciousness as a product of the entire brain.

The rest of his argument is just a meaningless, vitalist word salad. If brain activity is irrelevant to consciousness, why do healthy, living people show certain patterns while those who suffered massive brain injuries show different ones depending on the site of injury? Why do all those basic brain wave patterns repeat again and again in test after test? Just for the fun of seeing themselves on an EEG machine’s output? And what does it even mean to call it a surprising fact that we can perceive the matter around us? Once again, this is hardly a serious testament to the usefulness of philosophers in science, because so far all we’ve gotten are meaningless questions, accusations that scientists are unable to solve problems that aren’t actually problems, a couple of buzzwords used incorrectly, and bits and pieces of different theories haphazardly cobbled into an overreaching statement that initially sounds well researched but means pretty much nothing. And that’s when Tallis isn’t outright dismissing the science without explaining what’s wrong with it…

Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication.

Here we get a double whammy of Tallis getting the science wrong and deciding that he doesn’t like the existing ideas because they don’t pass his smell test. He’s combining competing ideas to declare them inconsistent within a unified framework, seemingly unaware that the hypotheses he’s ridiculing aren’t complementary by design. Yes, we don’t know how the universe was created; all we have is evidence of the Big Bang, and we want to know exactly what banged and how. This is why we have competing theories about quantum fluxes, virtual particles, branes, and all sorts of other mathematical ideas created in a giant brainstorm, waiting to be tested for any hint of a real application to observable phenomena. Pop sci magazines might declare that math proved that a stray quantum particle caused the Big Bang, or that we were all vomited out by some giant black hole, or that we’re living in the event horizon of one, but in reality, that math is just one idea among many. So yes, Tallis is right about the confusion under the algebra, but he’s wrong about why it exists.

And here’s the bottom line. If the philosopher trying to make the case for his profession’s inclusion in the realms of physics and neuroscience doesn’t understand what the problems are, what the fields do, and how the fields work, why would we even want to hear how he could help? If you read his entire column, he never does explain how, but really, after all his whoppers and not even wrongs, do you care? Philosophers are useful when you want to define a process or wrap your head around where to start your research on a complex topic, like how to create an artificial intelligence. But past that, hard numbers and experiments are required to figure out the truth; otherwise, all we have are debates about semantics, which at some point may well turn into questions of what it means to exist in the first place. Not that this last part isn’t a debate worth having, but it doesn’t add much to a field where we can actually measure and calculate a real answer to a real question and apply what we learn to dive even deeper.



According to a widely reported paper by accomplished molecular geneticist Jerry Crabtree, the human species is getting ever less intelligent because our society removed the selective pressures that nurture intelligence and weed out mutations that can make us dumber. This is not a new idea by any means; in fact, it’s been a science fiction trope for many years and even had its own movie to remind us of the gloom and doom that awaits us if we don’t hit the books: Idiocracy. Crabtree’s addition to it revolves around some 5,000 genes he identified as playing a role in intelligence by analyzing the genetic roots of certain types of mental retardation. He then posits that because we tend to live in large, generally supportive communities, we don’t have to be very smart to reach reproductive age and have plenty of offspring. Should mutations that make us duller rear their ugly heads in the next few thousand years, there will be no selective pressure to weed them out because the now dumber future humans will still be able to survive and reproduce.

Evolution does have its downsides, true, but Crabtree ignores two major issues with his idea of humanity’s evolutionary trajectory. The first is that he overlooks beneficial mutations, as well as the fact that just two or three negative mutations won’t necessarily stunt our brains. Geneticists who reviewed his paper and decided to comment say that Crabtree’s gloom and doom just isn’t warranted by the evidence he presents, and that his statistical analysis leaves a lot to be desired. The second big issue, one that I haven’t yet seen addressed, is that Crabtree doesn’t seem to have any working definition of intelligence. These are not the days of eugenicists deluding themselves about their genetic superiority to all life on Earth, and most scientifically literate people know that survival of the fittest wasn’t Darwin’s description of natural selection, but a catchphrase created by Herbert Spencer. Natural selection is the survival of the good enough in a particular environment, so we could well argue that as long as we’re smart enough to survive and reproduce, we’re fine.
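To see why relaxed selection means random drift rather than a guaranteed slide into idiocy, here’s a minimal toy simulation, a Wright-Fisher-style sketch with made-up numbers rather than anything from Crabtree’s paper. It tracks the frequency of a single allele: with no selective pressure it just wanders, while even a modest fitness cost purges it almost every time.

import random

# Toy Wright-Fisher-style model: track one allele's frequency across
# generations under pure drift versus drift plus selection against it.
# Population size, generations, and the selection coefficient are
# arbitrary illustrative assumptions, not figures from Crabtree's paper.

def final_frequency(pop_size=1000, generations=500, start_freq=0.05, s=0.0):
    """Return the allele's frequency at the end of the run.

    s = 0.0 models no selective pressure; s = -0.05 models carriers
    leaving 5% fewer offspring per generation.
    """
    freq = start_freq
    for _ in range(generations):
        # Selection: re-weight the allele by its relative fitness 1 + s.
        weighted = freq * (1 + s)
        expected = weighted / (weighted + (1 - freq))
        # Drift: the next generation is a binomial sample of carriers.
        carriers = sum(random.random() < expected for _ in range(pop_size))
        freq = carriers / pop_size
        if freq in (0.0, 1.0):  # allele lost or fixed, nothing left to track
            break
    return freq

random.seed(1)
print([round(final_frequency(s=0.0), 3) for _ in range(10)])    # pure drift: some runs lost, some wander up
print([round(final_frequency(s=-0.05), 3) for _ in range(10)])  # selected against: essentially always purged

The point isn’t the specific numbers; it’s that "no pressure to weed them out" buys you a random walk, not the steady, one-way decline Crabtree’s warning implies.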

This means that Crabtree’s description of us as the intellectual inferiors of our ancient ancestors is at best irrelevant and at worst pointless. However, it’s also very telling, because it fits so well with the typical assessment of modern societies by eugenicists. They look at the great names in history, both scientific and creative, and wonder where our geniuses are. But they forget that we do have plenty of modern polymaths and brilliant scientists, and that in Newton’s day, the typical person was illiterate, had no idea that there was such a thing as gravity or optics, and really couldn’t be bothered to give a damn. And how do we define genius anyway? With an IQ test? We know those only measure certain pattern recognition and logic skills, and anyone can learn to score highly on them with enough practice. You can practice-test your way to becoming the next Mensa member so you can talk about being in Mensa and how high your IQ scores are, which in my experience tend to be the predominant activities of Mensa members. But then, they are members of an organization created to guide us dullards to a better tomorrow after all…

But if IQ scores are a woefully incomplete measure of intelligence, what isn’t? It depends on who’s doing the measuring and by what metric. One of the most commonly cited factoids from those in agreement with Crabtree is how much time is being spent on Facebook and watching reality TV instead of reading the classics and inventing warp drives or whatnot. But is what we usually call book smarts necessary for survival? What we consider trivial knowledge for children today was once the realm of brilliant, highly educated nobles. Wouldn’t that make us smarter than our ancestors, because we’ve been able to parse the knowledge they accumulated to find the most useful and important theories and ideas, disseminate them to billions, and make things they couldn’t have even imagined in their day? How would Aristotle react to a computer? What would Hannibal think of a GPS? Would the deleterious genetic changes Crabtree sees as an unwelcome probability hamper our ability to run a society, and if so, how?

Without knowing how he views intelligence and how he measures it, all we have is an ominous warning, one that single-mindedly focuses on potential negatives rather than entertaining potential positives alongside them, and that draws conclusions about their impact on a concept too nebulous to support such conclusions. In fact, the jury is still out on how much of intelligence is nature and how much is nurture, especially when we consider a number of failed experiments with designer babies who were supposed to be born geniuses. We can look at families of people considered to be very intelligent and note that they tend to have smart kids. But are the kids smart because their parents are smart, or because they’re driven to learn by parents who greatly value academics? We don’t know, but to evolution, all that matters is that these kids secure a mate and reproduce. To look for selection’s role beyond that seems more like an exercise in confirmation bias than a scientific investigation into the origins of human intelligence. That research is much more complex and elaborate than gene counting…



Ray Kurzweil, the tech prophet reporters love to quote when it comes to our coming immortality courtesy of incredible machines being invented as we speak, has a new book laying out a blueprint for reverse-engineering the human mind, despite his rather sketchy track record of predicting long-term tech trends. You see, in Kurzweilian theory, being able to map out the human brain means that we’ll be able to create a digital version of it, doing away with the neurons and replacing them with their digital equivalents while preserving your unique sense of self. His new ideas are definitely a step in the right direction and much improved from his original notions of mind uploading, the ones that triggered many a back and forth with the Singularity Institute’s fellows and fans on this blog. Unfortunately, as reviewers astutely note, his conception of how a brain works on a macro scale is still simplistic to a glaring fault, so instead of a theory of how an artificial mind based on our brains should work, he presents vague, hopeful overviews.

Here’s the problem. Using fMRI, we can identify which parts of the brain seem to be involved in a particular process. If we see a certain cortex light up every time we test a very specific skill in every test subject, it’s probably a safe bet that this cortex has something to do with the skill in question. However, we can’t really say with 100% certainty that this cortex is responsible for the skill, because it doesn’t work in a vacuum. There are tens of billions of neurons in the brain, and at any given time, the vast majority of them are doing something. It would seem bizarre to take the sort of skin-deep look that fMRI can offer and draw sweeping conclusions without taking the constantly buzzing brain cells around an active area into account. How involved are they? How deep does a particular thought process go? What other nodes are involved? How much of that activity is noise and how much is signal? We’re just not sure. Neurons are so numerous and so active that tracing the entire connectome is a daunting task, especially when we consider that every connectome is unique, albeit with very general similarities across species.
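Just to put rough numbers on how daunting, here’s a back-of-the-envelope sketch. The neuron and synapse counts are commonly cited ballpark figures, roughly 86 billion neurons with on the order of thousands of synapses each, and the bytes-per-connection figure is my own arbitrary assumption:

# Back-of-the-envelope size of a raw human connectome map.
# All figures are rough ballpark assumptions, not measurements.

NEURONS = 86e9             # ~86 billion neurons, the commonly cited estimate
SYNAPSES_PER_NEURON = 7e3  # on the order of thousands of connections each
BYTES_PER_SYNAPSE = 8      # assume two neuron IDs plus a weight fit in 8 bytes

total_synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = total_synapses * BYTES_PER_SYNAPSE

print(f"connections: {total_synapses:.1e}")        # ~6.0e14 synapses
print(f"storage:     {total_bytes / 1e15:.1f} PB") # ~4.8 petabytes for the wiring alone

And that’s just a static wiring diagram, before anyone records what those connections are actually doing from one millisecond to the next.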

We know enough to point to areas we think play key roles, but we also know that areas can and do overlap, which means we don’t necessarily have the full picture of how the brain carries out complex processes. But that doesn’t give Kurzweil pause as he boldly explains how a computer would handle some sort of classification or behavioral task, arguing that since the brain can be separated into sections, it should behave in much the same way. And since a brain and a computer could tackle the problem in a similar manner, he continues, we could swap out a certain part of the brain and replace it with a computer analog. This is how you would tend to go about doing something so complex in a sci-fi movie based on speculative articles about the inner workings of the brain, but certainly not how you’d actually do it in the real world, where brains are messy structures that evolved to be good at cognition, not compartmentalized machines with discrete problem-solving functions for each module. Just because brains have been presented as such on a regular basis over the last few years doesn’t mean they are.

Reverse-engineering the brain would be an amazing feat, and there’s certainly a lot of excellent neuroscience being done. But if anything, this new research shows how complex the mind really is and how erroneous it is to simply assume that an fMRI blotch tells us the whole story. Those who actually do the research and study cognition certainly understand the caveats in the basic maps of brain function used today, but a lot of popular, high-profile neuroscience writers simply go for broke with bold, categorical statements about which part of the brain does what and how we could manipulate or even improve it, citing just a few still-speculative studies in support. Kurzweil is no different. Backed by papers that describe something he can use to support his view of the human brain as just an imperfect analog computer defined by the genome, he gives his readers the impression that we know a lot more than we really do and can take steps beyond those we can realistically take. But then again, keep in mind that Kurzweil’s goal is to make it to the year 2045, when he believes computers will make humans immortal, and at 64, he’s acutely aware of his own mortality and needs to stay optimistic about his future…



Here’s a fun fact for you. If you zap someone with a powerful enough magnetic field, you can change that person’s behavior, and not always for the best. In fact, you could even zap someone into a state of cold, callous sociopathy if you know where to aim, at least for a short while. Yes, the effects do wear off, but it seems perfectly plausible that the same effect could be harnessed and prolonged by a chemical cocktail, and we’ve long known that behavior can be altered with the right tools. So of course conspiracy theorists around the world were wondering if sinister military officers or politicians with little concern for their fellow humans would start injecting some people with a psychopath-killer-in-a-syringe serum and setting them loose on a battlefield to do unspeakable evil, acting as shock troops before or during an invasion. The answer is twofold. In theory, yes, they could. In practice, the results would vary widely and could easily backfire, and we already have plenty of sociopaths available for building a small army of shock troops. Just ask the Pakistani ISI if you’re curious, and while you’re at it, ask how well it’s worked out for them…

Basically, the issue here is that there are limits to how much you can change someone’s behavior, as well as for how long. In the article above, the subject feels less empathetic and inhibited, but his psychopathy extends only to taking more risks in a video game and pocketing an uncollected tip, which he promptly pays back after returning to normal. His comparison point is a special forces soldier who had extensive training and whose skills were honed in real wars. This doesn’t tell us much, because military training is a major variable that’s overlooked in such stories. How likely is our non-military test subject to injure or kill someone in a real fight? Probably not very, and here’s why. If you ever take a martial arts class, you’ll spend the first few weeks apologizing when you do manage to land a punch on your sparring partner, while the instructors yell at you for going far too easy on your blows and tackles. You’ll shy away from jabs, and your natural instinct will be to flinch or fall back when attacked, not to calmly stand your ground. Humans are social creatures, and they tend to be averse to hurting each other in the vast majority of cases.

True, we can be induced into hurting others with money or threats, and we do know how to train someone not to shy away from fights and to overcome the natural aversion to real violence. But the experimental subject in question appears to have never had any combat training or martial arts background. He may be less averse to getting into a fight because his impulse control was radically lowered, but chances are that he’ll run for it if he picks a fight with someone who can hold his own, or when he realizes that he’s about to get hurt. Likewise, he’s unlikely to punch as hard or as accurately as someone who’s had real training. All in all, he may be a major menace to unwatched tips in a bar and in Grand Theft Auto, but he’s most probably not a threat to flesh and blood humans. His former special forces friend? Absolutely, but he seems to have no need to be zapped into an emotionally detached state and has his impulses pretty well under control. On top of that, were we to just zap or drug a random person into psychopathic malice, there’s simply no telling whether he would turn on his friends and handlers, a chance no evil, self-respecting mastermind of the New World Order would want to take.

And that brings us back to the very real problem of an abundance of psychopaths willing to do a dirty job for someone willing to pay. Just look at what happened in Afghanistan during and soon after the Soviet occupation. The mujahedeen, trained to fight a guerrilla war against the Red Army as well as to become proxy shock troops for the ISI in a potential war with India, were not given drugs or magnetic bursts to the brain. They were recruited based on their religious convictions, trained to channel their loathing for the occupying infidels into violence, and let loose on Soviet troops. No artificial inducement or neural intervention was even needed. Today, they quite regularly turn on their former handlers, kill people who displease them with near impunity and absolutely zero moral qualms, and have generally proved to be a far bigger threat and liability than an asymmetric military asset. Considering that real psychopaths are so dangerous, why create an entire army of them with experimental chemicals or magnetic beams? If indiscriminate murder is your goal, fully automated robots are the easiest way to go, not average people or soldiers fresh out of basic with their impulse control drugged and zapped out of existence…



Skeptics and vocal atheists across the web fumed when Newsweek published a cover story proclaiming the afterlife to be real based on the firsthand account of a neurosurgeon who nearly lost his bout with meningitis. His tale is hardly different from ones we’ve heard many times before from a wide variety of patients who had one foot in the grave and were revived: lush greenery and white fluffy clouds leading to a wonderful and peaceful place, a companion of some sort for what looked like a guided tour of Heaven, all the pieces are there. Such consistency is used by the faithful to argue that there must be an afterlife. How else could the stories be so consistent and feature the same elements? If the patients were simply hallucinating as their brains were slowly but surely shutting down, wouldn’t their experiences be radically different? And aren’t a number of them extremely difficult to explain with what we know about how the brain functions?

It’s not as if people can sense when they’re about to die and are constantly bombarded with descriptions of how they should ascend to Heaven for eternal peace and rest. Wait a minute, wait a minute… They can, and they are. So wouldn’t it make sense that so many near death accounts of an ascension to an afterlife follow the same pattern because the patients who remember their alleged journey to the great beyond are told day in, day out how this pattern should go? Most of the tales we get come from the Western world and have a very heavy Judeo-Christian influence coloring them. There’s also a rather odd prevalence of ascents to Heaven in these accounts; cases of people describing torment or something like Hell, while certainly not unheard of in the literature, are exceedingly rare. This either means that much of humanity is good and can look forward to a blissful afterlife, or that most people experience a natural high before death that leaves them peaceful and at ease, dreaming of Heaven, while others still feel pain and see Hell.

And this is where Occam’s Razor has to come into play. The second assumption, while not very comforting or marketable to believers who still doubt the idea of an afterlife, makes the fewest and most probable assumptions, and is therefore more likely to be true in the absence of a stronger case for a genuine Heaven. We tend to choose the afterlife version of the story because we’re all fundamentally scared of death, and no amount of arguing about why death is natural, or how it just has to happen and there’s nothing we can do about it, makes this fear any less. The stories give us hope that we won’t simply cease to exist one day. But whereas believers are satisfied by anecdotal tales, skeptics feel that we deserve more than just hope being spoon-fed to us. If an afterlife exists, we want to know for sure. We want empirical data. And that’s why trying to sell a story that tickles those who already believe, or want to believe in the worst of ways, is so rage-inducing to so many skeptics. We need truth and facts to deal with the real world, not truths that people want to hear and facts they can discard at will when they don’t match their fantasy.
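Incidentally, the razor here isn’t just a rhetorical preference; there’s a standard Bayesian way to state it, textbook probability rather than anything from the Newsweek story. For two competing explanations H_1 and H_2 of the same evidence D, the posterior odds are

\frac{P(H_1 \mid D)}{P(H_2 \mid D)} \;=\; \frac{P(D \mid H_1)}{P(D \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}

and every extra independent assumption an explanation requires multiplies another factor less than one into its prior P(H). A literal Heaven needs a real destination, a mechanism by which a shutting-down brain perceives it, and a reason the imagery tracks the local culture; a dying-brain hallucination leans only on neurochemistry we already observe. That asymmetry is exactly what the razor formalizes.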


And the treatment she deserves is mockery. You may remember some of Greenfield’s greatest hits in being a self-important dolt, such as declaring that Hawking’s unapologetic pronouncement doubting the existence of a deity is just as radical as the Taliban’s imposition of their religion by violence and terror, or claiming that the internet is dangerous for kids and young adults because it rewires their brains, going as far as to claim that web surfing can cause autism. We can call the Baroness many things, but if we wanted to be honest, none of them should imply expertise or intelligence. It doesn’t take a neuroscientist to see the problem with her claim that kids’ brains being rewired as they browse the web must be dangerous. After all, if you managed to stay awake in your freshman biology class in high school, you’ll nod along with this snarky little snippet from The Guardian’s post about Greenfield’s technophobic nonsense…

Partially respected neuroscientist Dr. Dean Burnett has called for an outright ban on this post, amid fears that it could cause untold damage to younger, impressionable people. “If people read this blogpost, they run the risk of remembering it for more than a few seconds. This means they have formed long-term memories, which are supported by synaptic changes. Ergo, reading this online blog has caused physical changes in the brain. And that’s bad, right? The brain undergoing physical changes is essentially what supports our ability to learn pretty much anything, which is crucial for our survival, but this must be a bad thing … because it involves the internet.”

Just as anti-vaccine activists rebel at the notion of inoculating children with antigens to trigger an immune response that prepares the body to fight a real pathogen, yet sing the praises of exposing their kids to mumps and chicken pox to give them "natural immunity" (despite the fact that the chicken pox you had as a kid can turn into shingles when you’re an adult), so do technophobes like Greenfield shudder in horror when those newfangled computer thingies are the vehicle for a wiring change in a kid’s brain rather than a book. But rather than just come out and say that it’s new, they don’t understand it, and hence don’t like it, they’re busy creating doomsday scenarios in which we’re all turning into idiots and telling us that e-books will destroy the world. Mocking their bad science and questionable logic when it crosses into absurdity isn’t just a fun thing to do, it’s basically our duty as non-Luddites, and I’m happy to see that The Guardian took on this task.
