Archives For biology


We’re using far too many antibiotics. That has been the cry from the FDA and the WHO for the last several years, as more and more antibiotic-resistant strains have been found after they colonized or killed patients. Of course these bacteria aren’t completely immune to our arsenal of drugs; they’re just harder to kill with certain antibiotics or require different ones. But a small yet unsettling number have required doctors to use every last antibacterial weapon available just to make a dent in their populations. There’s not much we can do because, in effect, we’re fighting evolution: the more antibiotics we throw at bacteria, the more chances we give resistant strains to survive and thrive. Doctors are starting to prescribe less, and the pressure on farmers to stop prophylactic use of antibiotics is mounting, but we’re still overdoing it, and the problem is growing and in need of some very creative new solutions.

Enter a genetic engineering technique known as CRISPR-Cas9, which replaces DNA sequences identified by short guide snippets of RNA with ones provided by scientists. It’s not new by any means, but this is the first time it has been used in an evolutionary experiment intended to stem the rise of antibiotic resistance. Israeli researchers essentially gave a bacterial colony immunity to a virus, but at the cost of deleting the genes which gave it antibiotic resistance. The bacteria happily propagated the immunity as they grew while maintaining their new weakness to antibiotics which were only marginally effective on them before. There was a real advantage for the bacteria in propagating this new mutation: the virus to which they were now immune was lethal, acting as the greater selective pressure, while susceptibility to antibiotics just wasn’t an important factor, so the bacteria acted like they got a fair deal.
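To make the targeting idea concrete, here’s a toy Python sketch of how a guide sequence can identify a matching stretch of DNA sitting next to an “NGG” PAM motif and have it swapped for a donor sequence. The sequences, the shortened guide length, and the naive string matching are all invented for illustration; real CRISPR-Cas9 editing is vastly more involved than this.

```python
# Toy sketch of CRISPR-Cas9 targeting (illustrative only): a guide
# sequence identifies a matching DNA site followed by an "NGG" PAM
# motif, and the matched stretch is replaced with a donor sequence.

def find_target(genome: str, guide: str) -> int:
    """Return the index where the guide matches and is followed by an NGG PAM, or -1."""
    n = len(guide)
    for i in range(len(genome) - n - 2):
        pam = genome[i + n : i + n + 3]
        if genome[i : i + n] == guide and pam[1:] == "GG":
            return i
    return -1

def edit(genome: str, guide: str, donor: str) -> str:
    """Replace the guide-matched site with the donor sequence."""
    i = find_target(genome, guide)
    if i == -1:
        return genome  # no protospacer + PAM found, nothing to cut
    return genome[:i] + donor + genome[i + len(guide):]

genome = "TTACGGATTCAGCTGAGGCCAT"
guide = "ATTCAGCTG"  # shortened for the demo; real guides are ~20 nt
print(edit(genome, guide, "GGGGGGGGG"))  # → TTACGGGGGGGGGGGAGGCCAT
```

The point is only that a short recognition sequence plus a fixed motif is enough to pinpoint one site in a much longer genome, which is what lets the technique make precise swaps.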

Even better, the edits were made by a specially engineered virus, meaning you could, in theory, just infect bacteria-prone surfaces with it and demolish the bacteria’s antibiotic resistance, right? Well, yes, that would be possible. However, the researchers worry that new antibiotic-resistant mutations can still evolve, and that there’s no way to prevent the bacteria’s genetic drift from accepting the genes for viral immunity while holding on to existing resistance mechanisms. But this technique is still useful for reducing the number of resistant bacteria, or for targeting strains with very well known resistance mechanisms so doctors can keep using existing antibiotics. Ultimately, what will help the most is more research into new antibiotics, curtailing their use in doctors’ offices for viral infections regardless of the patients’ complaints, and eliminating preventative use of antibiotics on farm animals. Still, research like this can help us identify new resistant strains and give us a fighting chance to slow them down while we find new ways to fight them.

See: Yosef, I., et al. (2015). Temperate and lytic bacteriophages programmed to sensitize and kill antibiotic-resistant bacteria. PNAS. DOI: 10.1073/pnas.1500107112


While studying what effect cell division has on cancer risk, a team of scientists decided to make mice that produced excess levels of a protein called BubR1 and got results that seem way too promising at first blush. Not only were the engineered mice a third less likely than control animals to develop lung and skin cancers after exposure to potent carcinogens, they had twice the endurance, lived 15% longer, and were less than half as likely to develop a fatal cancer. So what’s the catch? Well, there is none. It’s as if an over-expression of BubR1 were a magical elixir of good health and longevity. That doesn’t mean this protein is guaranteed to become our most potent weapon against cancer with enough study, or that it can’t have some sort of side effect, which is entirely possible since too little BubR1 in humans is associated with premature aging and some forms of cancer, but this is a signal to proceed with optimistic caution.

Mice may have a lot of similarities to humans from a genetic standpoint, but they are a different species, so what works well in mice may not always work as well in humans. And if we really wanted to be sure of the results, we’d have to test them on thousands of humans over decades, which is a massive undertaking in logistics alone. Since testing the protein modifications in humans would be such a major effort, the researchers need to know exactly how BubR1 does all the wonderful things it does, breaking down its role by chemical reaction and testing each factor on its own. The work may take decades to complete, but if it pans out, we may have found a way to extend and improve our lives with a humble protein. Combined with other ongoing work, there’s some very real science behind extending human lifespans and modifying our genomes for the better. I just hope we don’t get a little too carried away, and that we treat editorials presenting BubR1, gene therapy on a massive scale, and cell reprogramming technology as just around the corner with the necessary healthy skepticism, since the research is by no means complete…

See: Baker, D., et al. (2012). Increased expression of BubR1 protects against aneuploidy and cancer and extends healthy lifespan. Nature Cell Biology. DOI: 10.1038/ncb2643


A while ago, creationists in South Korea persuaded textbook publishers to start removing entire chapters discussing evidence for evolution, in a move I likened to an overzealous and very dishonest prosecutor demanding that the judge refuse to admit evidence exonerating defendants simply because he would lose his case if it were introduced. After a campaign by scientists with the relevant credentials, the South Korean Ministry of Education and Technology decided to formally review the creationists’ complaints and told them that science classes won’t be factually neutered at their request. Dismissing their arguments about dinosaurs being the ancestral lineage of birds as vague and invalid, the ministry did agree that a chapter about the evolution of horses lacked proper scientific rigor. But whereas the creationists tried to excise the chapter completely, the ministry said it should be rewritten with a more current picture of how horses evolved. In other words, its solution to the creationists’ complaints of poor evidence is to have textbook publishers update the books with better science.

Of course no science will ever be good enough for creationists when it comes to biology, but it’s the right approach. If the evidence presented in the textbooks isn’t compelling enough, we should be presenting better and more accurate evidence rather than bow to the Nirvana fallacy espoused by so many creationists. If we don’t know how every molecule in living things interacts with every other molecule, something we could never know for certain unless we were to manifest Laplace’s Demon and ask it some questions, it doesn’t mean that the other 95% of the theory isn’t a solid body of facts, or that it has to be discarded. After all, you don’t rewrite an entire term paper if you find out you misspelled a word or quoted something incorrectly. You fix the mistake and continue, revising along the way until you get a more accurate body of work. But to the fundamentalist mindset, that’s cheating. They’re used to a text they believe hasn’t changed for thousands and thousands of years (don’t tell them that it was rewritten numerous times and edited by a self-appointed group that rejected more than a hundred parts and pieces of its first drafts), and when we correct our theories with better facts, they think we’re changing our story.

And yes, we are to an extent, but our goal isn’t consistency, it’s accuracy, whereas to a religious adherent, perceived consistency means that the text must be accurate, in a twist on the post hoc, ergo propter hoc line of reasoning. No one changed the holy book, therefore it must have been true in the first place, and when science revises its ideas, it must mean the ideas were flawed from the start, otherwise they wouldn’t need correcting. And again, there’s a point there because few big ideas have been only expanded; many have been outright rewritten. Newton’s work was not replaced by general relativity, as popularly claimed. Without Newton, there would be no general or special relativity in the first place, and without Darwin’s natural selection and Mendel’s laws of heredity, genetics wouldn’t have the kind of context that lets us decipher genomes to the extent we can today. We can also cite atomic theory and electromagnetism as examples of science that was updated rather than rewritten. But these are the exceptions. The aether gave way to mostly empty space, plate tectonics turned geology on its head, miasmas and the four humors fell to germ theory, ancient astrology lost the metaphysics and became astronomy, and so on.

Where we see progress, however, the fundamentalists see heathens who either lack the proper spiritual guidance to see the truth, the way, and the light, or nefarious heretics who only want the righteous to falter in their beliefs and abandon their faith, constantly changing their story when caught in a mistake or missing something important from their theories. This is why they seldom even bother to understand what the theories they so fiercely reject actually say, going so far as to constantly use an argument that actually provides evidence for evolution as a criticism of it, if not just blithely dismissing it as fairy tales for adults as compared to the eternal truth of a talking snake and a naked woman dooming humanity to mortality and disease. Why bother learning the science if it’s just going to change? Seeing the massive flaws in this kind of reasoning, those in charge of education and scientific advancement in South Korea defaulted to the science: prone to change as it may be, it is grounded in proof and evidence, and able to revise itself as new facts come to light, unlike the kind of intellectual cowardice that believes it’s just fine and dandy to censor the evidence it doesn’t like, and then prey on the subsequent factual vacuum.

Every time an experiment manipulating evolution hits the news, there’s always an eager throng of people who insist that the very fact that the biologists intervened and steered the forces of selection or mutation to do the experiment means we now have proof of a designer involved in evolution. Just take yesterday’s study on the possible emergence of multicellularity. According to the creationist crowd, if the biologists hadn’t triggered the selective influences on the yeast, it would’ve remained the same, and their meddling is therefore proof that without an external force, multicellularity wouldn’t have happened. Remember the study cited by Lehrer in his indictment of scientists’ seemingly slow progress? That’s exactly where it applies. Just because a biologist shook a beaker or changed a few genes to see what would happen according to the rules of evolution today isn’t proof that someone else also shook a beaker or changed a few genes billions of years ago, but it’s a rather neat and tidy story that’s easy to digest, and hence it gets cited by those looking to justify a belief. It’s a backward and very self-centered approach, one that essentially promotes a two-tiered fallacy as fact.

An applicable old cliché would be the one often used by creationists regarding a painting and a painter. If they see a painting, someone must have painted it, since paintings don’t paint themselves. Therefore, since we’re not seeing stones turn into bacterial film out of the blue, someone must have created life. Airtight logic, right? Well, no, not at all. We know that paintings have painters because we’ve seen painters make paintings. If we doubt a painting’s origins, we can always perform a chemical analysis on it and see that yes, it’s canvas with paint on it, and we know that there’s a group of painters out there who do similar work. We can even track down the original painter of a more recent work and ask her to replicate her efforts. With life, matters are much less cut and dried because we’ve never seen a designer or an architect of living things. How do we confirm that living things are made rather than self-organizing? Where do we find the designer? No, “in our hearts” and “in a spiritual universe all around us” are not valid answers because they don’t pinpoint a culprit we could ask about the creation of life. And just because scientists did something interesting in the lab doesn’t mean that the very same experiment also happened in nature, much less that a hyper-intelligent being was behind it.

Having dealt with the non sequitur, we can now move on to the argument by assertion on which this entire line of thinking is based. Just like all intelligent design talking points, which are now living well past their sell-by date and never actually worked, this one relies on asserting that there must be an entity capable of creating living things and that this entity is singular. This proposition alone would require a few hundred lines of evidence to establish in any way, shape, or form, and merely asserting that there’s a singular designer is not proof. If your goal is to work backwards from the premise that some unnamed designer (or you could save both the time and the trouble and just say God, since this “designer” facade isn’t fooling anyone) created all life, then the assertion that manipulating evolution for experiments is proof of your deity makes sense. But that’s not a valid point from which to start. We have to work from what we know onwards, otherwise we’re just deluding ourselves by inventing ways to wedge evidence into a predetermined conclusion. Under this pretense, a scientist tweaking evolution in the lab has to be proof that a deity did something similar in the past, because if he didn’t, then the chain of events doesn’t match what we want to believe happened. That’s not a reasonable or logical argument. It’s just wishful thinking.

Sometimes I can only sympathize with the kind of frustrating setbacks experienced by biologists. Whereas entire areas of the STEM disciplines can rely on formulas and basic theory to get them at least close to where they need to be, biology seems to change its mind on a dime, and what seem like very straightforward and simple ideas can grind to a screeching halt when scaled up beyond a few cells. From promising research into greatly increasing lifespans to countless potential cancer therapies, some of the failed efforts by biologists make me wonder if working in the discipline ever feels like battling Murphy’s Law. The reason I say this has to do with a just-published study citing a failure with implanted stem cells reprogrammed from an organism’s own body, which were supposed to be safe from the subject’s immune system as they tried to repair the targeted tissues and organs. It turns out that activating the signals that encourage stem cells to develop new structures sets off the immune response, and the seemingly friendly cells are suddenly seen as pathogens.

Whoops. The problem, it seems, lies with two genes, Zg16 and Hormad1. During the period in which a fetus develops the distinction between its own tissues and those of foreign entities, these genes may be turned off, while in the reprogrammed stem cells they’re active. As the stem cells try to grow into new shapes, the body’s defenses see them as intruders because they’re trying to differentiate and form structures after internal chemistry has turned off this kind of radical development. Forget about being able to internally grow new arms and legs; the cell cultures trying to diversify into them would be annihilated by your immune system. While there’s data to suggest that your stem cells could conceivably be used to grow a new, mature heart or lung or liver and implant it back in with minimal fear of rejection, using your own body’s processes to help out wouldn’t work, and the technology needed to make it happen would have to be that much more complex than it is now. Now, it’s not that this finding completely eliminates the foundation for the key ideas behind regenerative medicine. This setback just tells scientists that there’s much more we need to know to make the process work, and warns of the potential for more complications and problems down the road. Biology is just finicky like that.

This is partially why my last take on life extension and radical medicine focused on machinery rather than a biological answer. Every individual is somewhat unique, and after billions of years of evolution, we have living things so bizarrely and intricately messy that changing something within them is fiendishly complex. Machine parts are mass-produced and can be customized to work around and with biological limitations. We can use them to help build new organs, providing scaffolding and structure for developing stem cells, and one day maybe even build new organs from bio-compatible materials that won’t be actively attacked by T cells or develop tumors several mutations down the line. So when something breaks and wears down, we can throw in new organs and joints to whatever extent biology will let us. But our cells are certainly not just plug-and-play, as some popular science journals and overly eager scientists hoped, and we need to make sure that we are as thorough as possible with new clinical ideas and don’t take our conclusions as automatically correct, even if they simply build on the fundamentals of existing theories. This experiment is just a reminder that we don’t know nearly as much as we sometimes tend to think, and that we have a very long way to go before we can really wield our knowledge of biology’s building blocks to do truly amazing things with our bodies from the bottom up.

See: Zhao, T., et al. (2011). Immunogenicity of induced pluripotent stem cells. Nature. DOI: 10.1038/nature10135

Oh, concern trolls, those wonderful commenters who try their hardest to put up disclaimer after disclaimer that they’re not at all disagreeing with you, but just have some questions which, oddly enough, happen to sound an awful lot like the talking points mounted to attack a solid scientific idea. Where would science blogs be without them? They provide not only material for blog posts, but a showcase of why it’s still necessary to keep explaining the basics post after post, and how no matter how many times you explain something, a contingent of passionate zealots will always be there to J.A.Q. around before they drop their pretense halfway through and reveal their original intent, or just plainly state it in their conclusion. Here’s a great example of pseudo-profound questions in a comment to an old post of mine in which I show why evolution is a repeatable science. Note all the pleas that you’re not reading the questions of a creationist at the beginning, followed by… questions I’d expect to find in one of Bill Dembski’s pontifications, questions such as…

Even if there are random repeatable mutations, which I don’t deny; how many of them are usable and beneficial to the organism? If species have been evolving over the past 100 million years, shouldn’t there be overwhelming samples of transitional fossils, instead of a random breaking edge one here and there? We should be finding these things all over, in every continent, even in our own back yards.

So he won’t deny that repeatable mutations exist, yet must preface the question with a conditional? That seems a little odd. Why place a conditional on something you don’t deny? Anyway, we’re well aware that beneficial mutations happen rarely, representing maybe a few percent of all mutations at best, while the vast majority are negligible or neutral and another small percentage are actively harmful. But since over millions of years mutations happen trillions of times, beneficial ones are going to occur and be selected for on a pretty regular basis. Evolution is all about numbers. It needs just that half-percent success rate to keep going, and the overwhelming number of genetic dead ends is the cost of evolving. As for transitional fossils, we really do find them all over the place, and we can even predict where they’ll be found based on what we discover about the overall evolutionary timeline of our planet. Though in the purest sense, since organisms change on a constant basis, every fossil is either a transition to something else or a dead end, so the question is moot. Of course our commenter with questions about evolution doesn’t seem to understand speciation, which would explain the now ancient canard about transitional fossils.
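The numbers game is easy to simulate. Below is a minimal Python sketch, with entirely made-up rates and population sizes, in which only a tiny fraction of new mutations are beneficial, yet selection acting over many generations still accumulates them in the population.

```python
import random

random.seed(7)

# All figures here are invented for illustration; the point is only
# that rare beneficial mutations plus selection add up over time.
POP = 500           # population size
GENS = 300          # generations to simulate
MUT = 0.02          # chance a newborn mutates at all
BENEFICIAL = 0.05   # fraction of those mutations that are beneficial

def next_generation(pop):
    # Parents are chosen in proportion to fitness, which rises with
    # each beneficial mutation carried (simple multiplicative bonus).
    weights = [1.2 ** b for b in pop]
    kids = random.choices(pop, weights=weights, k=POP)
    return [b + 1 if random.random() < MUT * BENEFICIAL else b
            for b in kids]

pop = [0] * POP
for _ in range(GENS):
    pop = next_generation(pop)

# Despite only a ~0.1% per-birth chance of a useful mutation, the
# average number carried per individual climbs above zero.
print(sum(pop) / POP)
```

Most of the mutations in this toy model are wasted, exactly like the genetic dead ends above, but the rare useful ones get amplified because their carriers out-reproduce everyone else.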

Genetic drift and the inability to mate, doesn’t necessarily change a species into a new species, because there are physical restrains involved like size and appeal. Polar and Grizzly [bears] aren’t different species but two vastly different breeds within the same species.

Um, no. They’re related species, having branched off from the same ancestor, but until very recently, they lived in different environments and did not interact with each other. They still don’t, by and large, and declaring them to be the same species sounds a lot like creationists insisting that simply because two species look alike, they must be one and the same. Now, granted, the process of defining species is not perfect, but that’s because a perfectly clean separation between two species only exists on paper. Nature is much messier than that, so we have to go by basic guidelines, like whether the populations mate with each other on a regular basis. And since we’re on the subject of speciation, what, pray tell, are breeds? They’re even more arbitrary, based on very cursory examinations of an animal’s morphology. Plus, here’s a funny thing. Since we’ve been separating our closest animal companions, dogs, into many separate and wildly different breeds, are they now a distinct species from wolves? By creationist logic, we’d have to say that dogs are actually just wolves, and yet their populations are separate, and some breeds of dogs could never mate with others because of major differences in size and new pressures in sexual selection. Again, nature is rather messy.

Science cannot “prove” the past, because time is a specific, relative variable. The past can only be believed, not proven. If a person won’t believe solid evidence like yesterday’s newspaper[s], then that’s up to them.

I’m not going to sugarcoat it. That’s just an inane statement. If scientists can show that a particular creature in a fossil existed a certain amount of time ago with radiocarbon or other radiometric dating, they’ve proven something about a world that existed in the past. End of story. If by forensic examination I can trace the timeline of an event that happened a hundred years ago, I’ve proven a number of facts. Even more plainly, if I go to a junkyard of the far future and dig up an old Zune player to show befuddled youths of the mid 21st century that yes, a rather long time ago there was an attempt to make an MP3 player which was not an iPod, I’ve proven a factual statement about the past. Not believing solid evidence on a personal whim doesn’t seem like some scientific deficiency to me, more like a personality trait. And this statement seems like a post-modernist epistemological quip in which “everything’s, like, only your opinion, man.” Of course it’s a setup for the appeal you were probably expecting since the very beginning of this thinly disguised treatise…

I wonder how us, humans here on our little planet, in our little solar system, in our little galaxy can conclude from our tiny perspective that we know how the universe was formed, that a higher being, God doesn’t exist and couldn’t have been involved in creating this universe.

Define what you mean by God, identify the signs of involvement and how we know they’re real signs of his and only his involvement in the creation of the universe, explain how the universe was created, present your proof for it in terms of tangible data, and when you actually have a question that can be discussed with real science and real evidence rather than appeals to ignorance, we can discuss this. Until then, this is little more than yet another, perhaps quadrillionth, edition of the “I don’t know, ergo God” argument.
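As an aside, the radiometric dating invoked a few paragraphs up isn’t mystical; it’s straightforward exponential decay. A quick Python sketch using the roughly 5,730-year half-life of carbon-14 shows how an age follows directly from the fraction of the isotope left in a sample:

```python
import math

# Radiometric dating in one line: given the measured fraction of
# carbon-14 remaining in a sample, the elapsed time follows from the
# isotope's half-life via exponential decay.

C14_HALF_LIFE = 5730.0  # years, approximate

def age_from_fraction(remaining: float) -> float:
    """Years elapsed given the fraction of the original C-14 left."""
    return C14_HALF_LIFE * math.log(1.0 / remaining, 2)

print(round(age_from_fraction(0.5)))   # one half-life: 5730
print(round(age_from_fraction(0.25)))  # two half-lives: 11460
```

The physics of decay rates is extremely well measured, which is why dating a fossil this way counts as proving a fact about the past rather than merely believing in it.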

One of the classic ideas in science fiction is wetware, a hybrid of biology and electronics which would allow just about any living thing with a brain to hook up to a machine and carry out computing tasks we could never accomplish with brains or machinery alone. As noted last week, quantum computers will exist only to solve complex and obscure computing problems, so if we really want to discover how we think and what makes us intelligent and self-aware, they aren’t going to come close to cutting it. Effective and well-designed wetware, however, could, because it would allow us to work cortex by cortex and neuron by neuron to trace the cascade of thoughts and memories in our minds. But this sort of thing is still in the realm of fiction, right? Maybe not. At the University of Wisconsin, researchers recently developed silicon-germanium tubes just big enough for neurons’ axons and dendrites to crawl through and attach themselves while leaving the main cell body, the soma, on the outside. Since this was an experiment in creating a new environment in which to study neurons outside an organism’s body, the researchers have little to say about future uses other than to mention the potential for prosthetics, but their work may be a stepping stone towards radical technologies.

Here’s the basic concept. According to the researchers, when scientists study neurons, they tend to use a flat surface coated in amino acids which lets neurons adhere as they glob together and try to grow. But that’s not a really good way to study individual neurons and how they communicate, both because in living things they exist in three-dimensional lattices and networks, and because when they glob together into a neuron ball on a plate, the already faint electrical noise they produce creates a cacophony that’s very difficult to decipher. This is not to say that neurons in your brain talk to each other with no problem, since it’s estimated that much of the electrical hum in a brain is just background noise, but it’s even worse in a ball of cultured neurons. Enter the aforementioned silicon-germanium (SiGe) tubes, manufactured with diameters of 4 μm to 8.2 μm, with the larger diameter used for most of the experiments, and coated with a typical amino acid that attracts neurons, prompting them to stick to their new synthetic habitat. And sure enough, after mouse neurons were introduced into the tubes, they quickly began to grow, showing no indication of being affected by any leftover toxicity from the manufacturing process, and exploring the helical structures inside the tubes themselves. Some images even show what look like single axons trying to make their homes within the tubes. Given the data, the team concluded that they’ve successfully created a new type of artificial, non-toxic habitat for studying neurons.

And this is where we come to the really big deal, and why this research could be a gateway to all sorts of very interesting experiments in the future. Isolating neurons individually in tubes which could be arranged into an elaborate latticework means that we could study how neurons communicate with each other far more easily, since each neuron’s electrical output can be measured more precisely, and the tubes themselves could be used to feed neurons carefully crafted artificial signals to transmit. Ideally, one could hook up an elaborate prosthetic device wired with these helical SiGe pathways for the neurons to explore and have a fully integrated artificial limb controlled by an outgrowth of the recipient’s nervous system. No more clunky or hard-to-control robot arms and legs. We’re already working with ever more elaborate and nimble prosthetics, but it would be a quantum leap for the medical industry if a patient’s nerves could simply grow into a new arm or leg and give her full control of the limb, along with sensations far more like those of the natural body. Likewise, we could also use neurons in computers, or develop tiny probes that could isolate and measure the work of an entire cortex down to the level of an individual neuron so we could get a better idea of how we talk, walk, and solve complex problems. One of the biggest questions we could try to answer first is whether arranging neurons in a certain way effectively replicates the function of a cortex, or whether there’s more to it than that, and if so, what. From there, we could more accurately model the behavior of a brain.

Of course, I should caution that these applications are still many years away and will require decades of very thorough research. However, it would be surprising if DARPA and medical companies didn’t jump in to do some experiments of their own and find out how far they could actually get with this method of arranging a neuron culture. The potential here might be huge, and not trying to take advantage of it would be rather difficult to justify. Well, unless you’re more worried about money than you are about the results of scientific progress and the need to pursue gateway technologies that could lead to radical future breakthroughs in lucrative and crucial markets and areas of research and development…

See: Yu, M., et al. (2011). Semiconductor Nanomembrane Tubes: Three-Dimensional Confinement for Controlled Neurite Outgrowth. ACS Nano. DOI: 10.1021/nn103618d

Well, ladies and gentlemen, I’m back from another unexpectedly busy day of giving a computer the equivalent of a lobotomy, with a little gem from the IEET, the Institute for Ethics and Emerging Technologies, which you may remember as the futuristic think tank of my Skeptically Speaking debate partner George Dvorsky. The IEET regularly publishes posts looking far into the transhumanist future, and this particular one featured a missive by Kyle Munkittrick of Discover’s Science Not Fiction arguing for the redefinition of eugenics so that the notion of genetically engineering new generations of humans wouldn’t seem so taboo, especially to conservatives. I’m certainly willing to grant Munkittrick that not every desire to change the human genome is necessarily driven by racism, snobbery, or a justification for genocidal campaigns, but as I’ve said on the air to George, and will repeat again for the transhumanists in the audience, if you want to engineer a better human through a genetic blueprint manipulated in the lab, you may be playing with fire, because evolution is unpredictable. You’re not going to be able to just customize a genome or get behind the steering wheel of the evolutionary process. It’s nothing personal or ideological. It’s just that there’s no steering wheel to get behind. Never was, never will be.

Before we go any further, let’s address the very big, bloody elephant in the room. Eugenics was an idea which came from Francis Galton’s gross misunderstanding of Darwin’s work on natural selection, often supported with his shoddy statistical analysis which seemed to indicate that with every generation, humanity recedes towards mediocrity across a number of traits, like intelligence. What he had really discovered was a very common statistical phenomenon called regression to the mean. Basically, he was treating the highs set by the most exceptional individuals as the benchmark, so when their descendants landed closer to the average, it looked as though humans as a whole were in a constant rut of mediocrity, never quite living up to the standards set by the best and brightest of past generations. If the trend were to continue, the eugenicist argument went, humans would actually regress, and it was up to the best and brightest of society to stop all the mediocre people from reproducing and raise human accomplishment out of nature’s trap. Their methods were responsible for countless atrocities and gave class warfare a whole new meaning, as over-privileged, egomaniacal snobs armed with a pseudoscience believed it was up to them to save humanity by sterilizing, killing, or enslaving those with lighter wallets and no access to education. In modern times, public supporters of eugenics have been overwhelmingly snobs and racists, as well as well-meaning ignoramuses who didn’t know anything about genetics, and their experiments failed in very telling and predictable ways. You simply can’t create a superhuman by playing with DNA.
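Galton’s artifact is easy to reproduce. Here’s a short Python sketch with invented numbers (a trait with a mean of 100 and a heritability-like parent-child correlation of 0.5) showing that the children of exceptional parents average closer to the mean, even though the population as a whole isn’t declining at all:

```python
import random
import statistics

random.seed(0)

# A sketch of the statistical artifact Galton mistook for decline.
# The trait scale and the 0.5 parent-child correlation are invented
# purely for demonstration.
MEAN, SD, HERITABILITY = 100.0, 15.0, 0.5

parents = [random.gauss(MEAN, SD) for _ in range(100_000)]
children = [MEAN + HERITABILITY * (p - MEAN) + random.gauss(0, SD * 0.85)
            for p in parents]

# Look only at the exceptional parents, more than two SD above average.
top = [(p, c) for p, c in zip(parents, children) if p > MEAN + 2 * SD]
top_parent_mean = statistics.mean(p for p, _ in top)
top_child_mean = statistics.mean(c for _, c in top)

print(round(top_parent_mean, 1))            # well above 100
print(round(top_child_mean, 1))             # closer to 100: regression to the mean
print(round(statistics.mean(children), 1))  # still about 100: no actual decline
```

Pick any extreme group and its offspring will, on average, drift back toward the population mean; the mean itself hasn’t budged, which is exactly what Galton’s numbers were telling him.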

And that brings us back to Munkittrick, whose irreverent invocation of eugenics, before what one would presume to be an audience with some education in biology and a working understanding of natural selection, is actually targeted at Peter Lawler, who wrote a fluffy musing about designer babies on Big Think, the very same site which brought us an asinine lecture on why human evolution supposedly stopped when it did no such thing, and the two have been going at each other with typical blogger zeal. Lawler rightly argues that biological enhancement of humans is a rather fuzzy concept that involves messing with the unpredictable forces of nature, triumphantly cheering that he found a scientist who says so too, yet he spectacularly misses the point that designer babies are far more fantasy than fact, and that every experiment to create one fails for the same reasons. Our genes are in constant flux and we’re constantly undergoing natural selection. We can spend all the time and effort we want trying to raise kids who become top athletes, or try to wire them for a predisposition towards a high IQ (a highly dubious and scientifically unsupported idea anyway), and nature will select them based on their environment, not their GPA or what college they’ll attend. All creating a designer baby will do is set the parents up for disappointment and the child up for a lifetime of being expected to be an infallible genius and athletic prodigy. When the complexity of the genome and the body’s very elaborate chemistry come into play and neither really works out, all we’ll achieve is reciprocal misery.

So while Munkittrick and Lawler argue about designer babies, and about the latter’s spot on a political commission which ruled that advanced stem cell research and extreme life extension techniques should not be endorsed by the government because they’re somehow immoral, they might as well be arguing about the unrealistic dynamics of Batman’s utility belt or the dubiousness of Spider-Man’s organic web shooters. It’s a comic book trope born from a bastardization of biology, and we’re not going to suddenly start creating hordes of geniuses with superhuman athletic abilities to compete with the rest of the world. Instead of realizing that humans have to live in dynamic environments, in which a complex array of genes has been narrowed down to help them get along with the chemical reactions and physical forces acting on them every minute of every hour of every day, they’re thinking about customizing them to get better test scores and bigger muscles so they’re really good at football or basketball. Both of them need to get their heads out of the clouds and find a little perspective.

We might not be able to beat a natural language search engine or a supercomputer able to crunch every last possible move in a game of chess, but there’s one area where we easily leave just about any machine in the dust. That area? Visual recognition, of course. As we saw once before, no computer vision system seems to be able to match what we can do, and even rudimentary simulations of our visual cortices are more than a match for the top performers in digital image recognition. Why? Well, to put it simply, the brain cheats. Unlike computers, we’ve evolved a myriad of pathways in our brains to filter and process information, and over eons, we’ve developed a very good structure for pattern recognition and template matching based on cues ranging from estimating an object’s distance from us to complex feature interpolation and extrapolation. We can see things that machines can’t, whether the problem is the technology or the data format, and a study on whether the brain somehow compresses visual data sheds light on what our brains actually do to match what our eyes see to an abstract or specific object in our minds, highlighting the role of one of our neural pathways.

Whatever you see gets transmitted to the occipital lobe at the back of your brain and analyzed by neurons in an area commonly known as V1. Your V1 neurons don’t actually see any detail or complex features. Their task is to see contrasts and identify whether objects are moving or static, and to stimulate the next visual cortex, V2, for further filtering and for passing the more complex patterns identified in the visual stream on to V3. When visual data makes it to the V4 cortex, things get very interesting, because neurons in V4 seem to have a strong response to curves and angles. Basically, one could say that V4 is doing a form of feature extraction before passing off the refined image to cortices that do higher level template and object matching. And it’s that focus on angles and curves that attracted the attention of a neuroscience lab which simulated the behavior of V4 neurons with a computer model. Interestingly, they saw that the fewer neurons were trained to respond to the images in the training set, the more strongly they lit up when curves and acute angles appeared in the pictures. The more neurons were stimulated, the more responsive the digital V4 was to flat and shallow outlines. Our V4s are actually compressing incoming visual data, the study concluded. And from what I can tell, it seems that this compression is actually helping V4 neurons perform key feature extraction and is enabling high-level visual data processing in the next visual cortices.
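To make the curvature idea concrete, here’s a toy sketch of what a curvature-tuned unit might compute. To be clear, this is my own simplification, not the model from the study: it treats an outline as a closed polygon and has a "V4-like" unit respond to the fraction of vertices whose turning angle exceeds the unit’s preferred sharpness, so a shape with sharp corners drives it harder than one with shallow turns.

```python
import math

def turning_angles(outline):
    """Absolute turning angle, in degrees, at each vertex of a closed outline."""
    angles = []
    n = len(outline)
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = outline[i - 1], outline[i], outline[(i + 1) % n]
        a = math.atan2(y1 - y0, x1 - x0)   # heading of the incoming segment
        b = math.atan2(y2 - y1, x2 - x1)   # heading of the outgoing segment
        d = math.degrees(b - a)
        d = (d + 180) % 360 - 180          # wrap into (-180, 180]
        angles.append(abs(d))
    return angles

def v4_like_response(outline, threshold=60.0):
    """Fraction of vertices turning more sharply than the unit's preference."""
    angles = turning_angles(outline)
    return sum(a > threshold for a in angles) / len(angles)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]                         # 90-degree corners
octagon = [(2, 0), (3, 0), (4, 1), (4, 2), (3, 3), (2, 3), (1, 2), (1, 1)]  # 45-degree turns

print(v4_like_response(square))   # 1.0 -- every corner exceeds the 60-degree preference
print(v4_like_response(octagon))  # 0.0 -- all turns are shallow 45-degree steps
```

It’s a cartoon, but it captures the reported trend: the same input outline produces very different activity depending on how sharply a unit’s preferred curvature is tuned, which is the kind of selectivity the simulated V4 neurons showed.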

Here’s why I’m making that conclusion. One of the standard approaches to image recognition with 2D data is to employ outline extraction algorithms. I’ve mentioned them before, and when given a good quality image, they’re very effective at finding usable object shapes. Their results are then used to identify key features, or to match the outlines against masks quantifying the proportions and dimensions stored in a database. Today, we generally deploy genetic algorithms to build those associations, while in the past, expert systems were a more common approach. Our brains don’t necessarily train like that, but they do base object identification on outlines and basic shapes. Remove all features from a human figure and you still know you’re looking at a representation of a human, because it has the right proportions and features, visible by their position and curvature against a background which gives enough contrast for a human outline to be identifiable. So when your V4 lights up as the acute angles and curves start to show up, it can stimulate neurons which respond to objects with those angles and curves, narrowing down the possible identifications for the objects you’re seeing. This means that feature extraction algorithms are actually on the right path. It’s just that their task is made more difficult by using flat images, but that’s still an ongoing area of research and gets really technical and system-specific, so I’m going to leave it at that for now.
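For a minimal, concrete example of contrast-based outline extraction, here’s a sketch on a binary grid. This is my own illustration of the general idea, not any specific production algorithm: a foreground pixel belongs to the outline if any of its four neighbours is background, which is the crudest possible way of finding the shape boundary that later feature-matching stages would consume.

```python
def extract_outline(image):
    """Return the set of (x, y) foreground pixels that border the background --
    a crude contrast-based outline extractor for a binary grid."""
    h, w = len(image), len(image[0])
    outline = set()
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            # A foreground pixel is on the outline if any 4-neighbour
            # is background (or lies outside the image).
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not image[ny][nx]:
                    outline.add((x, y))
                    break
    return outline

# A filled 4x4 blob inside a 6x6 frame: 16 pixels, of which the
# inner 2x2 are interior, leaving a 12-pixel outline.
blob = [[0] * 6 for _ in range(6)]
for y in range(1, 5):
    for x in range(1, 5):
        blob[y][x] = 1

print(len(extract_outline(blob)))  # 12
```

Real systems work on grayscale gradients rather than clean binary masks, which is exactly why a noisy or low-contrast photo breaks this step long before template matching ever gets a chance to run.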

So does this model of V4’s visual information handling capabilities take us one step closer to giving a future computer system the ability to see like we do? Sadly, no. It just elaborates on a piece of the puzzle we found a long time ago. The big problem is that brains use a lot of neurons to process and filter data, taking advantage of countless evolutionary refinements over countless generations. We’re trying to do that in systems which weren’t built to work with information the way neurons do, or to respond like they do, because how exactly neurons store information and how they retrieve it when stimulated is still rather fuzzy to us. So when building a visual recognition algorithm, we’re actually trying to replicate processes we don’t understand at the level of detail required to truly replicate them, and many of the advances we make in this area are usually based on applying statistics and using the computer’s brute computational force to come up with an answer. And if you flip your training images on their sides or look at them from a different angle, you have to do that computation all over again to get a proper response from your system. As noted above, when we go up against machines in the realm of visual recognition, we’re cheating, putting eons of ongoing evolution and hundreds of millions of neurons against decades of so far incomplete research and probabilistic trial and error…

See: Carlson, E., Rasquinha, R., Zhang, K., & Connor, C. (2011). A Sparse Object Coding Scheme in Area V4. Current Biology. DOI: 10.1016/j.cub.2011.01.013

While today, scientists who actively participate in skeptical movements and run blogs covering more than just their own areas of research are wondering which experts should promote the sciences to the general public and to those who fund research through government organizations, they’re also not thrilled with popular scientists who cross the lines of their competence. Michio Kaku, one of the experts frequently shown on what remains of The Science Channel after writing several books about radical ideas in bleeding edge physics, has done just that by declaring that human evolution has ended, earning the blistering fury of PZ in the process. I have to say, though, that the fury is not without good justification, because Kaku seems to know precious little about human evolution and the fact that it’s actually speeding up, insisting that our civilization has virtually ended the natural selection that’s supposed to keep us evolving, despite the fact that just last year, the web was abuzz with a newly discovered case of significant natural selection in humans.

Now, I could just refer you to a biologist for a list of reasons as to why Kaku is wrong and leave it at that, but that would miss a bigger issue with his repetition of this canard about our biological future. The notion of the static human who pretty much domesticated himself, left with nowhere to go but down, appears constantly in science fiction and among the amateur techies flocking to Kurzweil-styled transhumanists, who tell them that either merging with machines or transcending our physical bodies is "the next step in our evolution," and that we’re essentially destined to become immortal as soon as the technology gets here. If you remember a very particular sci-fi show that went on way too long after its expiration date, Stargate SG-1, you’ll probably recall its habit of using transcendence to immortality via some highly evolved psychic powers in episode after episode, even using it to bring characters back from the dead. And we certainly can’t forget the New Age woo devotees who flock by the thousands to hear post-modernist cranks coo about "the spiritual evolution of humanity" while liberally peppering what amounts to nonsense with trendy, sciency-sounding buzzwords, chanting "quantum" as if they were Zen Buddhists reciting their mantras during an intense meditation session.

Of course, I could cite other examples of this trope rearing its head in pop culture, but you probably see where this is headed. Human evolution’s supposed end is a very popular mistake, and like many urban legends, its constant, uncritical repetition has ingrained it in a whole lot of minds, even those of scientists who don’t really follow biology or didn’t pay much attention to it during their schooling. And all too often, the media forgets that scientists actually have very, very narrow areas of expertise, and that the broad labels we give them often encompass a whole lot more than their actual research. A scientist we call a marine biologist might spend her entire career studying two species of squid, and one we call a theoretical astrophysicist could work only on the behavior of accretion disks around black holes for the next decade. But because they’re scientists, journalists and editors like to assume, they must be really, really smart and able to give us a valid opinion on everything. It’s basically an inversion of the falsus in uno, falsus in omnibus fallacy: we assume that because someone like Kaku carries a fair bit of weight in the world of exotic physics, he should also know a lot about human evolution, or is a good authority on artificial intelligence and cyborgs, which, by the way, he’s not. So really, I’m not surprised to see a random pop sci canard better suited for a show on whatever it is the Sci-Fi Channel wants to call itself nowadays come from a scientist asked a question out of his depth. Disappointed. But not surprised.