Archives for Ray Kurzweil


There’s been a blip in the news cycle I’ve been meaning to dive into, but lately, more and more projects have been getting in the way of a steady writing schedule, and there are only so many hours in the day. So what’s the blip? Well, professional tech prophet and the public face of the Singularity as most of us know it, Ray Kurzweil, has a new gig at Google. His goal? To use stats to create an artificial intelligence that will handle web searches and explore the limits of how one could use statistics and inference to teach a synthetic mind. Unlike many of his prognostications about where technology is headed, this project is actually on very sound ground because we’re using search engines more and more to find what we want, and we do it based on the same type of educated guessing that machine learning can tackle quite well. And that’s why instead of what you’ve probably come to expect from me when Kurzweil embarks on a mission, you’ll get a small preview of the problems an artificially intelligent search engine will eventually face.

Machine learning and artificial neural networks are all the rage in the press right now because lots and lots of computing power can now run the millions of simulations required to train rather complex and elaborate behaviors in a relatively short amount of time. Watson couldn’t have been built a few decades ago, when artificial neural networks were being mathematically formalized, because we simply didn’t have the technology we do today. Today’s cloud storage ideas require roughly the same kind of computational might as an intelligent system, and the thinking goes that if you pair the two, you’ll not only have your data available anywhere with an internet connection, but you’ll also have a digital assistant to fetch you what you need without having to browse through a myriad of folders. Hence, systems like Watson and Siri, and now, whatever will come out of the joint Google-Kurzweil effort, and these functional AI prototypes are good at navigating context with a probabilistic approach, which successfully models how we think about the world.

So far so good, right? If we’re looking for something like "auto mechanics in Random, AZ," your search assistant living in the cloud would know to look at the relevant business listings, and if a lot of these listings link to reviews, it would assume that reviews are an important part of such a search result and bring them over as well. Knowing that reviews are important, it would likely do what it can to read through the reviews and select the mechanics with the most positive reviews that really read as if they were written by actual customers, parsing the text and looking for any telltale signs of sockpuppeting, like too many superlatives or a rash of users in what seems like a strangely short time window as compared to the rest of the reviews. You get good results, some warnings about who to avoid, the AI did its job, you’re happy, the search engine is happy, and a couple of dozen tech reporters write gushing articles about this Wolfram Alpha Mark 2. But what if, just what if, you were to search for something scientific, something that brings up lots and lots of manufactroversies, like evolution, climate change, or sexual education materials? The AI isn’t going to have the tools to give you the most useful or relevant recommendations there.
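To make the sockpuppet-spotting idea a bit more concrete, here’s a toy sketch of the two telltale signs mentioned above: an overdose of superlatives and a rash of reviews in a strangely short time window. The word list, cutoffs, and review format are all invented for illustration, and real search engines obviously use far more sophisticated signals.

```python
from datetime import datetime, timedelta

# Hypothetical list of gushing words; a real system would learn these.
SUPERLATIVES = {"best", "amazing", "incredible", "perfect", "greatest", "flawless"}

def superlative_density(text):
    """Fraction of words in a review that are gushing superlatives."""
    words = text.lower().split()
    return sum(w.strip(".,!") in SUPERLATIVES for w in words) / max(len(words), 1)

def burst_score(timestamps, window=timedelta(days=2)):
    """Largest fraction of reviews posted within one short window of each other."""
    ts = sorted(timestamps)
    best = 1
    for i, start in enumerate(ts):
        count = sum(1 for t in ts[i:] if t - start <= window)
        best = max(best, count)
    return best / len(ts)

def looks_astroturfed(reviews, density_cutoff=0.08, burst_cutoff=0.6):
    """Flag a listing whose reviews are both gushing and suspiciously clustered."""
    avg_density = sum(superlative_density(r["text"]) for r in reviews) / len(reviews)
    burst = burst_score([r["when"] for r in reviews])
    return avg_density > density_cutoff and burst > burst_cutoff
```

A listing with five breathless reviews posted on the same day trips both thresholds, while reviews in plain language spread over months sail through, which is roughly the kind of educated guessing the paragraph above describes.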

First off, there’s only so much that knowing context will do. For the AI, any page discussing the topic is valid, so a creationist website savaging evolution with unholy fury and a barrage of very, very carefully mined quotes designed to look respectable to the novice reader, and the archives at Talk Origins, have the same validity unless a human tells it to prioritize scientific content over religious misrepresentations. Likewise, sites discussing healthy adult sexuality, sites going off in their condemnations of monogamy, and sites decrying any sexual activity before marriage as an amoral indulgence of the emotionally defective, are all the same to an AI without human input. I shudder to think of the kind of mess trying to accommodate a statistical approach here can make. Yes, we could say that if a user lives in what we know to be a socially conservative area, place a marked emphasis on the prudish and religious side of things, and if a user is in a moderate or a liberal area, show a gradient of sound science and alternative views on sexuality. Statistically, it makes sense. In the big picture, it perpetuates socio-political echo chambers.
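The echo chamber problem is easy to see in a deliberately crude sketch of that statistical approach: weight each result by how well its slant matches a regional prior. Every number, name, and score here is made up purely to illustrate the mechanism, not how any actual search engine ranks pages.

```python
# Hypothetical regional prior: -1.0 = secular/liberal, +1.0 = conservative.
REGION_LEAN = {
    "random_az": 0.7,
    "college_town": -0.5,
}

def rerank(results, region):
    """Sort results so pages whose slant matches the regional prior float up."""
    lean = REGION_LEAN.get(region, 0.0)
    # Score = raw relevance minus how far the page's slant sits from the prior.
    return sorted(results,
                  key=lambda r: r["relevance"] - abs(r["slant"] - lean),
                  reverse=True)

results = [
    {"url": "science-ed.example", "slant": -0.2, "relevance": 0.9},
    {"url": "abstinence-only.example", "slant": 0.8, "relevance": 0.6},
]
```

In the conservative region the lower-relevance but slant-matched page wins the top spot; in the liberal one the science page does. Each individual ranking looks statistically sensible, and together they quietly encode exactly the echo chamber described above.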

And that introduces a moral dilemma Google and Kurzweil will have to face. Today’s search bar takes in your input, finds what look like good matches, and spits them out in pages. Good? Bad? Moral? Immoral? Scientifically valid? Total crackpottery? You, the human, will decide. Having an intelligent search assistant, however, places at least some of the responsibility for trying to filter out or flag bad or heavily biased information on the technology involved, and if the AI is way too accommodating to the user, it will simply perpetuate misinformation and propaganda. If it’s a bit too confrontational, or follows a version of the Golden Mean fallacy, it will be seen as defective by users who don’t like to step outside of their bubble too much, or by those who’d like their AI to be a little more opinionated and put up an intellectual challenge. Hey, no one said that indexing and curating all human knowledge would be easy, or that it wouldn’t require taking a stand on what gets top billing when someone tries to dive into your digital library. And here, no amount of machine learning and statistical analysis will save your thinking search engine…


Ray Kurzweil, the tech prophet reporters love to quote when it comes to our coming immortality courtesy of incredible machines being invented as we speak, despite his rather sketchy track record of predicting long term tech trends, has a new book laying out the blueprint for reverse-engineering the human mind. You see, in Kurzweilian theories, being able to map out the human brain means that we’ll be able to create a digital version of it, doing away with the neurons and replacing them with their digital equivalents while preserving your unique sense of self. His new ideas are definitely a step in the right direction and much improved from his original notions of mind uploading, the ones that triggered many a back and forth with the Singularity Institute’s fellows and fans on this blog. Unfortunately, as reviewers astutely note, his conception of how a brain works on a macro scale is still simplistic to a glaring fault, so instead of a theory of how an artificial mind based on our brains should work, he presents vague, hopeful overviews.

Here’s the problem. Using fMRI we can identify what parts of the brain seem to be involved in a particular process. If we see a certain cortex light up every time we’re testing a very specific skill in every test subject, it’s probably a safe bet that this cortex has something to do with the skill in question. However, we can’t really say with 100% certainty that this cortex is responsible for this skill because this cortex doesn’t work in a vacuum. There are hundreds of billions of neurons in the brain and at any given time, 99% of them are doing something. It would seem bizarre to get the sort of skin-deep look that fMRI can offer and draw sweeping conclusions without taking the constantly buzzing brain cells around an active area into account. How involved are they? How deep does a particular thought process go? What other nodes are involved? How much of that activity is noise and how much is signal? We’re just not sure. Neurons are so numerous and so active that tracing the entire connectome is a daunting task, especially when we consider that every connectome is unique, albeit with very general similarities across species.

We know enough to point to areas we think play key roles, but we also know that areas can and do overlap, which means that we don’t necessarily have the full picture of how the brain carries out complex processes. But that doesn’t give Kurzweil pause as he boldly tries to explain how a computer would handle some sort of classification or behavioral task, arguing that since the brain can be separated into sections, it should also behave in much the same way. And since a brain and a computer could tackle the problem in a similar manner, he continues, we could swap out a certain part of the brain and replace it with a computer analog. This is how you would tend to go about doing something so complex in a sci-fi movie based on speculative articles about the inner workings of the brain, but certainly not how you’d actually do it in the real world, where brains are messy structures that evolved to be good at cognition, not to be compartmentalized machines with discrete problem-solving functions for each module. Just because they’ve been presented as such on a regular basis over the last few years doesn’t mean they are.

Reverse-engineering the brain would be an amazing feat and there’s certainly a lot of excellent neuroscience being done. But if anything, this new research shows how complex the mind really is and how erroneous it is to simply assume that an fMRI blotch tells us the whole story. Those who actually do the research and study cognition certainly understand the caveats in the basic maps of brain function used today, but a lot of popular, high profile neuroscience writers simply go for broke with bold, categorical statements about which part of the brain does what and how we could manipulate or even improve it, citing just a few still speculative studies in support. Kurzweil is no different. Backed with papers which describe something he can use in support of his view of the human brain as being just an imperfect analog computer defined by the genome, he gives his readers the impression that we know a lot more than we really do and can take steps beyond those we can realistically take. But then again, keep in mind that Kurzweil’s goal is to make it to the year 2045, when he believes computers will make humans immortal, and at 64, he’s certainly very acutely aware of his own mortality and needs to stay optimistic about his future…

Nowadays, it seems like Ray Kurzweil is one of the most exciting people in tech, apparently warranting a big write-up of his predictions in Time Magazine and, despite his nearly religious view of technology, a spot on a list of the world’s most influential living atheists. And so, once in a while, we’re treated to a look at how well his predictions actually fared, often by those who’ve done very little research into the major disconnect between his seemingly successful predictions and reality. One of the latest iterations of almost suspiciously subtle praise for Kurzweil’s powers of prognostication, from TechVert’s JD Rucker, is a perfect example of exactly that, presenting an infographic with a track record of someone who seems to have nothing less than precognitive powers when it comes to the world of high tech. Though if you manage to catch the attribution on the bottom of the graphic itself, you’ll find that its source is none other than Ray himself, and once again, he’s giving a very, very generous reinterpretation to his predictions and omitting the myriad of details he actually got wrong.

Remember when, last year, the much less lenient judges at IEEE Spectrum decided to put his predictions in their proper place and evaluate how what he actually said compares to what he claims he said when grading his own predictions in retrospect? Even when simply riding obvious trends, his record tends to be quite mediocre: he starts out with a reasonable idea, such as that more and more computing will be mobile and web access will be close to ubiquitous, then starts adding in statements about brain implants, intelligent and web-enabled clothing, and other personal fantasies which are decades away from practical use, if they’ll ever be mass marketed in the first place. Then, he goes back and revises his own claims as shown by the link above, claiming that he never actually said that computers as we know them would vanish by 2010 even though in his TED presentation he said it in pretty much those exact words. Along the way, he also held that by 2009, we would’ve adopted intelligent highways with self-piloting cars. Google’s autonomous vehicle, guided by sensors and GPS, is still just an experiment, and highways don’t manage their own traffic, unless a sign telling you about an accident or the travel time to an exit counts as high tech traffic management.

So were you to do a cold reading of technology’s future à la Kurzweil: just think big, make lots of claims, and if you get something rather obvious right, forget all the other stuff you added to your prediction, and you too can be cited as a pioneer and visionary in fields you actually know little to nothing about, and said to have the kind of uncanny accuracy that makes everything you say compelling. You know, kind of like astrologers, whose random wild guesses are edited down to just the vaguely right ones with a generous helping of leeway whenever they manage to land something they consider an accurate prediction? And hey, maybe your own evaluation of your own predictive powers can also be cited by particularly lazy writers as they gush about the fantastic world of tomorrow you’ve been promising in your countless speeches and articles. The speeches and articles with which you make a good chunk of cash by hawking everything from alkaline water and vitamins for those who want to live long enough to see the Singularity, to classes at your own little university of futurism. Why study to be a real expert in AI or computing when you can just play one on TV and in the press? If anything, the pay is a lot better when you just talk about the promise of AI and intelligent machines rather than try to build them…

Since this blog is probably best known for its skeptical view of the strain of cyber-utopianism being promoted by professional technocrat, and apparently one of the world’s top atheists, Ray Kurzweil, it seems that I have to somehow note his appearance in Time Magazine and point out the numerous flaws in treating him like an immensely influential technology prophet with a pulse on the world of computer science. And it’s unfortunate that so many reporters seem to take him seriously, because almost half a century ago, he was experimenting with some really cool machines and, over the next few decades, came up with some interesting stuff. But for a while now, he’s been coasting on his own coattails, making grand pronouncements about areas of computer science in which he was never involved, and the reporters who profile him seem to think that if he could make a music machine in 1965, it must mean that he knows where the AI world is headed, forgetting that being an expert in one area of computer science doesn’t make you an expert in another. And so we’re treated to a breezy recitation of Kurzweil’s greatest hits which glosses over the numerous problems with his big plan for the world of 2045 with the good, old exponential advancement numerology that he loves to cite so often.

Again, there’s really nothing new here if you’re familiar with Kurzweil’s brand of transhumanism, just the same promises of mind uploading and digital immortality on the date predicted on the exponential chart that far too many devoted Singularitarians embrace with the same zeal as post-modernists subscribing to every concept with the word "quantum" in it. Never mind that mind uploading would require the kind of mind-body dualism based on the religious concept of a soul rather than sound science, and that even if it were possible, there would be huge adjustments involved in the process. Never mind that no matter whether Kurzweil takes a thousand vitamins a day, his body will simply fall apart by 125 because evolution does not produce humans who aren’t subject to senescence. Never mind that new inventions can backfire or never find an outlet, and that the tech industry has been overpromising the benefits of what computers can do for nearly 50 years, always assuring us that the next generation of electronics would give us a nearly perfect world. Never mind that by now, more scholarly Singularitarians are trying to rein in Kurzweil’s hype while politely pointing out to whom we may want to listen instead. And never mind that Ray has a miserable record when it comes to predicting future trends in computing and technology, and constantly changes what he said after the fact to give everyone the impression that he actually knows what he says he does. We’re told that every challenge to his dream world of immortal humans who swap minds between machines is easily met by the march of tech progress, which will quickly add up to grant him his fantasies at just the right moment.

There’s really something borderline religious about Kurzweil’s approach to technology. He’s embraced it as his savior and his weapon for cheating death, and his devotion runs so deep, he even says that any threats from new technology could be countered with more and better technology. But technology is just a tool, the means to an end, not an end in and of itself. It’s not something to be tamed and worshipped like an elusive or mysterious creature that works in bizarre ways, and it doesn’t work on a schedule to give you what you want. It is what you make of it, and there are problems it can’t overcome because we don’t know the solutions. Sure, being able to live for hundreds of years sounds great. But all the medical technology in the world won’t help a researcher who doesn’t know why we age, exactly what needs to be fixed, or how to sufficiently and safely slow the aging process. Those kinds of discoveries aren’t made on schedule because they’re broad and very open-ended. Just saying that we’ll get there in 2030 because a chart you drew based on Moore’s Law, which was a marketing gimmick of Intel rather than an actual law of physics or technology, says so, is ridiculous to say the least. It’s like a theologian trying to calculate the day of the Rapture by digging through the Bible or the quatrains of Nostradamus. You can’t just commit scientists and researchers to work according to an arbitrary schedule so they can help you deal with your midlife crisis and come to terms with your own mortality. And yet, that’s exactly what Ray does, substituting confidence and technobabble for substance and attracting way too many reporters who just regurgitate what he says into their articles and call him a genius.

Here’s what will likely happen by 2045. We might live a little longer thanks to prosthetics and maybe artificial organs and valves which will replace some of our natural entrails when they start going out of order with age, and hopefully, better diet and exercise. We’ll have very basic AI which we’ll use to control large processes and train using genetic algorithms and artificial neural networks. We may even have a resurgence in space travel and be wondering about sending cyborgs into space for long term missions. We’ll probably have new and very interesting inventions and gadgets that we’ll consider vital for our future. But we’ll still inhabit our bodies, we’ll still die, and we’ll still find answers to our biggest and most important problems when we find them, not according to a schedule assembled by a tech shaman. Meanwhile, I’ll be an old fogey who used to write about how Singularitarians are getting way ahead of themselves, and I’ll have to face my own upcoming end as best I can, without dreaming of some magical technology that will swoop from the sky and save me because imaginary scientists are supposed to come up with it before I die and ensure my immortality. All I’ll be able to do is live out my life as best I can and try to do as many things as I want to do, hopefully leaving some kind of legacy for those who’ll come after me, or maybe a useful gadget or two with which they can also tinker. And if we manage to figure out how to stop aging before I’m gone for good, terrific. But I won’t bet on it.

[ illustration by Martin Lisec ]

Those of my readers who are lucky enough to deal with kids on a regular basis probably saw Despicable Me, and might remember how one of Gru’s little minions reacted to a child blaming it for making a mess. Well, that was my reaction when I saw that Ray Kurzweil was named one of the 25 most influential atheists alive by the editors of SuperScholar. Quite a few of their choices are very hard to dispute. Dawkins? I’m not a fan of how he’s been made out to be “the atheist messiah” in the media, but yes, he’s immensely influential. Harris, Hitchens, and PZ Myers? Yes, yes, and absolutely yes. But the Singularitarian prophet obsessed with finding the key to immortality in the digital realm and whose primary occupation today is trying to predict the tech world’s future, often incorrectly? Again, I refer you to the minion for my thoughts on the matter. I really would think that SuperScholar’s own description of the man would make them think twice about this idea.

Author, inventor, entrepreneur, and transhumanist, Ray Kurzweil sees technology as fulfilling all aspirations previously ascribed to religion, including immortality. He argues that computing will soon outstrip humans’ cognitive capacities, at which point humanity will upload itself onto a new, indestructible digital medium (an atheist version/vision of “resurrection”).

I’d like you to focus in on that whole machines fulfilling all religious aspirations thing. See any red flags? Hold on, let me help. Ray is substituting mysterious future technology for miracles and the afterlife, and putting his faith in the idea that technology will exponentially advance until he transcends his flesh. He’s an atheist in the same sense as a polytheist would be to a monotheist, or a dedicated UFOlogist would be to an astronomer working at SETI. Far from doing away with religion, Ray simply adopted technology as his savior, so much so that prominent transhumanists are starting to politely cough and say that there are other people who should be getting more attention than Kurzweil, and putting the smackdown on Kurzweil’s loyal disciples. And this is our 21st most influential atheist in the world? A man who expects miraculous technology to descend to him when he’s on his deathbed and grant him eternal life through the power of Moore’s Law? Are you kidding me, or are the editors at SuperScholar unable to read all the religious metaphors and references they had to use in his two sentence bio to explain his worldview and what he advocates?

Honestly, Ray’s influence is in the tech world, and even then, it only seems to appeal to Silicon Valley big shot financiers and CEOs who are really good at talking a big game but are so woefully self-absorbed they seem unable to understand where computer science is actually going, thinking that the Next Big Thing will revolve around them. The only connection I’ve seen between Kurzweil and atheism was made by Craig James, and it was made erroneously, since he basically used Ray’s techno-utopianism to fantasize about how religion will simply vanish when humans are immortal because it would have nothing left to offer. SuperScholar seems to be committing the same mistake, thinking that just because someone isn’t mentioning a deity when predicting a whole lot of amazing things, that suddenly makes him an atheist, and that what makes him an influential one is a following he gathered from promising his followers that they too will one day enjoy eternal life thanks to the transformative power of technology. Today’s efforts in the kinds of radical life extension Kurzweil says are just a few steps away from mind uploading are much more likely to kill you than free you of your flesh, but his fans still follow his techno-gospel because he’s basically promising them eternal life by 2045.

Personally, I’m not a fan of the whole religion as a virus argument set forth by Craig James, for the very simple reason that he tries to boil down a rather complex psychological and societal phenomenon, and then turns it into a simple algorithm with a few fixed inputs and an output that’s almost always described as negative. Yes, there certainly are major, all too often violent downsides to religious fervor, but there’s more to religion than just a simple human impulse or an edict from an authority telling people what to believe and how. Of course, none of this means that there’s any validity to deities, but to simply call people’s beliefs a virus is reaching too far, even for an accommodationist-basher like myself. This is why, when in the spirit of his approach to taking on religion James decides to go Singularitarian and argue that once we’re immortal, faith will be obsolete, I have to call for a time out, both for oversimplifying why people join and stay in religious movements, and for giving some of Ray Kurzweil’s overly bold and often inaccurate predictions a shout-out in the service of atheism.


Let’s start from the beginning. James’ thesis is that religion is like an entity which fulfills certain human needs and survives by mutating to appeal to our urge to feel special and mitigate our fears of death. Sounds fine so far, but what is he forgetting? If you think back to every introductory psychology class you’ve ever had, Maslow’s Hierarchy of Needs was always mentioned, and one important part of that hierarchy was belonging. People all too often join religious groups not even so much because they have an unshakable faith in a certain deity or a polytheistic tradition, but because everyone around them belongs to the same churches, mosques, temples, covens, and what have you. We’re social mammals and a sense of community is important to us, which could explain one of the reasons religion evolved as a codified form of natural behaviors and radiated into all of its different forms. Were you to look at a map of religious affiliations, you’d find regions of Christians, Muslims, or Buddhists based on geopolitics and cultural hubs rather than a mish-mash of random religions scattered all across the world. A lot of the faithful go to church because it’s expected of them by members of their community, and in places where church attendance is considered less important, fewer people go.

Another important issue is that we’re biologically inclined towards a vague feeling of belief in something that is greater than ourselves, which I would suppose is one of the brain’s adaptations towards living as a social mammal. Having to work with others to achieve big goals requires one to view oneself as part of something much bigger and more important than your immediate needs and wants. Now, we’re not predisposed towards any particular faith per se, that’s something that’s usually up to the community around us or a community that we’d like to join, but we can change the level of predisposition towards belief with brain surgery and aggressive, intensive, nonstop indoctrination, often conducted for very selfish reasons. Simply put, we’re wired for some kind of belief that there’s more to the world than just us, and have a need to propagate our views and opinions because we often end up investing so much time and effort into them. Again, this doesn’t make any religious view more valid than another, and certainly doesn’t mean that the view in question is even correct, since much of religion is built on strong personal opinions and confirmation biases rather than reproducible evidence. If we want evidence we can test in a lab and answers to big questions about nature, we have science. But there is that nagging sense of wanting to be a part of something big, a sense that has to be satisfied.

For atheists, neither desire has gone away. We try to form communities and gather into groups that share the same ideas, and we see ourselves as part of a vast universe, privileged to be here by chance and evolution. I would argue that we have good reason for how we see ourselves in the grand scheme of things, and we have plenty of evidence to back up our position. But the point is that we still need to satisfy our basic needs to join a community and play a part in something bigger than ourselves. Even if one day we manage to live as long as we want and never have to fear death, these urges won’t go away and we’ll find something to replace existing religions. We may turn the idea of ancient astronauts and alien gods into a new, mainstream faith, although I’m really hoping that we won’t adopt the Scientology variant of it. We would also have to deal with those who would refuse to do whatever it would take to become immortal, protesting the very idea as an abomination since their religion tells them we have to die at some point. But no, religion won’t go away just because we may one day have the privilege of unlimited lifespans through cutting edge medicine and technology.

Also, contrary to what James says, the odds of the first immortal being alive today are infinitesimal to nil. Life extension may well thrive eventually, and declaring it dead on arrival is premature at best, but the only place where humanity is even close to unlimited life is in Kurzweil’s fantasies and numerological charts, so invoking his ideas for some sort of rhetorical blow to religion is simply not sound in any way, shape or form. Actually, it’s countering religious tenets with almost pseudo-religious techno-utopianism based on wishful thinking and a belief that technology will solve all of our problems according to a timeline we find convenient. Really, there’s a reason why a number of prominent transhumanists are pulling back from Ray and his prognostications, and James’ education in computer science should’ve rung a few alarm bells when he read the books…

In this month’s issue of IEEE Spectrum, digital prophet Ray Kurzweil is graded on the accuracy of his grand predictions and his powers of foresight in the tech world, and the results are kind of a mixed bag. While on the one hand, he really is plugged in and has a very good feel for where technology is going, when he tries to fill in the details, he tends to go off the rails into grandiose declarations that never actually manifest themselves in the real world. Actually, as someone also in the tech world, I’d say that his detailed predictions usually talk about things we could imagine ten to thirty years ahead of the time to which he tries to pin them, though it’s a difficult task to predict what’s going to be the new tech craze several decades in the future since that depends on user preference more than anything. Just because you give users a new feature doesn’t mean they’ll ever use it, or if they do, that they’ll use it the way you want. For a recent example, take the Xbox Kinect. Intended to be just a motion controller for a gaming system, it’s now also being used as a cheap, high quality LIDAR.

But that didn’t stop Ray from boldly claiming in 2005 that by today, computers as we know them would be just a collection of different devices embedded in our clothing, eyes, and phones. Since you’re probably reading this post using a computer, I’m thinking that he was wrong. Now, oddly enough, the technology to turn your phone into a fully fledged substitute for a laptop and turn glasses into a screen does exist. It just hasn’t caught on because we’re so used to fully fledged keyboards and computer monitors. Give it a few decades and then we might talk about replacing the office PC with a high powered cell phone equipped with a holographic keyboard, and the monitor with glasses. Computers themselves, however, will never go away because we can make them a lot more powerful by using processors that future smart phones won’t be able to handle without a fan or liquid coolant. Again, the technology doesn’t have very far to go, but users are creatures of habit, and once they get used to something, it takes a long time for them to change their ways. And yet, despite all the factors we just covered, Ray still insists that he was right because we have lots of smart phones in the marketplace, and he really didn’t say that computers would literally disappear by this year. Even though he actually did.

And this happens with virtually all of Kurzweil’s big predictions. Boldly ignoring what in computer science-speak is referred to as human factors, i.e. how people use or want to use a particular technology, he starts off with a very grounded, almost innocuous idea, then tries to take it to unjustified heights. Then, after time passes and only his most conservative and grounded musings come to pass while his fantasies remain as out of reach as they were when he indulged them in his books and lectures, he simply won’t admit he’s wrong. According to his own estimates, he was only wrong once and has a 95% accuracy rate in looking into tech’s future, a score achieved by giving himself a lot of leeway and very generously reinterpreting and rephrasing his predictions after the fact, as the article in IEEE Spectrum documents with some of his high-brow proclamations about computing, AI, and the web. He’s also not shy about using definitions most techies would find bizarre just to make his point, referring to software written, maintained, and periodically altered by humans in response to new business models or old problems as budding artificial intelligence in action. You don’t have to ask an AI expert to know that’s a really, really big stretch. In fact, Ray is so bold and aggressive in his predictions, and so loath to admit that he’s wrong, that scholarly transhumanists are actually trying to distance themselves from him, especially after his asinine claim that the human brain could be reverse-engineered in a million lines of code.

It’s predictions like these, from a man who I’m constantly told is a master researcher who puts hours and hours of deep study into each and every idea he voices, that make it hard to take him seriously. After his long track record of developing software, he should know full well that a count of lines of code is usually a meaningless measure unless you’re just trying to very roughly scale the task of rewriting a program. But even after such glaring errors in his public statements, Ray is undeterred and concludes that he’s correct about virtually every one of his futuristic visions. Like all those who claim to be prophets, he puts just a small kernel of truth into his meditations (I’d say no pun intended, but eh, why not?), and those very reserved and almost obvious little notes give his fans something to latch onto so they can adjust what he said after the fact. It’s a trick used throughout time, constantly exploited by psychics and astrologers to retain their audiences. Kurzweil is a very smart and very educated man rather than some New Age quack promising immortality through opening your chakra with magic berries or drinking an amino acid cocktail, and he really should know better than to use the tactics of woo-meisters in a bid to present himself as a visionary who can read tomorrow’s headlines.

Whenever the person most associated with transhumanism and futurism in the media, Ray Kurzweil, makes another grand pronouncement and finds himself the target of critiques by skeptics and experts, a very interesting thing happens on a number of Singularitarian and transhumanist blogs. We're diplomatically told to keep in mind that Ray is very smart, very well read, and always does his homework, but that his predictions can be a little too aggressive. Then, we're just as politely given a much more realistic estimate and asked to consider how far technology has come today and how likely it is we can build on it in the future to do all kinds of amazing things. The underlying message I've gotten from several prominent Singularitarians like Vassar and Anissimov, and transhumanists like my Skeptically Speaking counterpart, George Dvorsky, seems to go a little like this: there's no need to focus solely on Ray's proclamations, and the media shouldn't either.

Just to drive that idea home, last week George made it a point to list some prominent, sober transhumanists and futurists on our second episode on the subject. And if you missed the show, he posted the complete list on his blog along with a very unambiguous thesis: paying far too much attention to Kurzweil is distracting people from a community of serious thinkers and researchers interested in the topic. Michael Anissimov was quick to put up a link and reiterate the message that there's much more to high tech futurism than Kurzweil's latest sound bite, and that it would be a good idea to make note of those cited by George. And you know what? They are absolutely right. While many people today get their first exposure to Singularitarian thought from Ray and his books, he didn't create the concept. The idea that technological advances accumulating at an exponential rate will, at some point in the future, produce something profound enough to change the world as we know it was the brainchild of computer scientist and sci-fi writer Vernor Vinge, more specifically of a vague paper he wrote for NASA in 1993. Kurzweil simply capitalized on these ideas and wrote several books.

Now, of course, whenever Ray comes out on stage and drops a profound whopper like his recent claim about reverse-engineering the human brain in a million lines of code according to his own obscure numerology, an exercise comparable to scanning an alphabet into a computer and reproducing the Oxford Dictionary, there is a need to defend him to some extent, lest the core idea of trying to simulate the human brain on a computer at some point in the future be discarded as ridiculous. So the futurists and transhumanists offer several vague and murky positives, then proceed to correct him, because they know that neurologists are actually busy trying to reconstruct the human brain in supercomputers so they can look for potential treatments for Alzheimer's, tumors, and ideas for how to repair brain damage. It's almost an exercise in damage control as a media hound who managed to become a symbol of their movement to the public at large goes around promising a path to immortality by 2045, recommends taking hundreds of supplements a day to "reprogram the body," and essentially runs his own version of transhumanism, Singularity® Inc., as a lucrative business. The more coverage he gets in the media, the more books, alkaline water, vitamins, and lectures will be bought, so it makes perfect sense for him to keep courting the cameras.

But on the other side of the movement are sober scholars who are genuinely interested in what's going on in the world of cutting-edge technology and the potential trends and ideas it points to. They're the ones you're going to meet at Singularity Summits, just as Skepchick's Sam Ogden discovered for himself this year. As Ray continues his streak of grabbing headlines with bold, often unrealistic claims that seldom show the nuanced, profound command of complex technical topics his fans credit him with, there's a certain urge among the more realistic and far less known Singularitarians to make their voices heard over the sensationalistic proclamations of a media messiah prophesying the Nerd Rapture. And that would be a good thing, because it seems that nowadays, Ray needs Singularitarians and transhumanists a lot more than they need him. After all, to them it's not about making money and hoping to live forever, but simply about exploring what's possible and rejecting the ideas that we should be afraid of technical progress, or that we are to surrender to the forces of nature while making philosophical and epistemological excuses for doing so.

Ray Kurzweil’s recent prognostications about reverse-engineering the human brain within unrealistically short time frames have been making the rounds on the web, inspiring a web comic parody, a rant from PZ calling him the Deepak Chopra for the tech-minded, and, of course, a rebuttal from yours truly. Now, Ray responds with a clarification, claiming that he’s been taken out of context and that his talk was over-simplified, and offering a more detailed and nuanced version of his predictions. Okay, fair enough. The media does get ahead of itself, and tech blogs like Gizmodo are notorious for tripping over a story before they learn enough details, or its real-world significance. But the problem with Ray’s response is that even with some caveats and additional details, he’s still making significant mistakes about the brain and how it works, as well as what it takes to accurately simulate an organ this complex, and you bet I’m going to take issue with his arguments again.

Ray’s first correction is that he expects the brain to be reverse-engineered by 2030 rather than 2020, which is really not much of an improvement for those who don’t subscribe to the numerology of exponential technological and evolutionary advancement he so passionately advocates. According to him, we accomplish so much more with each passing year and decade that pushing the date back basically gives neurologists and computer scientists enough leeway, and then some, within his Singularity schedule. But how much more time? What’s the formula being used to measure how much more we’ll accomplish in a given time span compared to the previous one? Ray says those who doubt his predictions don’t understand his exponential curve, and that may be true. After all, it’s his creation, and he picked all the arbitrary milestones and did his own calculations. And on top of his attempts to play the tech world’s Nostradamus, he also makes major mistakes about biology, like this one…

The amount of information in the genome (after lossless compression, which is feasible because of the massive redundancy in the genome) is about 50 million bytes (down from 800 million bytes in the uncompressed genome). It is true that information in the genome goes through a complex route to create a brain, but the information in the genome constrains the amount of information in the brain prior to the brain’s interaction with its environment.

Let’s see, where do we start with this one? Remember that in my previous post on the subject, I noted that the redundancy is there due to natural selection, and that simply filtering out redundancies isn’t going to help you retrieve only the information you need to recreate the instructions for a brain’s growth and bottom-up development. Also, how does Ray get the 800 million byte figure? If you store each nucleotide in your DNA as a character, you’ll have to allocate two bytes per nucleobase in memory. So with 6 billion nucleobases, you’re looking at roughly 12 billion bytes, or about 11.2 GB of data that you’d need to persist, and that will still come to hundreds of megabytes even after some serious compression. Why do you think experts in genomics are calling for supercomputers to analyze genetic data? It’s not because it’s easy to parse the immense amount of information, and those “redundancies,” like repeated genes and STRs, are actually rather important to growth and development in ways we still don’t fully understand, because we lack both the biological knowledge and a really good algorithm for parsing nucleotide sequences in an efficient and practical manner.
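For what it’s worth, the arithmetic behind both figures is easy to lay out. Here’s a quick sketch of the two back-of-the-envelope estimates, with the caveat that the haploid base count is my assumption about where Kurzweil’s 800 million bytes comes from:

```python
# Back-of-the-envelope genome storage estimates.
DIPLOID_BASES = 6_000_000_000   # ~6 billion nucleobases in a diploid human genome
HAPLOID_BASES = 3_200_000_000   # assumed haploid count behind Kurzweil's figure

# Storing each base as a two-byte character (e.g. in a UTF-16 string):
naive_bytes = DIPLOID_BASES * 2        # 12,000,000,000 bytes
naive_gb = naive_bytes / 1024**3       # roughly 11.2 GB

# Packing each base into 2 bits (four possible nucleotides fit in 2 bits):
packed_bytes = HAPLOID_BASES * 2 // 8  # 800,000,000 bytes

print(f"naive: {naive_gb:.1f} GB, packed: {packed_bytes / 1_000_000:.0f} MB")
```

The gap between the two numbers comes down entirely to the encoding you pick, which is exactly why throwing byte counts around without saying how the bases are stored tells you very little.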

With that in mind, let’s move on to Ray’s bizarre claim that the genome limits the amount of information in the brain prior to it actually growing and developing. He keeps using the word information, but I wonder what he’s actually talking about when he does. In computer science, information is anything to be stored or computed in some way, shape, or form. That information comes from a database, or user input, or one of the processes of the program with which we’re working. But in biology, the genome guides the brain through the key stages of how it may form based on environmental cues and genetic triggers, and those triggers and cues begin as soon as cells start to divide into a new embryo. Where’s the limit of the information? What do you gain by knowing a string of nucleobases and the amino acids they encode when it isn’t actually a strict blueprint in the sense we’d expect from a program? And what information could be in the brain when that brain hasn’t even started to form and create connections? That process is what makes our brains what they are, and it should be the focus, instead of how we’re going to take a shortcut in our efforts to reverse-engineer them by ignoring their redundancies…

For example, the cerebellum contains a module of four types of neurons. That module is repeated about ten billion times. The cortex, a region that only mammals have and that is responsible for our ability to think symbolically and in hierarchies of ideas, also has massive redundancy. It has a basic pattern-recognition module that is considerably more complex than the repeated module in the cerebellum, but that cortex module is repeated about a billion times. There is also information in the interconnections, but there is massive redundancy in the connection pattern as well.

You know, for someone who says he’s studied this topic for four decades and is supposed to be up on virtually everything new in neurology and computer science, it’s pretty amazing that Ray is suggesting that all of those redundant connections could be ignored to get the structure of the brain and derive how it works on a neuron-by-neuron level. See, those redundancies are associated with higher cognition, and every living thing has a certain degree of redundancy as dictated by evolutionary processes. So what Ray is suggesting here is a terrific plan not to study what actually allows us to develop self-awareness and intellect, something we could really only learn by studying the entire growth and development of the brain from day one. There are no shortcuts here, and if Ray actually took the time to follow what neurologists and biologists say about these redundancies and how important their linking seems to be for high-level and complex brain functions, he would know that. Along the way, he also would’ve realized the true scope of the challenge. But then again, he really wants to become immortal, so facing our limitations would also mean facing his fears. And he’s just not ready to do that.

addendum 08.24.2010: Okay, so it looks like I missed that Ray was thinking of a two-bit data type for the base pairs, which really would yield an appreciably smaller file. That said, his reasoning behind the million lines of code it would take to simulate a human brain is still wrong (please see the comments for an elaboration as to why), and considering that we would only capture a sequence of amino acids, we would still need far more data. In fact, as biologists responding to Kurzweil have pointed out, you can’t derive a brain from the genome, and we can’t even derive complete micro-structures from proteins yet because of all the complex interactions we have to take into account, interactions that depend on the development and environment of the organism rather than its genome. Some of Ray’s defenders say that he’s not actually proposing to derive the brain from protein sequences, but if that’s the case, why even bother with DNA at all? The genome encodes which amino acids go into which proteins, so it’s how those proteins interact that’s of vital importance if you want to understand how a brain is built during development, not just which amino acids are encoded, with no additional context.

[ illustration by Goro Fujita, spotted on io9 ]

The prophet and general of the Technological Singularity, Ray Kurzweil, has come down from his mountains of supplements, pausing from his musings on how technology could never, ever harm us and his plan for immortality in three easy steps, to deliver another prediction. By the year 2020, he proclaims, our brains will be reverse-engineered in their entirety, reduced to just a million lines of code. As per his usual mantra, any missing technology or knowledge needed to make this happen will be provided on schedule by the almighty exponential curve of progress, his arbitrary chart of technocratic quasi-Lamarckism. And the reasoning behind the theoretical framework required for this sort of bold claim is almost childishly simplistic. Slowly but surely, Kurzweil is becoming a priest of a utopian futurism rather than an ambitious visionary, and his proclamations are turning more and more into a comic book caricature of computer science, lacking any regard for even basic biology.

So what exactly grounds Kurzweil and his supporters’ claim that a million lines of code would render an entire human brain? Considering that a decent piece of image editing software takes several million lines of code to program, we’re talking about a portable, digital brain whose instructions could fit on an average thumb drive a hundred times over. According to Kurzweil, our genome has all the instructions for how our bodies build a brain. Compress the information in our DNA down to 50 MB by removing redundancies and unnecessary clutter, assume that about half of that describes the brain, do a little basic numerology relating a line of code to a certain number of bits and bytes needed to execute it, and presto! You have a brain in a million lines of code or so. This is what computer scientists classify under the highly technical term “bupkis” and discard as a product of an inflamed imagination. But why, you may ask, is this prediction not even wrong, and where exactly does it go astray? The answer? Just about everywhere.
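To show just how flimsy that chain of reasoning is, the whole derivation fits in a few lines. Note that the bytes-per-line figure below is my assumption, picked only because it makes the claimed numbers come out; it’s not a figure from Kurzweil’s actual talk:

```python
# The "million lines of code" derivation, made explicit.
compressed_genome_bytes = 50_000_000  # ~50 MB after "lossless compression"
brain_fraction = 0.5                  # assume half the genome describes the brain
bytes_per_line_of_code = 25           # assumed conversion factor (hypothetical)

lines_of_code = compressed_genome_bytes * brain_fraction / bytes_per_line_of_code
print(f"{lines_of_code:,.0f} lines of code")  # -> 1,000,000 lines of code
```

Every step here is an unexamined assumption stacked on the last one, which is the entire problem: change any of the three inputs even slightly and the headline number moves by orders of magnitude.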

First and foremost, let’s consider the idea that the design for our brain takes up half our DNA and is stored in certain genes we could just decipher and use to build a perfect digital replica. This conception of how genes work to assemble the body might have been passable in the 1970s or so on the pop science circuit, but today, many of us are keenly aware that this is really not the case. Genes provide probabilities and potentialities, and they change due to mutations, epigenetics, and environmental effects. How the brain grows, develops, and ages over time is what determines how it will ultimately wire itself. Grabbing a genetic blueprint sounds like an easy solution proposed by someone unaware of the scope of the actual problem. In reality, just knowing a sequence of base pairs participating in the development of the nervous system is only a small part of a really big and complex story. You also need to know the developmental sequence, the role of environmental effects, and all the intricacies of how neurons come together, start firing, and shape a new mind. All that knowing how the genes are laid out will do is let you list, in order, the amino acids and proteins they encode.

Secondly, when Kurzweil talks about removing redundancies in the human genome, does he realize that he’d be messing around with potentially important regulators that might play a role in development? Sure, we have quite a bit of leftover junk in our DNA from our evolutionary past. However, would you trust someone like Ray to decide what looks important and what doesn’t? And on top of that, some of these seemingly useless genes could get an encore, getting re-activated and serving a new and useful function, affecting the development of neurons and how they connect to each other. Biological systems are very fluid, and you can’t simply treat something that we’re not currently using as a simple matter of garbage collection, like a variable you declared and initialized but never actually used. So far, what we have from Kurzweil is a plan to read a genome, map out the parts that play a role in the development of the nervous system and the brain, discard anything he doesn’t see as all that important or necessary, and then somehow turn the end result into a virtual brain, all without knowing the approximate bottom-up developmental sequence that biologists are still trying to figure out.

Finally, I’m just curious: since when has Ray become an expert in artificial intelligence? I haven’t seen papers or presentations from him on the matter, other than monotone incantations of his self-indulgent chart plotting the exponential advancement of life from amoeba to the Supreme AI of 2045 and the subsequent Rapture of High Tech. Come on, Gizmodo, don’t go down the Daily Galaxy’s path and assign superfluous titles to those who lack the advertised expertise. Yes, Ray created voice and optical recognition systems, and I’m sure he is, and should be, very proud of them. But as I try to work on real world AI issues like machine vision, I’ve found zero papers on the subject from anyone at the Singularity Institute. Same goes for those who work on natural language processing and evolutionary behaviors. In fact, the most significant Singularity-endorsed paper I’ve read barely even mentioned machine intelligence by design. Could we do Ray a favor and have a little talk with him to explain why all his grandiose declarations and claims of expertise in an area of computer science where his involvement is merely rhetorical are turning him into a sideshow barker of futurism? And while we’re at it, maybe tell Gizmodo not to breathlessly repeat his asinine claims?