come on ray, now it’s just getting embarrassing

August 18, 2010

The prophet and general of the Technological Singularity, Ray Kurzweil, has come down from his mountains of supplements, pausing from his musings on how technology could never, ever harm us and his plan for immortality in three easy steps, to deliver another prediction. By the year 2020, he proclaims, our brains will be reverse-engineered in their entirety, reduced to just a million lines of code. As per his usual mantra, any technology or knowledge still missing will be supplied by the almighty exponential curve of progress, his arbitrary chart of technocratic quasi-Lamarckism, and the reasoning behind this sort of bold claim is almost childishly simplistic. Slowly but surely, Kurzweil is becoming a priest of utopian futurism rather than an ambitious visionary, and his proclamations are turning more and more into a comic book caricature of computer science, lacking any regard for even basic biology.

So in what, exactly, do Kurzweil and his supporters ground the claim that a million lines of code would render an entire human brain? Considering that a decent piece of image editing software takes several million lines of code, we’re talking about a portable, digital brain whose instructions could easily fit on an average thumb drive a hundred times over. According to Kurzweil, our genome holds all the instructions for how our bodies build a brain. Compress the information in our DNA down to 50 MB by removing redundancies and unnecessary clutter, assume that about half of that describes the brain, do a little basic numerology relating a line of code to the bits and bytes needed to encode it, and presto! You have a brain in a million lines of code or so. This is what computer scientists classify under the highly technical term “bupkis,” and discard as the product of an inflamed imagination. But why, you may ask, is this prediction not even wrong, and where exactly does it go astray? The answer? Just about everywhere.
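
To show just how flimsy this numerology is, the whole calculation fits in a few lines. A quick sketch, with the 25-bytes-per-line figure being my own assumption, chosen only because it’s roughly what makes the numbers land on a million lines, which tells you how much heavy lifting that one constant does:

```python
# A minimal sketch of the back-of-envelope math behind the claim. The
# 25-bytes-per-line figure is an assumption, picked only because it is
# roughly what makes the numbers come out to a million lines.

BASE_PAIRS = 3_000_000_000  # approximate length of the human genome
BITS_PER_BASE = 2           # A, C, G, T encode to 2 bits apiece

raw_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
print(f"raw genome: {raw_mb:.0f} MB")  # ~750 MB before any compression

compressed_mb = 50.0   # Kurzweil's redundancy-free estimate
brain_share = 0.5      # his assumption: half of it describes the brain
bytes_per_line = 25    # assumed average length of a line of code

lines = compressed_mb * 1e6 * brain_share / bytes_per_line
print(f"'brain' in lines of code: {lines / 1e6:.1f} million")  # ~1.0
```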

First and foremost, let’s consider the idea that the design for our brain takes up half our DNA and is stored in certain genes we could just decipher and use to build a perfect digital replica. This conception of how genes assemble the body might have been passable on the pop science circuit in the 1970s, but today, many of us are keenly aware that this is really not the case. Genes provide probabilities and potentialities, and their expression shifts with mutations, epigenetic marks, and environmental effects. How the brain grows, develops, and ages over time determines how it will ultimately wire itself. Grabbing a genetic blueprint sounds like an easy solution proposed by someone unaware of the scope of the actual problem. In reality, knowing the sequence of base pairs that participate in the development of the nervous system is only a small part of a very big and complex story. You also need to know the developmental sequence, the role of environmental effects, and all the intricacies of how neurons come together, start firing, and shape a new mind. All a map of the genes will let you do is list, in order, the amino acids and proteins they encode.

Secondly, when Kurzweil talks about removing redundancies in the human genome, does he realize that he’d be messing around with potentially important regulators that might play a role in development? Sure, we have quite a bit of leftover junk in our DNA from our evolutionary past. However, would you trust someone like Ray to decide what looks important and what doesn’t? And on top of that, some of these seemingly useless genes could get an encore, re-activated to serve a new function that affects how neurons develop and connect to each other. Biological systems are very fluid. You can’t treat a sequence we’re not currently using as a simple matter of garbage collection, like a variable you declared and initialized but never actually read. So far, what we have from Kurzweil is a plan to read a genome, map out the parts that play a role in the development of the nervous system and the brain, discard anything he doesn’t see as important or necessary, and then somehow turn the end result into a virtual brain, all without knowing the bottom-up developmental sequence that biologists are still trying to figure out.
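
To spell out why the garbage collection analogy cuts against him, here’s a toy example of my own, not anyone’s actual tooling: in software, an unused variable is provably dead, which is exactly the proof a genome never gives you.

```python
# Toy illustration of the analogy above: in code, an unused value is
# provably dead, and a linter or optimizing compiler can discard it
# without ever changing the program's behavior.

def compute_total(prices):
    tax_rate = 0.08        # assigned but never read; flagged as dead code
    return sum(prices)     # deleting tax_rate changes nothing, ever

print(compute_total([19.99, 5.25]))  # 25.24
```

A stretch of DNA offers no such guarantee: a sequence that looks silent today can regulate something else, or be re-activated later in development, so “currently unused” never means “safe to delete.”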

Finally, I’m just curious: since when did Ray become an expert in artificial intelligence? I haven’t seen papers or presentations from him on the matter, other than monotone incantations of his self-indulgent chart plotting the exponential advancement of life from amoeba to the Supreme AI of 2045 and the subsequent Rapture of High Tech. Come on, Gizmodo, don’t go down the Daily Galaxy’s path and assign superfluous titles to those who lack the advertised expertise. Yes, Ray created voice and optical recognition systems, and I’m sure he is, and should be, very proud of them. But as someone trying to work on real-world AI problems like machine vision, I’ve found zero papers on the subject from anyone at the Singularity Institute. The same goes for those who work on natural language processing and evolutionary behaviors. In fact, the most significant Singularity-endorsed paper I’ve read barely even mentioned machine intelligence by design. Could we do Ray a favor and have a little talk with him about why all his grandiose declarations, and his claims of expertise in an area of computer science where his involvement is merely rhetorical, are turning him into a sideshow barker of futurism? And while we’re at it, maybe tell Gizmodo not to breathlessly repeat his asinine claims?

  • Tartessos

    “…pausing from his musings on how technology could never, ever harm us…”

    I guess you did not read “The Deeply Intertwined Promise and Peril of GNR” (you know, Chapter 8 of _The Singularity Is Near_). Do you just rant about things you don’t like without gathering actual knowledge about what you are criticizing? It seems like it.

    “Secondly, when Kurzweil talks about removing redundancies in the human genome, does he realize that he’d be messing around with potentially important regulators that might play a role in development?”

    Do you know what Kurzweil actually said about removing redundancies in the human genome? Your rant seems to indicate you don’t. Feel free to prove me wrong, though.

  • Greg Fish

    Do you just rant about things you don’t like without gathering actual knowledge about what you are criticizing?

    Do you actually have an argument, or do you just want to keep quoting Kurzweil and deferring to Moore’s Law in defense of his exponential chart? Do you understand that Moore’s Law only applies to the price and computational power of computer chips, not to all technology, that it’s actually a rather informal standard to which chip makers artificially held themselves thanks to an Intel marketing gimmick, and that we’re now reaching the limits of the concept?
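
    And “reaching the limits” isn’t hand-waving; the arithmetic is quick. A rough sketch, where both starting figures are approximations I picked for illustration:

    ```python
    # Back-of-envelope arithmetic: if transistor density doubles every
    # two years, linear feature size shrinks by a factor of sqrt(2) per
    # period. Starting figures are rough approximations, not measurements.

    import math

    feature_nm = 32.0   # roughly the process node being shipped around 2010
    atom_nm = 0.2       # diameter of a silicon atom, give or take
    year = 2010

    while feature_nm > atom_nm:
        feature_nm /= math.sqrt(2)  # density doubles -> length scale / sqrt(2)
        year += 2

    print(f"features hit atomic scale around {year}")  # ~2040 on these numbers
    ```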

    Oh, and by the way, there’s a link to a post in which he’s quoted dismissing the notion that someone could use technology to harm others at the opening of the Singularity University classes. His argument? As soon as someone invents harmful technology, someone else will invent a countermeasure. If it’s too much of a bother to check the link for yourself, here you are. Paragraphs five and six.

    Do you know what Kurzweil actually said about removing redundancies in the human genome? Your rant seems to indicate you don’t.

    Your outburst seems to indicate you weren’t paying attention to the part of his plan my objection applies to. If you’re going to try to reconstruct the blueprint for the brain from DNA, you can’t just delete the digital representation of genes that seem redundant, because they may play a role in a developmental sequence.

  • http://www.meetup.com/london-futurists Richie

    The older they get, the wackier the predictions!

  • http://softwetware.blogspot.com Chris

    To be fair to Kurzweil, OCR *was* considered an AI problem back when he worked on it. So was the piano-concerto generator he wrote as a teenager. Lots of problems were considered AI until we got close to solving them — chess, translation, face recognition.

    http://en.wikipedia.org/wiki/AI_effect

    And if your complaint is that he hasn’t done any work on *general* AI that isn’t purely theoretical and speculative… well, neither has anyone else.

  • Greg Fish

    OCR *was* considered an AI problem back when he worked on it.

    Well, technically it still is, because you generally train an ANN to recognize the letters after a little basic processing is done to isolate each one with a line detection algorithm.
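
    If you want to see what that looks like in practice, here’s a minimal sketch using scikit-learn’s bundled digits dataset standing in for scanned text; the dataset comes pre-segmented, so only the ANN half of the pipeline is shown:

    ```python
    # A minimal sketch of the pipeline described above. Real OCR would
    # first segment the page into individual glyphs (the line detection
    # step); this dataset is already segmented into isolated characters.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()  # 8x8 grayscale images of isolated characters
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    # A small feed-forward network trained to classify each character.
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    net.fit(X_train, y_train)

    print(f"test accuracy: {net.score(X_test, y_test):.2f}")  # ~0.97
    ```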

    And if your complaint is that he hasn’t done any work on general AI that isn’t purely theoretical and speculative…

    He worked on pattern recognition problems which do tie into AI in general, true, but my complaint is not that he doesn’t meet my definition of AI work. It’s that he hasn’t produced any work that shows familiarity with new ideas and modern directions in the field.