come on ray, now it’s just getting embarrassing

The Prophet of the Singularity returns with a hopeful sermon on our impending computer-aided immortality.

The prophet and general of the Technological Singularity, Ray Kurzweil, has come down from his mountain of supplements, pausing from his musings on how technology could never, ever harm us and his plan for immortality in three easy steps, to deliver another prediction. By the year 2020, he proclaims, our brains will be reverse-engineered in their entirety, reduced to just a million lines of code. As per his usual mantra, any missing technology or knowledge needed to make this happen will be supplied by the almighty exponential curve of progress, that arbitrary chart of technocratic quasi-Lamarckism, and the reasoning behind this bold claim is almost childishly simplistic. Slowly but surely, Kurzweil is becoming a priest of utopian futurism rather than an ambitious visionary, and his proclamations are turning more and more into a comic book caricature of computer science, one that lacks any regard for even basic biology.

So in what exactly do Kurzweil and his supporters ground the claim that a million lines of code could render an entire human brain? Considering that a decent piece of image editing software takes several million lines of code to program, we're talking about a portable, digital brain whose instructions could easily fit on an average thumb drive a hundred times over. According to Kurzweil, our genome has all the instructions for how our bodies build a brain. Compress the information in our DNA down to 50 MB by removing redundancies and unnecessary clutter, assume that about half of that describes the brain, do a little basic numerology relating a line of code to a certain number of bits and bytes needed to express it, and presto! You have a brain in a million lines of code or so. This is what computer scientists classify under the highly technical term "bupkis" and discard as the product of an inflamed imagination. But why, you may ask, is this prediction not even wrong, and where exactly does it go astray? The answer? Just about everywhere.
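To see how flimsy the numerology is, here is a sketch of the back-of-envelope arithmetic behind the claim. The 50 MB figure and the "half is the brain" assumption are Kurzweil's; the bytes-per-line figure is my own illustrative guess, chosen simply because it makes the numbers land on a million:

```python
# Reconstructing the "brain in a million lines of code" arithmetic.
# Every constant here is an assumption, not an established fact.

GENOME_BASE_PAIRS = 3.2e9          # approximate human genome size
raw_bits = GENOME_BASE_PAIRS * 2   # 2 bits can encode one of 4 bases
raw_megabytes = raw_bits / 8 / 1e6 # ~800 MB uncompressed

COMPRESSED_MB = 50                 # Kurzweil: genome "compresses" to ~50 MB
BRAIN_FRACTION = 0.5               # Kurzweil: assume half describes the brain
BYTES_PER_LINE = 25                # assumed average size of a line of code

lines_of_code = COMPRESSED_MB * 1e6 * BRAIN_FRACTION / BYTES_PER_LINE
print(f"raw genome: ~{raw_megabytes:.0f} MB")
print(f"implied 'brain program': ~{lines_of_code:,.0f} lines")
```

Note that the entire argument rests on treating compressed genome size as equivalent to program length, which is exactly the leap the rest of this post takes apart.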

First and foremost, let's consider the idea that the design for our brain takes up half our DNA and is stored in certain genes we could simply decipher and use to build a perfect digital replica. This conception of how genes assemble the body might have been passable on the pop science circuit of the 1970s, but today, many of us are keenly aware that this is really not the case. Genes provide probabilities and potentialities, and they change through mutations, epigenetics, and environmental effects. How the brain ultimately wires itself is determined by how it grows, develops, and ages over time. Grabbing a genetic blueprint sounds like an easy solution proposed by someone unaware of the scope of the actual problem. In reality, knowing a sequence of base pairs that participates in the development of the nervous system is only a small part of a very big and complex story. You also need to know the developmental sequence, the role of environmental effects, and all the intricacies of how neurons come together, start firing, and shape a new mind. All that knowing how the genes are laid out will do is let you list, in order, the amino acids and proteins they generate.

Secondly, when Kurzweil talks about removing redundancies in the human genome, does he realize that he'd be messing around with potentially important regulators that may play a role in development? Sure, we carry quite a bit of leftover junk in our DNA from our evolutionary past. However, would you trust someone like Ray to decide what looks important and what doesn't? And on top of that, some of these seemingly useless genes could get an encore, re-activating to serve a new and useful function and affecting how neurons develop and connect to each other. Biological systems are very fluid. You can't simply treat something we're not currently using as a matter of garbage collection, like a variable you declared and initialized but never actually used. So far, what we have from Kurzweil is a plan to read a genome, map out the parts that play a role in the development of the nervous system and the brain, discard anything he doesn't see as important or necessary, then somehow turn the end result into a virtual brain, all without knowing the approximate bottom-up developmental sequence that biologists are still trying to figure out.

Finally, I'm just curious: since when has Ray become an expert in artificial intelligence? I haven't seen papers or presentations from him on the matter, other than monotone incantations of his self-indulgent chart plotting the exponential advancement of life from amoeba to the Supreme AI of 2045 and the subsequent Rapture of High Tech. Come on, Gizmodo, don't go down the Daily Galaxy's path and assign superfluous titles to those who lack the advertised expertise. Yes, Ray created voice and optical recognition systems, and I'm sure he is, and should be, very proud of them. But as someone trying to work on real-world AI issues like machine vision, I've found zero papers on the subject from anyone in the Singularity Institute. Same goes for those who work on natural language processing and evolutionary behaviors. In fact, the most significant Singularity-endorsed paper I've read barely even mentioned machine intelligence by design. Could we do Ray a favor and have a little talk with him about why all his grandiose declarations and claims of expertise in an area of computer science where his involvement is merely rhetorical are turning him into a sideshow barker of futurism? And while we're at it, maybe tell Gizmodo not to breathlessly repeat his asinine claims?

# tech // artificial intelligence / computer science / ray kurzweil / technological singularity
