
michael vassar vs. weird things, round three

Michael Vassar is back for the third and final installment of the Weird Things Singularity debate.
Illustration from Cyberpunk 77

And so, the final episode of my debate with Singularity Institute president Michael Vassar is here. This post builds on the themes started in the first and second installments and tackles the issues of logistics, feasibility and computing power. While the computer nerd in me would love to debate these things all day, I'm going to stick to the important points here, starting with the one I consider the most crucial: how much detail is missing from the visions outlined by Singularitarians, and why it matters.

Greg, you claim on multiple occasions that Kurzweil “doesn’t bother with the details.” This seems like a poor characterization of a person who’s written a copiously referenced six or seven hundred page book spelling out his ideas in far more detail than most readers of popular science books want. Of course he doesn’t spell out each of the hundreds of thousands of patentable ideas that will probably go into making his vision a reality. If he could do that we would already be there.

Yes, if Ray could really create an immortal cyborg out of a human, he would be in line for the Nobel Prize, and all of his treatments, books and inspirational speeches about creating these technologies to advance human society would be completely unnecessary. But at the same time, the length of a book, or how thick its reference pages are, doesn't necessarily make for detail. Kurzweil's books focus on exhaustively explaining some of the current efforts relevant to his ideas and extrapolating them into the future, leaving the functional requirements of the implementation rather vague and fuzzy.

Now this is all well and good for a popular science book. Hell, I've glossed over some details myself in posts for the sake of brevity, so I certainly can't blame Ray for doing the exact same thing. However, since my area of technical expertise deals with turning ideas into requirements and design specifications, what works fine in a popular science book falls far short of what's required for serious development projects. It's one thing to talk about an idea you've been mulling over in your head. Investing money and effort into turning that idea into a real project is a very different kettle of fish. And considering the sheer amount of metaphysics that seems to go into the final stages of his plan, there will be limits to what technology can do for him.

We would have a much more reasonable public discourse if there wasn’t an implicit rule against talking about the future with full recognition that technologies will become more powerful in a wide variety of ways over time.

Since when was there such a rule? If it were really in place, we'd need to shut down half of all popular science and military blogs for countless violations of this implicit gag order. Nobody says that computers are frozen in time as of today, or that technologies will never get drastically more powerful. It's just that people who have some grasp of the specifics involved have doubts about the timelines and extraordinary capabilities being assigned to future computers and electronic devices. There's no way I would ever work in the tech realm if I thought it was a dead field with no potential for revolutionary developments in computing over my lifetime.

In “Waiting for the Dawn of Artificial Intelligence”, you say, in the first paragraph, that it’s popular among Singularitarians to believe that sufficiently powerful computers will automatically become intelligent. I don’t know of a single Singularitarian who would agree with this claim. It sounds like an idea from very retro science fiction.

Actually, the post says no such thing. Instead, it points to casual remarks by Singularitarians that when we create sufficiently powerful computers, they'll be able to behave like intelligent entities. Almost every assertion I've seen about artificial intelligence ties in processing speed as either a vital ingredient or an enabler that changes the way computers process data in new and unexpected ways.

Immediately afterwards you say that Singularitarians think that thinking, self-aware machines are “around the corner”.

It was a very common turn of phrase used to indicate something on the horizon; sinister intentions don't hide in every expression. Somehow I highly doubt that readers truly thought Singularitarians were waiting for a sapient computer to come off the line any minute now. The same applies to my play on words about Ray's ideas for how fast computers will need to be in order to accommodate human minds. It was intended to illustrate a belief that computing power can simulate brainpower. Admittedly, there's a little glossing over going on here, but since not all my readers are willing to read a 2,000-word lecture on processing speed calculations, this is something I have to do to keep the post flowing and convey the main points.
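For those curious what those processing speed calculations actually look like, here's a minimal sketch of the kind of back-of-envelope arithmetic behind estimates of how fast a computer would need to be to "accommodate" a mind. The round figures below (roughly 10^11 neurons, 10^3 connections each, 200 calculations per second per connection) are the ones Kurzweil commonly cites, reproduced here strictly for illustration:

```python
# A rough sketch of the brain-capacity arithmetic, using the
# round numbers Kurzweil commonly cites (illustrative only).

neurons = 1e11                    # ~100 billion neurons in a human brain
connections_per_neuron = 1e3      # ~1,000 synaptic connections per neuron
calcs_per_connection_per_s = 200  # ~200 calculations per second, per connection

brain_cps = neurons * connections_per_neuron * calcs_per_connection_per_s
print(f"Estimated capacity: {brain_cps:.0e} calculations per second")
# -> 2e+16 cps, the sort of figure used to argue that raw
#    computing power will eventually match raw brainpower
```

The catch, of course, is that this multiplication says nothing about the software that would have to run on all those cycles, which is exactly the gap these posts keep pointing out.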

Whenever I write a post about the Singularity concept, I try to point out that, in general, the idea is plausible and definitely worth pursuing. All my critiques are there to point out the challenges and issues we will have to address to put this hypothesis into practice, not to disparage it or somehow try to show that it's simply never going to happen. However, the cavalier attitude with which Ray Kurzweil and his partners approach such a big endeavor, and the way they wring money out of indulging their audiences' transhumanist dreams, irk me as both a techie and a skeptic.

# tech // computer science / futurism / technological singularity

