
how to tame your superintelligent robot…

Programmer Maciej Ceglowski refuses to fall for the Gospel of the Nerd Rapture and offers a novel way to give people a more realistic picture of what AI really is.
Illustration by Mike Garn

Web developer and entrepreneur Maciej Ceglowski gave an unusual keynote speech for a web dev conference in Zagreb, Croatia this past October. If you don't have the time or opportunity to watch it, he was kind enough to post the complete transcript and slides on his blog for your convenience. So what made it so unusual? Rather than promote a new tool or approach that the keynote speaker thinks will or should take the industry by storm, as is typical for such events, Ceglowski decided to critique Singularitarians' ideas about the likelihood that we might end up building an AI system far smarter than any human, one to which said lesser beings made of flesh may become a nuisance worth eliminating as it tries to accomplish its goals. But wait, why would this be a thing he decided to dive into? Well, billing himself as a big critic of Silicon Valley and its culture, which to many seems to mean American tech in general and anyone famous in the industry in particular, he noticed that a few tech icons have been outspoken about the potential risks AI may pose in the future, and decided to rebuke them in his keynote.

Oddly enough, his line of attack compares their vague worries, which might be entirely justified, with the work of philosopher Nick Bostrom, whose claim to fame is writing papers about how computers will grow to be so intelligent that humanity may be powerless to stop them. He's far from the only person in the news to espouse these views. There's Michael Anissimov, who hits many of the same points as Bostrom, George Dvorsky, who took these points to brand new heights, the favorite AI theorists of Less Wrong, and of course, the original disciple of Vernor Vinge, the Giorgio Tsoukalos to his Erich von Däniken, if you will, Ray Kurzweil and his theories of digital immortality by way of creating superintelligent AI. Yes, we're back in my favorite and best known stomping grounds, and as you can probably tell from all of those links, I've tackled the same topics again and again because they surface again and again in popular science articles about the future of artificial intelligence. It seems that people really want to believe that we're living right on the verge of comic book plot lines about self-aware supercomputers and downloading our minds into machine bodies becoming our reality.

And this is what struck me most about this talk. None of these arguments are new. I've been writing about this, appearing on podcasts, and rebutting the same exact arguments since 2008, because a whole lot of people who are certain that they know a lot more about intelligence and the future than they really do, as evidenced by their inability to wade past abstract generalities in most debates on the subject, don't want to accept that they're wrong, that we're not necessarily on the precipice of magic technology that will solve virtually all of our problems if we tame it, and that we can screw it up at any time. By, oh, say, having an electoral system which gives people who still live in the past, mentally speaking, outsized political power and the ability to massively skew our priorities. Our entire research and development infrastructure for turning a lot of really cool sci-fi ideas into reality is incredibly precarious and relies on massive government subsidies, and those governments have to deal with a whole lot of fickle voters who think money not spent directly on them in the form of roads, healthcare, and emergency services is wasted.

Incidentally, for Ceglowski, the long term solution to breaking Singularitarian fever is actually better, more realistic sci-fi that teaches its readers about the limits of what's actually possible in the real world, teaching them a little bit of very necessary humility, and warning that if people don't start heeding the skeptical consensus of AI experts, too many people who could've made a real difference in the tech world will be trapped trying to do the impossible because they've been taken in by what he calls string theory for comp sci. Had we not just seen what happens when people go with their guts and choose to believe what they want to believe instead of listening to the consensus of subject matter experts, I'd say that Ceglowski may be exaggerating just how harmful blind faith in the Singularity might be. But after 2016, I'm not sure that one can overstate the dangers of ignoring skeptical voices and losing the battle for the hearts and minds of tomorrow's believers in what's trendy rather than what's backed up by facts and concrete data…

# tech // artificial intelligence / computer science / singularity
