feared are the machines: for they shall inherit the universe?

Could alien AI be the dominant life form in the universe instead of its organic creators?
Illustration by Alexey Andreev

Personally, I’m a big fan of Ray Villard’s columns because he writes about the same kind of stuff that gets dissected on this blog and the kind of stuff I like to read. Since most of it is wonderfully and wildly speculative, I seldom find something with which to really disagree. But his latest foray into futurism, inspired by the Cambridge University Centre for the Study of Existential Risk’s project to assess the danger artificial intelligence poses to us, is an exception to this rule. Roughly speaking, Ray takes I. J. Good’s idea of humans designing robots better at making new robots than humans and runs with its darkest adaptations in futurist lore. His endgame? Galaxies ruled not by “thinking meat” but by immortal machinery which surpassed its squishy creators and built civilizations that dominated their home worlds and beyond. The cosmos, it seems, is destined to be in the cold, icy grip of intelligent machinery rather than a few clever space-faring species.

To cut straight to the heart of the matter, the notion that we’ll build robots better at making new and different robots than us is not an objective one. We can certainly build machines that have more efficient approaches and can mass produce their new designs faster than we can. But when it comes to a nebulous notion like “better,” we have to ask in what way. Over the last century, we’ve gotten very good at measuring how well we do at tasks like math, pattern recognition, and logic. With concrete answers to most problems in these categories, it’s fairly straightforward to administer a test heavily emphasizing these skills and compare the scores across the general populace. But things like creativity or social skills are much harder to measure, and it’s easy to end up treating inconsequential things as if they were make-or-break metrics, or to give up on measuring them at all. And the difficulty only goes up when we consider context.

We complicate the matter even further when we take who’s judging into account. To judges who aren’t very creative people and never have been, some robots’ designs might seem like feats beyond the limits of the human imagination. To a panel of artists and professional designers, a machine’s effort at creating other robots might seem nifty but predictable, or far too specialized for a particular task to be useful in more than one context. To a group of engineers, the ability to design just-for-the-job robots might seem like just the right mix of creativity and utility, even as they question whether the design is simply wasteful. If you’re starting to get fuzzy on this hypothetical design-by-machine concept, don’t worry. You’re supposed to be, since grading designs without very specific guidelines is basically a matter of personal taste and opinion, and trying to inject objective criteria doesn’t help in the least. And yet the Singularitarians who run with Good’s idea expect us to assume that this will be an easy win for the machines.

This unshakable belief that computers are somehow destined to surpass us in all things as they get faster and gain bigger hard drives is at the core of the Singularitarianism that gives us these dramatic visions of organic obsolescence and machine domination of the galaxy. But it’s wrong from the ground up, because it equates processing power and complexity of programming with a number of cognitive abilities that can’t be objectively measured for our entire species. Humans are no match for machinery if we have to do millions of mathematical calculations or read a few thousand books in a matter of days. Machines are stronger, faster, and immune to things that would kill us in a heartbeat. But once we get past measuring FLOPS, upload rates, and spec sheets on industrial robots, how can we argue that robots will be more imaginative than us? How do we explain how they’ll get there in more than a few Singularitarian buzzwords that mean nothing in the world of computer science? We don’t even know what makes a human creative in a useful or appreciable way. How would we train a computer to replicate a feat we don’t understand?

# tech // artificial intelligence / computer science / technological singularity
