will all the aliens we meet end up being robots?
After getting the Moderna double tap and splurging on some nicer seats on my trip across the country, I decided to relax with a good book. Well, as much as one can relax in a pressurized metal tube thousands of feet in the sky, surrounded by masked strangers during a pandemic. The book in question? The Zoologist’s Guide to the Galaxy, in which Arik Kershenbaum tries to educate a wide audience about the foundational principles of biology and intelligence, and how those might apply to alien life. It’s a good, substantive read that does an admirable job with its speculative subject matter, but one chapter in particular made me furrow my brow: a musing on the potential for artificial intelligence to become its own lifeform.
This notion is nothing new in discussions about astrobiology and SETI, and nowadays, even popular science videos feature intelligent machines as the best candidates to dominate life in the universe. In a way, it makes sense. Because we can’t easily jump to other worlds being the fragile meat sacks we are, we’re sending robots around the solar system and beyond. We’re even using AI to help them remotely navigate obstacles, and it’s just a matter of time before all those machine learning models are embedded into the rovers, satellites, and flyers we send. If we really invest the time and effort, those machines will develop a fair bit of independence and executive function. After that, who knows, right?
Personally, I’m quite skeptical. Consider that for all the amazing claims of what AI is capable of, it has a) typically fallen far short of its claimed potential in the real world, b) focused on solving very narrow and specific problems, and c) remained little more than a tool for offloading computationally intensive statistical formulas, with a dash of calculus to help it figure out how to guess its way to the right answer. The first matter can be addressed with more, better, and properly thought out training. The second, by figuring out how to combine hyper-focused models into complex behaviors, something seen in nature (and my personal fixation when it comes to the subject of AI). But the third issue seems insurmountable barring a revolutionary breakthrough.
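To make the “statistics with a dash of calculus” point concrete, here is a minimal, purely illustrative sketch of that guessing process: gradient descent fitting a single parameter by repeatedly nudging an initial guess toward the right answer. The function name and data are invented for the example, not drawn from any particular system discussed above.

```python
def fit_slope(xs, ys, lr=0.01, steps=1000):
    """Fit y = w * x by guessing w, then correcting the guess over and over."""
    w = 0.0  # start with an arbitrary guess
    for _ in range(steps):
        # the "dash of calculus": derivative of mean squared error w.r.t. w
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # nudge the guess downhill, toward less error
    return w

# data generated by y = 2x, so the "right answer" is a slope of 2
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
print(round(fit_slope(xs, ys), 2))  # prints 2.0
```

There is no understanding anywhere in that loop, just arithmetic repeated until the error shrinks, which is the sense in which even very large models remain sophisticated calculators.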
why artificial intelligence needs organic oversight
If we recognize that artificial intelligence is just a tool, no matter how impressive, a great many fanciful predictions must fall away. Most of the ideas about the supposedly unlimited potential of robots and AI come from flawed analogies comparing calculus and statistics to the human brain. Many others are rooted in knowing little beyond slick demos, papers, and TED talks that talk a big game but ultimately vanish, never to be heard from again. Hell, a lot of companies and labs rushing to present themselves as artificial intelligence pioneers aren’t even using what computer scientists would define as machine learning. Our popular conception of AI is rooted less in fact and more in the modern version of retro-futuristic utopianism.
Nick Bostrom, one of the leading voices among the chorus of those warning us of an imminent takeover by a machine super-intelligence that will boggle our minds, insists that artificial life will be more efficient. It won’t need to bluff or play, or waste time on idle hobbies and tinkering. It will be a hyper-focused engine for galactic conquest. In a way, he may be right, despite his very fuzzy and likely erroneous assumptions about how a super-intelligent machine could come about. Yet what he sees as strengths are actually fatal weaknesses. Play, boredom, downtime, and complex, subtle communication are signs of creativity, and creativity is instrumental for survival and adaptation.
One could even argue that humans are as intelligent as we are not because we have language, know things, write them down, and plan, but because our brains have a whole lot of excess bandwidth we can channel into creative pursuits, many of which end up adding to all of the above, constantly enriching and advancing our species. Because an AI has no need for play or creativity, and would have to be specifically designed to engage in them after many experiments to establish how it would go about being creative, that creativity can’t emerge on its own unless whoever set up its modeling made a fortunate error, then decided the bug was really a welcome feature. In short, something has to create an AI and keep it plodding along.
how would aliens use their machines?
At some point, because it’s just a very fancy calculator, an AI would get stuck and need more training and upgrades, which an organic intelligence would have to direct. And because AI has such hard limitations — unless it’s built to mimic life as we understand it, a very dangerous and possibly suicidal idea — it will most likely remain a tool. Which brings us to another scenario for what kind of advanced aliens we may encounter at some point in the future. Perhaps, much like us, they’ve built spaceships and computers, developed AI tools, and now want to explore the cosmos. And much like us, they could consider becoming cyborgs and wiring AI right into their minds, something we’re also seriously thinking about doing.
Of course, this is not to say that in a debate over whether intelligent, spacefaring aliens are more likely to be biological or mechanical, we should compromise on cyborgs. We don’t know enough to make that assertion, and the aliens’ choices would be profoundly affected by their biology and culture. They may think that modifying their bodies with machines is a sacrilege, or be willing to die in space as long as their descendants make it to their target world. But what we can say is that given what we know about mathematics, computers, and the limitations of technology, if we do run into alien robots, they’ll be very much like our probes: ambassadors from another world rather than independent lifeforms on a mission to populate the cosmos.