how not to download a sinister alien a.i.
George Dvorsky, whom you may remember from our appearances on Skeptically Speaking when we debated the science and technology behind transhumanist ideas, recently posted his top ten stories of the year. And to my surprise, there were a number of posts about the search for alien life, including a warning from physicist Alexey Turchin about an evil extraterrestrial intelligence tricking a curious civilization into building a doomsday machine, a scenario George describes as Contact in reverse. It’s a pretty neat idea for a sci-fi novel, but from a computer science standpoint, I would say that it’s a warning we can pretty safely disregard. Why? Remember my quick overview about why communicating with alien AI or machinery is nearly impossible? Building an advanced, highly sophisticated device created and programmed by extraterrestrials from blueprints written in some derivative of an alien language or otherworldly illustrations comes with a very similar set of problems.
Let’s say that we do detect a signal from a distant world and want to try to interpret it. We could assume that the information the aliens are trying to transmit is in binary format because quite frankly, that’s the easiest way to communicate. You just flash the pulses on and off in an appropriate sequence. That’s how computers work when we get right down to the circuit level, turning tiny switches on and off billions of times per second as the signals make their way through logic gates. But we can turn our computers’ binary sequences into useful and readable information only because over the decades, we’ve adopted dozens of standards for what each sequence of pulses should signify. On top of that, we also created standards by which we exchange and navigate all this data, standards that alien SETI researchers are extremely unlikely to know. Were we to run a random signal from the stars through a standard program that converts binary to text, we’re more than likely to get gibberish that bears no resemblance to any language, though we may uncover some vague logical patterns. Would they be enough for us to guess what the aliens might mean? A simple hello sent this way might take years to fully decipher, much less instructions to assemble and program something like an alien AI. In fact, by the time we know enough to start the project, our species might be extinct or too preoccupied with our own affairs.
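To make that concrete, here’s a toy sketch of the problem, not a real SETI pipeline: the very same stream of on/off pulses decodes to completely different text depending on assumptions the sender can’t share with us, like how many bits make a character, which bit comes first, and which character encoding to use. All of these conventions are human inventions.

```python
def bits_to_text(bits, width=8, lsb_first=False, encoding="ascii"):
    """Group a bit string into fixed-width chunks and decode them as text."""
    chunks = [bits[i:i + width] for i in range(0, len(bits), width)]
    values = []
    for chunk in chunks:
        if lsb_first:
            chunk = chunk[::-1]  # reverse the bit order within each chunk
        values.append(int(chunk, 2))
    return bytes(values).decode(encoding, errors="replace")

# These pulses spell "HI" -- but only under our 8-bit ASCII convention.
signal = "0100100001001001"

print(repr(bits_to_text(signal)))                  # 'HI' if you guess every convention right
print(repr(bits_to_text(signal, lsb_first=True)))  # flip the bit order: unreadable
print(repr(bits_to_text(signal, width=4)))         # wrong chunk size: control-code garbage
```

Three guesses, three unrelated results, and that’s with a message we wrote ourselves under conventions we already know.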
And yes, when you’re talking about something like artificial intelligence, there will be code involved. How does an alien programmer compose the required applications? We have languages that tell our computers exactly how to move, combine, and filter data, and I’m sure that alien coders would have something similar. But that doesn’t mean that we could just plow ahead and start writing code for alien software simply because we know that a programming language for it must exist. And if you write two applications that you want to talk to each other in any meaningful way, you’d better start writing adapters which parse the data each application requires. How would we write an adapter for something literally alien to us? Does it use an object-oriented approach or is it procedural code? Are we building a library of scripts or an array of applications that talk to each other, like servers and an interface? Will we need to completely rethink our definition of computing to write the code? It may be possible, but again, we’d need a manual that would tell us what to do from scratch. And even if we do succeed and end up building an evil alien machine that’s programmed to take over our world, it could very easily be taken offline by just pulling the plug. Forget the Skynet scenario here, since our hypothetical villain is not going to be able to deal with TCP/IP standards and transmit viable data through the internet.
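The adapter problem is easy to show with a sketch. In the hypothetical snippet below, the same eight bytes are parsed under two perfectly reasonable but different layout conventions and yield totally unrelated values; both layouts are made up for illustration, and neither is any more "correct" than the other without documentation from whoever produced the bytes.

```python
import struct

# Eight bytes handed to us with no documentation about their layout.
payload = bytes([0x00, 0x00, 0x00, 0x2A, 0x40, 0x49, 0x0F, 0xDB])

# Convention A: a big-endian 32-bit integer followed by a big-endian float.
a_int, a_float = struct.unpack(">if", payload)

# Convention B: a single little-endian 64-bit integer.
(b_int,) = struct.unpack("<q", payload)

print(a_int, round(a_float, 5))  # a small count and something close to pi
print(b_int)                     # or one enormous, unrelated number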
Finally, who’d want to build something just because an alien species told us to? Who’d want to commit all the required billions to finance a project in which we wouldn’t know the end result up front? Plus, it’s going to be rather hard to hide the true intent of a device when you send step-by-step blueprints. The engineers and scientists working on it would have to be extremely gullible not to notice when the plans become ominously vague, or tell them to do something that would run contrary to the device’s stated goal. If we figure out how to read alien languages and build parts and pieces of advanced alien technology, I think we would be smart enough not to create devices that don’t pass the smell test just because we’re being promised galactic internet connections or immortality. Again, the concept of a seemingly promising message from the stars to a hopeful human species turning sinister and turning our world into nothing more than a desolate mine for evil artificial intelligence programs spreading through the cosmos like a virus sounds like a great sci-fi novel, but the odds of such an entity actually succeeding in more than a handful of places are slim to none.