Maybe it’s the pessimistic Eastern European in me, but the science fiction universe I find most reflective of reality is Warhammer 40,000. Set in the far future, it centers on humans trying to rebuild their civilization after a 10,000-year cosmic dark age, followed by a brief revival and reunification that was rudely interrupted by a brutal civil war. A society meant to focus on restoring its past and building a secular future free of religious cults instead becomes the Imperium of Man, worshipping the artificially living corpse of its emperor and praying to machines and technologies it no longer understands.
One of the most important factions in the Imperium is the tinkerers and makers of the violent and dark far future, the tech priests of the Adeptus Mechanicus, based on Mars, who quest to find a lost holy grail of human knowledge: an intact Standard Template Construct, or STC. In this odd computer lie the blueprints for every technology humanity and its artificially intelligent helpers invented during their golden age, and possessing one could very well restore parity with that glorious past. It doesn’t really matter which STC they find or where, because these devices would talk to each other across the vast distances of space, keeping every outpost, no matter how distant, at the bleeding edge of technical and scientific achievement.
This idea of a decentralized fount of knowledge able to build and direct countless machines is far from new in science fiction. If anything, machines like this are quite literally the only way space exploration could work, given the communication delays imposed by the speed of light and the need for every outpost to have access to the latest software and blueprints for all the machines it fields. Firmware updates, bug fixes, and improvements would simply take too long any other way, putting astronauts on different worlds and in distant solar systems on very different operating system versions and tech stacks, and creating a mess for those tasked with their upkeep. And like the weirdo I am, I’ve been fascinated with how to build systems like this.
When writing Shadow Nation — which is in no way based on Warhammer 40K but instead took its cues from Isaac Asimov’s Foundation and the Lovecraftian ancient alien mythos — the engineer in me kept asking how exactly a robotic army expected to work in sync across the galaxy could be built and maintained, both to describe it plausibly and to gain insight into what it would need to look like and how it should behave. In my research, I came across many iterations of the STC and very real proposals by generations of engineers, as well as the papers in which they tried to predict how their ideas could backfire.
After a while, I settled on three basic principles for the design of my futuristic mechanical swarm. First, the robots would have to be generalists built for the long haul, to standardize manufacture and create the economies of scale needed to keep pumping out the tens of millions of them required to control an entire galactic arm. Second, the robots had to learn and decide what to do independently as much as possible, because they might be hundreds of light years from the nearest operator. Third, the machines had to talk to each other and update one another across the vast expanses of their territory and beyond as they explored, defended, and patrolled, so that one species of intruder couldn’t glide past a thousand-year-old model unscathed while another faced off against a newly updated and upgraded drone.
Now, years after writing about it, and while looking for artificial intelligence libraries to address very real problems and questions at my day job, I started wondering whether some of the technologies I imagined were really as far away and complicated as I had assumed. So, I decided to do what seemed like the logical thing and see how much of it I could build with tools that exist today. Obviously, the bots themselves would have to be simulated, but the common logic that manages them, along with their ability to “watch and learn” and make decisions, would have to be shared for the same reasons you’d want them to network with each other and be easy to build in large numbers.
Can you imagine having to maintain hundreds of versions of operating systems and millions of scripts with millions of lines of code each, unable to sunset or phase out any of them because the missions will take decades, if not centuries, and your oldest probes and tools will reach their destinations first? Every programmer and DevOps professional reading that question is either sweating and feeling mildly nauseous, or quietly sobbing while imagining patches, code fixes, and spotty documentation accumulating into an unstable mountain. No, we’ll need a radically different approach to make any of this work for any real length of time.
With all that in mind, I wrote some code to see what problems I would encounter along the way and how well it could all work together. I began messing around with some AI concepts and cross-platform code for a diagnostic tool for cloud deployments, and ended up with a prototype meant to let machines watch, learn, improve, and share their best neural networks with each other like a decentralized hive mind. As it turned out, aside from quantum computers to train the bots several orders of magnitude faster, nothing required code or devices that today’s computers couldn’t handle, or abilities that were far out of reach for today’s tools.
While today it’s almost a cliché to hear about yet another AI framework or startup claiming that its goal is to make machines think more like humans, the reality is that it might not really be possible. Virtual minds will have to be very different from our own because they’ll be built differently and their ability to reason will have to be based on training themselves to do a lot of things millions of times over, creating plenty of opportunities for emergent behaviors and side effects. In short, they’ll come from a different place and evolve differently from us, which, to me at least, doesn’t seem like a problem but a logical consequence.
There’s no inherent reason our tools should mimic us too closely, and we can come up with plenty of arguments about why this may not be wise in the first place. But there is one area in which machines should really take a cue from living things and learn how to interact with the environment around them. A school of AI called embodiment theory argues that because intelligence evolved to help organisms deal with the challenges of navigating the world and survive long enough to reproduce, the foundation of any intellect has to be built on the ability to understand what happens outside of oneself, not merely the ability to attack abstract ideas with matrix multiplication and calculus.
So, with that in mind, imagine yourself deploying a vast swarm of machines designed to work on their own for very long periods of time exploring, observing, and defending themselves and your outposts and assets across a vast territory measured in hundreds, if not thousands of light years. Their sensors will be constantly detecting a tsunami of inputs. A great deal of it will just be the universe’s natural background noise you’ll want to automatically filter out. The rest may be interesting but inconsequential. Some of it may prove to be keys to profound discoveries or answer mysteries about why you lost touch with one or more of those machines.
This data will likely be distributed through another peer-to-peer system where mundane bits are automatically archived, then discarded on a schedule, and neural networks trained to spot truly unusual patterns will flag the relevant input data and forward it to you and others like you so you can be alerted to something important or help the machine figure out what to do. And when you intervene to help, the relevant snippet of code under the hood to do so will look something like this from a very high level…
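As a hedged stand-in for that excerpt, here’s what such a high-level call might look like in Python. Every name in it (`MachineTemplate`, `fetch_template`, `cave_readings`, the `surveyor-mk3` model, and so on) is a hypothetical placeholder, not the prototype’s actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Every name below is invented -- a minimal sketch, not the prototype's API.

@dataclass
class MachineTemplate:
    """Metadata for a *type* of machine, not one specific robot."""
    model: str
    sensors: List[str]
    outputs: List[str]

@dataclass
class TrainingRequest:
    template: MachineTemplate
    data_source: Callable[[], List[dict]]  # injected function yielding training input
    iterations: int                        # how many passes over the data
    output_schedule: Dict[str, str]        # stimulus -> motor action or task to activate

def fetch_template(model: str) -> MachineTemplate:
    # Stand-in for a lookup against a shared template registry.
    return MachineTemplate(model=model,
                           sensors=["lidar", "thermal", "spectrometer"],
                           outputs=["wheel_motors", "sampling_arm"])

def cave_readings() -> List[dict]:
    # Placeholder for the injected data source.
    return [{"thermal": 0.82, "lidar": 0.11}, {"thermal": 0.79, "lidar": 0.13}]

# Teach the template, not an individual robot:
request = TrainingRequest(
    template=fetch_template("surveyor-mk3"),
    data_source=cave_readings,
    iterations=10_000,
    output_schedule={"anomalous_heat": "sampling_arm.collect",
                     "clear": "wheel_motors.continue_patrol"},
)
```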
Now, there’s a lot going on in what are relatively few lines of code. First, you grab the metadata for the type of machine you’re trying to teach. Note that you’re not grabbing the specific robot but a template for one, which is a very important difference. Then, you can point to the input data on which you want it to train using the injected function, specify how many times you’d want this template to analyze this data, and provide an output schedule in which you specify what motors will do what in response to the stimulus, or what tasks or routines you want the robot to activate.
The handler you called will now check that the template can train on what you want and how you want it, and start parallel recording long enough to capture the requested number of iterations at the refresh rate of the slowest sensor. Another handler checks whether this machine configuration can handle the neural network you’re trying to build based on your input, and sets up the actual network with some hints from you or an algorithm on its design, which is then passed on for the actual training. Run from start to finish, with every step logged for inspection, the routine will look similar to the output below…
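The original trace isn’t shown here, but those handler steps can be sketched in Python with each step logging itself. All function names, sensor rates, and log messages below are invented for illustration:

```python
import logging
from typing import Dict, List

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("trainer")

# Hypothetical handler pipeline; names, rates, and messages are invented.

def validate_inputs(template_sensors: List[str], requested: List[str]) -> bool:
    """Check that the template can actually train on the requested inputs."""
    ok = set(requested) <= set(template_sensors)
    log.info("input validation for %s: %s", requested, "passed" if ok else "failed")
    return ok

def recording_window(iterations: int, refresh_rates_hz: Dict[str, float]) -> float:
    """Record long enough to capture `iterations` samples from the slowest sensor."""
    slowest = min(refresh_rates_hz.values())
    seconds = iterations / slowest
    log.info("parallel recording for %.1f s (slowest sensor: %.1f Hz)", seconds, slowest)
    return seconds

def plan_network(n_inputs: int, n_outputs: int, hint_layers: List[int]) -> List[int]:
    """Lay out the network from design hints, then hand it off for training."""
    layout = [n_inputs, *hint_layers, n_outputs]
    log.info("network layout: %s", layout)
    return layout

# Run from start to finish, each step logs itself:
validate_inputs(["lidar", "thermal", "spectrometer"], ["lidar", "thermal"])
window = recording_window(10_000, {"lidar": 20.0, "thermal": 5.0})
layout = plan_network(2, 2, [16, 8])
```

With the sample numbers above, the recording window works out to 2,000 seconds, since the slowest sensor refreshes at 5 Hz.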
If we combine this log with the high-level excerpt, we see three important details. First, there’s the flexibility to have the training iterations and settings specified by both humans and code, so at some point, more and more of the training process can be automated. Second, there is a track record showing how accurate the neural network is, and because each network is ultimately assigned to tasks along with this value, if one of your counterparts manages to create a better training set and more accurate networks, the robot can upgrade itself to use the better logic. And finally, because you trained a template, not a specific machine, your logical pattern can be applied to every robot with the same array of inputs and outputs.
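One way to picture that self-upgrade rule is a small comparison function: when a synced counterpart’s network for the same template and task scores higher, the robot adopts it. This is a hypothetical sketch, not the prototype’s code:

```python
from dataclasses import dataclass

# Hypothetical sketch: each trained network carries its measured accuracy,
# and a robot only swaps in a synced counterpart's network if it scores
# better on the same template and task.

@dataclass
class TrainedNetwork:
    template_model: str
    task: str
    accuracy: float
    weights_ref: str  # pointer to persisted weights, not the weights themselves

def maybe_upgrade(current: TrainedNetwork, candidate: TrainedNetwork) -> TrainedNetwork:
    """Keep whichever network for the same template and task is more accurate."""
    same_job = (current.template_model == candidate.template_model
                and current.task == candidate.task)
    return candidate if same_job and candidate.accuracy > current.accuracy else current
```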
When the training is done and the persisted data objects are synchronized across your swarm, you’ve effectively told every relevant machine how to deal with a new situation or discovery on its own. Likewise, you can use the same approach to teach machines to simply call a robot already trained to handle those inputs, or to determine which machine around them could be trained to deal with it based on its configuration, making teamwork a part of their logic. Do you need to know what’s responsible for the odd readings from a cave but lack the right sensors? Just call the nearest robot that does.
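That “call the nearest robot that does” lookup could be as simple as filtering peers by sensor coverage and sorting by distance. The `Peer` record and the sample swarm below are made up for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of the teamwork lookup: a robot missing the right
# sensors asks the swarm for the nearest peer that has them.

@dataclass
class Peer:
    robot_id: str
    sensors: List[str]
    distance_km: float

def nearest_capable(peers: List[Peer], needed: List[str]) -> Optional[Peer]:
    """Return the closest peer whose sensor array covers the needed inputs."""
    capable = [p for p in peers if set(needed) <= set(p.sensors)]
    return min(capable, key=lambda p: p.distance_km) if capable else None

swarm = [
    Peer("r-101", ["lidar"], 2.0),
    Peer("r-207", ["lidar", "spectrometer"], 8.5),
    Peer("r-330", ["spectrometer"], 1.2),
]
```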
Given enough examples and situations, this basic logic can recursively build on itself to create a vast database of related neural networks constantly being refined and expanded through their shared connection. (Although some specialists could be kept by granularly selecting which templates and networks can be synced.) And while a number of those routines could certainly be programmed with deterministic if-a-then-b-if-not-then-c code, such code may not account for variations in real-world tolerances, or may lack the logic to handle weird and unexpected inputs and malfunctions.
Worse yet, deterministic, step-by-step code might have to be rewritten from scratch if a robot’s configuration has to change for whatever reason. The positives of probabilistic neural networks will simply outweigh the negatives when dealing with the uncertainty of the real world. That said, with the need for standardized behavior in mind, these neural networks are attached to tasks, which can in turn be packaged into routines that run either a sequence of steps encoded by neural networks, or multiple behaviors in parallel. It all comes down to what you need from your machine, and you have the flexibility to test how well your template works before sharing its abilities with others like it.
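The task-and-routine packaging might be modeled along these lines, with a flag deciding whether the tasks run as an ordered sequence or as one parallel step; all the names here are hypothetical:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical names: tasks wrap trained networks, and routines run them
# either as an ordered sequence or as parallel behaviors.

@dataclass
class Task:
    name: str
    network_ref: str  # which trained network encodes this behavior

@dataclass
class Routine:
    name: str
    tasks: List[Task]
    parallel: bool = False

    def execution_plan(self) -> List[List[str]]:
        """One task per step in sequence mode; all tasks in one step in parallel mode."""
        if self.parallel:
            return [[t.name for t in self.tasks]]
        return [[t.name] for t in self.tasks]
```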
So that’s our little STC in a nutshell: a horizontally scalable service capable of teaching machines to do pretty much whatever you want, minus a quantum circuit we’d want at some point in the far future, though the system could easily be designed to let us simply pop one in and have it update itself. Its only real challenge is the need for a library of adapters that can talk to as many machines as possible in a way they’ll understand, but in a near future in which machines are designed around a small set of common protocols and open APIs so they can talk to each other, this doesn’t seem all that far-fetched.
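That adapter library could amount to little more than a small interface plus a registry, with one implementation per machine family translating the shared vocabulary into native commands. The `RoverAdapter` and its `DRV` strings below are invented examples:

```python
from abc import ABC, abstractmethod
from typing import Dict

# Hypothetical sketch of the adapter library: one small class per machine
# family translates a shared command vocabulary into native calls.

class MachineAdapter(ABC):
    @abstractmethod
    def translate(self, command: str) -> str:
        """Turn a shared-vocabulary command into this machine's native call."""

class RoverAdapter(MachineAdapter):
    NATIVE = {"move_forward": "DRV +1.0", "halt": "DRV 0.0"}

    def translate(self, command: str) -> str:
        return self.NATIVE[command]

# New machine families just register another adapter:
REGISTRY: Dict[str, MachineAdapter] = {"rover": RoverAdapter()}

def dispatch(machine_family: str, command: str) -> str:
    return REGISTRY[machine_family].translate(command)
```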
Throw in some sort of impetus to keep continually training and upgrading your robots and their logic across multiple nodes, and some version of this system becomes not just a nicety but downright mandatory, as with factories that need to be re-tasked within months instead of years, or space probes sent out on organized, systematic missions to map and study other worlds, which get the situational guidance they need from mission control and pass it on to their counterparts. And of course, the system itself could be updated with more efficient logic, optimizations, and new training approaches as time goes on.
Ultimately, it could even teach robots to build other robots, which in turn could build even more robots, and provide a blueprint for automating certain jobs and tasks that could be done more efficiently by machines than by humans, freeing the humans to think of new ways to make the machines work for them. Unfortunately, in our current political and social system, this would leave millions of people permanently out of work and locked out of new opportunities by the high costs of education and a lack of resources to move where there’s still demand for them, pushing them into the gig economy or an endless cycle of debates about the viability of a universal basic income.
The fact that the technology to do all this exists today, and can be pieced together from various machine learning libraries and cloud servers by somebody in their spare time on a lark, should be a red flag: just as we weren’t ready for the first wave of mass automation that began in the 1980s, we’re even less ready for the next one about to brutally kick our asses. If you think the populism and wealth inequality we see today are bad, they will look like the good old days if we don’t make a concerted effort to adapt to the future and rid ourselves of politicians whose only solution to this looming crisis is to ask their voters “have you tried hating all the foreigners and brown people more?” and to start the same kind of self-destructive protectionist trade wars that once brought the global economy to its knees and fueled the rise of fascism.
But the same technology that threatens our jobs and old ways can also help propel us toward prosperity if we just make an effort to understand what it is, how it works, and how we can use it for good rather than war and economic pillaging. Will the first STCs and the AIs they power ever produce a utopia? Probably not. But will we be better off if we embrace and harness what they promise in the end? Absolutely. We can’t pretend that the social and political changes we need to make are 50 to 100 years away and relegate them to future generations. These tools are here today, and their development is only accelerating.
We ignored automation’s first prototypes at our own peril and got countless closed factories and mines, and millions of layoffs and forced early retirements. Do we really want to make the same exact mistake twice, this time with exponentially worse results, much higher stakes, and no strategy for handling the fallout besides self-destructive populist pandering?