why humans won’t become obsolete unless we choose to make them that way
We are not running out of jobs humans can do as computers take on more and more responsibilities. We're just choosing not to create new jobs and educate workers.
By this point, you'd have to be living under a rock not to know that machines are steadily marching in on people's jobs. Entire careers are headed for the dustbin of history once again. That in itself is nothing new. Innovation is the reason why you don't see a lot of squires anymore, or hear your local blacksmith hammering swords at midday, or get your news from the paperboy on the street corner, or wake up to the banging of a knocker-up on your window when you need to get ready for work. But in all those cases, there was something else for the people to do: a factory in which to work, or an office where modernized versions of their old jobs awaited. Automation today means no human is necessary to do the job at all, and without rapid reeducation, there's nowhere for people to go. Or is there?
First and foremost, I would really like to put a persistent and annoying myth to rest right now. AI is not magical and will never be able to completely and totally replace all humans. Today's impressive systems essentially use really straightforward probabilistic math to handle things the way we would, just a lot faster and with more attention to detail across huge data sets. Claims that in so many years robots will be thinking for themselves, coming up with new industries, or creating some sort of cross-disciplinary super-science far beyond human comprehension usually come from people who have never dealt with actual AI code or put together probabilistic systems. We're just dealing with math, not some eldritch creature with intrinsic motivations, and despite the attention-grabbing headlines, humans are very much in control of it.
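To make that point concrete, here is a minimal sketch of the kind of "straightforward probabilistic math" that underpins many impressive-sounding classifiers: a toy naive Bayes spam filter. The training phrases, labels, and the `classify` helper are all invented for illustration; real systems scale the same counting-and-multiplying up to vastly larger data, but the math is no more mystical than this.

```python
from collections import Counter
from math import log

# Toy training data: (text, label). All examples invented for illustration.
training = [
    ("buy cheap pills now", "spam"),
    ("cheap offer buy now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("monday project meeting notes", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label with the highest log-probability
    (naive Bayes with add-one smoothing)."""
    vocab = set().union(*(wc.keys() for wc in word_counts.values()))
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Prior: how common this label is overall.
        score = log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Likelihood of each word given the label, smoothed by +1
            # so unseen words don't zero the whole product out.
            score += log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("cheap pills"))  # prints "spam": counting and multiplying, nothing more
```

No self-awareness, no intrinsic motivation, just word counts turned into probabilities and compared. The "intelligence" is entirely in how much data the counting is done over.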
It’s also absolutely groan-inducing when people worried about automation tell me that programmers are making themselves obsolete because AI would be able to write its own code to solve common problems. Maybe after 30 or so years of trying, we could finally get somewhat decent machine-generated code. That’s great; we can just focus on more important things. The idea that programmers are now unnecessary is like saying that because we can use calculus to solve complex problems, we no longer need mathematicians, or that because we can sequence DNA faster than ever and perform in hours the kinds of deep dives into datasets that used to take years, we don’t need any more geneticists. Machines, for all their prowess, are just our tools, and they still need someone who knows how to use them properly.
But wait, cry the Singularity adherents and those influenced by them, what if one day we can make a super-AI? And what if one day I conquer the Moon and declare myself the first lunar emperor? And what if one day I were abducted by aliens? And what if one day we discovered how to reverse gravity? There is no scientific law saying that the ability to ask a what-if question, backed by some debatable, sketchy ideas as to what an answer might look like, makes the question realistic, much less a real situation with which we’d ever have to deal. Doubly so when it comes to the favorite answer of many transhumanists to the question of how they plan to build an AI able to push humanity into obsolescence: “a lot of smart people are working on it.” A lot of smart people also built the Titanic and the Ford Pinto. Maybe outsourcing the evidence for your assertions to unnamed geniuses isn’t the best comeback to a question about concrete implementations.
Which brings us back to the question of what we should do with humans in the unenviable position of having their jobs made obsolete by code. We know there are other things we could be doing, but our leaders are stuck in the past, often to the detriment of those they lead, channeling outrage over their own inaction into culture-war scapegoats because they can’t think of a solution. So we’ve basically sent millions of people to the sidelines to wait until we have an actual job for them. This is a sad state of affairs, to put it mildly, so sad that it has become one of the centerpieces of arguments in favor of the oft-floated utopian idea of a Universal Basic Income. The logic goes: since AI is destroying capitalism as we know it, we should help the casualties of this creative destruction buy some time, paying those permanently made obsolete not to starve while we figure out some long-term solution.
Now, it’s hard to argue with the idea that we should help people left by the wayside in a massive paradigm shift to a post-industrial economy. And I actually agree that some form of UBI could lift people out of poverty better than today’s micromanaged, punishing assistance programs, programs built to knock the feet from under those just starting a slow, steady climb out of poverty, lest they get too much help, all in the interest of some bizarre idea of fairness that views starvation and lack of proper healthcare as fitting punishment for both laziness and misfortune. However, simply giving people enough money to survive and sighing deeply is a long-term disservice to everyone involved. Having machines do repetitive jobs, or jobs that couldn’t provide a good standard of living anyway, just frees more people to be trained for the jobs of the future.
We can invest more in science and engineering, using the extra time and tax revenue from a streamlined economy to research new medicine, study nature, send more humans into space, and explore the solar system with ever more complex robots. We can shift to a knowledge-driven economy instead of a consumption-driven one, where the end goal is accumulating knowledge for the sake of turning it into something useful. Technology and know-how developed for particle accelerators, for example, have been used to treat cancer with a very high degree of precision. We need an economic model emphasizing that kind of cross-disciplinary, curiosity-driven science, not today’s myopic, hyper-specialized, accounting-driven paper pushing. But we refuse to change things because the change-averse and the STEM-illiterate are often in charge, and they simply cannot understand how to do things differently. And there’s only one thing we can do about that: refuse to vote for them.
Any time you hear candidates for office say they will create new jobs in whatever office they’re seeking, if there’s no mention of automation or of improving access to education and job training, those candidates are full of crap. They need to either be educated that this is the only way to avoid permanently out-of-work demographics without racing to the bottom on wages and environmental regulation against nations where slavery is still kind of a thing and human rights aren’t, or lose to a candidate who understands this. We won’t be able to stop the march of automation; it’s simply too profitable to halt. But instead of worrying about the coming of a future AI-driven dystopia, we need to worry about how we can harness automation to our benefit in areas other than mass-produced consumables, because machines have those covered. We need to move on.
And anyone waxing nostalgic who tells you that all we need to succeed in a mostly automated, globalized world is to “work hard and pull yourself up by your bootstraps” should be directed to the nearest museum as an exhibit and handed some bootstraps to satisfy their footwear accessory fetish. We’ve been working hard for centuries, and in the process we invented something that will always work ten times harder and a thousand times more precisely, because it lacks the limits of short-term memory, physical exhaustion, job anxiety, and cognitive overload when dealing with a torrent of new data, and was built to counter those exact shortcomings of a human worker. More hard work is not the answer. Now it’s time for us to work smart instead.