
china is creating the blueprint for totalitarian techno-idiocracy

China is aiming for AI supremacy to automate and export totalitarianism. This effort is bound to backfire, but not for the reason you may think.

Every cyberpunk dystopia features a technocratic government using state-of-the-art software and technology to track everyone at all times. While in the West these novels and movies are meant as warnings, in China they’re apparently an inspiration, as the government pours massive investments into artificial intelligence to exert maximum control over what it sees as its rightful territory. Even worse, its ultimate goal isn’t just an AI-powered Eye of Sauron, but an on-demand digital oppression kit it can sell to states in its geopolitical orbit, helping turn other nations into its equally despotic clones where freedom of speech is muzzled and human rights are seen as a punchline to a bureaucrat’s joke.

Of course, you could point to Western companies like Clearview, Cambridge Analytica, and Palantir, which use artificial intelligence for facial recognition, voter manipulation, and decisions that impact people’s daily lives, and ask how they’re any different. They seem like key assets in the aspiring totalitarian’s toolbox, created or backed by unsavory individuals with a lot of skeletons in their closets. But all three are just companies in search of customers for whom the ends justify the means, while Chinese authorities are on an explicit mission to keep their subjects and critics under control, not simply abusing the potential of advanced tools they buy from questionable businesses.

In fact, weaponizing technology against citizens and trade partners alike seems like par for the course with China. Its social credit system tracks your every move and judges everything from your social media friends to purchases at the corner market to determine how trustworthy a person you are, then automatically rewards you with little luxuries or punishes you by denying everything from train tickets to the ability to move to a different city. Your social credit scores can even determine your position on dating apps, and party cadres think nothing of asking tech companies to create backdoors for intelligence services to spy on foreign customers and steal private intellectual property from abroad.

the trouble with ai-driven oppression

But let’s set aside the ethics of this abuse of power and technology for a moment and think about the problems with relying on machines to force social cohesion, impose censorship, and track down critics while identifying new threats. One of the dirty secrets of AI is that it requires vast workforces to train it for real-world use, and even then, the results are far from perfect. The kind of software touted by Palantir has been used to make wrong and racially biased predictions about criminal recidivism. Cambridge Analytica’s voter manipulation software cannot work the way the company claims it does and has left many customers disappointed. And facial recognition used by law enforcement has had some spectacular failures.

Chinese tools aren’t immune to similar flaws, as citizens wrongly targeted by the automated wrath of the social credit system can attest. The system’s fearsome image is also a bit exaggerated, and its implementation has been messy and incomplete, leaving citizens afraid of widespread fraud. This is partially because it’s very difficult to build a system that can track more than 1.3 billion people in real time, and because creating its actual algorithms is an enormous challenge from logical and computing standpoints. With so many data points to consider, the neural networks involved would require hundreds, if not thousands, of inputs, which means the judgments they spit out may easily end up with scattershot accuracy in the real world.

Instead of exporting a true Orwellian Big Brother™ suite of AI-powered tools, China will end up with software that may be as accurate as a coin flip in identifying criminals or “enemies of the state” and would need to be retrained completely to understand the local patterns of its new home and its targets’ behaviors before use. Imagine the classic cyberpunk novel in which the unblinking digital eye of the government doesn’t know what it’s seeing, makes wild guesses as much as a third of the time, and has to constantly reshuffle its data sources as those running it try to make it just a little smarter. At best, it will only show the scale of the problems would-be totalitarians face in controlling their nations.

when technocracy goes very, very wrong

Now, you might think that such troublesome and inaccurate systems would be great news for dissidents, journalists, and democracy activists in autocracies interested in working with China, but nothing could be further from the truth. In fact, their many gotchas are even more of a reason to worry because there’s a good chance they’ll flag innocent bystanders who want nothing to do with protests or whistleblowing and mark them for harassment by authorities. Dictators would have to cast vast dragnets and crack down even harder as their dashboards light up with would-be threats, and plan for ever more prisons and secret police to deal with them all as real dissidents slip through the cracks.
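To get a rough sense of why that happens, here’s a toy back-of-the-envelope sketch in Python. The population size, base rate, and accuracy figures are made-up illustrative assumptions, not numbers from any real system; the point is only that when the people a classifier hunts for are rare, even a “99% accurate” system buries its handful of genuine targets under thousands of falsely flagged innocents.

```python
# Toy illustration with assumed numbers: how a rare target class turns
# a "99% accurate" surveillance classifier into a false-positive machine.

population = 1_000_000      # hypothetical city under surveillance
true_targets = 1_000        # assume only 0.1% are genuine "threats"
sensitivity = 0.99          # fraction of real targets correctly flagged
specificity = 0.99          # fraction of innocents correctly left alone

true_positives = true_targets * sensitivity                          # ~990 real hits
false_positives = (population - true_targets) * (1 - specificity)    # ~9,990 innocents flagged

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"People flagged for follow-up: {flagged:,.0f}")
print(f"Innocent people among them:   {false_positives:,.0f}")
print(f"Odds a flagged person is a real target: {precision:.0%}")
```

Under these assumed numbers, roughly nine out of ten people lighting up the dashboard would be exactly the innocent bystanders described above, and the math only gets worse as the definition of a “threat” gets fuzzier.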

But China doesn’t seem to mind that its predictive tools cast a massive net and identify tens of thousands of citizens as threats for growing beards or renewing passports, because that helps it justify imprisoning and ethnically cleansing the Uighurs in Xinjiang. You see, totalitarians don’t really care whether you’re actually guilty of a crime or doing anything particularly wrong. They just care that they finally have a plausible excuse to detain you and use your fate as a threat to others should they fail to conform, or just happen to be in the wrong place at the wrong time. Their message is “we own you,” and plenty of allied tinpot dictators would love to use Chinese AI to send the same message to their grumbling subjects.

The silver lining here may be that technology can be fought with technology. Streams of data meant to be analyzed by the neural nets can be disrupted by hackers. Poorly trained and overly ambitious systems can be easily gamed. Critics will always find ways to speak truthfully about oppressive regimes with encryption, VPNs, and good operational security. But the very fact that technology intended to make our world a better place and enable amazing feats like deep space exploration, a better understanding of the laws of physics, and more accurate medical diagnoses is being used for such petty and small-minded goals is disturbing and worthy of condemnation.

Instead of harnessing the full potential of AI to do good deeds and advance humanity, China and other authoritarian artificial intelligence enthusiasts are trying to hijack one of the most useful tools in the world of computing for little more than muzzling critics and persecuting people they don’t like. Of all the billions that will be spent on researching better AI models and training techniques, how many will be wasted by governments and individuals who think that bigoted, totalitarian software will fulfill their reprobate fantasies? And how should the civilized world deal with them? China’s misguided ambitions clearly demonstrate that these questions will need answers sooner rather than later.

# tech // artificial intelligence / authoritarianism / china

