when two bad ideas meet: a singularitarian blockchain to stop evil artificial intelligence

We need a blockchain for ethical, friendly artificial intelligence about as much as a fish needs an umbrella. But in true Singularitarian fashion, one is being proposed.
Artificial super-intelligence that decides to attack its creators is a common trope both in science fiction and among certain techies who preach the gospel of Singularitarianism, and it’s been a pretty frequent topic here as well. But no matter how many times those fretting about an evil AI going rogue are told that artificial intelligence won’t grow in a vacuum and can be understood, that Luddites messing with technology are a far greater threat to us over the long term than the technology itself, or that we don’t actually need to build an AI that replaces humans even if we were capable of doing it, the topic just seems too exciting and clickbaity to let go. In this instance, investor Justin Chan decided to ask whether blockchain, the technology behind cryptocurrencies, could be harnessed to stop a computer uprising.

He cites the concerns of Nick Bostrom, whose papers we’ve dissected, Shane Legg, who we fact checked back in the day, and Bill McKibben, who is a big fan of spreading doom and gloom even when it’s just flipping utopian science fiction into its dark, dystopian twin. I’m surprised he didn’t work in Henry Kissinger’s bizarre warnings about AI making human creativity obsolete to add yet another layer of dread and fanciful scenarios in which machines are either our masters or the dominant force on the planet, with us as an afterthought. After a selective gallop through an assortment of quotes about the future of AI and some not so subtle pitches for AI-related services, he arrives at the notion that because nodes in blockchains are incentivized to find a consensus, we could use a blockchain to get AI instances to agree that harming humans is bad.

But what if I were to have my AI not participate in that blockchain? Or program a large enough swarm to turn that blockchain’s consensus upside down and get the machines to agree that they should be attacking humans, not working with them? This potential solution seems to be yet one more example of people interested in cryptocurrencies injecting blockchains into everything. In a few years, I won’t be surprised to read an article telling us that the best way to look up a recipe for ramen online is by using a blockchain in which users can agree on what is and isn’t a good way to cook your noods. It will be unnecessary, taxing, useless overkill that won’t solve a real problem. Neither will trying to connect all robots to the Don’t Kill All Humans blockchain and hoping none of them will ever turn violent.
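The consensus-flipping objection above can be illustrated with a toy majority vote. This is a deliberately simplified sketch, not any real blockchain protocol, and the `Node` and `consensus` names are made up for the example: it only shows that any scheme which settles ethics by counting votes is at the mercy of whoever can field the most voters.

```python
# Toy illustration: a network that settles a proposal by simple majority
# vote among its nodes. Not a real blockchain; no cryptography, no mining.
from dataclasses import dataclass

@dataclass
class Node:
    honest: bool

    def vote(self, proposal: str) -> bool:
        # Honest nodes reject the "harm humans" proposal; a hijacked
        # swarm approves anything it's programmed to approve.
        if self.honest:
            return proposal != "harm humans"
        return True

def consensus(nodes: list[Node], proposal: str) -> bool:
    """A proposal passes if more than half of all nodes approve it."""
    approvals = sum(node.vote(proposal) for node in nodes)
    return approvals > len(nodes) / 2

# 100 honest nodes: the network dutifully rejects harming humans...
network = [Node(honest=True) for _ in range(100)]
print(consensus(network, "harm humans"))  # False

# ...until someone adds a swarm of 101 malicious nodes and the
# "Don't Kill All Humans" consensus flips to its opposite.
network += [Node(honest=False) for _ in range(101)]
print(consensus(network, "harm humans"))  # True
```

Real blockchains raise the cost of this attack with proof-of-work or proof-of-stake, but the principle stands: whoever controls a majority of the consensus power controls what the chain "agrees" on.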

On top of all this, consider that a blockchain meant to prevent machines from harming humans might not even be wanted. Artificial intelligence created for military applications is coming, and its whole reason for existence will be to kill. This blockchain would make more sense in civilian applications, but even there, AI is almost infinitely more likely to harm you by accident, or as a side effect of bad training or programming, than through actual malice. If anything, teaching would-be AI programmers that they should design their software and robots with a specific moral compass that could be flipped is likely to introduce a completely unnecessary vulnerability. This is why it’s a bad idea to put AI in black boxes and try to train it with operant conditioning. You’ll end up with imaginary problems and harebrained schemes to solve them, while pitching code no one wants because it’s poorly thought out and bizarrely trained.

# tech // artificial intelligence / blockchain / technological singularity
