
when every myth you’re trying to debunk isn’t actually wrong…

George Dvorsky decided to try some myth-busting about the future of AI and the nature of the technology. The end result needs a lot of myth-busting of its own...
[ illustration: sleeping tablet ]

It seems that George Dvorsky and I will never see eye to eye on AI matters. We couldn’t agree on some key things when we were on two episodes of Science For The People when it was still called Skeptically Speaking, and after his recent attempt at dispelling popular myths about what artificial intelligence is and how it may endanger us, I don’t see a reason to break with tradition. It’s not that Dvorsky is completely wrong in what he says, but like many pundits fascinated with bleeding edge technology, he ascribes abilities and a certain sentience to computers that they simply don’t have, and borrows from vague Singularitarianism, which throws around grandiose terms with seemingly no fixed definitions. The result is a muddled list which has some valid points and does provide valuable information, but not for the reasons actually given, since some fundamental problems are waved off as if they don’t matter. Articles like this are why I want to do an open source AI project, which I swear is being worked on in my spare time, although that’s been a bit hard to come by as I was navigating a professional roller coaster recently. But while the pace of my code review has slowed, I still have time to be a proper AI skeptic.

The very first problem with Dvorsky’s attempt at myth busting comes with his effort to tackle the very first “myth”: that we won’t create AI with human-like intelligence. His argument? We made machines that can beat humans at certain games and trade stocks faster than we can. If that’s all there is to human intelligence, that’s pretty deflating. We’ve succeeded in writing some apps and neural networks trained to be extremely good at tasks which require a lot of repetition, and whose strategies lie in very fixed domains with a few really well defined correct answers, which is why we built computers in the first place. They automate repetitive tasks during which our attention and focus can drift and cause errors. So it’s not that surprising that we can build a search engine that can look up an answer faster than the typical human will remember it, or a computer that can play a board game by keeping track of enough probabilities with each move to beat a human champion. Make those machines do something the neural network in their software has not been trained to do and watch them fail. But a human is going to figure out the new task and train himself or herself to do it until it’s second nature.
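
To make that concrete, here’s a minimal sketch of the kind of narrow pattern matcher we’re actually talking about, using a small off-the-shelf neural network from scikit-learn and a standard digits dataset. This is my illustrative example, not anything Dvorsky cites: the model gets very good at the one fixed task it was trained on, and when handed something from outside that domain it doesn’t notice it’s out of its depth, it just picks one of the only answers it knows.

```python
# A minimal sketch (illustrative example, not anything from Dvorsky's piece):
# a small off-the-shelf neural network that becomes very good at one fixed,
# repetitive task and has no notion of anything outside that task.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Train it to sort 8x8 grayscale images into the ten digit classes it is shown.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("accuracy on the task it was trained for:", clf.score(X_test, y_test))

# Hand it something from outside that domain, pure noise that isn't a digit
# at all, and it still produces an answer from the only ten classes it knows,
# because guessing within its fixed domain is all it can do.
noise = np.random.RandomState(0).uniform(0, 16, size=(1, 64))
print("'digit' it claims to see in random noise:", clf.predict(noise)[0])
```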

For all the gung-ho quotes from equally enthusiastic laypeople with only tangential expertise in the subject matter, and the typical Singularitarian mantras that brains are just meat machines, throwing around the term “human-like intelligence” while scientists still struggle to define what it means to be intelligent in the first place is not even an argument. It’s basically a typical techie’s rough day on the job: listening to clients debate their big ideas and simply assume that with enough elbow grease, what they want can be done, without realizing that their requests are only loosely tethered to reality because they’re just regurgitating the promotional fluff they read on some tech blogs. And besides, none of the software Dvorsky so approvingly cites appeared ex nihilo; there were people who wrote it and tested it, so to say that software beat a person at a particular task isn’t even what happened. People wrote software to beat other people at certain tasks. All that’s happening with the AI part is that they used well understood math and data structures to avoid writing too much code and let the software guess its way to better performance. To neglect the programmers like that is like praising a puck for getting into the net past a goalie while forgetting to mention that, oh yeah, there was a team that lined up the shot and got it in.
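
For the curious, this is roughly what “guessing its way to better performance” looks like under the hood: a programmer picks an error measure and an update rule, and a loop nudges a few numbers until the error shrinks. The toy below fits a straight line with plain gradient descent; it’s a hedged sketch of the general idea, not the code behind any system Dvorsky mentions.

```python
# A hedged sketch of what "the software guesses its way to better performance"
# boils down to: a person writes the error measure and the update rule, and a
# loop adjusts a couple of numbers until the error shrinks. Nothing here chose
# to learn anything; the programmer chose everything.
def train_line(points, steps=1000, learning_rate=0.01):
    """Fit y = w * x + b to (x, y) pairs with plain gradient descent."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
    return w, b

# Data generated by y = 3x + 1; the loop "discovers" roughly those numbers.
data = [(x, 3 * x + 1) for x in range(10)]
print(train_line(data))  # approximately (3.0, 1.0)
```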

Because it fails to get this fundamental part of where we are with AI, looking at fancy calculators and an advanced search engine and then imagining HAL 9000 and Skynet as the next logical steps for straightforward probabilistic algorithms, the rest of the list treats philosophical what-ifs as the definitive facts Dvorsky presents them to be. Can someone write a dangerous AI that we might have to fear or that may turn against us? Sure. But will it be so smart that we’ll be unable to shut it down if we have to, as he claims? Probably not. Just as the next logical step after your first rocket makes it into orbit is not a fully functioning warp drive (which may or may not be feasible in the first place, and if it is, is unlikely to be anything like what’s shown in science fiction), an AI system today is on track to be a glorified calculator, search engine, and workflow supervisor. In terms of original and creative thought, it’s a tool to extend a human’s abilities by crunching the numbers on speculative ideas, but little else. There’s a reason why computer scientists are not writing countless philosophical treatises in droves about artificial intelligence co-existing with lesser things of flesh and bone, while pundits, futurists, and self-proclaimed AI experts churn out vast papers passionately debating the contents of vague PopSci Tech section articles, after all…

# tech // artificial intelligence / computer science / futurism / skepticism

