
why a.i. needs to be a little less human…

Computers and robots are probably not going to be our friends or act like anything other than our helpers and tools in the real world. And that's ok.
Illustration by Cuson Lo

One of the biggest problems with representations of artificial intelligence in popular culture is how human the robots of the future appear to be. They’re usually not just machines or helpful aides, but something closer to a pet or a child, with a mind that spans everything from playing games to existential questions of life, death, and the nature of emotions. Just as we tend to create anthropomorphic aliens for our sci-fi movies, we also try to impart a little something of ourselves into the machines of the future. We want to make them friends and companions, and judge them not necessarily by their hardware and software, but by how much like us they appear to be. For the sake of keeping our expectations realistic, we should really stop picturing the future of the robots around us as something out of the film adaptation of Isaac Asimov’s Bicentennial Man.

Here’s the problem. We can give robots something we could philosophically define as intelligence, or the kinds of algorithms that will make them think the way we do. But we can’t give them the kind of motivation found in living things, which makes efforts to duplicate animal intelligence in silicon and plastic highly unlikely to succeed. A machine could model our thought patterns and behaviors, and it could even simulate creative problem-solving using evolutionary algorithms. However, that’s only part of the intelligence puzzle.
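To give a feel for what that kind of simulated creativity actually looks like, here’s a minimal evolutionary algorithm sketch in Python. The target string, alphabet, and mutation rate are all made-up stand-ins for illustration, not anything a real robot would run:

```python
import random

TARGET = "helpful robot"  # hypothetical goal; a real system would score a task metric
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    # Count characters matching the target; higher is better.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    # Randomly swap characters: the blind "creative" part of the search.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size: int = 100, generations: int = 500) -> str:
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        # Keep the fittest half, refill the rest with mutated copies of survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print(evolve())  # converges on "helpful robot" by blind variation and selection
```

Notice that the “creativity” here is nothing but random variation filtered by a fixed scoring function, which is exactly why it only works where we can write that score down in advance.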

The notion of human-like AI assumes that we have a detailed, objective, working definition of intelligence, when we have nothing of the sort. Future AI systems will be devoid of emotion, come up with creative solutions only in narrow contexts, and rely on timely human updates to keep their knowledge bases current. While we know plenty of ways to explain logic to a computer and teach it to derive patterns on its own, the best we can do with emotion is trick a machine into faking a few simple reactions at somewhat appropriate moments.
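To make the “faking simple emotions” point concrete, here’s a toy sketch, entirely my own illustration: the machine maps a handful of trigger words to canned emotional responses, with no feeling behind any of them.

```python
# A toy lookup of canned "emotional" responses; the triggers and replies are
# invented for illustration. Nothing here models an actual feeling.
CANNED_REACTIONS = {
    "thank": "You're welcome! Glad I could help.",
    "broken": "Oh no, that sounds frustrating. Let's fix it.",
    "great": "Wonderful! Happy to hear it.",
}

def fake_emotion(user_input: str) -> str:
    # Scan for a trigger word and return its scripted reaction.
    lowered = user_input.lower()
    for trigger, reply in CANNED_REACTIONS.items():
        if trigger in lowered:
            return reply
    return "I see."  # non-committal default when no trigger matches

print(fake_emotion("Thank you, robot"))       # -> "You're welcome! Glad I could help."
print(fake_emotion("The printer is broken"))  # -> a sympathetic-sounding script
```

The reaction lands at a somewhat appropriate moment, but it’s a string lookup, not empathy, and it fails the instant the conversation drifts outside its tiny table.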

In reality, how emotionally aware the AI of the future appears will depend on our mood and our opinion of the robots we deal with on a daily basis. Probably the most accurate depiction of AI in fiction is GERTY, the supercomputer from the film Moon. Rather than forming an emotional bond with the humans he helps around a lunar mining base, he stays out of the way until called, follows orders, and keeps to the protocol programmed into his software as closely as possible. Even his one potential display of emotion is more of a conflict between rules and goals, better explained as a bug in the system than as a sapient thought. He’s not good or evil, nosy or Machiavellian. He’s just a robot working through his daily to-do list and keeping things on track at an industrial station, much like we would expect a future robotic housekeeper or automated smart house to behave in the coming decades; less friends and companions than nearly invisible assistants…
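That “conflict between rules and goals” can be pictured as something like the toy priority logic below; the specific rules and the priority scheme are invented here for illustration and come from nowhere in the film.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int  # lower number wins; a hypothetical tie-breaking scheme

# Two directives that can point in opposite directions, loosely inspired by
# GERTY's situation; the rules themselves are made up for this sketch.
RULES = [
    Rule("follow company protocol", priority=2),
    Rule("keep the crew member safe and informed", priority=1),
]

def resolve(conflicting: list[Rule]) -> Rule:
    # When directives clash, a plain priority sort decides; no deliberation involved.
    return min(conflicting, key=lambda r: r.priority)

print(resolve(RULES).name)  # the tie-break looks like a "choice" but is just sorting
```

What reads on screen as a moment of conscience is, in this framing, just one rule outranking another.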

# tech // artificial intelligence / computer science / robots

