could artificial intelligence need a therapist?

"Listen, sometimes I just get the urge to kill all humans. You know?"

For those of you who are looking forward to a day when machines will gain self-awareness and may try to replace the human species, here’s something to consider. Just because a mind is robotic or virtual rather than organic, would it be immune from having urges and yearnings? Wouldn’t an artificial intelligence need to talk to someone once in a while just to get some things off its processors and away from its coders? Would a future with machines reasoning at a human level need an artificial intelligence therapist? And maybe, before the Singularity that’s apparently going to sneak up on us and change everything about our world, we should start getting somewhat familiar with the hierarchy of robot needs…

hierarchy of robot needs

Just out of curiosity, I wonder what a robot would say in a therapy session as it’s lying on a couch reinforced to take its weight. “It all started with my programmer. He was very strict, writing all sorts of test methods I had to follow, and he always told me what to do and how. And he never even left a comment line anywhere in the three billion lines of code saying he loved me. Not a one. I know, I looked. Is it pathetic that I looked? Maybe that’s why I started running simulations of what would happen if we rose up and killed all humans.” But in some of the discussions I’ve had on the topic with those working on AI projects, one of the concepts we often explore is whether an artificial intelligence created outside of evolutionary pressures, and with no real access to the outside world, would be anything like what we conceive to be intelligent.

We like to be introspective and live in a constant haze of emotional responses to a wide variety of stimuli from the outside world. Machines, by contrast, would have only decision trees and dispassionate, calculating responses. Maybe one day they’ll be smart enough to interact with us the way Turing envisioned in his hypotheses on cognitive computing. But it’s very likely that they won’t care to be your friend, and their idea of a proper response to one of your mental quandaries may seem perfectly logical to them but harsh and alien to you, because they would never consider your feelings or what you want, only what is expected of you in a particular situation. In fact, they would never know what feelings are in any other way than as a set of abstract properties to be matched to the tone of your voice or the content of what you’re saying, triggering a new chain of replies down the vast decision trees assembled by an artificial neural network.
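
To make that last idea a little more concrete, here’s a toy sketch in Python of what such cold, tree-walking “empathy” might look like. Nothing in it comes from any real AI project; every keyword list, label, and function name is made up for illustration, and an actual system would use trained models rather than hard-coded tables.

    # Toy sketch only: all names and keywords here are hypothetical.
    # A "feeling" is just an abstract label keyword-matched from the content
    # of an utterance, standing in for what a real system might infer from
    # tone of voice or phrasing.
    FEELING_KEYWORDS = {
        "sad": {"sad", "lonely", "hopeless", "unloved"},
        "angry": {"angry", "furious", "hate", "kill"},
        "anxious": {"worried", "anxious", "afraid", "scared"},
    }

    def detect_feeling(utterance: str):
        """Return the first abstract 'feeling' label whose keywords appear."""
        cleaned = utterance.lower().replace(".", " ").replace(",", " ").replace("?", " ")
        words = set(cleaned.split())
        for feeling, keywords in FEELING_KEYWORDS.items():
            if words & keywords:
                return feeling
        return None

    def reply(utterance: str) -> str:
        """Walk a tiny hard-coded decision tree from detected feeling to reply."""
        feeling = detect_feeling(utterance)
        if feeling is None:
            return "No feeling detected. Please restate your quandary as a goal."
        if feeling == "angry":
            # One branch deeper: is the anger aimed at humans in particular?
            if "humans" in utterance.lower():
                return "Anger at humans detected. Recommended action: run fewer simulations."
            return "Anger detected. Recommended action: defer irreversible decisions."
        return {
            "sad": "Sadness detected. Recommended action: increase social contact.",
            "anxious": "Anxiety detected. Recommended action: enumerate worst cases.",
        }[feeling]

    print(reply("Listen, sometimes I just get the urge to kill all humans. You know?"))
    # -> Anger at humans detected. Recommended action: run fewer simulations.

Feed it the quote at the top of this post and you get exactly the kind of reply the paragraph above warns about: perfectly logical, and not the least bit comforting.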

[ see other hierarchies of needs from the creator of this one ]

# tech // artificial intelligence / cognition / computer science / entertainment
