will artificial intelligence need a therapist, redux

If you're going to be a self-described AI psychologist, you have to understand how AI actually works.
[ illustration: right brain vs. left brain ]

When I wondered whether artificial intelligence might need a therapist, I was mostly joking. After all, why would dispassionate machines need someone to help sort out their emotions when they have none? But it appears that behavioral therapist Andrea Kuszewski not only thinks that robots may need psychologists, but that she would be perfect for the job because she worked with autistic children and, apparently, machines think like an autistic child. Ok, that’s a new one. Points for originality there, but it looks like we really can’t award anything on the technical side of the question because it seems quite apparent that Kuszewski is not familiar with how an artificial intelligence learns, or with the basics of computing, which would be a major handicap for an aspiring computer lobotomist. Granted, since I don’t have her level of professional familiarity with autism, I can’t really dispute her analogy between the way autistic children think and machines with anything more than pointing out that kids with autism just so happen to produce emotional responses, and certainly seem to be capable of creativity and profound thought rather than just memorizing answers to questions and regurgitating them on cue, as she describes in the story of one of her patients, an autistic boy who was convinced that his brain worked just like a computer…

He was no longer operating on an input-output or match-to-sample framework, he was learning how to think. So the day he gave me a completely novel, creative, and very appropriate response to a question followed by the simple words, “My brain is not like a computer”, it was pure joy. He was learning how to think creatively. Not only that, but he knew the answer was an appropriate, creative response, and that — the self-awareness of his mental shift from purely logical to creative — was a very big deal. My experience teaching children with autism to think more creatively really got me to reverse engineer the learning process itself, recognizing all the necessary components for both creativity and increasing cognitive ability.

This description of machine logic would be just fine if she didn’t then try to apply it to how artificial intelligence actually works and draw a hard line between pure logic and creativity. Logic can certainly be creative when applied in contexts where there are numerous solutions to a problem and none of them is evaluated to be any more correct than the others. While propositional logic has many rules, nowhere does it insist that an answer must be binary or that there must be only one answer. In fact, you can set up logical problems where matching any one of an entire set of acceptable solutions will evaluate as a correct answer, and train an artificial neural network to strive towards one of them. Depending on how you train it and how large it is, it’s entirely feasible to teach it several ways to solve the same problem, and which one it chooses will depend on the inputs it receives, i.e. the context of the problem. And there you go. Pure logic has now been made context-aware and somewhat creative, especially when we start looking at the behaviors you see when those neural networks are applied to real world problems with small robots in experiments.
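To make that concrete, here’s a minimal sketch in plain numpy. The toy problem and all the numbers in it are invented for illustration: a single-layer network is trained against a set of equally acceptable answers rather than one “right” one, and which answer it settles on depends on the random starting weights and the input, i.e. the context.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup (hypothetical): one input pattern can map to either of two
# equally valid target outputs. The loss only measures distance to the
# *nearest* acceptable answer, so the network is free to settle on
# whichever solution its starting weights and inputs favor.
acceptable = np.array([[0.1, 0.9], [0.9, 0.1]])  # two equally correct answers

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W = rng.normal(size=(3, 2))        # a single weight layer, for illustration
x = np.array([0.5, -0.3, 0.8])     # the "context" the network sees

for step in range(2000):
    y = sigmoid(x @ W)             # the network's current answer
    # pick whichever acceptable answer the network is currently closest to
    target = acceptable[np.argmin(((acceptable - y) ** 2).sum(axis=1))]
    grad = (y - target) * y * (1 - y)   # error times sigmoid derivative
    W -= 0.5 * np.outer(x, grad)        # gradient step on the weights

print(y)  # converges to one of the acceptable answers, not a single "right" one
```

Rerun it with a different seed and it may well land on the other answer; nothing about the logic of the problem forced one solution over the other.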

Creativity, for computational purposes, could be defined as successfully accomplishing an objective with given resources in more than just one way. In the classic textbook example of the concept, if we have a brick and an enemy to harm with said brick, anything from throwing it at him, to grinding it up in his food, or even using the brick to destroy something he really likes (no one says that harm has to be only physical, right?) is on the table, and all are valid outcomes. So when evaluating creativity per se, what we’re really doing is seeing how much variation our subject demonstrates in the absence of limiting factors. The same applies to many creative humans as well. I’m sure you may have found yourself in a job where management, dead set on consistency and predictability, takes away the ability to introduce new ideas or deviate from a script. No one can be creative in an environment like that; people simply revert to an input-output framework. But considering that we tend to give an artificial mind as much leeway as possible to see if our learning algorithms will spawn creativity, Kuszewski’s conception of how she could help with machine learning using her background is utterly baffling…
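If you wanted to operationalize that definition, one crude and entirely hypothetical measure would be to run the same unconstrained learner from many random starting points and count how many distinct valid solutions it discovers; the solve() routine below is just a stand-in for any learning or search process, and the brick-themed candidates are, of course, a joke.

```python
import numpy as np

def solve(seed):
    """Hypothetical stand-in for a learner or search routine; here it
    just samples one of several equally valid solutions at random."""
    rng = np.random.default_rng(seed)
    candidates = ["throw_brick", "grind_into_food", "smash_prized_vase"]
    return rng.choice(candidates)

# Run from many starting points with no limiting factors and count the
# distinct valid solutions found: more variation = more "creative" by
# the definition above.
solutions = {solve(seed) for seed in range(100)}
print(len(solutions), solutions)
```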

If children are left to learn without any assistance or monitoring for progress, over time, they could run into problems that need correcting. Because our AI learns in the same fashion, it can run into the same kinds of problems. When we notice that learning slows, or the AI starts making errors — the robopsychologist will step in, evaluate the situation, and determine where the process broke down, then make necessary changes to the AI lesson plan in order to get learning back on track.

Um, we actually have formulas and tools designed to do exactly that. We train artificial neural networks meant for complex pattern recognition with backpropagation, which takes the errors a network makes during training and propagates them backwards through its differentiable activations, usually sigmoid functions, nudging each weight to shrink those errors. Of course the local minima problem rears its ugly head every so often, but we can always reset the seed values and try again until the error rate is down to acceptable levels. I really can’t think of anything behavioral therapists can do here. The last time one of my ANNs threw out errors, rather than call a therapist, I put breakpoints at the beginning of each training cycle and debugged it. Every method call and variable assignment let me see each weight, each result, each input, and each output. For a psychologist to do something similar, she would have to pause your thought process and get an expert team of neuroscientists to study every chemical and electrical signal produced by every neuron, step by step by step, from the inception of an idea to the final response. But we can’t do that with living things, which is why we have to rely on operant conditioning to train them, while for an AI that kind of conditioning would be a waste of time. If it takes me so little effort to peer into a machine’s brain, why exactly do I need a robopsychologist?
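For the skeptical, here’s a minimal sketch of that whole workflow, assuming a toy network learning XOR: backpropagation through sigmoid activations, a spot where a breakpoint would expose every weight and activation, and a reseed-and-retry loop for when training gets stuck in a local minimum. The network layout and hyperparameters are my own invented example, not anything from Kuszewski’s post.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(seed, epochs=8000, lr=1.0):
    """Train a tiny 2-3-1 network on XOR with plain backpropagation."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    t = np.array([[0.], [1.], [1.], [0.]])
    W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
    W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # hidden activations
        y = sigmoid(h @ W2 + b2)            # network output
        # propagate the output error backwards through the sigmoid derivatives
        d2 = (y - t) * y * (1 - y)
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)
        # a breakpoint here lets you inspect every weight, input, and output:
        # e.g. print(W1, W2, h, y)
    return ((y - t) ** 2).mean()

# No therapist required: if training stalls in a local minimum, reset the
# seed values and try again until the error rate is acceptable.
seed = 0
while (err := train_xor(seed)) > 0.01:
    seed += 1
print(f"converged with seed {seed}, mean squared error {err:.4f}")
```

That reseed loop is, in effect, the entire “intervention” Kuszewski describes, carried out in a handful of lines with no lesson plan in sight.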

If I had to sum up my main goal as a robopsychologist, it would be “to make machines think and learn like humans,” and ultimately, replicate creative cognition in AI. Possible? I believe it is. I’ll be honest, I haven’t always thought this way. The main reason for my past disbelief is because most of the people working on AI discounted the input of psychology. They erroneously thought they can replicate humanity in a machine without actually understanding human psychology.

And that’s the most perplexing statement of all in Kuszewski’s post. What AI researcher wants to respawn the human mind in a computer? To my knowledge, we already have entities which think and act like humans. We call them humans. AI is intended to help address questions about the origins of cognition, provide new ideas in neuroscience and biology, and help us build smarter, more helpful machines that will compensate for our shortcomings or address needs for which we don’t have the human resources available. We’re not trying to build the mythical robot who loves and really wants to be human at any cost, like the movie adaptation of Bicentennial Man, and the machines we do want to build can be shut off, examined, and corrected without help from those who want to embody the fictional job created in Asimov’s novels. But Kuszewski seems quite sure that we need to teach robots to think just like humans, and she plans to outline her reasons in a future post. I’ll certainly read her rationale, but considering her knowledge of the AI world so far, forgive me if I already have a few doubts as to how grounded in reality and computer science it will be. If her replies to my questions are any indication of how she plans to elaborate her points for anyone who wants more detail, there will be plenty of talk about shifting paradigms and creative thinking, with stern references to non-disclosure agreements…

update 02.09.2012: Kuszewski posted the second part of her post, and it seems that the ideas she labeled as trade secrets in her final reply weren’t really all that secret after all, especially when you do a quick search for the company’s name. A follow-up post tackling the thesis of Syntience’s CEO is in the works for tomorrow. Also, I just have to ask: why do so many people who like to talk about AI insist that computer scientists want to replicate human brains neuron by artificial neuron? Why do that when it’s much more productive to let the neurons grow and organize themselves and see what happens as they’re trained? But more on that in the next post…

# tech // artificial intelligence / computer science / psychology

