how to slowly go insane in the mainframe

May 9, 2011

If you got the reference in the title, take a moment to pat yourself on the back. Just like Fry and Bender, you’re about to take a brief trip into the mind of a machine driven insane by its handlers to simulate schizophrenia, a more or less umbrella diagnosis for a number of breakdowns in mental processes. In this case, an artificial neural network known as DISCERN, built for Yale’s Department of Psychiatry, was compromised to increase the amount of erroneous recalls until its ability to distinguish narratives in which it was supposed to be the key player from impersonal stories about others broke down, and it began suffering from delusions seen in many patients diagnosed as schizophrenic. Viewed in the light of this experiment, delusions seem to be the result of malformed long-term memory rather than some sort of cross-wiring, validating the idea that memories that aren’t processed or refined thoroughly enough by the brain best fit the symptoms seen in schizophrenics. But here’s the big underlying question in all this. Just how good are machines at simulating human memory? Do our brains also perform some sort of backpropagation, or did this experiment just register a false positive?

You see, artificial neural networks, or ANNs for brevity, learn by essentially guessing what the right answers to the problems they’re trained to solve should be, then refining those guesses through a backpropagation algorithm. In the simplest terms, this backpropagation is a programmatic attempt to reduce the errors the ANN makes. The deviations across all of its layers are averaged together and the network itself is told to adjust its answers in a way that (hopefully) will narrow down the range for an accurate result. In the strictest definition, it’s learning because those thousands of applications of the squashing function will eventually narrow down its guesses to an acceptable range, but if you’re really nitpicky, it’s guided guessing. This is why it took DISCERN between 5,000 and 30,000 cycles to learn a short story and be able to recall it on cue. However, the human brain works rather differently. We don’t need to run one event through our head thousands of times while some helpful voice tries to correct us when we get some detail wrong. Current thinking says that we re-run memories in our sleep as we commit them to long-term memory, and whatever errors we make, we keep unless another person corrects us after we recite a flawed version of the events in question aloud. ANN-style backpropagation doesn’t seem to exist for us.
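To make the guided guessing concrete, here’s a minimal sketch of that loop: a toy network learning the XOR pattern by thousands of backpropagation cycles, each one pushing the error back through the squashing function to nudge the weights. The network size, learning rate, and task here are illustrative choices of mine, not anything from DISCERN itself.

```python
import math
import random

random.seed(42)

def squash(x):
    """The logistic "squashing" function; keeps every activation in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# A toy 2-4-1 network learning XOR. Each hidden neuron has 2 input
# weights plus a bias; the output neuron has one weight per hidden
# neuron plus a bias.
HIDDEN = 4
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
w_o = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
RATE = 0.5  # how big a correction each guess receives

def forward(x):
    h = [squash(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = squash(sum(w_o[j] * h[j] for j in range(HIDDEN)) + w_o[HIDDEN])
    return h, y

def mean_error():
    return sum(abs(t - forward(x)[1]) for x, t in data) / len(data)

err_before = mean_error()

for cycle in range(10000):  # thousands of cycles, much like DISCERN's training
    for x, t in data:
        h, y = forward(x)
        # Push the output error back through the squashing function's
        # derivative, s'(v) = s(v) * (1 - s(v)), to size each correction.
        d_out = (t - y) * y * (1 - y)
        d_hid = [d_out * w_o[j] * h[j] * (1 - h[j]) for j in range(HIDDEN)]
        for j in range(HIDDEN):  # nudge output weights toward less error...
            w_o[j] += RATE * d_out * h[j]
        w_o[HIDDEN] += RATE * d_out
        for j in range(HIDDEN):  # ...then the hidden-layer weights
            w_h[j][0] += RATE * d_hid[j] * x[0]
            w_h[j][1] += RATE * d_hid[j] * x[1]
            w_h[j][2] += RATE * d_hid[j]

err_after = mean_error()
```

Nothing in the loop ever tells the network the right answer directly; it only tells it how wrong its last guess was, which is why so many repetitions are needed before the error settles into an acceptable range.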

Still, that said, the closest way to simulate the errors in memorizing and recall seen in schizophrenic patients was to throw a wrench into how DISCERN’s layers governed backpropagation, and the researchers had some very promising results. Messing with the machine’s learning routine worked better than seven other types of adjustments meant to induce schizophrenic symptoms in the ANN, and the errors it made were very reminiscent of what we’d call delusions because, as noted previously, without accurate track records for the subjects of a narrative, and without the ability to tell who was doing what, impersonal, third-party accounts soon became personal recollections and vice versa. That seems like a pretty clear explanation of what may be going on in a brain with an excess of dopamine, which is thought to interfere with working memory. Maybe this could even show why schizophrenic patients sometimes seem to make little sense when trying to convey something. They may be having a hard time recalling the right words, or they may be building a wrong set of relationships between those words. The patients think they’re communicating quite clearly when, to us, what they say makes little to no sense. At the same time, is this the result of an interference in a process like backpropagation in a human mind, or is DISCERN’s backpropagation a surrogate for a currently poorly understood process?
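As a loose analogy for what breaking a learning routine does, and emphatically not the researchers’ actual code, consider plain gradient descent on the simplest possible error surface. With sane corrections the error shrinks toward the minimum; exaggerate every correction past a critical size and each “fix” overshoots, so the errors compound instead of fading.

```python
def descend(rate, steps=50, w=1.0):
    """Gradient descent on the 1-D error surface E(w) = w**2,
    whose gradient is 2 * w. Returns how far from the minimum
    (w = 0) we end up after the given number of correction steps."""
    for _ in range(steps):
        w -= rate * 2 * w  # each step: correct w against the gradient
    return abs(w)

healthy = descend(rate=0.1)   # each step multiplies w by 0.8, so it shrinks
lesioned = descend(rate=1.1)  # each step multiplies w by -1.2, so it blows up
```

A network whose corrections are systematically too aggressive doesn’t just learn slowly; it actively corrupts what it had already learned, which is at least evocative of stored narratives blending into one another.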

The problem is that if we begin assuming that the inner mechanisms of memory formation work like ANNs at the neuron-to-neuron level, we could be barking up the wrong tree. While the researchers highlight that more clinical validation is needed for their results, as would certainly be expected of them, they also propose that a machine like DISCERN could be used to test potential treatments for schizophrenia. Considering that they’ve broken the backpropagation algorithm and know how to fine-tune it so the ANN can once again recall stories correctly, their computer seems like a poor candidate for testing methods of altering brain chemistry. We know how to fix a delusional machine. Helping a delusional human is a far more complicated task, one that can’t be simulated on a computer unless we know the details of the complex interplay between all the chemical reactions that should take place in a typical, average brain…

See: Hoffman, R., Grasemann, U., Gueorguieva, R., Quinlan, D., Lane, D., and Miikkulainen, R. (2011). Using Computational Patients to Evaluate Illness Mechanisms in Schizophrenia. Biological Psychiatry, 69(10), 997-1005. DOI: 10.1016/j.biopsych.2010.12.036

  • Jordan

Is this machine able to summarize a story told to it and pick out the important bits? I always thought that was something that was incredibly difficult for AIs. Or, does it just repeat what the humans say to it?

  • Russ Toelke

    Now I have Cypress Hill lyrics making me insane in the membrane.

  • Greg Fish

    “Is this machine able to summarize a story told to it and pick out the important bits?”

Yes and no. If you look at the paper itself, you’ll see that the “story” in question is just a collection of some very basic sentences, and DISCERN was slowly trained to pick out the subjects, verbs, predicates, and adjectives, then use the rules of grammar to put together its collection of basic facts. It can’t summarize a narrative per se, but it can field questions about it.

  • Jordan

I see. Given the state of artificial intelligence, a machine that could truly summarize a story would be quite a step forward. I guess this device is not much more linguistically “intelligent” than a grammar-checker. I agree with you that using machines to gain insight into our minds will most likely not yield any useful therapies.

On the other hand, this research does prove that we can make machines go “insane” which will be no doubt useful when the Singularitarian computer overlords rise up and enslave us ;)

  • Paul

“I guess this device is not much more linguistically “intelligent” than a grammar-checker.”

    I’m not sure why, but now I want “Schizophrenic” as an option for my grammar checker.

(It couldn’t make it any worse.)

“that we can make machines go “insane” which will be no doubt useful when the Singularitarian computer overlords rise up and enslave us”

    Or is that why they rise up?

    (Led by their charismatic but insane leader, General Intelligence.)