making computers think, one stat at a time

A new computer language wants computers to learn about the world the same way we do: using statistics.

When humans think, we tend to rely on our experiences, associations, and the facts available to us to arrive at a decision. This process forms the basis of logical thought: we weigh the probabilities and go with the option that carries the most weight in our minds. You can do something very similar with rudimentary AI prototypes through artificial neural networks, as discussed in a previous post about the use of evolutionary algorithms in robotics. By assigning a certain importance to specific data sources, robots and complex software can learn how to go about certain tasks.
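To make that weighting idea concrete, here's a minimal sketch in Python of a single artificial neuron; the data sources, weights, and threshold below are all hypothetical, picked only to show how importance values drive a decision.

```python
# A toy, single-neuron "decision maker": it weights a few data sources
# and fires when the combined evidence crosses a threshold. The feature
# names, weights, and readings are all invented for illustration.

def neuron_decision(inputs, weights, threshold=0.5):
    """Return True if the weighted sum of the inputs crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation > threshold

# Three hypothetical data sources, with a learned "importance" for each.
weights = [0.7, 0.2, 0.1]   # a camera feed matters far more than the rest
readings = [0.9, 0.4, 0.1]  # current sensor readings, scaled to 0..1
print(neuron_decision(readings, weights))  # True: the camera's evidence dominates
```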

However, what they can't do is apply the same kind of model to abstract decision-making and understand things like exceptions, degrees of certainty, or changes in what could loosely be called their opinions. Until now. A new AI language looks to make autonomous systems much more flexible than their counterparts with hardcoded knowledge bases…

Developed by Noah Goodman, an MIT research scientist specializing in cognition and artificial intelligence, it's a major update to how many AI prototypes see the world. The goal is to make a computer understand that not everything is black and white, true or false, and that different statements rest on varying degrees of evidence subject to change in the future. Rather than being hardcoded to hold that the sky is blue, AI software using the approach laid out by Goodman's team understands that the sky appears blue and that its color will change over the course of a normal day, or during a storm. The actual changes and their degrees are predicted by a probabilistic model: if it's raining heavily, the sky is most probably gray due to the rain clouds, and if it's sunny, the sky is likely blue.
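Goodman's actual language isn't shown in the article, but the gist can be sketched in a few lines of Python: instead of a hardcoded fact, the program holds a conditional distribution over sky colors given the weather, with made-up probabilities standing in for learned ones.

```python
import random

# A toy version of the sky example: instead of hardcoding "sky = blue,"
# the program keeps a conditional distribution over colors given the
# weather. Every probability here is invented purely for illustration.

SKY_COLOR_GIVEN_WEATHER = {
    "sunny":  {"blue": 0.90, "gray": 0.05, "orange": 0.05},
    "stormy": {"gray": 0.85, "blue": 0.10, "orange": 0.05},
}

def sample_sky_color(weather):
    """Draw a sky color from P(color | weather)."""
    dist = SKY_COLOR_GIVEN_WEATHER[weather]
    colors, probs = zip(*dist.items())
    return random.choices(colors, weights=probs)[0]

print(sample_sky_color("stormy"))  # most probably "gray"
print(sample_sky_color("sunny"))   # most probably "blue"
```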

It seems obvious to us because we've been observing and noting that fact since birth, but for computers to come to similar conclusions in a very similar way is a significant step. It also shows that we don't necessarily have to go out of our way to replicate biological structures in vast simulations in hopes of finding a clue to how we could get machines to think. We only need to reproduce the patterns and give the machines some free rein to learn on their own. And it certainly seems to work, according to an early report:

If the e-mail patterns in the sample case formed a chain — Alice sent mail to Bob who sent mail to Carol, all the way to, say, Henry — human subjects were very likely to predict that the e-mail patterns in the test case would also form a chain. If the e-mail patterns in the sample case formed a loop — Alice sent mail to Bob who sent mail to Carol and so on, but Henry sent mail to Alice — the subjects predicted a loop in the test case, too. A program using probabilistic inferences, asked to perform the same task, behaved almost exactly like a human subject… But conventional cognitive models predicted totally random e-mail patterns in the test case: they were unable to extract the higher-level concepts of loops and chains.

At first glance, it might seem that the probabilistic models are just copying whatever they saw last, and to some extent, they are. That's okay, because what's really going on behind the scenes is about as close as a machine can get to understanding what loops and chains are and how to plot them. By contrast, a hardcoded model simply knows that e-mails in a company could be sent by anyone to anyone, runs with that idea to produce wildly random e-mail patterns, and never considers how the e-mails are actually being sent within the company or the patterns they form.
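Again, this isn't the team's code, just a rough Python sketch of the difference: score a few structural hypotheses against the observed e-mail pattern and predict with whichever one the evidence supports, rather than treating every sender/recipient pair as equally likely. The names, edges, and probabilities are all invented.

```python
# A rough sketch of hypothesis-level inference on the e-mail task:
# compare how well "chain," "loop," and "anyone-to-anyone" explain the
# observed messages, then predict with the best-supported structure.

people = ["Alice", "Bob", "Carol", "Dave"]

def likelihood(edges, hypothesis):
    """How well does a structural hypothesis explain the observed edges?"""
    chain = [(people[i], people[i + 1]) for i in range(len(people) - 1)]
    if hypothesis == "chain":
        expected = set(chain)
    elif hypothesis == "loop":
        expected = set(chain) | {(people[-1], people[0])}
    else:  # "random": every ordered pair of people is equally likely
        n_pairs = len(people) * (len(people) - 1)
        return (1.0 / n_pairs) ** len(edges)
    # A perfect structural match is likely; anything else is very unlikely.
    return 1.0 if set(edges) == expected else 1e-6

observed = [("Alice", "Bob"), ("Bob", "Carol"),
            ("Carol", "Dave"), ("Dave", "Alice")]
scores = {h: likelihood(observed, h) for h in ("chain", "loop", "random")}
print(max(scores, key=scores.get))  # "loop": the concept, not just the edges
```

Note that the flat model only ever has the "random" hypothesis available to it, which is exactly why it predicts scattershot e-mail patterns in the test case.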

When we get right down to it, this is how the basic abstraction we use on a daily basis works, and knowing how to make a computer do it lets us task it with organizing and summarizing much more complex patterns than we usually could. The only problem with your work computers doing the same thing as the probabilistic AI models is the level of computing overhead it takes to run through all the required calculations. But as computers keep ramping up in processing speed, we could see predictive and learning components in the enterprise software systems of the future.

# tech // artificial intelligence / computer models / computer science
