
the amazing intuitive, understanding a.i.?

An aspiring AI designer picks a very passionate fight with a way of thinking about AI that's been dead for three decades.
philosobot v2

Having gotten nowhere with Andrea Kuszewski on finding out what AI system could benefit from a therapist while writing my previous post, I did what every blogger does when stonewalled by invocations of secrecy and non-disclosure paperwork. Using a little basic Google-fu, I found a good description of what the company that hired her, Syntience, seems to be up to, just in time for Kuszewski to post a follow-up linking to the very ideas from the company’s CEO, Monica Anderson, that she had presented as a trade secret the day before. Anderson’s thesis on AGI models even has its own site and a fairly sparse blog in which she details her conception of how an AGI should act. Now, her ideas aren’t outright wrong in a conventional sense because, let’s face it, there’s a lot of philosophy in the AI world, and in philosophy there are few answers, only opposing viewpoints or similar ideas merging into a general consensus on a particular subject. But there is plenty of room for technical objections. For starters, here’s a summary of Anderson’s theory of what’s wrong with AI today…

There has been a mismatch between the properties of the problem domain and the properties of the attempted solutions. […] I have argued (like many others) that Intelligence is the ability to solve problems in their contexts. In contrast, programming deals with discarding contexts and breaking down complex problems into simple subproblems that can then be programmed as portable and reusable subroutines in the computer. Programming is the most Reductionist profession there is and has therefore mainly attracted the kinds of minds which favor Reductionist approaches. That means that one of the most Holistic phenomena in the world — Intelligence — was being attacked by the most Reductionist researchers in the world. This is the mismatch; the biggest problem with AI research in the 20th century was that it was done by programmers.

Ouch! Certainly there’s a ring of truth to this, and as a programmer I am from the classic reductionist school of computer science in which everything can be described as a sum of discrete parts. Programming is all about taking a problem and breaking it down piece by piece because computers need to know what to do with the data you’re going to feed them through a database connection or manual input. But the notion that we just strip out the context falls flat on its face because context is how we determine control flow. In fact, context is everything, especially in modern, modular software architecture. Let’s say I have a program that stores the basic person and company information needed to keep track of customers for a particular business. Before coding, I’ll have to ask the client how this data will be used and by whom. Why? Because if it’s supposed to feed into programs used by different parts of the business, I’ll probably implement it as a service with its design based on things like what frameworks the other programs use and how old those programs are, to ensure interoperability. If this data is supposed to be viewed and edited by customers and used in a very limited scope, I might not have to build a service at all. Are there other processes triggered by a user editing certain information? Can I just call them asynchronously so the user can keep working on other things? It all depends on… the context.
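
Just to make that last point concrete, here’s a minimal sketch in Python (the function names and the billing follow-up are invented for the example, not taken from any real system) showing the kind of decision the context dictates: the customer edit itself completes immediately, while a slower downstream process it happens to trigger is dispatched asynchronously so the user can keep working.

```python
# Hypothetical illustration: the edit the user cares about finishes right away,
# while the follow-up work it triggers runs in the background.
import asyncio

async def notify_billing(customer_id: int) -> None:
    # stand-in for a slow downstream process triggered by the edit
    await asyncio.sleep(1)
    print(f"billing records refreshed for customer {customer_id}")

async def update_customer(customer_id: int, new_email: str) -> None:
    # the edit itself happens immediately...
    print(f"customer {customer_id} email set to {new_email}")
    # ...and the downstream process is kicked off without blocking the user
    asyncio.create_task(notify_billing(customer_id))

async def main() -> None:
    await update_customer(42, "jane@example.com")
    print("user is already free to keep working")
    await asyncio.sleep(1.5)  # keep the loop alive long enough for the background task

asyncio.run(main())
```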

Now that we have this covered, let’s return to Anderson’s main point, which is that AI was taken over by logical reductionists who value mathematics over intuition and that any AGI has to be intuitive. Well, that explains why Kuszewski used the word reductionist as much as she could towards me, since I fit the profile. However, what Anderson has labeled intuition can easily be captured by a reductionist model despite her objections on the matter. You see, while we can look at AI as a collection of discrete parts, it’s not only the parts we need to focus on, but how those parts interact. If you look at many modern AI models, they’re actually quite sparse. An enterprise project prototype for a small company looks positively brobdingnagian by comparison because it tries to specify every bit of data, every field, every layer, and every basic action, resulting in a big model to hold all those functional requirements. When it comes to AI, it’s not the size of your model that matters, it’s how you use it, if you’ll pardon the paraphrasing. The interest is in the emergent behaviors of these neural networks, not in specifying the control flow. We want to see how much can be done through the interplay of discrete and rather simple bits and pieces, the same kind of emergence Anderson insists can only happen within intuitive frameworks and which is actually a constant fixture of the reductionist AI she laments in her writing.
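
To show what that interplay of simple pieces looks like, here’s a minimal sketch, assuming nothing beyond NumPy and not drawn from any particular project: a handful of units that individually do nothing but sum their inputs and squash the result, yet together learn XOR, a function none of them can compute alone.

```python
# A toy feedforward network: 2 inputs -> 4 hidden units -> 1 output.
# Each unit is just a weighted sum, a bias, and a sigmoid squash.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the four XOR cases
y = np.array([[0], [1], [1], [0]], dtype=float)              # the targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # forward pass: nothing but sums and squashes
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: plain gradient descent on squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]; a stubborn seed may need more iterations
```

Nothing in any single unit encodes XOR; the behavior shows up only in how the connections between them settle, which is exactly the sort of emergence reductionist AI has been trading in for decades.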

Granted, there was a time when many AI researchers thought that all you needed to do was cram a machine with the information it would need to function and deterministically program an intellect. That time ended decades ago, when the first artificial neural networks were running through their experimental paces. Today, the ANN is a standard design pattern in AI tasks, and the focus so far has been on how to create the smallest network with the potential for a wide variety of emergent behaviors. The reasoning is that because intelligence evolved from an otherwise simple collection of specialized, interconnected cells, we should be able to break those cells down to their simplest abstractions and focus on the interplay between them. All of these are things Anderson’s thesis advocates, but they’re being done within the 20th century reductionist model she insists is wrong. It seems she simply hasn’t kept up with the literature in the field and came up with an idea that already caught on years ago. And why she thinks specifying a problem domain is a bad thing while alluding to evolution isn’t clear to me. Evolution places pressure on organisms with an intellect to complete discrete tasks, so the idea has the biological basis she demands, and having a computer just step out into the world to do nothing in particular won’t work since it needs at least some goal to achieve.
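
And if you want to see why some goal has to be there, here’s a toy sketch, with the task and names made up for the purpose: even the most hands-off evolutionary search, nothing but random mutation and selection, still needs a fitness function, that is, a discrete task against which candidates are judged.

```python
# A random-mutation hill climber evolving weights for a simple discrete task.
# Without the fitness function there is literally nothing to select for.
import random

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # toy task: logical AND

def fitness(weights):
    w1, w2, bias = weights
    correct = 0
    for (x1, x2), target in data:
        prediction = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        correct += prediction == target
    return correct  # the goal: more right answers is better

random.seed(1)
best = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(500):
    # mutate the current candidate and keep the child only if it does at least as well
    child = [w + random.gauss(0, 0.2) for w in best]
    if fitness(child) >= fitness(best):
        best = child

print(best, fitness(best), "out of 4 correct")
```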

# tech // artificial intelligence / computer science / reductionism / scientific research

