why my dog is smarter than your a.i.

When thinking about AI, we often focus on human analogs. But what if we consider a non-human intelligence for a change?
seamus

When we moved to LA to pursue our non-entertainment related dreams, we decided that when you’re basically trying to live out your fantasies, you might as well try to fulfill all of them. So we soon found ourselves at a shelter, looking at a relatively small, grumpy Wookiee who wasn’t quite sure what to make of us. Over the next several days we got used to each other, and he showed us that underneath the gruff exterior was a fun-loving pup who just wanted some affection and attention, along with belly rubs. Lots and lots of belly rubs. We gave him a scrub down and a trim at the groomer’s, changed his name to Seamus because, frankly, he looked like one, and took him home. Almost a year later, he’s very much a part of our family, and one of our absolute favorite things about him is how smart and affectionate he turned out to be. We don’t know what kind of a mix he is, but his parents must have come from intelligent breeds, and while I’m sure there are dogs out there smarter than him, he’s definitely no slouch when it comes to brainpower.

And living with a sapient non-human made me think quite a bit about artificial intelligence. Why would we consider something or someone intelligent? Well, because Seamus is clever, he has an actual personality instead of just reflexive reactions to food, water, and opportunities to mate, which, sadly, are no longer an option for him thanks to a little snip snip at the shelter. If I throw treats his way to lure him somewhere he doesn’t want to go and he’s seen this trick before, his reaction is just to look at me and take a step back. Not every treat will do, either. If it’s not chewy and gamey, he wants nothing to do with it. He’s very careful with whom he’s friendly, and after a past as a stray, he’s always ready to show other dogs how tough he can be when they stare too long or won’t leave him alone. Finally, from a scientific standpoint, he can pass the mirror test, and when he gets bored, he plays with his toys and raises a ruckus so we play with him too. By most measures, we would call him an intelligent entity and definitely treat him like one.

When people talk about biological intelligence being different from the artificial kind, they usually refer to something they can’t quite put their finger on, which immediately gives Singularitarians room to dismiss their objections as “vitalism” and unnecessary to address. But that’s not right at all, because that thing on which non-Singularitarians often can’t put their finger is personality: an intricate, messy process of responding to the environment that involves more than meeting needs or following a routine. Seamus might want a treat, but he wants this kind of treat, and he knows he needs to shake or sit to be allowed to have it. If he doesn’t get it, he will voice both his dismay and frustration, reacting to something in his environment that he sees as unfair and now wants corrected. And not all of his reactions are food related. He’s excited to see us after we’ve left him alone for a little while, and he misses us when we’re gone. My laptop, on the other hand, couldn’t give less of a damn whether I’m home or not.

No problem, say Singularitarians, we’ll just give computers goals and motivations so they can develop a personality and certain preferences! Hell, we can give them reactions you could confuse for emotions too! After all, if it walks like a duck and quacks like a duck, who cares if it’s a biological duck or a cybernetic one if you can’t tell the difference? And it’s true, you could just build a robotic copy of Seamus, mimic his personality, and say that you’ve built an artificial intelligence as smart as a clever dog. But why? What’s the point? How does that put a piece of technology meant for complex calculations and logical flows to its intended use? Why go to all this trouble to recreate something we already have, for machines that don’t need it? There’s nothing divinely special about biological intelligence, but to dismiss it as just another set of computations you can mimic with some code is reductionist to the point of absurdity, an exercise in behavioral mimicry for the sake of achieving… what exactly?
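To make the mimicry point concrete, here is a toy sketch (every name and behavior in it is a hypothetical illustration, not anyone’s actual design): a handful of hard-coded rules can reproduce the surface behaviors described above, preferring chewy, gamey treats and performing a trick on demand, without anything resembling a personality behind them.

```python
# A toy sketch of behavioral mimicry: surface rules, no inner life.
# Everything here is a hypothetical illustration, not a real AI design.

class RoboSeamus:
    """Mimics a few observed behaviors with hard-coded rules."""

    PREFERRED_TEXTURES = {"chewy", "gamey"}

    def react_to_treat(self, texture: str) -> str:
        # The "preference" is just a set-membership test.
        if texture in self.PREFERRED_TEXTURES:
            return "shake"      # performs the trick for the reward
        return "step back"      # ignores the lure, like the real dog

    def owner_returns(self) -> str:
        # The "excitement" is a canned response, not a felt emotion.
        return "wag tail"


bot = RoboSeamus()
print(bot.react_to_treat("chewy"))    # shake
print(bot.react_to_treat("crunchy"))  # step back
print(bot.owner_returns())            # wag tail
```

The sketch passes a quick behavioral check, yet nothing in it corresponds to wanting, missing, or being frustrated, which is exactly the gap between mimicking a personality and having one.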

So many people all over the news seem so wrapped up in imagining AIs that have a humanoid personality and act the way we would, warning us about the need to align their morals, ethics, and value systems with ours, but how many of them ask why we would even want to try to build them? When we have problems that could be efficiently solved by computers, let’s program the right solutions, or teach them the parameters of the problem so they can solve it in a way that yields valuable insights for us. But what problem do we solve by trying to create something able to pass for human for a little while, and then having to raise it so it won’t get mad at us and decide to nuke us into a real world version of Mad Max? Personally, I’m not the least bit worried about the AI boogeymen from the sci-fi world becoming real. I’m more worried about a curiosity built for no other reason than to show it can be done being programmed to get offended or even violent because that’s how we can get, turning a cold, logical machine into a wreck of unpredictable pseudo-emotions that could end up with its creators being maimed or killed.

# science // artificial intelligence / computer science / intelligence
