Why shouldn’t robots write you a speeding ticket? They’re just not smart or lenient enough.
When four researchers decided to see what would happen if robots issued speeding tickets, and what impact that might have on the justice system, they found out two seemingly obvious things about machines. First, robots make binary decisions, so if you’re over the speed limit, you get no leeway and no second chances. Second, robots are not smart enough to take into account all the little nuances a police officer weighs when deciding whether to issue a ticket. And herein lies the value of this study. Rather than trying to figure out how to get computers to write tickets and decide when to write them, something we already know how to do, the study showed that computers would generate significantly more tickets than human law enforcement, and that even the simplest human laws are too much for our machines to handle without many years of training and very complex artificial neural networks to understand what’s happening and why. A seemingly simple and straightforward task turned out to be anything but.
Basically, here’s the legal scholars’ argument in example form. Imagine you’re speeding down an empty highway at night. You’re sober, alert, in control, and a cop sees you coming and knows you’re speeding. You notice her, hit the brakes, and slow down to an acceptable 5 to 10 miles per hour over the speed limit. Chances are that she’ll let you keep going because you are not being a menace to anyone, and the sight of another car, especially a police car, is enough to relieve your mild case of lead foot. Try doing that on a crowded road during rush hour and you’ll more than likely be stopped, especially if you’re aggressively passing or riding bumpers. Robots will issue you a ticket either way because they don’t really track or understand your behavior or the danger you may pose to others, while another human can make a value judgment. Yes, this means that the law isn’t being enforced 100% of the time, but that’s ok because it’s not as important to enforce as, say, laws against robbery or assault. Those laws take priority.
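The contrast the scholars describe can be sketched as a pair of decision rules. This is a minimal, hypothetical illustration, not anything from the study itself; the function names, thresholds, and context factors are all assumptions made for the sake of the example. The point is simply that the machine’s rule is a single comparison, while the human’s folds in context:

```python
def robot_ticket(speed, limit):
    # The machine's rule is binary: any speed over the limit means a ticket.
    return speed > limit

def officer_ticket(speed, limit, slowed_on_sight, traffic, aggressive):
    # A human weighs context: a driver who slows down on an empty road
    # is often let go; aggressive speeding in heavy traffic is not.
    over = speed - limit
    if over <= 0:
        return False
    if aggressive:
        return True
    # Mild speeding, corrected on sight of the patrol car, on an empty road.
    if slowed_on_sight and traffic == "empty" and over <= 10:
        return False
    return True

# The empty-highway-at-night scenario from the text:
print(robot_ticket(75, 65))                              # True: no leeway
print(officer_ticket(75, 65, True, "empty", False))      # False: waved on
print(officer_ticket(75, 65, False, "rush_hour", True))  # True: pulled over
```

Even this toy version already hides the hard part: deciding what counts as “aggressive” or “slowed on sight” requires exactly the situational awareness the rest of the article is about.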
Even though this study was clearly done with lawyers in mind, there is a lot for the comp sci crowd to dissect as well, and it brings into focus the amazing complexity behind a seemingly mundane, if not outright boring, activity and the challenge it poses to AI models. If there’s such a rich calculus of philosophical and social cues and decisions behind something like writing a speeding ticket, just imagine how much more nuanced something like tracking potential terrorists half a world away becomes when we break it down at the machine level. We would need to create a system with a personality, compassion, and discipline all at once, in other words, a walking pile of stark contradictions, just like us. And then we’d need to teach it to find the balance between being objective and decisive on one hand, and compassionate and thoughtful on the other, depending on the context of the situation in question. We, who do this our entire lives, have problems with that. How do we get robots to develop such self-contradictory complexity in the form of probabilistic code?
Consider this anecdote. Once upon a time, yours truly and his wife were sitting in a coffee shop after a busy evening, talking about one thing or another. Suddenly, there was a tap on the glass window to my left, and I turned around to see a young, blonde girl with two friends in tow pressing her open palm against the glass. On her palm, she had written in black marker “hi 5.” So of course I high-fived her through the glass, much to her and her friends’ delight, and they skipped off down the street. Nothing about that encounter or our motivations makes logical sense to any machine whatsoever. Yet I’m sure you can think of reasons why it took place, propose why the girl and her friends were out collecting high fives through glass windows, why I decided to play along, and why others might not have. But this requires situational awareness on a scale we’re not exactly sure how to create, collecting so much information that processing it would probably take a small data center running recurrent neural networks weighing hundreds of factors.
And that is why we are so far from AI as seen in sci-fi movies. We underestimate the complexity of the world around us because we had the benefit of evolving to deal with it. Computers had no such advantage and must start from scratch. If anything, they have a handicap, because all the humans who are supposed to program them work at such high levels of cognitive abstraction that it takes them a very long time just to describe their process, much less enumerate each and every factor influencing it. After all, how would you explain disarming someone wielding a knife to someone who doesn’t even know what a punch is, much less how to throw one? How do you teach urban planning to someone who doesn’t understand what a car is and what it’s built to do? And just when we think we’ve found something nice and binary, yet complex enough to have real-world implications, to teach our machines, like writing speeding tickets, we suddenly find out that there was a small galaxy of things we just took for granted in the back of our minds…