why regulating a.i. should have nothing to do with a.i. itself

When we talk about regulating artificial intelligence, we need to step back and ask ourselves a very important question: what exactly are we regulating, and why?

Despite what you may have been told by the Trump administration, artificial intelligence is busy automating jobs and being applied to problems ranging from classifying business listings and making sense of millions of documents at a time, to curating news feeds and de-radicalizing frustrated young men with sex bots. And as AI becomes more and more powerful thanks to the sheer number of servers and the vast datasets we can throw at it during training, people are wondering if it’s time for governments to step in and start regulating how it’s used and why. If you can reshape geopolitics with relatively few lines of code, maybe there needs to be some sort of authority that makes sure you’re doing it right, and that those who’ll be affected have plenty of warning to figure out how they’ll need to adapt.

But according to a Canada-based Medium publication called Towards Data Science, this would be a huge mistake for reasons outlined in a currently paywalled article, reasons which seem to miss the point entirely. Basically, one of their writers argues that we can’t regulate science, that lawmakers don’t know what they would be regulating, and that we’re in a global competition to develop new and exciting AI technologies, so pumping the brakes in the middle of a contest would make no sense and only allow our competitors to pull ahead. Not everything here is wrong, and there are some very pertinent points, which we’ll definitely touch on. However, let’s start with the biggest problem: the idea that we can’t regulate science even if we try.

we already regulate science, a lot

While you seldom hear about legal restrictions when it comes to many of the papers covered here or on other popular science sites, there are some very heavily regulated fields of research. Research into nuclear energy requires scientists and engineers to follow strict procedures to ensure safety, and medical scientists are heavily restricted in how they can use patient data, what substances they can test and how, and what kind of consent they need from subjects and how it can be obtained, while many experiments require an ethics board to oversee safety and compliance with the appropriate laws.

But according to the article, not everyone follows these rules, as the world saw with He Jiankui’s gene-editing experiment on live humans, so we may as well quit trying to restrict scientific creativity. However, this logic is flawed. By the same reasoning, we could say that since people drink and drive despite laws against drunk driving, we should just let them get sozzled after a rough day at the office and let Darwin take the wheel. Knowing that doing certain things will get you in hot water with the authorities is enough to stop many potential abuses that could have real, lifelong consequences for their test subjects. In fact, this kind of cruel indifference to study participants is what prompted these regulations in the first place.

For scientists like He, the regulations provide swift and well-understood public punishment that will deter other researchers tempted to flout the law. Yes, he performed a forbidden and unethical experiment despite there being laws against it, but let’s not ignore the fact that his career as a scientist is now over, that there was a massive uproar about his actions across the global scientific community, and that many ethics boards and universities have likely cracked down on their own mad scientists. He crossed a line and is now paying the price for it.

do politicians know what they’re regulating with a.i.?

But while making sure that research isn’t hurting people or producing something dangerous without proper safeguards is extremely important, what exactly would we be trying to prevent by regulating AI? An obvious scenario would be an AI with the ability to decide which people should be killed and act on that decision outside of military control. We might also want to prevent AI from being placed in charge of decisions where humans should weigh in, like sentencing criminals, a task in which AI ended up doing little more than laundering historical biases, setting it up for failure.
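To see how that laundering happens, consider a minimal sketch with purely synthetic data, assuming NumPy and scikit-learn; this is a hypothetical illustration, not a reconstruction of any real sentencing system. If the historical decisions a model learns from were skewed against one group, the model dutifully reproduces the skew instead of measuring actual risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "history": outcomes were driven partly by real risk, but
# one group was also penalized by biased human decision-makers.
rng = np.random.default_rng(42)
n = 10_000
risk = rng.normal(size=n)            # the factor we actually care about
group = rng.integers(0, 2, size=n)   # 0 or 1: a protected attribute

logits = risk + 1.5 * group          # the baked-in historical penalty
past_decisions = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train on the biased record, exactly as a naive deployment would.
model = LogisticRegression().fit(
    np.column_stack([risk, group]), past_decisions
)

# Two people with identical risk get very different scores based on
# group membership alone -- the bias is now "objective" model output.
same_risk = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_risk)[:, 1])  # roughly [0.5, 0.8]
```

The bias lives in the data, not in any exotic algorithm, which is part of why regulating what data can be fed to these systems is a far more tractable target than regulating the math.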

Beyond that, however, what else would be a desired outcome of AI regulation? This is where the article in question points out a very real issue with how the technology could be regulated. Few people really understand what AI is and how it works because it’s a very specific skill set, even among programmers, and that knowledge gap is exponentially worse for today’s lawmakers, who regularly embarrass themselves when dealing with tech companies. They would be trying to accomplish vague and questionable goals through laws whose effects they don’t grasp, regulating something many of them don’t really understand.

That said, we should absolutely pay attention to how AI is being used and bring up problems we notice along the way, because we will ultimately need regulations of some sort. Right now, what lawmakers need to be targeting is the private data of individuals and how it could and couldn’t be used by tech companies. This alone could prevent numerous future abuses by denying an AI the training data from which it would learn to make debilitating decisions, and spare us a future in which we try to put genies back in bottles instead of keeping them from getting out in the first place. It’s also a far more relatable and easily understood place to start.

competition is no excuse for a lack of necessary regulation

And this is where Towards Data Science dissents, saying that since we’re in a global AI race, we can’t hobble ourselves with regulation while someone else goes down a legally forbidden avenue to reap the rewards. But this attitude seems to betray a lack of understanding on the author’s part of what artificial intelligence really does. Not all AI tools are the same, and having more AI tools is not guaranteed to be beneficial, especially if those tools are made for questionable reasons, with bad data, and used for dubious things. Unlike the space race, which was about building bigger, better, more capable rockets, the competition in artificial intelligence is about finding novel ways to reduce our cognitive workload on mundane tasks we can hand off to machines while we apply our creativity elsewhere.

If China wants to apply AI to enhance its police state and give its minders more time to act on automated tips while Western nations ban artificially intelligent stalking of citizens, the West wouldn’t be surrendering a capability to a competitor. It would be taking a principled stand against a way of abusing this technology to make people’s lives worse. Treating every application of AI the same way is like following your business rivals into dealing with organized crime because they benefit from it today and you might as well too, ignoring the fact that it’s illegal and immoral, and that there will be consequences if you’re caught, or if your new criminal partners turn against you because they were bribed to do so or because you became a liability.

so, what would a.i. regulations look like?

Despite arguing that we need regulation when it comes to AI in the real world, notice that whenever regulation has come up in this post, the focus hasn’t been on the technology itself but on its uses. That’s because we can’t regulate the technology itself; in this case, it would mean trying to pass laws on which math is or isn’t acceptable. Saying that we need to regulate AI itself is like saying that we need to regulate hammers and decide how they should be shaped or how much they can weigh, which would be a huge waste of everyone’s time and lead to bizarre laws written by people who don’t understand how hammers are built, or who have some very serious misconceptions about what hammers are and how they’re used in the real world.

Imagine banning a sigmoid activation function, or limiting the hidden layers in a larger neural network to three or fewer. That would be both arbitrary and unnecessary. But just like we have laws that let anyone use hammers for construction and around the house while criminalizing their use as a weapon, we can consider AI as a tool which can have harmful uses and focus on preventing them. Instead of trying to limit its capabilities — which we’re still only starting to truly understand — in the hopes that we can make it impossible to abuse its power, we should focus on how it’s being deployed and why. And in being afraid of the former, the opponents of getting politicians involved in discussions about the future of AI may be hobbling our efforts to lay the groundwork for effectively managing the latter.
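For a sense of just how arbitrary such a ban would be, here’s a minimal sketch in plain NumPy, a toy illustration rather than anyone’s production system, of what the law would actually outlaw: the sigmoid is one line of arithmetic, and each “hidden layer” is just a matrix multiplication.

```python
import numpy as np

def sigmoid(x):
    # The "banned" activation: it just squashes any number into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# A toy network capped at three hidden layers -- exactly the kind of
# arbitrary limit a misguided law might impose. Weights are random,
# purely for illustration.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 8, 2]  # input, three hidden layers, output
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]

def forward(x):
    # Each "regulated" layer is one matrix multiply plus the sigmoid.
    for w in weights:
        x = sigmoid(x @ w)
    return x

print(forward(rng.standard_normal(4)))  # two numbers between 0 and 1
```

A statute against either construct is a statute against basic arithmetic. What the network is trained on and what its outputs are used for is where the real-world harm lives, and that’s where the regulation belongs.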

# tech // artificial intelligence / computer science / regulation

