
jordan peterson’s fan club goes after computer science and big data

Followers of viral regressive ideologues found a new supposed hive of politically correct scum and radical feminist villainy: computer science.
[ illustration: older hacker ]

Not too long ago, we saw how members of the “intellectual dark web” tackle a real topic they say demands nuance the politically correct leftist media refuses to give it, then quickly plunge into a Gish Gallop of baseless regressive clichés, pretending their grand insights haven’t already been debunked by just about anyone who isn’t too lazy to check. Today, we’re going to do it again, but instead of talking about sex with Debra Soh, we’re going to talk about big data as seen by a fan of Jordan Peterson. I’m sure said fan thought he was writing a dire warning about STEM under assault by the Marxist Matriarchy, but in reality, he came across as a screeching pod person who just spotted a human yet to be replaced by his kind.

You see, the poor dear has his fedora askew because his computer science program is asking him to read a book by mathematician Cathy O’Neil, whom he describes not as an expert in the field rightfully cautioning future comp sci and big data majors to be careful what data they use to train artificial intelligence models, but as a blue-haired Occupy Wall Street Black Lives Matter radical feminazi (yeah, his kvetching is about 5% substance, 90% ad hominem, and 5% woe-is-me lamentation), all because she had the temerity to suggest that the data used to train AI could be biased and end up exacerbating various social ills.

And because this hysterical neckbeard is so upset that people in comp sci care about bias, and about accidentally screwing up people’s lives because we didn’t screen the data sets on which we’re basing our algorithms, he can’t see a future in the industry. To which, as a computer science person, I’d like to say good riddance to bad, self-eliminating rubbish. Imagine having to explain to a regressive victim of the Dunning-Kruger effect, who doesn’t believe that discrimination even exists, that maybe, just maybe, using data from areas of the country known for racial discrimination in lending is a bad way to train an AI that will decide whether to approve someone’s mortgage, and could expose the lender to lawsuits on top of the moral wrong being committed.

how big of a problem is bias in big data and artificial intelligence?

O’Neil is very much on the mark when pointing out that the historical data we would use in finance, policing, and the criminal justice system comes with decades of explicit and implicit biases and racism. This is a well known problem, and we have already seen cases where AI trained on such massive but biased data sets made things worse. As mentioned on an episode of the World of Weird Things Podcast, a prime example is a system under which black defendants were 77% more likely than whites to be pegged as high risks for committing a future violent crime, based mostly on the color of their skin. A ProPublica investigation into this algorithm, called COMPAS, explains the problem in its opening anecdote.

Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile. Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth of electronics.

Why does COMPAS seem to get it wrong again and again? Because the data sets used to train it carried an implicit bias against minorities, particularly African Americans, who are routinely rated as higher risk even when they don’t break the law again, while whites are rated as lower risk even as they go on to reoffend. All the AI is doing is repeating the inherent bias of its training data without being re-trained to fix its mistakes, and that has a severe downside. By rating people who do end up reoffending and graduating to more serious crimes as lower risks just because of their race, it’s reducing their punishments and giving them the opportunity to plot their next misdeed under the radar.
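To make that mechanism concrete, here’s a minimal, purely illustrative sketch, and emphatically not COMPAS itself, whose internals are proprietary: we simulate two groups with identical real reoffense rates, skew the historical labels against one of them, and train an off-the-shelf classifier on a proxy feature that leaks group membership. Every number and feature below is made up for the demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# The true underlying reoffense rate is identical for both groups.
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
truly_reoffends = rng.random(n) < 0.45     # same 45% base rate everywhere

# Historical labels are skewed: group B gets recorded as having reoffended more
# often even when they didn't (think heavier policing of their neighborhoods).
extra_arrests = (group == 1) & (rng.random(n) < 0.30)
historical_label = truly_reoffends | extra_arrests

# Race is never fed to the model, but a proxy feature (say, the prior-arrest
# density of someone's zip code) leaks group membership anyway.
proxy = group + rng.normal(0, 0.5, n)
X = np.column_stack([proxy, rng.normal(0, 1, n)])  # second column is pure noise

model = LogisticRegression().fit(X, historical_label)
flagged_high_risk = model.predict(X)

# False positive rate: flagged as high risk despite never actually reoffending.
for g, name in ((0, "group A"), (1, "group B")):
    innocent = ~truly_reoffends & (group == g)
    print(f"{name}: false positive rate = {flagged_high_risk[innocent].mean():.2f}")
```

Run it and the classifier, which never sees race directly, should still flag a much larger share of group B’s non-reoffenders as high risk. It hasn’t discovered anything about who actually breaks the law again; it has simply memorized the skew baked into its labels.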

So this isn’t just a question of justice, it’s a question of keeping criminals off the streets, and a matter of not trusting an algorithm that turned out to be only marginally more effective than a coin flip at predicting recidivism. Now, imagine going back to the programmers behind it armed with real world data showing their software’s bias and receiving, instead of a possible solution, a lecture about how anti-white PC Marxist snowflakes are destroying law enforcement and criminal justice. Because that is exactly what you’ll get from those who worship the headphones broadcasting Peterson’s ramblings about the coming Marxist Radfem Brigades into their ears, along with more criminals being let loose to keep breaking the law.
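And if you actually wanted to assemble that real world data, the audit itself isn’t exotic. Assuming you have a table of scored defendants and whether each one went on to reoffend within two years, which is the setup ProPublica used, the headline numbers are overall accuracy versus a coin flip and the error rates broken out by race. The column names in this sketch are assumptions for illustration, not any standard schema.

```python
import pandas as pd

def audit_scores(df: pd.DataFrame) -> None:
    """Compare risk predictions to observed outcomes, overall and by race.

    Assumes boolean columns 'predicted_high_risk' and 'reoffended_within_2yr'
    plus a 'race' column -- hypothetical names used only for this sketch."""
    accuracy = (df["predicted_high_risk"] == df["reoffended_within_2yr"]).mean()
    print(f"overall accuracy: {accuracy:.2f} (a coin flip scores about 0.50)")
    for race, grp in df.groupby("race"):
        stayed_clean = grp[~grp["reoffended_within_2yr"]]
        reoffended = grp[grp["reoffended_within_2yr"]]
        fpr = stayed_clean["predicted_high_risk"].mean()    # flagged, but never reoffended
        fnr = (~reoffended["predicted_high_risk"]).mean()   # missed actual reoffenders
        print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")

# Hypothetical usage with a table of past scores and two-year outcomes:
# audit_scores(pd.read_csv("compas_style_scores.csv"))
```

A lopsided gap between those per-group error rates is exactly the kind of evidence ProPublica published, and exactly the kind of evidence the fan club insists can’t exist.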

the near-future repercussions of big data gone wrong

COMPAS is just one example of how training based on biased data can go very wrong once the result is deployed in real world settings. As more decisions are made by artificial intelligence, with billions of data points constantly crunched by cloud servers, the fallout from failing to consider the full effects of your code will snowball. And as people figure out that the algorithms really are biased against them based on their skin color, name, circle of friends, or whatnot, lawsuits will come fast and furious, and since judges and juries will be dealing with very abstract technology, the process will be a mess for years as the first precedents are hashed out in the courts.

This is why it’s so important for computer scientists to get it right coming out of the gate, to blind the computer to race and other markers of gross inequity, and to make sure the algorithms only consider valid personal merits when making their decisions. Even a decade ago, we could explicitly code such considerations into our loops and blocks. Today, there are too many relationships, many of them too complex to enumerate and define, so we rely on statistical formulas to find them in oceans of data. Recognizing that this data may not be pristine and could lead to the problems we just dove into isn’t PC culture invading comp sci, it’s basic professional competence at this point, despite the whining coming from the “dark intellectuals’” fan club.
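What does that basic competence look like in practice? One small piece of it is checking, before anything gets trained, whether dropping the protected attribute actually removed it, because zip codes, surnames, or schools can quietly reconstruct it. Here’s a rough sketch of that kind of pre-training audit, assuming the training table is a pandas DataFrame with a known protected column; the column names and the warning threshold are illustrative, not any industry standard.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def audit_for_proxies(df: pd.DataFrame, protected: str, threshold: float = 0.65) -> float:
    """Estimate how easily the protected attribute can be reconstructed
    from the remaining features after it has been dropped."""
    X = pd.get_dummies(df.drop(columns=[protected]))
    y = df[protected]
    score = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5).mean()
    if score > threshold:
        # A score well above chance means something left in the data -- zip code,
        # surname, school -- is standing in for the protected attribute.
        print(f"warning: '{protected}' is predictable from other features (accuracy {score:.2f})")
    return score

# Hypothetical usage on a table of past lending decisions:
# loans = pd.read_csv("historical_mortgage_decisions.csv")
# audit_for_proxies(loans, protected="applicant_race")
```

It’s nowhere near a complete fairness toolkit, but it’s the sort of basic sanity check that separates “we didn’t know” from “we didn’t look.”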

And since we’ve been circling around this point for a while, let’s just bluntly state what should be obvious. Peterson, his fans, and his fellow travelers simply don’t believe that there is actual, real discrimination out there, and insist that whatever problems or setbacks befall minorities are the fault of those minorities. Their slogan is that they believe in equality of opportunity, not equality of outcomes, while blatantly ignoring proof that many do not have the same opportunities as others. Of course, there is one exception they will make: young white men. If they’re behind in any way, they’re victims of the radical leftists and need to be dragged across the finish line to success. Well, after they clean their rooms, of course.

# tech // big data / computer science / social commentary

