Every internet community has them, and many have been killed by them. Trolls crave two things above all: attention and a platform to broadcast whatever comes to mind. Every time one appears, you can safely bet that someone will admonish the users engaging with them not to feed the troll, as per the common axiom. But what if, just to propose something crazy here, there are reasons to talk to them, downvote them, and otherwise show your displeasure? Maybe an appropriate amount of pushback would finally drive home the message that they're not wanted, and they'd either leave or give up their trollish ways. Either way, it would be an improvement. So, following this hypothesis, a small group at a Bay Area college collected 42 million comments from huge gaming, political, and news sites, with a grand total of 114 million votes spanning as many as 1.8 million unique users, to figure out once and for all whether you can downvote trolls into oblivion or force them to contribute productively. Unfortunately, the answer is a pretty definitive no.
After training an artificial neural network on the actual discussion threads to gauge whether comments deserved an upvote or a downvote, the researchers followed users' comment histories to see how feedback from others affected them over time. They found that users who were ignored simply stopped participating, which seems logical enough: it's a waste of time and effort to shout into the digital aether with no feedback. But when the computer followed the trolls, the data showed that even withering negativity had virtually no effect on what they posted or how much. Their comments didn't change, and they didn't seem to care at all about the community's opinion of them. If they wanted to antagonize people, they kept right on doing it. True, not every person who provokes a flood of negativity in response is a troll. Some of the political sites in the sample are extremely partisan, so any deviation from the party line can provoke a dogpile. But by the same token, while not every maligned comment is trollish, most trollish comments are maligned, so the idea still holds.
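To make the general approach concrete, here's a deliberately tiny sketch of the idea, not the researchers' actual model: learn per-word weights from comments labeled by their vote balance, then score a user's later comments to see whether their tone shifts. All the comments, usernames, and numbers below are made up for illustration.

```python
# Toy sketch of vote-prediction on comment text. NOT the study's actual
# neural network; just a word-weight scorer to illustrate the pipeline:
# train on labeled comments, then score a user's comment history.
from collections import defaultdict

def train(comments, lr=0.1):
    """Nudge each word's weight toward its comment's label
    (+1 = net upvotes, -1 = net downvotes)."""
    weights = defaultdict(float)
    for text, label in comments:
        for word in text.lower().split():
            weights[word] += lr * label
    return weights

def score(weights, text):
    """Predicted vote balance for a new comment: sum of its word weights."""
    return sum(weights[w] for w in text.lower().split())

# Invented training data standing in for vote-labeled discussion threads.
training = [
    ("great point thanks for the source", +1),
    ("thanks for the thoughtful reply", +1),
    ("you idiots are all sheep", -1),
    ("wake up sheep this site is garbage", -1),
]
weights = train(training)

# Follow a hypothetical user's comment history over time.
history = ["thanks for the link", "you sheep never learn"]
print([round(score(weights, c), 2) for c in history])
```

A real model would use far richer features than bag-of-words, but the workflow is the same: fit on vote-labeled comments, then track each user's predicted scores across their posting history.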
With this in mind, how do we police trolls? Not feeding them does seem to be the best strategy, but considering how many of us suffer from SIWOTI syndrome — and yes, I'm no exception by any stretch of the imagination, since half this blog is a manifestation of it — and simply will not let trollish things go, it's not always feasible. This is why shadow banning is by far the most effective technique for dealing with problematic users. Because they don't know they're in their own little sandbox, invisible to everyone else, their attempts to garner attention are always ignored, so they get bored and leave. Of course this method isn't foolproof, but a well-designed and well-run community will quickly channel even repeat offenders into the shadow-banned abyss to be alone with their meanderings. In short, according to science, the best thing we can do to put a stop to trolling is to aggressively ignore trolls, as paradoxical as that sounds at first blush.
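The mechanism behind shadow banning is simple enough to sketch in a few lines. This is an illustrative toy, not any real platform's implementation; the class and usernames are invented:

```python
# Minimal sketch of shadow banning (illustrative only): a shadow-banned
# user's comments are stored normally, but filtered out of everyone
# else's view. The troll sees their own posts as usual, so nothing
# looks wrong from their side; the community sees nothing at all.
class Forum:
    def __init__(self):
        self.comments = []          # (author, text) pairs, stored for all
        self.shadow_banned = set()  # authors only they themselves can see

    def post(self, author, text):
        # Posting always "succeeds", even for shadow-banned users.
        self.comments.append((author, text))

    def visible_to(self, viewer):
        """Everyone sees normal comments; a shadow-banned author also
        sees their own, so the ban stays invisible to them."""
        return [(a, t) for a, t in self.comments
                if a not in self.shadow_banned or a == viewer]

forum = Forum()
forum.shadow_banned.add("troll42")
forum.post("alice", "Interesting study!")
forum.post("troll42", "you are all sheep")

print(forum.visible_to("alice"))    # only alice's comment shows up
print(forum.visible_to("troll42"))  # both appear; the ban is invisible
```

The key design choice is that the write path never fails: the troll gets the normal posting experience while the read path quietly filters them out, which is exactly what starves them of the feedback the study says they shrug off anyway.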