
the six steps we can take to battle trolls, bots, and propaganda on the web

Trolls and bots are taking over social media by the millions. But we're not powerless. We can fight back with just a few easy steps.
Illustration from World of Warcraft: Warlords of Draenor

According to many users and experts, trolls are taking over the internet, and so is propaganda repackaged as slick, high quality news. This is a bleak development, but since we're talking about the internet here, there is hope. Technology got us into this mess, and it can also help a dedicated enough group of experts clean it up, going beyond today's report buttons and periodic flags on partisan outrage stories we know will go viral anyway, measures all too obviously doomed to fail. To tackle the many-pronged scourge of griefers and propagandists, some of whom have graduated from trolling for the lulz, as they did in the early days of the web, to doing it professionally, we need a many-pronged defense, ideas for which, for the sake of convenience, I've put in a numbered list.

01. Nuke the armies of bots and troll accounts

One of the biggest reasons fake news can spread like wildfire is that entire swarms of bots and sockpuppets share and link to any story generated by the conspiracy and propaganda ecosystem as soon as it comes out, until it reaches the true believers and gains traction. These bots have the combined effect of amplifying something that should've died in obscurity and making it seem to those who actually buy into the fake news that there are huge numbers of fellow true believers. To them, instead of the news coming from some fringe site, where anyone armed with little more than a bunch of text and $50 can set up shop and pretend to be a legitimate publication in an hour or two, it's a dramatic revelation on everyone's lips. Of course, the reality is that a whole lot of that everyone is thousands, if not hundreds of thousands, of bots built to fan rumors, not flesh and blood people who believe what they spread.

And social networks have every possible incentive to go after bots. They're often used for fraud, don't represent real users with valuable information for advertisers, and are regularly marshaled for click fraud and for artificially manipulating trends and conversations on the platforms. Yes, bots can make user totals look good, but when they grow into a very substantial part of your ecosystem, investors will be very upset, as the CEO of Twitter recently found out. We've become very good at identifying bot networks, so much so that universities now hunt them for sport, so any excuse for not acting on this information is hollow. Being able to flag them by their basic behavior as they build back up, then take them down, will make creating social media botnets an exercise in frustration. And with bots being little more than a liability, there's no reason to be cautious about maintaining a user base composed of real people, you know, the target market for these social networks, rather than software calling APIs.
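To make this less abstract, here's a minimal sketch in Python of the kind of behavioral scoring that could catch botnets as they rebuild; every field, threshold, and weight is an illustrative assumption, not any real platform's detection logic.

```python
from dataclasses import dataclass

# A minimal sketch of behavioral bot scoring. Every threshold and
# weight here is a placeholder, not a production-tuned value.

@dataclass
class Account:
    age_days: int           # how long the account has existed
    posts_per_day: float    # average posting rate
    repost_ratio: float     # share of posts that are reshares, 0.0-1.0
    duplicate_ratio: float  # share of posts matching other accounts verbatim

def bot_score(a: Account) -> float:
    """Crude 0-1 likelihood that an account is automated."""
    score = 0.0
    if a.posts_per_day > 50:     # humans rarely sustain this pace
        score += 0.35
    if a.repost_ratio > 0.9:     # almost nothing original
        score += 0.25
    if a.duplicate_ratio > 0.5:  # parrots other accounts word for word
        score += 0.25
    if a.age_days < 30:          # brand new and already hyperactive
        score += 0.15
    return min(score, 1.0)

suspect = Account(age_days=12, posts_per_day=140,
                  repost_ratio=0.97, duplicate_ratio=0.8)
print(bot_score(suspect))  # 1.0 -> queue for takedown review
```

In practice, scores like these would feed human review queues rather than trigger automatic bans, since false positives on real users are exactly the kind of thing that erodes trust.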

02. Purge old, inactive accounts for privacy and security

If you ever try to leave a social network, you may be surprised to find that it's a very Hotel California-esque experience. You can join, but you can't delete your account. Oh, the site will say it's deleted, but a single login or a request with your email and suddenly, it's as if you never left. And that's a problem, because these old accounts can be opened, left more or less dormant for years to build up credibility, then recruited into a botnet. Likewise, old accounts tied to compromised, long abandoned emails can be hijacked by hackers to steal whatever credibility the human user may have had. A user gone for six months is hardly generating any value and now poses a security risk, so a simple cleanup script scheduled to run once a day would be easy to implement, much appreciated by users who aren't coming back and don't want their information stored, and frustrating for hackers.
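For the sake of illustration, the daily job could be as simple as the sketch below, run against a hypothetical users table with a last_login column; the schema, the six-month window, and the cron slot are all made up for the example.

```python
import sqlite3
from datetime import datetime, timedelta

# Sketch of the once-a-day cleanup job: purge accounts untouched for
# six months. The `users` table and its columns are hypothetical.

def purge_dormant_accounts(db_path: str, days: int = 180) -> int:
    cutoff = (datetime.utcnow() - timedelta(days=days)).isoformat()
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        cur = conn.execute("DELETE FROM users WHERE last_login < ?", (cutoff,))
        removed = cur.rowcount
    conn.close()
    return removed  # number of accounts actually purged

# Scheduled with cron, e.g.: 0 3 * * * python purge_dormant.py
```

A real version would warn users by email first and purge in a second pass, but the point stands: this is a cheap script, not a moonshot.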

In a way, this is an extension of the previous tactic: it chokes off a stream of potentially legit, established-looking accounts ripe for exploitation. You don't want users to gain credibility through inaction and useless information left sitting in your database. Likewise, users should be able to see when an account that's suddenly spewing propaganda has been lying dormant for a long time, or has just been posting meaningless fluff to look more or less active. We could also introduce some sort of social gaming component to reward users who try to add informative content. Very importantly, this content doesn't have to be a bunch of links to fluff pieces popular with the majority of the user group; it just has to be substantive, come from actual, respectable outlets, or start active, multi-way conversations. In short, cut shitposting off where it starts and don't give it a hand up by letting it game your system.
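As a quick sketch, the dormancy flag could be one simple rule over an account's activity history; the 180-day window and 20-post threshold below are arbitrary stand-ins for whatever the data actually supports.

```python
from datetime import datetime, timedelta

# Sketch of the "suddenly awake" label: an account that sat dormant for
# months and then starts posting heavily gets flagged for everyone to
# see. Windows and thresholds here are purely illustrative.

def dormancy_flag(last_active_before_burst: datetime,
                  posts_this_week: int) -> str | None:
    dormant_for = datetime.utcnow() - last_active_before_burst
    if dormant_for > timedelta(days=180) and posts_this_week > 20:
        return "long-dormant account, suddenly very active"
    return None
```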

03. Really, and finally, crack down on trolls

Imagine you go for a jog through a park and see random people drawing swastikas on walls, taking a whiz on a tree in public, fighting with each other while spectators egg them on, or shouting obscenities at anyone who passes by. You would either call the police to stop this behavior, or just refuse to go anywhere near that park again. Trolls on social media pose the same problem, and they need to be shown the door when they exceed what most users of a platform agree is egregious behavior. Now, trolls today are extremely entitled creatures who believe that any form of nudging them to the exits is an affront to their right to free speech. But what they seem unable to grasp, whether through ignorance or willful blindness, is that all freedom of speech assures them is not going to jail for what they said. It does not guarantee them protection from criticism or continued participation in any conversation where they're no longer wanted. So let's put them all on mute, since research shows that shadow bans are the most efficient way to stop them, keeping people from feeding them and frustrating their efforts.
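Mechanically, a shadow ban is almost embarrassingly simple: filter the muted account's posts out of everyone's feed except its own, so the troll keeps shouting into what it doesn't realize is an empty room. A toy sketch, with made-up data shapes:

```python
# Toy sketch of a shadow ban. The muted account still sees its own
# posts, so it doesn't know it's been muted, while everyone else's
# feed quietly filters it out. All data shapes here are assumptions.

shadow_banned: set[str] = {"griefer42"}

def visible_posts(feed: list[dict], viewer: str) -> list[dict]:
    return [
        post for post in feed
        if post["author"] not in shadow_banned or post["author"] == viewer
    ]

feed = [{"author": "griefer42", "text": "bait"},
        {"author": "alice", "text": "actual discussion"}]

print(visible_posts(feed, "alice"))      # alice's post only
print(visible_posts(feed, "griefer42"))  # the troll still sees both
```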

Trolls not only poison the conversation, they're also expensive for mainstream social networks to deal with. Twitter was unable to sell itself primarily because potential buyers wanted nothing to do with the hordes of egg avatars and professional griefers doling out constant abuse with absolutely no real repercussions. It's tempting to brand yourself as the bastion of open and free speech on the web, it really is. And hate speech or swearing aren't illegal, nor should they be. But social networks are businesses, and if people associate them with hateful troll circlejerks, they're not going to be keen on using them, which will cost their owners millions, if not billions, down the road. Again, Twitter is a prime example, failing to sell itself for a few billion dollars because its potential owners looked at the troll-riddled mess they would be inheriting, made a disgusted grimace in their minds, and said "yeah, thanks but no thanks" to the unpalatable proposition of having to get their hands dirty cleaning up the platform before monetizing it.

04. Don’t flag stories, flag and track outlets over time

Unfortunately, people can be easily fooled by flashy websites that look as if they're real news organizations. This means aggregators who copy stories from others and modify them for clickbait can pose as legitimate, honest-to-goodness journalists, and sites that traffic in conspiracy theories only need a facelift and a social media page to lure in readers. Fortunately, we do have a number of fact checkers who can not only correct stories, but figure out whether a particular source has longstanding problems with the truth. While most of the proposals flying around now flag particular stories, managing news feeds at such a low level is going to be a game of whack-a-mole, and it will take a pattern of flagged stories for users to finally give up on an unreliable publication. Instead, let's make it easy: tally up existing checks on well known problematic sites, track their relationships to new entrants, and issue flags based on that. Maybe the story they published is true, but a big red flag saying this is out of the ordinary for them will encourage skepticism on the readers' part. Tracking the source's age would also be useful to warn users that they're reading a fly-by-night publication with no track record.
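Here's a rough sketch of what that outlet-level tracking might look like; the 90-day age cutoff and 30 percent flag ratio are illustrative numbers, not researched thresholds.

```python
from dataclasses import dataclass
from datetime import date

# Sketch of outlet-level tracking: tally fact-checker flags per source
# and watch its age, instead of flagging one story at a time. All
# thresholds here are stand-ins, not researched values.

@dataclass
class Outlet:
    name: str
    first_seen: date
    stories_checked: int = 0  # stories reviewed by fact checkers
    stories_flagged: int = 0  # stories found false or misleading

    def warning(self, today: date) -> str | None:
        if (today - self.first_seen).days < 90:
            return f"{self.name}: fly-by-night source with no track record"
        if self.stories_checked and self.stories_flagged / self.stories_checked > 0.3:
            return f"{self.name}: longstanding problems with the truth"
        return None
```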

Likewise, we should keep track of something very simple but crucial to the whole news enterprise: corrections. Does the publication ever correct its stories or publish follow-ups addressing public fact checks with an established, credible set of sources? What we often mean by fake news isn't just news we don't like, but stories containing massive inaccuracies, if not outright propaganda, that go unaddressed when pointed out, or whose only response is hostility aimed at the fact checkers and skeptics rather than a set of credible links and quotes from verifiable sources. A perfect example is a hit piece on Snopes by the Daily Mail. Instead of pointing out where any of the fact checkers went wrong in reviewing its stories, it tried to drive readers into a frenzy over the fact that the founders of the site are in the middle of a messy divorce and that the site dares to employ a sex worker. Note that none of this makes the Daily Mail's stories any more true, and this behavior should get it flagged even more severely in our scenario.
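Folding corrections into the same hypothetical outlet record could be as simple as this, with the one-in-two ratio as a purely arbitrary example:

```python
# Extending the hypothetical Outlet sketch above with a corrections
# signal: a source that ignores most public fact checks earns a harsher
# flag than the fact checks alone would. The 0.5 cutoff is arbitrary.

def ignores_fact_checks(stories_flagged: int, corrections_issued: int) -> bool:
    """True if the outlet leaves most of its flagged stories unaddressed."""
    if stories_flagged == 0:
        return False
    return corrections_issued / stories_flagged < 0.5
```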

05. Gamify comments and award good reputations

Imagine comments operating a bit like a mini social network. With many publications moving to Disqus and Facebook comments, and allowing people to link their existing social media accounts, we're headed down this road already. But it's not enough to just give people ease of use and ways to establish a track record through votes. We should encourage users who have a long track record of leaving good, productive comments. We could add prompts that let people recognize a comment for adding to a discussion without explicitly endorsing or upvoting it, and use that signal to award productive commenters badges or little digital prizes. It costs you nothing and provides a strong incentive to be an incisive, productive member of the site's community. Likewise, it gives us plenty of data points to help predict potentially troublesome posters. An acidic, trollish fly-by-night account could be easily flagged, and we can run some in-depth analyses on long term users and persistent trolls to root out botnets or people we'd rather ask to leave the community.
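As a sketch of the mechanics, with badge names and thresholds pulled out of thin air:

```python
from collections import Counter

# Sketch of the comment gamification above: a separate "informative"
# reaction feeds a running tally per user, and badges unlock at
# thresholds invented purely for this example.

informative_votes: Counter = Counter()

BADGES = [(500, "community pillar"),  # checked highest first
          (100, "regular contributor"),
          (10, "good egg")]

def mark_informative(author: str) -> None:
    informative_votes[author] += 1

def badge_for(user: str) -> str | None:
    score = informative_votes[user]
    for threshold, badge in BADGES:
        if score >= threshold:
            return badge
    return None
```

The same tally doubles as a signal for spotting trouble: an account with thousands of posts and zero informative marks tells you something.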

We can use some of the data models and techniques I tried to dive into in a previous post on the challenges of using AI to fight online bigotry as sources of additional insight into the site's community and a way to really test ideas for how best to manage it. No matter what, trolls have very specific patterns they have to follow to truly qualify as trolls, and despite an enormous strength in numbers that overwhelms human moderators, they'll be at a disadvantage against an AI that can flag many of them and give the mods the upper hand in the end. But to make this AI efficient in any way, it will need lots of data, and making sure we have a complete snapshot of a user's public patterns when training it is a must. Naturally, it will have its rough patches when starting out, but with enough data and edge cases, it can become truly effective and recognize when it's being trolled, brigaded, or just confronted with a particularly nasty individual.
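To make this concrete, a bare-bones version of such a classifier might look like the sketch below, using logistic regression over a handful of invented behavioral features; real moderation data and feature engineering would obviously be far richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of a troll classifier over per-user behavioral features.
# Features per row: [posts per day, share of replies that are insults,
# share of threads joined uninvited, reports received]. All numbers
# and labels below are toy data, not real moderation logs.

X = np.array([
    [3.0, 0.01, 0.05, 0],    # ordinary user
    [5.0, 0.02, 0.10, 1],    # ordinary user
    [40.0, 0.60, 0.80, 25],  # confirmed troll
    [25.0, 0.45, 0.70, 12],  # confirmed troll
])
y = np.array([0, 0, 1, 1])   # 1 = troll, labeled from past mod actions

model = LogisticRegression().fit(X, y)

new_user = np.array([[30.0, 0.50, 0.75, 8]])
print(model.predict_proba(new_user)[0, 1])  # probability worth a mod's look
```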

06. The best defense is an active and continuous one

Suppose you took all the previous steps and saw major improvements in your social network, news community, and feeds. All this means is that you've been successful so far. When it comes to technology, complacency is the precursor to defeat, and, to borrow from Wendell Phillips, constant vigilance is the price we pay for keeping trolls and propaganda at bay. They will still have their own communities across the web and their own sites for trying to influence or attack others, and that's fine in the grand, freedom of speech sort of way. They can do whatever they want in their corners of the internet, as they should be able to. The problem starts when they decide to sow chaos, preach hate, and harass those in another corner of the web that's in any way critical of them. If we can repel them well enough to restore the once prevalent sense of civility to much of the web, they still won't go away. And if we know anything about them, they'll only get madder and madder, so once a serious pushback effort starts, it can't stop. Otherwise the problem will just rear its ugly head once again and come back even worse than before.

# tech // bots / journalism / social media / trolling

