
why we can’t, and shouldn’t, police abusive speech on web

The bad news is that we can't wipe offensive speech and trolls off the web. The good news is that trolls tend to turn on each other quickly.
[image: knitted voodoo doll]

In another edition of people-can-be-awful news, following last week’s post about why it’s indeed best not to feed trolls, it’s time to talk about online harassment and what to do about it. Some 72 social activist groups are asking the Department of Education to police what they see as harassing and hate speech on a geo-fenced messaging app, arguing that because said geo-fence includes college campuses, it’s the colleges’ job to deal with it. It must be the start of windmill-tilting season somewhere, because now a government agency will have to do something to appease well-intentioned activists who seem to believe computers are magic, and that with the right lines of code, racists, sexists, and stalkers can be made to go away. Except they won’t go away; they’ll simply reappear on another social media platform and keep being terrible people, since the only thing censoring them changes is the venue on which they’ll spew their hatred or harass their victims. Of course this is to be expected, because the internet is built to work that way.

Now look, I completely understand how unpleasant it is to have terrible things said about you or done to you on the web, and how it affects you in real life. As a techie who lives on the web, I’ve had these sorts of things happen to me firsthand. However, the same part of me that knows full well that the internet is in fact serious business, contrary to the old joke, also understands that any genuine attempt to police it is doomed to failure. Because the communication protocols underpinning all internet software are built to be extremely dynamic and robust, there’s always a way to circumvent censorship, confuse tracking, and defeat blacklists. That’s what happens when a group of scientists builds a network designed to keep working even when parts of it fail. Like it or not, as long as there is electricity and an internet connection, people will get online, and some of those people will be terrible. For all the great things the internet has brought us, it has also given us a really good look at how many people are mediocre and hateful, in stark contrast to most techno-utopian dreams.

So, keeping in mind that some denizens of the web will always be awful human beings who give exactly zero shits about anyone else, or about what effect their invective has on others, and that there will never be a social media platform free of them no matter how hard we try, what should their targets do about it? Well, certainly not ask a government agency to step in. With social media’s reach and influence as powerful as they are today, and with the platforms free to use, we’ve gotten lost in dreamy manifestos declaring access to Twitter, Facebook, Snapchat, and yes, the dreaded Yik Yak, a fundamental human right to speak truth to power and find a supportive community. But free and unlimited use of social media is not some sort of internet mandate. These services are run by private companies, many of them not very profitable, hoping to create an ecosystem in which a few ads or add-on services will make them some money for playing middleman in your everyday interactions with your meatspace and internet friends. If we stop using these services when the users we deal with through them are being horrible to us, we do real damage.

But wait a minute, isn’t abandoning the social media platform on which you’ve been hit with waves and waves of hate speech, harassment, and libel just letting the trolls win? In a way, maybe. At the same time, though, their victory leaves them talking only to other trolls, with whom pretty much no one wants to deal, including the company that runs the platform. If Yik Yak develops a reputation as the social app where you go to get abused, who will want to use it? And if no one wants to use it, what reason does the company have to waste millions giving racist, misogynist, and bigoted trolls their own little social network? Consider the case of Chatroulette. Started with the intent of putting a face to random internet users’ screen names and connecting them with people they’d never otherwise meet, the site was nearly destroyed by the sheer amount of male nudity. Way too many users had negative experiences and never logged on again, associating it with crude, gratuitous nudity, so much so that it’s still shorthand for being surprised by an unwelcome erect penis on cam. Even after installing filters and controls, and banning tens of thousands of users every day, it’s still not the site it used to be, or the one its creator envisioned.

With that in mind, why try to compel politicians and bureaucrats to unmask and prosecute users for saying offensive things on the web, much of which will no doubt be found to be protected speech? That’s right: remember that free speech doesn’t mean only speech you personally approve of or find tolerable. Since hate speech by itself is legal, having slurs or rumors about you in your feed is very unlikely to be a criminal offense. You can be far more effective by doing nothing and letting the trolls fester, letting their favorite platform for abusing others become their own personal hell, where other trolls, out of targets, turn on them to get their kicks. Sure, many trolls just do it for the lulz, with few hard feelings toward you. That lasts until it’s them being doxxed, or flooded with unwanted pizzas, or swatted, or seeing their nudes posted for other trolls’ ridicule. No matter how hard you try, they won’t be any less awful to you, so let them be awful to each other until they kill the community that allows them to flourish, and the company that created and maintained it, and allow their innate awfulness to be their undoing.

# tech // censorship / computer science / internet
