[ weird things ] | why the future of social media is younger, and a lot less social

why the future of social media is younger, and a lot less social

Tech evangelists promising an internet-powered utopia helped design social media as we know it. Their plan to unite the world was doomed the minute it met reality.

I first encountered Uber in Ukraine, in the mid-1990s. Of course, the company was still a long way from existing, and the smartphones necessary for its operation were still science fiction. But if you urgently needed a taxi in the post-Soviet republic when none were around, or the fare was too high, you could flag down pretty much any car and negotiate a flat fee with a driver looking to earn some extra cash to make ends meet. What was once a survival strategy in an imploding country plagued by resource shortages and hyperinflation is now hailed as a disruptive business model to emulate, even as the company loses as much as a billion dollars per fiscal quarter. You could say that the gig economy currently eating the post-industrial workforce is just a repackaged survival mechanism with a glossy veneer and a more efficient way to find jobs and get paid.

The same can be said for social media. Social platforms are just the latest, most popular, and most widespread versions of digital bulletin boards, which have been around since the very dawn of the internet. Facebook is even written in the same computer language as these forums. The only functional differences are simpler interfaces, better threading, and the pruning of the features that usually made internet message boards look gaudy. Everything old is new again, just faster and with a slicker interface, and far too many VCs seem far more interested in investing in the next burrito delivery app or co-working space than in rockets and electric cars. The risk is smaller and the ideas are easier to tweak and try again, hoping for virality, or a better UI, or just better press and customer service to make it work next time.

In fact, the main innovations introduced by the companies that tend to dominate our discussions about tech are seamless cloud storage platforms which can house absurd amounts of data, and AI-driven recommendation algorithms which we now need to navigate our virtual lives, despite the fact that they can be, and often are, easily gamed. We’ve made huge strides in the tools used to write and maintain software, and learned how to do it at scales that would have given programmers in the 00s heart attacks had you asked them to build and support such systems, but none of that is a typical person’s primary exposure to technology. Most people use Facebook, Twitter, Instagram, and a variety of messaging services without thinking about how the tools work, and frankly, they don’t need to.

I don’t say this to insult these companies. They exist to make a profit, and if they don’t need to reinvent the wheel to get it done, great. I point it out because understanding that there’s no witchcraft or black magic behind social media and the major tech platforms we use today (only massive scale, throughput, and easy accessibility) is crucial to the roiling debate about how to tame the out-of-control cesspool of rage and conspiracy theories that threatens to break public discourse across the developed world. We have to realize first and foremost that we’re not dealing with a technical problem; we’re dealing with bad behavior from people, and with our habit of taking their bad faith defenses of that behavior at face value. Even if we somehow ascend to become digital beings in a black-hole-powered supercomputer at the end of the universe, we’ll still have the same problems if we carry today’s attitudes with us.

So how did things get this bad? Broadly speaking, the people who designed social media as we know it didn’t imagine the impact it would have and assumed that its users would be tech-savvy younger generations who viewed the web with a sense of optimism. Back then, the web wasn’t a huge part of the real world, and what happened online frequently stayed online. Fewer people used the internet regularly. Broadband was still relatively new. Conspiracy theories were relegated to forums and websites filled with wild colors, sinister GIFs, and tacky clipart, like giant klaxons blaring at users to steer clear of the pages and their contents. Thanks to that, and to the initial skepticism of anything new, people didn’t take a lot of things on the web too seriously.

In many ways, it was a very different world, and it makes sense that social media’s developers weren’t designing their platforms defensively. The mid-1990s and early 2000s were still warmed by the utopian glow of the shiny new tool tech evangelists told us would revolutionize the world and bring us all together. And they had good reason to believe that, as the web was dominated by younger, more curious users with fewer ironclad preconceptions about the way things were supposed to be, or by fellow utopians who couldn’t wait for more people to join them. Think of it as a group of gamers throwing a LAN party with their friends and friends of friends. Of course the atmosphere is mostly jovial and scraps end quickly, with few lasting effects.

But then, Facebook broke the internet. Sort of. By opening its platform to everyone in pursuit of greater market share, it introduced social media to older, less flexible generations who eventually managed to combine smiling photos of their children and grandkids at family dinners with long, racist screeds in the same, whiplash-inducing timelines. Other social platforms had to follow suit and gain as many users as possible, which is good for metrics and business, but it also led to an explosion in users who would use these platforms to spread hate, fear, and abuse. A system built for a million users and the occasional troll who needs muting or banning is going to have a lot of trouble with half a billion users or more, tens of thousands of whom are out to abuse it every day. It was designed to smack down a troll or two, not to exist under constant siege.

realizing the obvious

Under this continuous assault by bad actors and an influx of older users with calcified partisan loyalties and worldviews, three key problems seemed to solidify. The first and most obvious is recommendation algorithms designed to cater to users’ confirmation biases, a glaring example of the lack of forethought when social media platforms decided to offer news in their users’ timelines. The same code meant to bring you more pictures of puppies, jokes, and fun personal essays was deployed to curate actual, factual, important news, and no one said “hey, what if some of the news we end up curating is not true?” With the advent of streamlined, flat designs across the web, and cheap but slick blog templates that could turn the aforementioned gaudy conspiracy pages into professional-looking news sites, it was a real concern.

People don’t want to be told that they’re wrong or have to change their worldviews, especially after a long time spent cultivating them. Blast them with information saying that they’re wrong and many will leave to seek out something a lot more flattering that affirms their beliefs. If social media algorithms give people what they’ll like, statistically speaking, and lubricate their slide into those dreaded airtight echo chambers where they can be manipulated by scammers, politicians, propagandists, and conspiracy theorists, then when they emerge from their laptops, their views will be even more warped. When they interact with others or vote, they’ll be doing it from a much darker, angrier place, trying to impose unworkable solutions on imaginary problems and dismissing the collateral damage as hoaxes and conspiracies meant to stop them.
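To make that feedback loop concrete, here’s a minimal sketch of an engagement-first recommender. The scoring rule is entirely hypothetical, not any platform’s actual code: it assumes people click on stories that both agree with them and carry an emotional charge, and it shows how a feed tuned to that signal slowly walks a barely-off-center user toward an extreme.

```python
# Toy model of an engagement-first feed. The scoring rule below is a made-up
# assumption for illustration; no platform publishes its real ranking formula.
import random

def engagement_score(story_slant, user_slant):
    agreement = 1 - abs(story_slant - user_slant)  # people like being told they're right
    intensity = abs(story_slant)                   # charged, extreme posts draw more clicks
    return agreement * intensity

def simulate_feed(days=30, stories_per_day=50, seed=42):
    random.seed(seed)
    user_slant = 0.1  # the user starts barely off-center on a -1 to 1 scale
    for _ in range(days):
        stories = [random.uniform(-1, 1) for _ in range(stories_per_day)]
        top_story = max(stories, key=lambda s: engagement_score(s, user_slant))
        # Every click on the "most engaging" story nudges the profile toward it.
        user_slant = 0.9 * user_slant + 0.1 * top_story
    return user_slant

print(simulate_feed())  # ends far from the starting 0.1, well on its way to an extreme
```

The numbers are invented, but the dynamic is the one described above: optimize for what people will click, and the feed quietly escorts them into the echo chamber.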

Technically speaking, social media companies could absolutely start funding newsrooms, hire real editors, and assign them to figure out what’s true, what’s false, and what’s newsworthy, then break up their echo chambers in one fell swoop, making sure a healthy dose of real news makes its way into users’ feeds. Unfortunately, they also understand a lot of users will have a full-blown conniption and demand to have their echo chambers rebuilt, especially the grifters who benefit most from their existence. And that need to keep people using the platforms leads to ham-fisted fact-checking efforts that frustrate fact checkers, and reviews that still manage to leave timelines flooded with conspiracy theories and hyper-partisan distortions.

With growth becoming the only goal for social media, a second problem comes into focus. Users who were causing trouble on a regular basis and fake accounts still padded the metrics by which the market judged these platforms’ success. There was no incentive to purge those accounts or clean up old ones, which can be “farmed” and then sold to spam bots or propagandists, so millions of them propagated through the networks, gaining enough sway to game the platforms’ trending and recommendation algorithms by exploiting obvious weaknesses. With no tools for an effective, always-on bot dragnet, and no willingness to deactivate inactive accounts that pumped up the numbers, the growth-only business strategy created vast shadow networks of bots, trolls, and scam artists who could easily come back even if they got banned. The systems they were exploiting were designed to easily let everyone in, not keep anyone out.
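For a sense of how detectable some of this behavior is, here’s a hypothetical heuristic of my own, not any platform’s actual detection system, that flags the classic “farmed account” pattern: a profile that sits dormant for months and then suddenly posts in dense bursts once it’s been sold off.

```python
# A hypothetical "farmed account" heuristic, sketched purely for illustration;
# real anti-abuse systems also weigh device, network, content, and graph signals.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class Account:
    created: datetime
    post_times: List[datetime] = field(default_factory=list)

def looks_farmed(account: Account, now: datetime,
                 dormant_days: int = 180,
                 burst_window: timedelta = timedelta(hours=24),
                 burst_size: int = 50) -> bool:
    """Flag accounts that sat idle for months, then started posting in bursts."""
    recent = sorted(t for t in account.post_times if now - t < timedelta(days=30))
    older = [t for t in account.post_times if now - t >= timedelta(days=30)]
    if not recent:
        return False
    # Signal 1: a long gap between the account's old life and its new one.
    last_old_activity = max(older) if older else account.created
    was_dormant = (recent[0] - last_old_activity) > timedelta(days=dormant_days)
    # Signal 2: new activity arrives in dense bursts, not a human-paced trickle.
    bursty = any(
        sum(1 for t in recent if start <= t < start + burst_window) >= burst_size
        for start in recent
    )
    return was_dormant and bursty
```

The point isn’t that a filter this crude would hold up in production; it’s that the basic signals were always sitting in the data, and growth-only incentives meant nobody was rewarded for acting on them.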

And with so many users, both real and fake, came the third issue. People were spending more and more time online and connecting with more and more people for a wide variety of reasons. The social aspect of daily internet use, the ubiquity of internet connections, and finally, the spread of online dating meant that digital friendships weren’t just for “lonely weirdos in their basements” anymore, and many ended in face-to-face meetings in meatspace. As a result, people started to feel like these social platforms were their communities, not playgrounds for companies mining user data to show better ads, and many began to demand that those who ran them abide by free speech norms and treat the discussions as if they were happening in a public space rather than a privately owned one.

Now, it’s absolutely worth having a discussion about how much influence private companies should have over free speech, and what rights their users are entitled to. But it starts to become a huge problem when we’re presented with a digital version of the paradox of tolerance, where people demand we tolerate their intolerance under laws that permit free speech and criticism. According to them, not only are social media companies in the wrong when they try to moderate ideas promoting harassment, hatred, and intolerance, the law should protect those espousing these ideas and shield them from the consequences of doing so because, in their minds, free speech means they’re entitled to a platform for their views with no repercussions from those they target.

Why were they so concerned, especially if, in their own words, it’s just the internet? Partly, it was an unpleasant shock for them to discover that society at large has become much more assertive in dealing with bigoted speech. Meanwhile, social media’s emphasis on tying accounts to real identities, combined with the nearly effortless ability of those with too little impulse control to sound off with whatever insulting or hateful thing was on their minds, meant that a lot of newly vocal bigots who identified themselves by name were getting pelted with angry responses. Real world consequences became swift, merciless, and very, very vocal. Paradoxically, while social media enabled racists and bigots to find each other and commiserate about the fact that people they don’t like have the gall to exist, it also made it very difficult for them to be outspoken.

So, under their bad faith argument, the free speech of others must be curtailed because enough criticism causes real world problems for those espousing hateful rhetoric, as must the “freedom of association” of private platforms that decline to host intolerant views and their purveyors. Ironically, this freedom of association is very much the same drum on which ethnic supremacists beat when asked to justify their promotion of discriminatory policies. Free speech was weaponized to mean “I get to say what I want, when I want, how I want, and no one is allowed to criticize me more harshly than I like,” and aimed at social platforms that don’t want to see users leave, so these platforms decided to use a very light touch in dealing with outright hate groups recruiting new members with their software.

And this leaves us with a dilemma. How do we fix all this? How could we allow the world’s most popular video sharing site to be a space where pedophiles send each other indirect winks and nudges in comments under videos of kids? How do we deal with the primary social platform for more than a quarter of the world’s population allowing the incitement of genocide to go on for years despite a flood of warnings and complaints, and promising to finally tackle vast swaths of white supremacist content only after a global uproar it could no longer ignore? And how do we let the world’s dominant real-time communication tool be routinely gamed and brigaded by bots and trolls trying to spread rumors, hoaxes, conspiracies, and other disinformation to get it in front of politicians and the media?

social media burnout

Just like a few bad apples spoil the bunch, hundreds of thousands of bad actors among billions of users are wrecking social media platforms that weren’t ready to grow as quickly as they did, are going through massive growing pains, and have effectively painted themselves into a corner, unsure how to tackle a problem they hoped would solve itself. This is where a lot of us would call for government agencies to step in, but that may be a cure much worse than the disease. There’s no reason to assume governments would all police social media in good faith, and they could easily weaponize it for propaganda and disinformation purposes, only instead of doing it with bots and trolls, they could go straight to the source and demand that their point of view be put front and center for users, regardless of engagement or merit.

As we’ve seen during the Arab Spring and the Color Revolutions, having a tool that can’t be placed under a government thumb is an invaluable resource for human rights activists and journalists trying to report the realities of the world’s dark underbelly, even if it comes with major caveats. An extreme example of this is Tor, a system developed by the U.S. military to anonymize online communications between intelligence assets and to enable those being watched by hostile authoritarians to share vital information away from prying eyes. Unfortunately, it has also become a haven for pedophiles and is instrumental in the online drug and arms trade. We could do the easy thing and try to outlaw it, but as we just noted, this anonymizing technology can also be a vital tool for good, and any other version of it could be just as easily misused, recreating the problem.

While regulation sounds tempting, it’s not a good solution. A far better one would be the VCs and investors who keep social platforms in business demanding changes and throwing their not insignificant financial weight around. If the emphasis is no longer on growth but on the health of online communities and on eliminating bots, trolls, and scams, the companies’ incentives become very different. They could take aggressive steps to rein in their worst users and, when given flak for doing so, point out that they’re private companies responding to consumer, investor, and market feedback, taking the necessary steps to ensure they stay in business. There is no law requiring them to look out for the welfare of the most vocally hateful users, grifters, and scammers benefiting from their platforms’ current state.

But wait, you might object. If these platforms are making money as they are, why would they possibly want to change unless there’s real pressure from above to force them to care? So what if the world is on fire? They’re still getting that sweet, sweet advertising cash. And this may be true. However, as more and more discussions about social media turn from how great it is and how easy it makes staying in contact with friends and family, or promoting your business or ideas, to frustrated rants about its excesses, the abuse and toxicity it enables and seldom tries to rein in, and the nasty surprise of finding out that friends and family you looked up to are actually angry, terrifying bigots who are getting worse by the day from huffing hate from their timelines, there’s a massive PR problem quickly building over these companies’ heads.

In order to stay in business, social media needs a steady stream of engaged users who spend enough time on the site to make it worth the advertisers’ while. If younger generations see it as a swamp of the worst the internet has to offer, overrun with trolls and racists, its news functionality a concert sung by a tinfoil-hatted choir, they’re going to run towards more private alternatives and use Twitter and Facebook as sparingly as possible. And this isn’t some theoretical threat. Facebook is already trying to pivot to privacy in its activity logs and messaging to keep younger users who are currently tuning out very quickly. It’s become almost an axiom to say that Facebook is a site for politically overactive boomers to socialize and share articles about how their kids’ generation is just the absolute worst.

Meanwhile, making peace with trolls and bots was a devil’s bargain that’s beginning to backfire. It’s just that public social media’s demise is happening in slow motion due to its size and the overzealous devotion of its top users. Sure, people love to be told that they’re right and to find supporters, which is what they get on social media. But they also tire of constant anger, sadness, hate, and bigotry, even if they were fine with it at first. At some point, being locked in a parallel reality where the West is doomed, all the kids are Marxist Illuminati MS-13 mind-controlled zombies, and all your friends are losing their minds from grief and rage is enough to make most people worry about their sanity. They’ll want out, leaving the whole place to its worst and most toxic trolls, who are making the web a worse place while insisting it’s their God-given, Constitutionally-protected right to do so.

But their power only lasts as long as we accept that we need to go on Facebook or use Twitter. We don’t. Other platforms with better thought out defenses against trolls and bots, built with the explicit assumption that they’ll be constantly assailed by bad actors, can rise in their place. Our social media could become younger, more private, and spread not through engagement- and ad-based algorithms that hijack our timelines, but through curation by our friends and colleagues, because we’re seeing what happens when people forget that yes, they really will give literally anyone a website, that not everything you see on the internet is true, and that just because something looks slick and professional doesn’t mean it can’t be a scam or propaganda.

Social media today has taught us to be less critical and engage more. Social media of the future will have to focus on teaching us to be more skeptical. It will abandon older generations to their echo chambers, decrying the slow death of a world they now view through rose-tinted glasses and the impending doom of their way of life, letting them hyperventilate until they get sick and tired of it. It won’t happen overnight. Far from it. But the seeds have already been planted, and if today’s platforms don’t get with the program despite clearly recognizing which way the wind has begun to blow, they’ll be putting an expiration date on their profitability. And one of the biggest tools that could accelerate this trend just so happens to be regulation, not of the platforms themselves, but of what companies can and cannot do with user data, hobbling the indiscriminate vacuuming of personal information for advertisers that is the key to existing social networks’ profitability.

Anger and chaos may be profitable in this troll-dominated phase of the social web. But people are getting frustrated, and the current steady stream of scandals and ugly revelations will only drive them away faster, accelerating calls for governments to step in and start mandating limits on what user data can be collected, how, how it can be sold, and to whom, and inspiring deep dives into figuring out just how many grifters, scammers, and deeply unsavory figures have been using social media for questionable ends. This is where we’re now headed: a place where social networks are for old people, and the future belongs to group messaging systems that let us keep more control over our data and maintain saner social circles with productive and friendly interactions.

[ main illustration by Cornelius Dämmrich ]

# tech // internet / privacy / social media

