it takes evil to defeat evil: why researchers built a fake news generator

A new artificial intelligence system called GROVER can create extremely plausible fake news stories. Why would anyone build something like this? Why, to fight fake news of course!
hail hydra

Knowledge is often a double-edged sword. Understanding how to efficiently split atoms to harness nuclear energy, run our electronics, and keep the lights on led to the creation of weapons that could end civilization as we know it. Encryption is vital for e-commerce to exist without us compromising our bank accounts with every transaction, but it can also help hide evidence of crimes, which is why a number of countries desperately want a back door to read our encrypted data. And knowing how to genetically engineer viruses is important in making gene therapy a reality, but it can also be used to create artificial plagues for which we’d have no real treatments, making them devastating tools of mass murder.

Math, physics, and biology don’t care about right or wrong; they can’t, because they’re abstract concepts. It’s the human with the expertise who decides whether to use it for benefit or harm. So why would anyone look at the current fake news epidemic, and the fact that while some countries are busy finding an antidote, others are rejecting reality altogether, and create an AI called GROVER that can crank out a torrent of hoaxes, propaganda, and absolute nonsense in a slick, believable package at superhuman speeds? Doesn’t that seem ill-advised and dangerous? Couldn’t it be misused to unleash a tsunami of bullshit for eager conspiracy theorists who’ll then turn it against their own family and neighbors?

how to build a double propagandist

Well, to understand how to detect fake news, an AI needs to understand what fake news is and how it’s composed. The best way to do that is to teach it to write fake news by feeding it a large training set of real articles from legitimate news sources alongside human-produced propaganda and fake news from sites known for promoting conspiracy theories and misinformation. GROVER is built on a transformer, the same neural network architecture behind language models like GPT-2, which learns how both kinds of articles are crafted and the differences between them, and pitting that generator against a discriminator trained to catch its output sharpens both sides of the arms race. It can then spit out articles filled with phrases it knows should go together until the result looks cogent enough to pass as being written by a human, and convincing at first glance to a casual reader.
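To make the generation side concrete, here’s a minimal sketch of how a neural language model churns out plausible-sounding copy from a prompt. It uses the publicly available GPT-2 model and the Hugging Face transformers library as stand-ins, since GROVER itself conditions on extra fields like domain, date, author, and headline; the prompt and sampling settings here are illustrative assumptions, not the paper’s exact setup.

```python
# A rough sketch of neural text generation, with GPT-2 standing in for GROVER.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical prompt; GROVER would also condition on domain, date, and headline.
prompt = "BREAKING: Scientists confirm that"
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus (top-p) sampling, the decoding strategy the GROVER paper analyzes.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_p=0.94,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The model simply keeps picking words that plausibly follow the words so far, which is exactly why its output reads as fluent to a casual skimmer while saying nothing true.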

This means that when fed a fake news story, it can spot the tell-tale phrases and common lies used by propagandists because that’s exactly what it uses to generate its own hoax articles. In its first iteration, the best existing detectors could flag its machine-written fake news with only 73% accuracy, while GROVER itself catches its own output with 92% accuracy, which means that it’s a) good enough to help human moderators flag a whole lot of propaganda as soon as it’s published, and b) invaluable for finding fake news created by other machines bound to be used by troll farms trying to smother social media in conspiracy theories and nonsense. And this is exactly why the researchers want to release it into the wild: to make it much more difficult to introduce automated fake news generators. Its ability to catch human-written frauds is just a bonus.
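The detection side can be sketched as an ordinary text classifier: fine-tune a model to label articles as human-written or machine-generated. The model choice (BERT) and the two-example toy batch below are assumptions for illustration only; in the paper, the strongest discriminator is GROVER itself, which is not what’s shown here.

```python
# A minimal sketch of a fake news discriminator: a binary sequence classifier
# fine-tuned to separate human-written (0) from machine-generated (1) text.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy examples; a real setup would use thousands of labeled articles.
texts = [
    "The city council meeting ran long on Tuesday as residents debated zoning.",
    "BREAKING: Scientists confirm that the moon broadcasts secret signals.",
]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()
optimizer.step()
```

One training step is shown; repeated over a large corpus, the classifier picks up on the statistical fingerprints sampling-based generators leave behind, which is how a detector trained this way spots machine-made hoaxes.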

when reading shouldn’t be believing

The obvious question, of course, is whether GROVER will backfire as the same people who seem unreachable by any form of reason or logic wrap its fake articles in their death grips. After all, a team of human evaluators ranked its propaganda as extremely convincing, and there’s a rather disturbing trend of hoaxes designed to show how easy it is to create fake news being shared across social media without a second thought, and continuing to be shared even after outing themselves. We’re not just fighting disinformation, we’re fighting cognitive dissonance and an innate desire to believe anything that confirms one’s worldview, no matter how many times it’s proven to be false and how many independent sources, fact checkers, and even its creators say it’s fake. There are walls more willing to engage in an honest-to-goodness productive debate than the people in question.

With this in mind, we can safely say that these people will believe whatever they really want to believe, with or without GROVER. But this tool doesn’t exist to change minds. It’s meant for a social media platform that wants to improve the quality of information on its users’ timelines, or researchers trying to understand the evolving tactics of trolls and disinformation agents across the web. Likewise, while there are certainly diehard conspiracy theorists who will never, ever be swayed to let go of their favorite frauds, there’s also a large group of people who can be fooled by a design that obscures the source of what they’re reading, and those are the people we’d have a chance to steer away from fake news hubs just by reminding them not to trust everything they see on their screens.

If we have the ability to quickly flag sites and pages cranking out fake news with an appropriate scarlet tag, we can remind users who click on whatever gets shared with them without a second thought that yes, anyone can set up a site, and some of those sites are just propaganda and frauds designed to get them mad at someone. That alone can plant a seed of doubt in their minds. We can also use a system like GROVER to get an idea of just how much fake news is out there, so we understand the full scale of the problem and can demonstrate it to both users and those who run the platforms which help it reach billions of eyeballs. Maybe if we can show just how untrustworthy our online news is, we can finally build the momentum to do something about it on social media, in newspapers, and on air.

See: Zellers, R., et al. (2019) Defending against neural fake news. arXiv: 1905.12616

# tech // artificial intelligence / fake news / social media

