
AI vs AI: How can we win the fake news battle?

Browser makers and social media companies have been forced to take on a new responsibility to combat the dissemination of false information, but can they succeed?

Fake news is in the headlines once again following the arrival of OpenAI’s GPT-2, a text-prediction model trained on eight million web pages that can generate convincing media and social media content. Its full release was initially withheld because of fears it could be misused as a fake news creator.
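To see why that worried its creators, here is a minimal sketch of how such a text predictor is driven in practice, assuming the Hugging Face transformers library and the publicly released “gpt2” checkpoint (both illustrative choices on my part, not part of OpenAI’s announcement; the prompt and sampling settings are likewise invented):

# A minimal sketch of text generation with the publicly released GPT-2
# weights, via the Hugging Face transformers library (an assumption for
# illustration; any autoregressive language model behaves similarly).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Seed the model with a headline-like prompt; it continues the text.
prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings control how fluent and varied (and therefore how
# plausible-but-false) the generated continuation can be.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The ease of this loop – one short prompt in, fluent copy out – is precisely what raised fears about misuse at scale.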

Meanwhile, both Google and Microsoft have been using artificial intelligence (AI) in an attempt to counter the threat of fake news by automatically assessing the truth of articles, effectively leaving AI on both sides of the battle – a possible cause of, and hopefully a solution to, the growing problem.

However, the challenge goes far beyond the damage fake news does to legitimate news reporting. The post-truth era shows no sign of abating, and the impact of fake news is clear to see: it has been found to harm children’s self-esteem and to threaten “the very fabric of our democracy”, according to a recent report from the Digital, Culture, Media and Sport Select Committee.

Browser makers and social media companies, as the unwitting hosts of fake news, have been forced to take on a new responsibility to combat the dissemination of false information. Their efforts fall broadly into two camps: automation and moderation. Each comes with its own challenges.

Moderation: power vs resources

A high-profile example of moderation is NewsGuard, a browser extension that caused a stir when it warned that Mail Online, one of the world’s largest online publications, “generally fails to maintain basic standards of accuracy and accountability”. Although the warning was subsequently removed, the episode highlights one of the greatest challenges for moderator-led responses to fake news: subjectivity.

Moderation ultimately relies on a human workforce – in many cases, former journalists whose objectivity and balance should be reliable. But the system is fallible because it leaves room for personal biases and perspectives.

And how does any system handle fake news that infiltrates legitimate publications? Earlier this year, credible news outlets reported that the “Momo challenge” was encouraging children to self-harm and commit suicide. In doing so, these publications unwittingly amplified fake news and created undue panic. This highlights the scale of the challenge faced by moderation platforms, which currently lack the resources to combat viral fake news stories, especially when they are published by legitimate media.

Automation: the subjectivity issue

Given the scale of the issue, automation and AI are seen by many as the best way to tackle fake news. Google backs this approach, while the Fake News Challenge (FNC), a grassroots effort, explores how AI technologies such as machine learning and natural language processing can be used to combat fake news.
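The FNC’s first task, for instance, framed the problem as stance detection: deciding whether an article body agrees with, disagrees with, merely discusses, or is unrelated to a headline. The sketch below is a toy version of such a classifier using scikit-learn; the training examples and the “headline || body” pairing are invented purely for illustration, and a real system would need far more data and richer features:

# A toy stance-detection classifier in the spirit of the Fake News
# Challenge: label how an article body relates to a headline. The tiny
# training set below is invented; a real system needs far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example joins a headline and a body snippet into one string.
train_texts = [
    "Moon made of cheese || Experts confirm the moon is made of cheese.",
    "Moon made of cheese || Scientists dismiss the cheese claim as absurd.",
    "Moon made of cheese || The article reviews several lunar theories.",
    "Moon made of cheese || Local football team wins the league title.",
]
train_labels = ["agree", "disagree", "discuss", "unrelated"]  # FNC-1 labels

# TF-IDF features feed a simple logistic regression classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_texts, train_labels)

print(classifier.predict(
    ["Moon made of cheese || Researchers reject the cheese idea outright."]
))

Note that every design choice here – the labels, the features, the training examples – is made by a human, which is exactly where the subjectivity problem discussed below creeps back in.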

However, like moderation, any AI assessment tool is underpinned by human decisions – the rules it encodes and the data it is trained on – meaning its judgments are potentially subject to the unconscious biases of its designers. An automated response will therefore not always be effective.

We also face an increasing variety of fake news, most notably the rise of deepfake videos, which open a whole new front in the epidemic. Deepfakes are the next iteration of fake news, already so convincing that even AI-based systems cannot always detect them.

There is a pressing need for robust legislation setting boundaries on how far this technology can be used, and for the ability to categorically and instantly determine what is fake and what is genuine.

With fake news growing in both sophistication and sheer quantity, a fully effective automated response appears out of reach.

What happens next?

But the battle is not yet lost. Greater collaboration between governments, social media platforms and browser makers is needed to combat the proliferation of fake news, possibly by identifying the IP addresses of known perpetrators.
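As a purely hypothetical sketch of what that IP-based approach could look like at its simplest – the function name and blocklist below are invented for illustration – cooperating platforms could check incoming addresses against a shared list of flagged networks:

# A purely hypothetical sketch of IP-based filtering: check whether a
# request comes from a network already flagged for spreading fake news.
# The addresses below are reserved documentation ranges, not real data.
import ipaddress

FLAGGED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_flagged(ip_string: str) -> bool:
    """Return True if the address falls inside a flagged network."""
    address = ipaddress.ip_address(ip_string)
    return any(address in network for network in FLAGGED_NETWORKS)

print(is_flagged("203.0.113.42"))  # True: inside a flagged range
print(is_flagged("192.0.2.1"))     # False: not on the list

In practice, IP addresses are easily rotated or hidden behind proxies, which is one reason such filtering could only ever be part of a broader response.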

Equally, we as consumers of media must shoulder some of the responsibility for identifying fake news. In France, an initiative in schools is seeking to teach students how to spot false information.

No moderator or automated system will ever be infallible, so while those in power continue to pursue these overarching solutions, for now we must all remain on guard against disinformation and approach news with a healthy dose of scepticism.
