Fake news may have been named by Collins Dictionary as word of the year in 2017, but awareness of, and investigation into, its impact has grown rapidly over the past 18 months.
From polarising society to harming children’s trust and self-esteem, and even threatening “the very fabric of our democracy” according to a recent report from the Digital, Culture, Media and Sport Select Committee, fake news presents major challenges that go far beyond the problems it causes for legitimate news reporting.
Amid this inexorable rise, many browser makers and social media companies, as the unwitting hosts of fake news, have taken on a new responsibility to combat the dissemination of false information. Broadly, their efforts fall into two camps: moderation and automation. Each comes with its own challenges.
Moderation: The challenge of subjectivity
Perhaps the most high-profile example of moderation is NewsGuard. Available on Microsoft’s Edge browser for mobile devices, it relies on ex-journalists’ assessments of a publication’s integrity.
It caused a stir when it flagged the Mail Online, one of the largest online publications in the world, with a warning that it “generally fails to maintain basic standards of accuracy and accountability”. While the warning has since been removed, it highlights one of the greatest challenges of moderator-led responses to fake news: subjectivity.
Anyone who is charged with assessing the credibility of information is handed significant power. While the principle of using ex-journalists, who historically might have been trained to be objective and balanced, is a sound one, ultimately it relies on a human workforce. In other words, a fallible one, subject to personal biases, prejudices and perspectives. For instance, a liberal moderator may decide against a right-leaning publication that another moderator with more sympathetic political values might endorse.
“Anyone who is charged with assessing the credibility of information is handed significant power.”
Another challenge arises when disinformation infiltrates legitimate publications. Similar to NewsGuard’s assessments, Apple News’ approach to curation involves onboarding entire publications rather than individual articles. Indeed, moderating individual articles would be almost impossible given the rolling news cycle to which we have become accustomed and the sheer number of publications and blogs on the web. As a result, when a legitimate publication reports on fake news, it most often goes unnoticed.
For example, in February, claims that the so-called “Momo challenge” was encouraging children to self-harm and commit suicide were reported widely by the BBC, the Independent and other leading newspapers. After the story was identified as a hoax, these publications were accused of spreading scare stories.
Such events present a massive challenge to moderation platforms, as they do not have the resources, and therefore the capability, to combat viral fake news stories reported by legitimate media.
Automation: A better approach to fake news?
There are a number of organisations dedicated to using automation to tackle the fake news phenomenon. For example, the Fake News Challenge (FNC) is a grassroots effort involving 100 volunteers and 71 teams from academia and industry around the world, all dedicated to exploring how AI technologies such as machine learning and natural language processing can be used to combat fake news.
Google is also deploying AI to combat the issue: the Google News app uses AI to automatically assess the veracity of an article before deciding whether to publish it or remove it from the platform.
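To make the automation approach more concrete, below is a minimal, purely illustrative sketch of stance detection, the kind of natural language processing task explored by efforts such as FNC: given a headline and an article body, a classifier predicts whether the body supports or contradicts the claim. The tiny inline dataset, the labels and the scikit-learn pipeline are assumptions made for demonstration only; this is not FNC’s or Google’s actual system.

```python
# Illustrative sketch only: a toy stance classifier in the spirit of the
# Fake News Challenge. The inline examples and feature choices are invented
# purely for demonstration, not taken from any real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: headline/body pairs labelled by whether the body
# supports or contradicts the headline (real systems use thousands of examples).
pairs = [
    ("Momo challenge harms children", "Experts confirmed reports of the challenge causing harm."),
    ("Momo challenge harms children", "Fact-checkers found no evidence the challenge ever existed."),
    ("Celebrity endorses miracle cure", "The celebrity's agent confirmed the endorsement in a statement."),
    ("Celebrity endorses miracle cure", "The quote was fabricated and the product has no medical backing."),
]
labels = ["agree", "disagree", "agree", "disagree"]

# Concatenate headline and body into a single text feature per example.
texts = [f"{headline} [SEP] {body}" for headline, body in pairs]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an unseen headline/body pair; with so little training data the
# prediction is unreliable, which is the point: such systems live or die
# by the volume and quality of their labelled examples.
test = "Momo challenge harms children [SEP] Investigators say the story is a hoax."
print(model.predict([test])[0])
```

In practice such classifiers are trained on tens of thousands of labelled headline-body pairs, and their verdicts are only as good as the labels supplied, which is where the designers’ own judgements re-enter the picture.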
However, it must be noted that any AI assessment tool is underpinned by decisions that could once again be influenced by the internal biases of those designing it. So, like human moderation, it also faces the risk of subjectivity.
“Any AI assessment tool is underpinned by decisions that could be influenced by the internal bias of those designing it.”
Achieving an effective automated response to fake news appears increasingly out of reach, because the technologies for creating fake content are becoming so sophisticated. OpenAI, for example, has decided not to release its new AI system that can write news articles and works of fiction for fear of misuse.
Fake news is also now appearing in new forms of content. Deepfakes are realistic videos, created using AI-based methods, that show people doing or saying things that never actually happened. From fake videos of celebrities to doctored footage of politicians, deepfakes and other synthesised video content open a whole new front in the fake news epidemic.
Is fake news a lost cause?
While we may not yet be winning the battle against fake news and disinformation, it is not a lost cause. With the huge amount of attention that fake news is receiving from governments globally, and the pressure this has put on social media platforms and browser makers alike, there is no escaping the need to find a solution. As such, we will no doubt see greater investment in new technologies to combat the growing threat, and a larger pool of organisations attempting to tackle the issue.
Key to this is creating more sophisticated anti-disinformation technologies. Think back to the early days of email, when we were plagued by spam. Now, thanks to significant investment in filtering technologies, we have far more effective systems that can identify spam based on the sender’s IP address, as well as on recurring patterns and formats in the content.
It is not foolproof, but it shows how far we can potentially go in resolving the issue of disinformation with the right research and development.
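As a purely illustrative sketch of the kind of filtering described above, the toy check below combines a sender-IP blocklist with simple content heuristics. The blocklisted addresses, keyword patterns and threshold are invented for the example; real filters rely on far richer signals and learned models.

```python
# Illustrative sketch only: a toy spam check in the spirit of early email
# filters. The blocklist, patterns and threshold are assumptions made for
# demonstration, not taken from any real filtering product.
import re

BLOCKED_IPS = {"203.0.113.7", "198.51.100.24"}  # example documentation-range IPs

SPAM_PATTERNS = [
    r"(?i)\bwin (a|your) free\b",
    r"(?i)\bact now\b",
    r"(?i)\bclick here\b",
    r"[A-Z]{5,}",  # long runs of capitals, a common spam formatting tell
]

def looks_like_spam(sender_ip: str, body: str, threshold: int = 2) -> bool:
    """Flag a message if the sender IP is blocklisted or enough content patterns match."""
    if sender_ip in BLOCKED_IPS:
        return True
    hits = sum(1 for pattern in SPAM_PATTERNS if re.search(pattern, body))
    return hits >= threshold

print(looks_like_spam("192.0.2.1", "CLICK HERE to win a free cruise, ACT NOW!"))          # True
print(looks_like_spam("192.0.2.1", "Minutes from yesterday's editorial meeting attached."))  # False
```

The analogy only stretches so far, of course: spam has relatively stable formats and senders, whereas disinformation mimics legitimate journalism, which is why detecting it remains a harder research problem.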
In the meantime, there is no easy answer to the issue of fake news. As tech companies experiment with both high- and low-tech solutions, we as consumers of media need to take some responsibility for better identifying fake news. Whether that is introducing source literacy in schools to help people spot false information online, as is currently being rolled out in France, or taking advantage of the moderator guidance offered by browsers and platforms.