Last year, UK home secretary Amber Rudd urged media platforms such as Facebook, Twitter and YouTube to do more to remove online terrorist content. Recognising the need to act, these companies set about tackling the problem, and because of their size they had the resources to do so.

Smaller platforms, however, may not be in a position to effectively tackle extremist material being posted on their sites, and for that reason are increasingly targeted by terrorist groups like Islamic State (IS).

To help solve this, the UK government has invested £600,000 in an artificial intelligence tool that can automatically detect 94% of IS propaganda with 99.995% accuracy.

The technology, developed by London-based ASI Data Science, can be used by any platform, and will be offered to the likes of Vimeo, Telegra.ph and pCloud so that they can begin to eliminate terrorist content from their sites.

“Over the last year we have been engaging with internet companies to make sure that their platforms are not being abused by terrorists and their supporters. I have been impressed with their work so far,” said Rudd. “There is still more to do, and I hope this new technology the Home Office has helped develop can support others to go further and faster.

“The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images.”

How does ASI’s AI work? The machine learning behind the hidden methodology

ASI trained its algorithm using over 1,000 IS videos. For obvious reasons, the company isn’t sharing its AI’s exact methodology, but the BBC reports that, put simply, it draws on characteristics typical of IS and its online activity to detect new uploads.


Ian McLoughlin, professor of computing at the University of Kent, speculated further on the factors the software may use to make its decisions.


“While the company behind this algorithm is extremely sensitive to revealing any details, we know that it’s based on machine learning technology. From the graphics shown during the BBC interview we can infer that the tool works on a frame-by-frame basis (and is possibly a WaveNet approach). This means it doesn’t analyse a recording in its entirety, but analyses each individual frame of the video,” said McLoughlin.

"Because actions and words in a video are very much related to the context of what has happened before (i.e. individual frames are not really important in isolation but in the few seconds of time that forms their context), there needs to be something in the algorithm that ties context of frames together, and that may be key. 


“The biggest benefit of a frame-by-frame analysis would be to detect embedded content, i.e. segments of terrorist propaganda embedded in an otherwise innocuous video. A secondary benefit is being able to operate on real-time data (i.e. material as it is being broadcast),” he added.


“For any machine learning system, final performance is related to the inherent ability of the analysis and processing technique, plus the quality and quantity of the training material. As time goes by, performance is clearly likely to improve.”
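ASI has not published its method, but McLoughlin’s description suggests a pipeline in which some classifier scores each frame and the algorithm then ties consecutive frames into a few seconds of context. Purely as an illustration of that idea – not ASI’s actual system, and with every name, window size and threshold below a guess – a sliding-window aggregation over per-frame scores might look like this:

```python
# Hypothetical sketch only: aggregate per-frame classifier scores over a
# sliding window so that a few seconds of context, not any single frame,
# drives the decision. Window size and threshold are illustrative guesses.
from typing import List

def flag_video(frame_scores: List[float],
               window: int = 75,        # ~3 seconds of video at 25 frames/sec
               threshold: float = 0.8) -> bool:
    """Flag a video if any run of `window` consecutive per-frame scores
    (produced upstream by some frame classifier) averages above `threshold`."""
    if not frame_scores:
        return False
    window = min(window, len(frame_scores))
    for start in range(len(frame_scores) - window + 1):
        chunk = frame_scores[start:start + window]
        if sum(chunk) / window > threshold:
            return True
    return False

# A one-frame spike is ignored; a sustained few-second segment is flagged,
# which is the embedded-content case McLoughlin describes.
spike = [0.1] * 100 + [0.95] + [0.1] * 100
embedded = [0.1] * 50 + [0.9] * 75 + [0.1] * 50
print(flag_video(spike), flag_video(embedded))   # False True
```

Because such a window only ever needs the frames seen so far, the same logic could in principle run on material as it is being broadcast, which matches McLoughlin’s point about real-time operation.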

Made for smaller platforms, forced on larger platforms

Home Office analysis found that IS supporters used more than 400 unique online platforms to push out their content in 2017, with 145 new platforms used to disseminate IS material between July and the end of the year. Such figures demonstrate the importance of technology that can be applied across different platforms.


"The technology is there. There are tools out there that can do exactly what we're asking for. For smaller companies, this could be ideal," Rudd said to the BBC.

Even though the government-funded AI tool has been made with smaller platforms and cloud storage sites in mind, the home secretary told the BBC that, with regard to larger platforms, “We’re not going to rule out taking legislative action if we need to do it.”


The larger platforms, though, appear to be tackling the matter on their own. Last December, YouTube said it had removed more than 150,000 videos promoting violent extremism, with its algorithms flagging 98% of suspect videos, while Facebook has said that its own systems remove 99% of IS and al-Qaeda terror-related content.

False positives and false negatives: The cost of a less ‘open’ internet

In the past, advocates of an ‘open’ internet have criticised tools similar to the one developed by ASI, arguing that such efforts can produce false positives: content that is not problematic ends up being taken down or blocked.


ASI has said the system typically flagged 0.005% of non-IS video uploads, so on a site with five million daily uploads, it would flag 250 non-IS videos for review.
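That 250 figure is simply the false positive rate multiplied by the daily upload volume, as a quick back-of-the-envelope check shows (assuming the 0.005% rate applies uniformly to all non-IS uploads):

```python
# Back-of-the-envelope check on ASI's stated figures; the assumption that
# the rate applies uniformly to all non-IS uploads is ours.
false_positive_rate = 0.005 / 100    # 0.005% of non-IS uploads wrongly flagged
daily_uploads = 5_000_000            # a site with five million daily uploads

print(round(false_positive_rate * daily_uploads))   # 250 videos for human review
```

Each of those videos would still need a human reviewer, which is the workload cost McLoughlin raises below.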


However, McLoughlin points out that the danger of a false negative could be more worrying.

“If 94% of videos were correctly recognised with 99.995% accuracy, the big question is what happened to the 6% that were not mentioned? Were those actual terrorist content that would be missed (false negative), or legitimate content that was incorrectly flagged (false positive)?” said McLoughlin.

“The cost of the former is that something dangerous slips through; the cost of the latter is that a human – who would need to review any flagged content anyway – is loaded with additional work.


“It is important to analyse the errors in any AI system, and this is no exception. However, revealing the characteristics of this performance – i.e. which 60 videos are not captured – and especially revealing what kinds of videos are correctly and incorrectly recognised, would give too many secrets to those who are producing such material.”
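One way to make that concrete is to read the published figures as standard classifier metrics: the 94% of IS videos flagged is the detection (true positive) rate, the 0.005% of legitimate videos flagged is the false positive rate, and the unaccounted-for 6% are the false negatives. That reading is an interpretation rather than anything the Home Office has confirmed, but with illustrative upload volumes it reproduces the 60 missed videos McLoughlin mentions:

```python
# Interpreting the published figures as a confusion matrix. Only the two
# rates come from the reported numbers; the upload volumes are illustrative.
true_positive_rate = 0.94        # 94% of IS videos correctly flagged
false_positive_rate = 0.00005    # 0.005% of non-IS videos wrongly flagged

is_uploads = 1_000               # illustrative number of IS uploads
benign_uploads = 5_000_000       # illustrative number of legitimate uploads

caught = is_uploads * true_positive_rate                  # ~940 detected
missed = is_uploads * (1 - true_positive_rate)            # ~60 false negatives
wrongly_flagged = benign_uploads * false_positive_rate    # ~250 false positives

print(f"caught {caught:.0f}, missed {missed:.0f}, wrongly flagged {wrongly_flagged:.0f}")
```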

