On February 13th Amber Rudd announced the creation of a new tool to remove extremist content from the internet. However, to focus too much on content removal and takedowns would be a grave error. We are losing sight of the fact that the online space is not just a breeding ground for extremism; it can also be turned against violent extremist groups to reach vulnerable individuals and change their paths. This is why, over the past six months, I’ve run a programme which takes a different approach.
There’s no doubt extremist content is easily available online. A quick search on major platforms for key jihadist propaganda material returns dozens of results, some of which have been available for some time. Even so, jihadist content has a far shorter lifespan than content published by violent far-right groups. Forums, websites and even entire platforms are dedicated to communicating and spreading white supremacist ideology to legions of sympathisers all over the world – often with no policing whatsoever. And yet the violent far right as a form of extremism is often downplayed or completely overlooked. Amber Rudd’s announcement made no mention of it whatsoever.
For the past six months I’ve spearheaded Moonshot CVE’s work in the United States, and in partnership with the Gen Next Foundation we have been deploying the Redirect Method across the country. It offers a different solution to the problem of violent far-right and jihadist content online: rather than simply taking it down, it reaches at-risk audiences and offers them an alternative message that’s specifically designed to be safe but equally resonant.
The Redirect Method USA used advertising to reach individuals searching for violent extremist content on search engines. In just a few months, we reached 56,000 people with our ads. Of those, we redirected 1,300 away from violent content and towards our alternative message: YouTube playlists showcasing content that counters the main narratives used by violent extremist groups to radicalise and recruit individuals.
Our audience watched over 100 hours of this content – time they might otherwise have spent engaging with violent extremist material. Removing the content alone doesn’t remove the appetite for it; engaging with that appetite and leading it elsewhere is a more realistic, and workable, form of intervention.
We know our violent far-right audience well and in most cases they are far from naïve. On the contrary, they often come armed with pre-existing knowledge about the white supremacist scene and consume propaganda on a daily basis: white power songs; designs of swastika tattoos; Ku Klux Klan merchandise and regalia; white supremacist literature. What’s more, their queries on search engines demonstrate a level of knowledge and appetite that is unlikely to be discouraged just because the most obvious results have been removed.
Not only are simple takedowns far from being the solution – in many ways, they could exacerbate the problem. The first issue is their inherent subjectivity. Who decides what should be taken down? Any such decision – even if built into an algorithm – is grounded in personal perceptions of what constitutes violent extremist content and what does not. Where is the line for content that is extremist but not violent? Or incredibly violent, but not extremist? The distinction is not an easy one, which makes it only more likely that the ‘answer’ will be straightforward censorship that suppresses critical thinking and bans all non-mainstream opinion (whatever that’s considered to be).
Even if censorship could be avoided, takedowns alone are not going to solve violent extremism. All they’ll achieve is making extremist content harder to find on mainstream platforms. And while this might dissuade casual searchers, the results of our work are clear: it will do very little to diminish the appetite of the at-risk audiences – the people searching who are most likely to take further action. For them, it will simply mean finding what they wanted on a different platform, which itself can yield counter-productive results.
When the “Unite the Right” rally took place last August in Charlottesville, we at Moonshot CVE had been tracking violent far-right activity on search engines for over a year. In the days immediately following the event we saw a huge spike in searches for content related to the violent far right across the United States. These searches were not merely looking for information: on the contrary, they were clearly centred on a strong appetite for violent content – content related to killing ethnic minorities, and queries indicating a desire to donate to the Ku Klux Klan. Within just a week, more than 20,000 searches were recorded from individuals indicating a desire to get involved with violent far-right groups – an increase of 400% on the averages recorded in previous weeks.
After Charlottesville, there was an unprecedented crackdown on violent far-right content – pages were taken down, websites disappeared, forums were banned. But the data we gathered showed no decrease in appetite for this content. On the contrary, it spiked.
Events like Charlottesville show how offline activity can galvanise online audiences, and how crucial the online space then becomes as a place to engage in dialogue. If this space is completely restricted, what are we left with?
Takedowns have to be coupled with digital messaging and programming. Performed in isolation they will push users towards encrypted apps and unrestricted spaces such as the Dark Web – spaces much harder to police and where, more importantly, it is much harder to reach at-risk people and offer an alternative path.
The views expressed in this article are Ludovica’s own and do not represent those of Moonshot CVE.