In December 2016, National Action became the first extreme right-wing group to be proscribed as a terrorist organisation in the UK. This, however, did not stop the group from continuing its operations under new names, such as ‘Scottish Dawn’ and ‘NS131’, both of which were more recently banned by the Home Office.
In a previous blog post, ‘The UK Extreme Right on Twitter’, we highlighted two important gaps in the struggle against extreme right-wing activity on social media and the wider surface web. First, the use of servers overseas, especially US servers guaranteeing First Amendment free speech protections, can make it extremely difficult to remove content from the Internet. Second, Twitter had not suspended National Action’s old Twitter account, merely opting to withhold it in France and the UK, which meant that its content remained available to anyone using a Virtual Private Network (VPN) to bypass Twitter’s ‘withholding’ policy.
More recently, Twitter has updated its rules on abusive, hateful and violent content, which have been in force since Monday, 18 December. One of these new rules explicitly targets hateful conduct, stating: “You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease”. Furthermore, accounts that express affiliation with groups that use or promote violence to further their interests will be suspended. Hateful images, however, such as the National Socialist swastika, will only be hidden from users behind a ‘sensitive media’ prompt.
Enforcement of these new rules led to the suspension of the accounts of two leaders of a far-right group in the UK, Britain First, as well as the group’s official account (@BritainFirstHQ). The leader of Britain First, Paul Golding (@goldingbf), and his deputy, Jayda Fransen (@jaydabf), can no longer use their existing Twitter accounts, nor can Twitter users view past tweets from these accounts on the Twitter website (although previous tweets are available in cached versions of the respective user pages). This new approach therefore represents an improvement of sorts in Twitter’s efforts to remove hateful content from the platform, efforts that were again criticised today in Parliament.
Following the rules change, we repeated our previous effort to locate National Action-related content on Twitter using a VPN. It appears that Twitter has now suspended National Action’s old account as well (although, like the accounts noted above, cached content is of course still visible online).
While the enforcement of Twitter’s new rules can be interpreted as a step forward in the campaign to remove hateful content from the platform, it is likely that we will see reactive adaptations in the social media and wider internet activities of extreme right-wing groups. It is possible, for example, that suspended individuals or groups might appeal against the suspension of their accounts. It therefore comes as no surprise that Britain First has started a petition demanding that their accounts be reinstated.
With less effort, individuals and groups could simply create new accounts, although maintaining a similar content-posting profile would almost certainly lead to the suspension of those new accounts too.
Alternatively, extremists might in future concentrate their activities on a different major social media platform, such as Facebook, or shift towards smaller-reach, more lenient platforms or the groups’ official websites. Interestingly, following the suspension of Britain First’s Twitter accounts, the group’s Facebook page was trending on Facebook last night, most probably a side-effect of yesterday’s crackdown. Likewise, the group’s YouTube channel remains online, which raises important questions about coordinated efforts by social media companies.
Britain First on Facebook and YouTube
Tech giants, such as Facebook and Google, have significantly improved their efforts to tackle violent extremism online by hiring more staff to remove extremist and other hateful content from their platforms. Today, Facebook told the Home Affairs Select Committee that it now had a team of more than 7,500 staff, while Google is nearing 10,000. The number of staff working on the removal of extremist content at Twitter remains unknown. While hiring more staff is necessary and productive, social media companies should also cooperate to establish a common set of rules to prevent groups like Britain First from simply ‘migrating’ from one platform to another after a suspension.
Improved coordination between major social media platforms might increase the effectiveness of efforts to remove extremist content from social media sites. But with the vast array of available platforms and the ease of creating new accounts, the ‘Whac-A-Mole’ spiral of suspension, creation of new accounts, and renewed suspension is likely to continue. This reinforces the lesson that social media companies should be vigilant in monitoring the effectiveness of their respective site rules, and that there must be continuous research into new developments in extremists’ use of social media platforms.
This post was written by Lorand Bodo, a researcher at Ridgeway Information focusing on open source intelligence (OSINT) and online extremism. You can find him on Twitter @LorandBodo.
This post was edited by Dr Joe Devanny, programme director for security at Ridgeway Information. You can find him on Twitter @josephdevanny.