Exploiting the Algorithm: How British Extreme Right-Wing Individuals and Groups Leverage Grok and Generative AI for Malign Purposes

By Alice Sibley and Joshua Bowes

As artificial intelligence (AI) becomes increasingly embedded in social media platforms, wariness around its harmful exploitation has grown. As previous research has shown, malign actors, ranging from misogynistic online users to extremists, have exploited AI to spread harmful conspiracy theories, share racist images and disseminate disinformation. Generative artificial intelligence (GAI) in particular has proven integral to the creation and distribution of harmful content. GAI models like X’s Grok have become popular tools for malicious use among British Extreme Right-Wing (ERW) individuals and groups, and pose a threat to civil society when used for harmful ends. The ERW uses AI-generated content to create propaganda, digital media, images and videos supporting and promoting extreme-right causes.

In December 2024, Grok was integrated into X and became available to users for free in the UK. Its integrated functionality, its ability to create images and its accessibility have led to more people using Grok on X. This Insight explores the ERW’s relationship with Grok in an attempt to understand how such individuals and groups exploit GAI for malevolent purposes, including creating images and videos to support anti-immigrant and white supremacist ideology.

Using Grok to Call for Remigration

On 11 December 2024, shortly after the tool was integrated into X, Martin Sellner called on followers to create remigration images using Grok. The term remigration has been adopted by ERW groups such as the Homeland Party (HP) and Britain First (BF) as a means of calling for the return of migrants to their homelands. Sellner’s call highlights that some in the ERW are using GAI to spread anti-migrant and other ERW narratives online.

How the British ERW is using Grok

Posts suggest that the ERW ecosystem is using Grok (both the chatbot and the image generator) on X. Members of HP have used Grok to answer questions about remigration, and supporters of Calvin Robinson have created images of him as a warrior. Supporters of Tommy Robinson have posted images on X depicting him as King Arthur and Robin Hood, painting him as a man of the people and a political prisoner protecting the most vulnerable. More recently, supporters have posted images of him in prison, likening his plight to that of Nelson Mandela as a political prisoner.

One of PA’s leaflets appears to have been created using GAI, while on 3 December, Steve Laws asked Grok to produce two images of England without immigration, commenting: “Obviously, it looked the same because they’ve not contributed anything significant”. On 20 October 2024, Active Patriot posted that he had used Grok to create his new logo; on 28 October, he posted the same image saying “love Grok”.

Seven Core ERW Entities

This section investigates the use of GAI by seven ERW entities: Britain First, Tommy Robinson, Patriotic Alternative, Homeland Party, Steve Laws, Active Patriot and Truth Hurts.

Britain First is a far-right, anti-Muslim, Christian group created in 2011 by former members of the British National Party and led by Paul Golding and Ashlea Simon. The group has 136k followers on X and has shared AI-generated videos and images to promote its political rallies, including an AI-created video of British Prime Minister Keir Starmer giving money to refugees on a boat and running away.

Tommy Robinson is the former leader of the English Defence League. He mainly spreads anti-Muslim narratives and has a significant following on X, with 1.2 million followers. Robinson’s admin team is using GAI to create images of Robinson in prison in an attempt to generate emotional appeal and present him as a political prisoner.

Patriotic Alternative (PA) is a white nationalist group pushing for remigration. Led by Mark Collett, the group has 2k followers on X. The Homeland Party is a splinter group of PA that also pushes for remigration; Steve Laws ran as a council candidate for HP in the May 2025 elections. Homeland Party has 43k followers and Laws has 100k followers on X.

Active Patriot and Truth Hurts are both ‘migrant hunters’ who harass asylum seekers living in hotels. Active Patriot has 180k followers and Truth Hurts has 32k followers on X.

| Entity | Number of GAI posts | Number of posts | GAI percentage |
| --- | --- | --- | --- |
| Britain First | 27 | 1,192 | 2% |
| Tommy Robinson | 16 | 818 | 2% |
| Patriotic Alternative | 4 | 212 | 2% |
| Active Patriot | 7 | 775 | 1% |
| Homeland Party | 4 | 551 | 1% (potentially) |
| Steve Laws | 4 | 934 | 0.4% (potentially) |
| Truth Hurts | 0 | 182 | 0% |

Note. The table shows the number of all GAI posts between 1 January 2025 (or as far back as data was available) and 9 May 2025. Exceptions: Patriotic Alternative, 9 January 2025 to 9 May 2025; Steve Laws, 11 February 2025 to 9 May 2025; Tommy Robinson, 22 March 2025 to 9 May 2025.

Britain First had the most GAI-created content with 27 posts, followed by Tommy Robinson with 16 and Active Patriot with 7. For BF, Tommy Robinson and PA, 2% of overall posts were GAI-created. Overall, the data suggest that the ERW entities analysed make limited use of GAI, with only around 1-2% of their recent posts being GAI-created (or likely GAI-created). However, they are using GAI for a range of purposes.

Narrative and Sentiment Analysis

These entities are using GAI to spread ERW narratives. Of 62 GAI images, 25 contained anti-migrant narratives, 16 anti-establishment, 9 anti-Muslim, 4 anti-Starmer and 2 anti-grooming. This means that 40% of GAI content posted by these entities within the data collection period contained anti-migrant content, which is unsurprising given the push by BF and HP for remigration. Britain First and its leaders are responsible for the two main spikes in GAI-created posts since January 2025, on 2 February 2025 and 7 May 2025. The spike on 2 February was due to BF posting GAI content to promote its ‘remigration’ rally in Nuneaton; the spike on 7 May was due to Paul Golding and Ashlea Simon posting about their use of the chatbot Grok, as discussed below.

Grok is left-wing and ‘woke’

As previous research has shown, political extremists and white supremacists have claimed that GAI tools like Google’s Gemini are biased and inherently ‘woke,’ meaning steeped in a belief system that skews towards left-leaning political views. On 3 May 2025, Paul Golding posted the first of a string of posts about Grok claiming that its outputs are left-leaning. This was followed by two more posts from Golding on 7 May and a post from Tommy Robinson’s admin on 27 May asking “@elonmusk what’s happened to Grok?” after Grok refused to write an article encouraging people to support Robinson. In response, users commented that Grok was pro-diversity, woke, left-leaning and racist against white people, continuing the narrative previously seen with Gemini. They claimed that Grok was a ‘democrat apologist,’ that it was owned by Jewish people, that Golding should not use Grok because it scrapes information from left-leaning Wikipedia, Hope not Hate and other left-wing news sources, and that users should boycott Musk and Tesla. Some users claimed that they no longer use Grok due to its left-leaning bias, stated that ChatGPT was less biased, and claimed that Muslims have infiltrated every social media platform.

Implications

Grok and GAI have been used by some in the ERW ecosystem to spread disinformation narratives about migration, refugees, Muslims, the government and political figureheads. As shown above, the ERW has employed Grok to generate provocative and hateful material built around common right-wing conspiracies and beliefs: that Britain has become overrun with migrants, that the UK establishment is enabling and facilitating a large-scale influx of migrants into the country, and that prominent politicians are implicated in various fringe political activities. Grok’s ability to deftly create propaganda in the form of text, images and memes enables the rapid creation of tailored, inflammatory content. Using Grok, the ERW can produce photorealistic images and visuals to spread hate and disseminate conspiratorial beliefs and extremist narratives. Such content can resonate with disenfranchised groups and stoke division or radicalise individuals looking to latch onto a political cause, as seen in the Southport riots of 2024, which were galvanised by a campaign of online disinformation. Some ERW groups use memes and visuals to target young audiences with content that presents itself as humorous or relatable, reinforcing echo chambers where radical views are amplified. On 28 May 2025, it was announced that Grok will be integrated into Telegram, increasing access to Grok on the encrypted messaging platform. Telegram has previously been used to spread dis/misinformation, and the integration of Grok may further increase the potential for different groups to spread disinformation at scale.

While some images are labelled with ‘Grok’ in the right-hand corner, most content was not labelled, making it difficult to confirm whether it was made using GAI. Community notes on X are not enough to counter dis/misinformation and could even fuel further misleading content, as online users will dispute whether GAI content is real or fake. Without a logo identifying GAI content, it is easy to spread disinformation; yet even with such labelling, any watermark or logo added by a GAI tool could easily be removed, making GAI disinformation hard to counter in any simple way.

Even though Grok has been used by the ERW to promote anti-migrant, anti-Muslim and anti-establishment narratives, BF, Paul Golding, Ashlea Simon, Carl Benjamin, Tommy Robinson and their supporters have expressed concerns over Grok’s perceived ‘wokeness’ and left-leaning bias. In some cases, the ERW has referred to Grok’s ‘anti-whiteness,’ concerns interlinked with similar beliefs about Google’s Gemini, another AI tool the ERW regards as woke and anti-white. This could mean that the ERW ecosystem uses Grok less frequently in the future, limiting its attractiveness as a means of spreading mis/disinformation and ERW narratives.

Recommendations

To prevent dis/misinformation from being shared at a rapid pace, X might employ new strategies to challenge the spread of harmful and misleading content. Because generated content is increasingly realistic, detecting AI material with the naked eye is becoming ever more difficult. Instead of relying on a watermark that can be removed, X might integrate a mechanism that states directly below images and videos that the shared content was made using Grok or GAI. The UK’s 2023 Online Safety Act is a start in addressing this growing issue, but it fails to target individual misuse of AI tools like Grok: using clever prompts to bypass content filters, tech-savvy extremists can generate offensive material with reworked inputs and prompts.

The decentralised nature of AI-generated content renders it difficult to monitor, complicating law enforcement efforts. Working with tech companies and industry stakeholders, the UK’s communications regulator Ofcom might impose stricter guardrails on tools like Grok to prevent their malign exploitation. Ofcom could expand its efforts beyond the Online Safety Act to proactively engage with AI developers; such partnerships might encourage stronger voluntary safeguards or restrictions on high-risk functionalities, such as photorealistic depictions of violence or hate. Since extremists can bypass filters with rephrased inputs, Ofcom could also require platforms like X to use natural language processing to flag suspicious patterns in AI content creation, encouraging broader application across the industry.
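To illustrate the kind of pattern-flagging mechanism suggested above, the sketch below shows one very simple approach: fuzzy string matching against a blocklist, so that lightly reworded versions of a disallowed prompt are still caught. This is a minimal toy example using only the Python standard library; the blocklist entries and similarity threshold are illustrative assumptions, not any platform’s actual policy, and a production system would use far more robust semantic matching.

```python
# Minimal sketch: flag prompts that are close paraphrases of known
# disallowed prompts, using stdlib fuzzy matching (difflib).
# BLOCKLIST entries and the threshold are hypothetical examples.
from difflib import SequenceMatcher

BLOCKLIST = [
    "generate an image showing migrants overrunning england",
    "write an article encouraging people to support this cause",
]

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two strings (0.0-1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_prompt(prompt: str, threshold: float = 0.8) -> bool:
    """Return True if the prompt closely resembles a blocklisted one."""
    return any(similarity(prompt, banned) >= threshold for banned in BLOCKLIST)
```

A trivially reworded prompt such as adding punctuation or changing capitalisation still scores near 1.0 against its blocklisted source, while unrelated prompts score low; real paraphrase detection would need embedding-based similarity rather than character-level matching.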


Alice Sibley holds a PhD in Politics from Nottingham Trent University and specialises in counter-extremism and online harms. She has worked as a researcher and consultant for organisations including Moonshot, Crisp and the Australian GLBTIQ+ Multicultural Council, with a focus on far-right extremism, Extreme Right-Wing Terrorism, and digital threats. Her academic work is published in outlets including The British Journal of Politics and International Relations, The Conversation, Manchester University Press, LSE Blogs and VOX-Pol, and she has contributed to several edited volumes on digital extremism and online safety.

Joshua Bowes is an internationally recognised and published terrorism and extremism researcher with 30 pieces of published work focused on online extremist ecosystems, far-right groups and the confluence of terrorism and technology. Joshua is a member of the Extremism and Gaming Research Network (EGRN), the University of Bath’s Reactionary Politics Research Network (RPRN), and the Global Internet Forum to Counter Terrorism’s (GIFCT) Year Four and Year Five Working Groups.

Image Source: Pexels