By Joshua Skoczylis and John Babalola
The question is no longer whether social media platforms influence politics. They do. The question is whether democratic life can endure when the central infrastructure of public communication is engineered to reward extremism, disinformation, and division — and when its owners are increasingly invested in this outcome.
The far right hasn’t merely found an audience online. It has found a machine that elevates its worldview as a default setting. Nativist panic spreads faster than evidence. Conspiracy travels further than correction. Political violence is reframed as self-defence. Those working in counter-extremism know this landscape intimately. What the field hasn’t reckoned with seriously enough is that the machine itself is the problem.
The Myth of the Neutral Platform
There’s a comforting story we tell ourselves: that platforms are neutral spaces exploited by radicals. Neutrality is a myth. Platforms are built around engagement-maximising systems that privilege whatever keeps users scrolling, sharing, and returning. They don’t ask whether content is good for democratic culture; they ask whether it is sticky. The far right specialises in stickiness. Its narratives are simple, emotive, and endlessly adaptable — the nation is betrayed, enemies are within, outsiders flood in. These claims don’t require evidence to travel. They require only a target and a pulse.
The result is not a sudden radicalisation event. Our analysis of the relevant literature for the forthcoming Handbook of Social Media and Violence (De Gruyter) — available as a preprint at SSRN — identifies what we term a process of mutual exploitation of engagement incentives: platforms structurally amplify extreme content through profit-driven metrics, whilst extremist actors deliberately produce content engineered to trigger those same metrics. Neither the algorithm nor individual choice acts in isolation. Users are nudged from mainstream grievance toward more radical frames because each step is rewarded with visibility — guided into rabbit holes by systems that interpret attention as value and outrage as success.
Engagement as Business Model
Platforms make money by selling attention. As we note in our analysis, the public is not their customer; we are their product, with our data and attention sold to the highest bidder. Content that provokes fear, anger, or tribal triumph reliably generates engagement, so it wins the competition for visibility. Recommendation systems then amplify that winner across millions of screens.
This is why moderation tweaks and fact-check labels struggle to make a dent. Even if platforms removed every explicitly extremist account tomorrow, the incentive structure would remain intact. A business model built on emotional escalation will always find new escalators. Facebook’s own internal research found that 64 per cent of users joining extremist groups did so because the platform’s recommendation algorithms promoted those groups to them. When policy teams proposed algorithmic changes to reduce polarisation, executives declined, characterising the safety measures as anti-growth. The House of Lords’ 2021 report on freedom of expression online put this plainly: platforms shape what is said and seen because design and ranking systems are not passive — they are constitutive of the public sphere. The architecture is political. It is also, from a prevention standpoint, a structural radicalisation vector operating at a scale that individual-level interventions cannot meaningfully address.
The Far Right’s Cultural Advantage
The far right’s rise online is also cultural. Our research identifies how movements have mastered meme warfare, irony, and coded humour to simultaneously spread extremist ideology and maintain plausible deniability. Racism is reframed as a joke; misogyny as banter; conspiracy as entertainment. Humour lowers defences. A teenager encountering far-right memes may not initially recognise the ideological content, interpreting racist material as merely edgy humour rather than recruitment. Seemingly innocuous symbols let far-right communication hide in plain sight, creating an asymmetry in which insiders coordinate openly whilst moderators and casual observers see nothing to flag. The style of social media — quick, performative, tribal — is purpose-built for this kind of recruitment. It is, in effect, a scalable onboarding system for extremist ideologies.
An important caveat is needed here. Our own published research on strain theory and far-right extremism found that exposure to online extremist content is neither a sufficient nor a particularly strong predictor of actual radicalisation. The pathway from algorithm to action is neither automatic nor inevitable. But this is precisely the point: the structural amplification of far-right content normalises ideas, shifts the Overton window, and degrades democratic discourse at scale — even where it does not produce individually radicalised actors. The harm is systemic before it is individual.
Whose Freedom?
These platforms don’t simply distort speech. They redefine what freedom means in a digital society. Social media sells itself as liberation — a borderless public forum where anyone can speak. But the freedom being consolidated is not evenly distributed. Platforms are privately owned systems with dominant control over visibility, access, and monetisation. Their version of free speech is organised around corporate autonomy: the freedom of owners and investors to set the terms of public life without democratic accountability.
Elon Musk’s dismantling of content moderation at X and Meta’s 2025 decision to end third-party fact-checking are not departures from the platform model — they are its logical conclusion. When platform owners determine that their material interests align with facilitating far-right content, they are not failing in their responsibilities; they are fulfilling the structural imperatives of their business model. The normalisation of far-right discourse is not occurring despite platform governance. It is occurring through it.
The Democratic Stakes
Democracy depends on shared reality, pluralism, and the capacity to argue without tearing society apart. Platform architecture corrodes all three. It fragments reality by sorting people into algorithmic enclaves where different facts and enemies circulate. It rewards the most anti-pluralist content — ideas that deny opponents legitimacy and cast compromise as betrayal. It accelerates institutional distrust whilst presenting extremists as the only truth-tellers. When a society can’t agree on what is happening, it can’t govern itself democratically.
Governments are attempting to respond. The EU’s Digital Services Act and the UK’s Online Safety Act push toward transparency and harm reduction. These are necessary. But they are fighting a system designed for the opposite goal. The Online Safety Act, in particular, has already run into difficulty: VPN downloads surged by 1,800 per cent after its age verification measures took effect, and critics argue that its definitions are vague and that it targets individual items of content rather than the systems that amplify them. The core contradiction remains: social media platforms are treated as democratic infrastructure but governed as profit-maximising property.
What Prevention Practice Requires
For those working in counter-extremism, this structural account has uncomfortable implications. In earlier work developing the concept of ghost security — where the spectacular performance of governance substitutes for its substance, producing the appearance of protection whilst the underlying conditions of insecurity deepen — we argued that this dynamic is not a policy failure but a political function. Content moderation risks functioning in exactly this way. If the architecture is the problem, individual-level interventions are, at best, harm reduction. At worst, they provide political cover for a system structurally designed to radicalise at scale.
Meaningful prevention now requires confronting platform design as a structural problem: mandatory algorithmic transparency, genuine platform accountability, and governance models that treat digital public space as something other than private property. Alternatives exist — platform co-operatives owned by users, public-interest digital infrastructure accountable to democratic institutions, and mandated interoperability that prevents monopolistic control of communicative infrastructure. These ideas sound radical only because we have normalised an arrangement that should be unthinkable.
Democracy is not guaranteed by good intentions. It survives only when the systems that organise public life are built for democratic ends. Right now, the central machinery of our public sphere is owned, optimised, and increasingly politicised by a small class of private actors who profit from fracture. Regulatory tinkering cannot resolve this contradiction. The choice is straightforward: either we democratise the infrastructure of speech, or we accept a politics shaped entirely by those who already own everything else.
Joshua Skoczylis is Senior Lecturer in Criminology and Counterterrorism Studies at the University of Lincoln.
John Abiodun Babalola is a PhD researcher at the University of Lincoln.
Their chapter ‘Amplified Hatreds: Far-Right Extremism and the Algorithmic Infrastructure of Social Media’ is available at SSRN and is forthcoming in the Handbook of Social Media and Violence (De Gruyter).