By Mischa Gerrard
Online extremism research tends to treat gendered AI-enabled harms – such as non-consensual sexual deepfakes and synthetic child sexual abuse material (CSAM) – as peripheral to core radicalisation mechanisms. Yet these harms are more than isolated safety problems: they function as early-stage enabling infrastructures through which new techniques of coercion, evidentiary evasion, and networked mobilisation are tested before diffusing into overtly political or violent online ecosystems. This matters for understanding how extremist innovation accumulates and spreads.
Non-consensual sexual deepfakes, conceptualised as sexual digital forgeries, are particularly illustrative. They exploit gendered asymmetries of digital life to erode verification norms, degrade democratic participation, and generate the epistemic instability upon which broader forms of political manipulation depend. The fact that such content now circulates seamlessly across mainstream platforms underscores that this is an infrastructural problem rather than a marginal subculture.
AI-generated child sexual abuse material represents a parallel and more acute manifestation of the same dynamic. Where sexual digital forgeries destabilise trust and participation by rendering women’s bodies falsifiable, synthetic CSAM extends this epistemic disruption into the domain of transnational crime and child protection, severing the link between image, victim, and offender in ways that directly undermine investigative and governance infrastructures.
What makes these environments particularly conducive to extremist innovation is the convergence of four structural conditions: high engagement, weak or uneven enforcement, platform architectures optimised for amplification, and the social devaluation of those most frequently targeted. Together, these conditions lower the costs of experimentation, normalise coercive and manipulative techniques, and allow new tactics to be refined before migrating into explicitly political or extremist domains.
Gendered Synthetic Abuse as an Enabling Condition
Understanding this dynamic requires shifting attention from individual instances of harm to the structural conditions they generate. What matters for extremist innovation is not the specific form of abuse, but the capacities it normalises: deniability, scalability, attribution failure, and the strategic manipulation of uncertainty. For example, synthetic fabrication enables what has been described as the “liar’s dividend,” whereby authentic evidence can be dismissed as manipulated, diffusing accountability and complicating attribution. Gendered synthetic harms are especially effective at generating these conditions because they operate at the intersection of high engagement, weak enforcement, and social devaluation.
Unlike discrete acts of violence or harassment, synthetic abuse strikes at verification, attribution, and truth-finding, producing what can be understood as a condition of ambient insecurity. The harm does not reside solely in any single image or artefact, but in the persistent possibility of fabrication. When bodies become technically falsifiable, reputational harm becomes easier to inflict and harder to contest, denials become more plausible, and the boundary between authentic and fabricated material is rendered socially ambiguous. These conditions are not incidental to extremist activity; they are actively useful to movements that depend on intimidation, narrative manipulation, and the destabilisation of trust.
These dynamics are embedded within mainstream digital infrastructures rather than confined to fringe spaces. The same platform architectures that enable the rapid circulation of sexualised synthetic content – algorithmic amplification, low-friction sharing, and uneven moderation – also underpin extremist communication and mobilisation. In this sense, gendered synthetic abuse functions as infrastructural rehearsal: techniques of coercion, humiliation, and evidentiary evasion are refined in sexualised contexts before being redeployed in more overtly political settings, where the stakes are higher but the mechanisms are already familiar.
The Feminised Testbed Hypothesis and the Intimate Security Dilemma
The dynamics outlined above point to a broader structural pattern that can be described as the Feminised Testbed Hypothesis – a term I use to capture the tendency for destabilising digital technologies to mature first in gendered and sexualised environments before diffusing into political, criminal, and extremist domains. This pattern reflects how risk, vulnerability, and enforcement are unevenly distributed across digital ecosystems, leaving certain populations more exposed to experimental forms of harm and control.
Sexualised digital spaces offer particularly conducive conditions for technological and behavioural experimentation. They generate high volumes of engagement, produce abundant training data, and sit at the intersection of technological novelty and regulatory hesitation. As a result, violations in these spaces often provoke delayed or fragmented institutional responses, lowering the costs of innovation for those testing new tools or tactics. The techniques later associated with political disinformation, harassment campaigns, and extremist mobilisation can be refined in environments where abuse is normalised and accountability is diffuse.
This testbed dynamic is reinforced by platform incentives. Recommendation systems optimised for attention rather than harm minimisation amplify transgressive content, while moderation frameworks struggle to distinguish between consensual material, abuse, and synthetic fabrication at scale. These conditions enable iterative experimentation: tools are refined, distribution strategies tested, and user responses calibrated before similar techniques appear in explicitly extremist contexts.
The security implications of this pattern can be captured by what I term the Intimate Security Dilemma, building on classic understandings of security dilemmas in international relations. In synthetic environments, insecurity emerges when the intimate foundations of identity – bodies, images, and personal evidence – become falsifiable. When verification collapses at the most personal level, this instability scales outward, corroding institutional trust, evidentiary standards, and the credibility of claims across digital public life. Extremist actors are particularly well positioned to exploit these conditions, which lower the costs of intimidation, grievance amplification, and narrative destabilisation.
Implications for Extremism Research in the Age of Synthetic Insecurity
Taken together, these dynamics suggest that extremism research risks a systematic blind spot when it treats sexualised synthetic harms as adjacent to, rather than constitutive of, contemporary radicalisation processes. When analysis focuses only on overt ideology or explicitly political violence, it misses the upstream environments in which the tools, affordances, and behavioural scripts of extremist mobilisation are first developed.
Recent scholarship suggests that sexualised synthetic harms are often treated as technologically novel aberrations, positioned at the margins of security analysis because they sit at the intersection of taboo content and emerging AI capabilities. Framed as exceptional, exotic, or morally discrete, they are analysed as safety problems rather than as structural signals. This misclassification leaves digital security research inattentive to the infrastructural lessons these environments provide about how coercive techniques mature and diffuse.
If extremism research is to keep pace with synthetic technologies, it must look earlier, not later: to the gendered and sexualised digital spaces where new forms of coercion, evidentiary evasion, and manipulation are refined under conditions of weak enforcement and high engagement. This suggests that monitoring efforts, radicalisation research, and platform governance frameworks should treat sexualised synthetic ecosystems as early-warning environments for extremist innovation rather than solely as online safety issues. Treating these domains as peripheral does not contain risk; it delays recognition of how extremist innovation is already being incubated.
Mischa Gerrard is an MA researcher in Violence, Terrorism and Security at Queen’s University Belfast. Her work examines online extremism and technology-facilitated harms, with a focus on gendered radicalisation dynamics. (X: @MischaGerrard)