By Kate Scott
On December 6, 2024, the intelligence agencies of the Five Eyes alliance (Australia, Canada, New Zealand, the United Kingdom, and the United States) released a jointly authored report outlining growing concern about violent extremism among youth. The report sheds light on how extremist content online is being used to recruit, groom, and mobilize minors into violent ideologies.
One case cited in the report involved an Australian minor, aged just 14, who was charged with advocating terrorism. The youth had posted detailed plans for a school shooting online and suggested that they had access to firearms and explosives to maximise casualties. The case was not an isolated one. Australian Security Intelligence Organisation (ASIO) Director-General Mike Burgess emphasized the growing threat, stating, “Around twenty per cent of ASIO’s priority counter-terrorism cases involve young people. In every one of the terrorist attacks, disruptions, and suspected terrorist incidents in Australia this year, the alleged perpetrator was a young person.”
This grim reality suggests that a deeper examination of the intersection between social media, youth behaviour, and extremist networks is critical to shaping a competent policy response.
How Does Social Media Radicalize Youth?
For many young users, social media is more than entertainment. In a world that is increasingly online, these platforms act as the primary spaces for identity formation, social interaction, and self-expression. As such, these spaces are critical in shaping young people’s political and ideological beliefs.
Social media also makes extremist content easily accessible to young people who might otherwise be insulated from it. Algorithms on platforms like YouTube, Instagram, and TikTok play a significant role in the radicalization process. Designed to maximize engagement, these algorithms often recommend progressively more extreme content, drawing users into discussions where harmful ideologies are normalized. For instance, one study has shown that YouTube Shorts frequently promote misogynistic and alt-right content, exposing users to more extreme videos within a relatively brief timeframe.
Extremist content thrives under social media algorithms, which amplify and privilege sensationalist material. Recognizing this, extremist groups and influencers often weaponize irony and humour to scale up their viewership while shielding themselves from criticism, masking explicitly misogynistic, racist, and violent messaging as mere jokes or memes. In doing so, these online communities normalize violent rhetoric and desensitize young viewers to it. This was evident in 2023, when Nico ‘Sneako’ De Balinthazy, a far-right influencer, was caught off guard by young fans repeating his vitriol, calling out “F*ck the women!”, “F**k gays,” and “All gays should die” in an effort to impress him. Sneako responded to the video dismissively, suggesting that the violent hate speech espoused by his young fans was tolerable because “They are children and obviously joking”.
Similarly, extremist forums and groups are adept at exploiting the psychological vulnerabilities of young people, such as feelings of alienation, identity crises, or anger toward authority. Under the guise of misinformation, satire, or motivational messages, these online communities create content that resonates with those emotions and grievances, making disillusioned youth feel seen and valued. Through this process, extremist ideologies can come to feel like a natural extension of a young person’s identity, making disengagement incredibly difficult.
The Band-Aid Solution of Regulation
Governments and social media companies across the globe have responded to the rise of online extremism with various regulatory measures. Content moderation, the most widely used tool, involves removing harmful material from platforms. While this approach can reduce the visibility of extremist content, it is inherently reactive. Efforts to police online speech can be framed by extremist groups as evidence of governmental overreach or suppression, reinforcing the narratives they use to recruit followers. For example, in 2022, Andrew Tate was deplatformed from Meta, TikTok, and YouTube for sexist language and encouraging violence. In spite of this, his content remains widely accessible, due in no small part to his large fan following, who repost hundreds of clips across social media daily.
Moreover, extremist groups have become adept at evading detection, using tactics like coded language, shifting to encrypted platforms, or exploiting less regulated platforms to continue their activities. Partly in response to moderation and censorship laws, several alternative social media sites have emerged in recent years, creating unmoderated platforms where hate speech and violent extremism often flourish. This was evident in 2018, when the subreddit r/incel, a community page for ‘involuntarily celibate’ young men, was removed from Reddit for inciting violence against women, leading to the creation of significantly more aggressive and unmoderated incel forums. As such, the over-policing of extremist discourse online often resembles a game of “whack-a-mole”, in which content is removed only to reappear elsewhere.
Australia’s Social Media Ban
In an unprecedented move, Australia recently legislated to ban children under 16 from accessing social media platforms. The Australian government has argued that this will protect vulnerable young users from harmful, extremist, and inappropriate content.
However, the policy has faced significant criticism. Age verification systems, a cornerstone of the ban, are notoriously difficult to implement effectively without infringing on users’ privacy, and minors, as digital natives, are likely to find obvious ways to bypass restrictions through VPNs or fake accounts. The greater concern, though, is that this restrictive policy may funnel young people towards alternative, unmoderated platforms where extremist content and hate speech thrive, making the ban not only ineffective but potentially a driver of youth radicalisation.
Additionally, the ban risks excluding minors from positive online communities that provide support, education, or a sense of belonging. For young people already feeling isolated or misunderstood, losing access to these spaces could deepen their vulnerabilities, making them resentful of established institutions and susceptible to extremist recruitment. As the UN Committee on the Rights of the Child has pointed out, “national policies should be aimed at providing children with the opportunity to benefit from engaging with the digital environment and ensuring their safe access to it”, not at prioritising one element over the other.
Moving Beyond Regulation
In response to Australia’s proposed ban, more than 140 experts wrote an open letter to Prime Minister Anthony Albanese suggesting that “a ‘ban’ is too blunt an instrument to address risks effectively.” Against the backdrop of youth radicalisation, simply banning young people from social media rather than engaging in productive social media reform risks funnelling minors towards the same extremist communities the government aims to protect them from. Ultimately, a more holistic regulatory response is needed: one that combines governance with proactive measures in education and community support.
Enforceable regulations, transparency, and clearly defined requirements are important cornerstones for ensuring that social media platforms fulfil their digital duty of care to young people. Regulation alone, however, is not an exhaustive policy solution; education is also needed. Young people should be taught media literacy, critical thinking, and online safety before engaging online, equipping them with the tools to recognize and resist misinformation and building their resilience against extremist narratives.
Furthermore, community-led initiatives, such as mentorship programs, peer support groups, and extracurricular activities, can provide a sense of belonging, community, and purpose, removing key risk factors for radicalisation. By moving beyond a reliance on regulation and addressing the underlying factors that contribute to radicalisation, we can create a safer digital environment for young people and prevent future generations from becoming radicalised online.
Kate Scott is a PhD candidate in Social and Political Sciences at the University of Sydney. Her current research focuses on deradicalisation within the manosphere, specifically examining the processes that enable or hinder individuals in disengaging from extreme Red Pill Communities. She is interested in the intersection of violent extremism, radicalisation, and gender within digital spaces.
IMAGE CREDIT: PEXELS