By Heron Lopes
On March 6th, the EU DisinfoLab will host a webinar discussing the findings of “Melodies of Malice”, one of the papers featured in this blog post. For more details and registration, visit: EU DisinfoLab Webinars – Melodies of Malice.
Introduction
Research on extremism and counter-terrorism has long underscored the role of far-right and extremist music in reinforcing extremist ideologies and serving as a gateway to radicalisation. With the rise of generative AI music platforms like Udio and Suno AI, online extremists have gained new tools to create and amplify hateful music with unprecedented ease. Reporting by The Guardian this summer revealed that Suno AI-generated songs circulating within Facebook groups may have contributed to a recent wave of far-right violence in the U.K. Against this backdrop, this blog post highlights findings from my research for a Global Network on Extremism and Technology (GNET) Insight and a report for the International Centre for Counter-Terrorism (ICCT), in which I investigated how far-right communities have used AI music platforms to reshape the landscape of online music propaganda. By collecting and analysing relevant interactions on 4chan’s politically incorrect (/pol/) board, where users generate and spread hateful music through AI, I sought to understand how the online far-right is misusing AI music platforms to fuel online extremism, and the risks this poses.
The misuse of AI music platforms by online users
Since the release of generative AI music platforms in late 2023, 4chan’s politically incorrect (/pol/) board has emerged as a hub where users collaboratively produce hateful and propagandistic music. Thousands of posts dedicated to generating music propaganda have appeared on the site since the launch of Suno AI, along with thousands more comments and music files. These threads openly encourage users to “weaponize and influence listeners” across the internet. Some are ideologically focused, centring on themes such as “anti-Semitic music” or “racist music”, while others are dedicated to generating various forms of propaganda music. As one user describes, the aim is to “make songs, make videos for them, upload to YouTube” and leverage platform algorithms to boost reach, thus increasing the potential for radicalising audiences across social media.
Above all, extremist AI music threads provide a space where ideologically aligned users experiment with these platforms, post results, discuss technical challenges, receive feedback, and refine their creations, gradually improving the quality of their hateful content. The threads contain detailed instructions that allow even the most amateur user to produce polished music propaganda in a matter of minutes. When more expert help is needed, experienced users offer newcomers tailored guidance and tips on overcoming technical obstacles. Users share links to their audio files to seek feedback on results, often detailing the prompts and platforms used in the process. In this way, they learn from one another and successfully generate hateful content through collective effort.
The themes of these hateful songs vary widely, from racist content targeting Black, Indian, and Haitian communities to anti-Semitic and conspiratorial songs, nasheeds, and even terrorist anthems inciting attacks on crowded spaces. Moreover, the content is generated and shared in several languages, including English, German, Portuguese, Arabic, and Dutch, indicating the global scope of this phenomenon. This is made possible by the vast array of languages supported by these platforms; Suno AI alone can generate songs in more than 50 languages. Users misuse these platforms by infusing their harmful lyrics into diverse musical styles, from rock and country to classical music. They frequently share multiple versions of the same hateful lyrics across various genres, aiming to reach and appeal to a wider audience. By categorising these tracks by genre (e.g., rock, country, pop, classical), they can guide listeners toward their preferred style, ensuring the spread of hateful messages under a more familiar musical guise. This marks a concerning evolution: radicalising music, once confined mostly to niche genres like far-right rock, can now be generated to emulate any musical style, including mainstream genres that appeal to individuals across regions and age groups.
While harmful users are successfully misusing AI to generate hateful music, their efforts do not go unchallenged: platforms’ Terms of Service (ToS) prohibit the generation and dissemination of hateful and terrorist content. These policies are enforced through automated detection measures, which pose significant barriers to harmful users by blocking attempts to generate harmful content and, after multiple identified violations, banning their accounts. Such measures often prove insufficient, however, as users trick the AI into violating platform policies by entering prompts that rely on phonetic tricks and coded language, a tactic known as “jailbreaking”. For phonetic jailbreaking, users prompt the AI to sing words like “nih-gur”, “gae”, and “nickers”, which sound like harmful slurs but evade automated detection. Coded language is another tactic, in which users deploy seemingly neutral terms that carry hidden meanings in extremist circles, such as “zogbots” for police officers or “octopus” for Jewish people. This allows users to produce harmful songs while sidestepping the detection filters and safeguards put in place by tech platforms.
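To illustrate why phonetic tricks defeat simple keyword filters, consider the minimal, hypothetical sketch below of a defender-side check that compares tokens by phonetic key (a simplified Soundex) rather than by exact spelling. The watchlist term, function names, and example lyric are illustrative stand-ins, not any platform’s actual moderation code; real detection systems are far more sophisticated. Note also that phonetic matching would not catch coded language such as “zogbots”, which sounds nothing like the term it replaces and would require a curated lexicon instead.

```python
import re

# Simplified Soundex: words that sound alike (including deliberate
# misspellings) are mapped to the same 4-character phonetic key.
CODES = {c: d for d, letters in enumerate(
    ["bfpv", "cgjkqsxz", "dt", "l", "mn", "r"], start=1) for c in letters}

def soundex(word: str) -> str:
    word = re.sub(r"[^a-z]", "", word.lower())  # strip hyphens, digits, etc.
    if not word:
        return ""
    key, prev = word[0].upper(), CODES.get(word[0])
    for ch in word[1:]:
        code = CODES.get(ch)
        if code and code != prev:
            key += str(code)
        if ch not in "hw":  # h/w do not separate doubled consonant codes
            prev = code
    return (key + "000")[:4]

# Hypothetical watchlist; an innocuous stand-in word is used here in
# place of the terms a real moderation system would track.
WATCHLIST_KEYS = {soundex(w) for w in ["wizard"]}

def flag_phonetic_variants(lyrics: str) -> list[str]:
    """Return tokens whose phonetic key matches a watchlisted term."""
    tokens = re.findall(r"[a-zA-Z\-']+", lyrics)
    return [t for t in tokens if soundex(t) in WATCHLIST_KEYS]

# A plain substring blocklist for "wizard" would miss this spelling;
# the phonetic key ("W263") matches, so the token is flagged for review.
print(flag_phonetic_variants("the wih-zurd walks at night"))  # ['wih-zurd']
```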
Risks posed by this misuse
The amplification of hateful songs by AI represents a dangerous trend and poses a number of societal risks. Generative AI platforms enable both novice and experienced users to produce high-quality songs that reinforce their radical ideological beliefs, deepening their engagement within extremist echo chambers. Research shows that hateful songs not only strengthen bonds and boost morale among those already committed to extremist ideologies, but also serve as potent vehicles for spreading violent ideas and radicalising new individuals. By packaging radical beliefs into catchy, seemingly harmless melodies, these songs simplify and condense extremist tenets, disguising the actual extremist content and making it more accessible and attractive to younger audiences.
Extremists may also weaponise hateful content to influence public discourse, deliberately exposing broader audiences to it and mainstreaming extremist ideas, a tactic known as ‘strategic mainstreaming’. In several discussions analysed, users celebrate when they see their content spreading across mainstream social media, with one user noting: “Can’t you see? Our content is all over [the internet]. Thank you, 4chan, for helping to test the tech over the last two years (elevenlabs, Suno, etc.)”. By migrating to platforms like YouTube, TikTok, and Spotify, hateful content gains wider visibility and an increased risk of radicalising those who encounter it, especially youth and young adults, whom studies show to be more vulnerable to radicalisation through music.
Conclusion
4chan users are weaponising generative AI music platforms by fostering an extremist online community where they can freely learn to manipulate these tools to generate and disseminate hateful songs online. Although tech platforms’ ToS prohibit the creation and distribution of hateful and violent content, users have developed ways to bypass the safety features that enforce them, often by playing with phonetics or using coded language in the lyrics, a tactic known as “jailbreaking”. Once successfully generated, hateful songs are often uploaded to mainstream platforms in an attempt to increase their visibility, radicalise individuals exposed to them, and mainstream extremist views.
IMAGE CREDIT: PEXELS