By Ryan Ratnam
Introduction
Earlier this year, numerous Instagram users scrolling through the platform's Reels feed as usual were met with extremely violent and graphic content, ranging from animal abuse to human murder. Meta apologised, attributing the incident to an algorithmic error that had since been rectified. However, such exposure is impossible to reverse.
Meta has previously been criticised for poor moderation, something which is increasingly disincentivised under a Trump-era Big Tech nexus that abandons third-party content moderation and fact-checking. Consequently, contemporary research increasingly presents mainstream social media platforms as gateways to extremist ideologies and violence. For example, young boys are shown more gun violence on YouTube than other demographics. Beyond trauma, interaction with violent content presents alarming violent extremism risks, stoking the desensitisation to, and fascination with, violence that characterises the rising category of Mixed, Unclear, and Unstable (MUU) motivated extremism. Most recently, the Southport attacker (2024) displayed an intense interest in the aesthetics of violence and routinely watched extreme gore content.
The following blog post describes nascent and ongoing research, motivated by observations of Instagram accounts that post violent content whilst also attempting to funnel users towards more fringe spaces like Telegram. As such, this research employs an experimental methodology to investigate the content of violent images and videos, and to map the gore ecosystem, which could help identify the mechanisms of, and malicious actors behind, gore circulation.
Gore: Profitable Pain
Violent online content is sticky, in that it provokes engagement (sharing, returning, etc.) by stirring extreme emotions. Violent content is therefore profitable, which disincentivises platforms from moderating it beyond their moral and legal obligations. In fact, profit-maximising algorithms already funnel sensitive content to children: a study by the Youth Endowment Fund found that 70% of teenage British children had witnessed real-life violent content on mainstream social media. Beyond the immediate need to protect children from traumatic material, witnessing violent content at an impressionable age risks a numbing effect, in which violence becomes normalised or even fascinating to children.
Research on gore is incipient, given the recency of the problem and the difficulties (including personal aversion) involved in studying extremely violent content. Despite this, an extensive report by VOX-Pol (2025) found that gore-specific websites can receive up to 14% of their traffic from mainstream social media platforms, and that gore content is utilised extensively in Extreme Right-Wing Terrorism.
Research must address the actual content of gore material (including presentation and topic), as well as the ecosystem in which it spreads. Consequently, this study has the following two research questions:
RQ1: How is violence presented in short-form content?
RQ2: How do profiles organise content to funnel users to other platforms?
Methodology
This study constructs a two-part experimental design on TikTok, Instagram Reels, and YouTube Shorts (given their scrolling affordances). First, violence-seeking profiles, which search violence-related seed terms, investigate whether and how violent content is recommended. Then, new profiles will be set up according to different demographics (such as age and gender), whilst interaction is held constant, to explore whether some profiles are recommended more violent content than others.
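To make the second stage concrete, the sketch below frames it as a small factorial design over profile demographics; the platform names, age bands, and gender labels are hypothetical placeholders rather than the study's actual conditions.

```python
# Minimal sketch of the second experimental stage as a factorial design over
# profile demographics. All levels below are hypothetical placeholders.
from itertools import product

platforms = ["tiktok", "instagram_reels", "youtube_shorts"]
ages = ["13-15", "16-17", "18-24", "25+"]
genders = ["male", "female", "unspecified"]

# One sock-puppet profile per (platform, age, gender) cell. Scrolling and
# watch-time behaviour would be scripted identically across all profiles, so
# that any difference in recommended violent content can be attributed to the
# declared demographics rather than to interaction patterns.
conditions = [
    {"platform": p, "age": a, "gender": g}
    for p, a, g in product(platforms, ages, genders)
]
print(len(conditions))  # 3 platforms x 4 age bands x 3 genders = 36 profiles
```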
Violent material encountered is collected and subjected to content analysis to address RQ1: how content is presented (including quality, overlaid effects, captions, etc.) and the victims’ demographics (age, race, gender, etc.). Regarding RQ2, an ethnographic approach maps the gore ecosystem by following outlinks present within account bios and comments into fringe communities. This may be combined with bots, which can map a user’s network once central actors have been identified.
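As a minimal illustration of the ecosystem-mapping step, the sketch below assumes outlinks have already been collected from account bios and comments into a simple list of records; it builds a directed graph from accounts to destination domains and ranks destinations by in-degree to surface candidate central actors. All account names, fields, and URLs are hypothetical.

```python
# Sketch of the ecosystem-mapping step (RQ2), assuming outlinks have already
# been collected. The records below are hypothetical placeholders, not data.
from urllib.parse import urlparse
import networkx as nx

# (source account, platform, outlink found in a bio or comment)
outlinks = [
    ("account_a", "instagram", "https://t.me/example_channel"),
    ("account_b", "instagram", "https://t.me/example_channel"),
    ("account_c", "tiktok", "https://gore-site.example/clip"),
]

G = nx.DiGraph()
for account, platform, url in outlinks:
    domain = urlparse(url).netloc  # destination domain, e.g. "t.me"
    G.add_node(account, kind="account", platform=platform)
    G.add_node(domain, kind="destination")
    G.add_edge(account, domain)

# Rank destinations by in-degree: domains linked to by many accounts are
# candidate central actors worth following into fringe communities.
central = sorted(
    (n for n, data in G.nodes(data=True) if data["kind"] == "destination"),
    key=G.in_degree,
    reverse=True,
)
print(central)
```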
I restrict this study’s scope to three case studies: car crashes, shootings, and accidents. War is excluded given its accompanying political motivations, which are difficult to control for. So far, preliminary results have only been collected for the violence-seeking profiles and RQ1.
Gore research must ensure both researcher welfare and respect and dignity for those depicted in the content. This material displays intense pain, suffering, and even death, which should be remembered when conducting scientific research that risks stripping away emotion by reducing content to data points.
Preliminary Findings
Platforms responded differently to violence-seeking profiles. TikTok banned these profiles, some within 48 hours; Instagram and YouTube did not issue bans. Additionally, they seldom blocked violent search terms, with Instagram only introducing friction on the word ‘death’, redirecting the user to anti-suicide resources. Instagram actually recommended to violence-seeking profiles accounts which exhibited violent content in the Stories feature, and the Reels function was quick to present this content.
Instagram’s content warnings appeared specific not to the video itself but to the account. A video featuring an Instagram-issued content warning usually came from an account where most other videos also carried one, yet an identical video on an account with fewer content warnings was less likely to carry said warning, despite the account posting similar levels of graphic harm. Whilst YouTube Shorts also featured violent content, very graphic violence was often presented as a cartoon or an anatomical graphic (akin to illustrations in a medical textbook), which portrayed detailed gore through computer-generated animation.
Regarding the content of violent material, videos on mainstream platforms were predictably unclear and pixelated, often featuring numerous jump-cuts. Meanwhile, content in Telegram group chats was of much higher definition. As yet, few videos on fringe platforms feature the watermarks of gore websites (a finding from the VOX-Pol report).
Purposeful harm (shootings) was often gendered and racialised, in that victims were predominantly people of colour (especially Black men) and women. Additionally, TikTok frequently featured male-on-female domestic violence. Accidental harm was less demographically patterned, yet car crashes featured more female victims.
Conclusion and Future Avenues
This research is ongoing, and current findings are only preliminary, given the constraints involved in gore research. However, violent content may have gendered and racialised dimensions, even where harm is accidental. Platform moderation varies significantly, with TikTok intervening more than Instagram. Future avenues should focus on completing this research: finishing both experimental stages and mapping the relevant central actors to assess the motivations behind gore proliferation. Doing so could help to curb the rising tide of violent extremism that is devoid of ideology beyond a fascination with violence itself.
Ryan Ratnam is a research lead at the Oxford Computational Political Science Group (OCPSG), a research assistant at Penn State University, and a recent master’s graduate from the Oxford Internet Institute (OII). His research focuses on digital harms, including violent content and the Manosphere.
This blog post is part of a series featuring contributions from presenters at the VOX-Pol Next Generation Network Conference 2025, held at Charles University in Prague, Czech Republic.