Forgetting the basics? Resurgent Islamic State on Facebook

By Sean McCafferty

In recent years there have been suggestions of a tech backslide by major social media platforms, leading to a reduction in proactive content moderation. This has become a significant concern for scholars of online terrorism. This blog post examines a sample of empirical data from Facebook that suggests a resurgence of Islamic State (IS) content and community on the platform. In January 2026, I identified 158 prolific pro-IS accounts on the platform; here, I analyse their propaganda output and discuss strategies to address this resurgence, one we already have the tools and knowledge to disrupt.

IS Accounts on Facebook: January 2026

As of early 2026, IS supporters have reconstructed an extensive community on Facebook, openly sharing official and pro-IS propaganda through hundreds of accounts to an audience of thousands of followers. The 158 pro-IS Facebook accounts were identified as part of an ongoing piece of research. They likely represent only a small sample of users sharing Islamic State content on the platform: ISD identified Facebook as a key hub for IS in its 2026 report assessing IS online activity. The scale of the problem is better demonstrated by the accounts' followings. The 158 accounts have a combined 375,788 followers, an average of 2,378 per account, although many of these followers are likely shared across several accounts. One prolific account alone has 35,000 unique followers. At the time of writing (March 2026), all 158 accounts remain active on the platform.

The accounts contained extensive markers that they were pro-IS in their profile pictures, headers, and other profile information. This included 48 profile pictures displaying shows of support for IS through symbols such as the IS flag or seal, 62 profile pictures displaying images of IS militants, and 45 containing weaponry.

Most of the monitored accounts are exploiting Facebook’s new ‘Professional Mode’ feature to reach a wide audience. The feature allows accounts to become public and share their content with an unlimited number of followers rather than only with their Facebook friends, expanding their reach. Facebook markets the mode as providing users with analytics and content-tailoring features, as well as potential monetisation.

The monitored accounts shared official branded IS content, such as newsletters, videos, nasheeds, bulletins, and photosets, with little effort to circumvent content moderation tools. The accounts also shared historic IS content, including videos and images from the height of IS’s territorial ‘Caliphate’ in Iraq and Syria. Many items of propaganda, both historic and current, were of a graphic nature. The accounts also shared links to IS content on other platforms and to core IS online spaces. Several accounts used well-known hashtags, slogans, and the titles of propaganda items in their posts, making the content easily discoverable. Again, users showed little need for sophisticated techniques to evade content moderation.

A Step Backwards?

In the aftermath of the 14 December 2025 Bondi Beach attack, in which a Hanukkah celebration was targeted, resulting in 15 deaths and 39 injuries, IS celebrated and claimed that its strategy of using social media to inspire self-initiated attacks is effective and unstoppable. The group boasted that this is a low-cost strategy and that only shutting down the internet would stop it. However, major platforms have previously taken proactive measures that significantly disrupted IS content and community on their services.

Following sporadic takedowns of IS’s online content on Twitter, Facebook, and YouTube between 2014 and 2015, mainstream platforms made proactive and sustained efforts to disrupt IS content on their services from 2016 onward. This had a significant impact, forcing the group off mainstream platforms and onto Telegram, and subsequently onto a wide range of alternative platforms.

Since that time, mainstream platforms with large resources have, for the most part, been effective at preventing IS content and community from being re-established on their services. However, significant gaps remained in the moderation of content in certain languages. As a result, research has focused on other issues, including the exploitation of small platforms, the unintended consequences of deplatforming terrorist networks, and a wide range of emerging issues, to understand the evolution of the threat.

The openness with which the identified pro-IS accounts and their followers interact and share propaganda is concerning. The tech backslide by major US-based tech companies appears to have led to a reduction in proactive content moderation, a change of content moderation policies, and a significant loss of manpower and expertise. This has brought old issues back to the fore. The resurgence of IS community and content on a mainstream social media platform displays clear cracks in efforts to disrupt online terrorist content.

We Already Have the Tools to Solve This Problem

The current challenges in disrupting the online dissemination of Islamic State content are well understood: the group’s stable online ecosystem, its reliance on a suite of websites and encrypted messaging platforms, and its widespread exploitation of small platforms, particularly file-sharing services. Alongside these challenges, IS content is resurging on mainstream platforms while IS propaganda seeks to manipulate global crises and grievances to incite violence. Together, these developments expose clear gaps in long-established efforts to disrupt IS content.

While the extent of this issue may be alarming, we already have the tools to solve this specific problem. Major platforms like Meta have previously had some success in proactively driving IS content from their services. They have the financial resources, access to expertise, and relevant technical tools. An example of the latter is the Global Internet Forum to Counter Terrorism’s (GIFCT) hash-sharing database (HSD): member companies upload digital fingerprints (“hashes”) of content they have identified as terrorist or violent extremist, enabling platforms to quickly detect and remove matching material across their services. If companies proactively identify IS content and upload it to the tool, the HSD should be able to swiftly detect much of the content shared by the 158 accounts described and discussed here.
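The hash-sharing mechanism can be illustrated with a minimal sketch. Note this is a simplified illustration using exact cryptographic hashes, with hypothetical function names; the actual HSD relies on perceptual hashing (such as Meta’s open-source PDQ algorithm for images), which can also match near-duplicate media, not just byte-identical copies.

```python
import hashlib

# Hypothetical shared database: fingerprints contributed by member platforms.
shared_hash_db: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Compute a digital fingerprint ("hash") of a media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def report_content(media_bytes: bytes) -> None:
    """A member platform uploads the hash of content it identified as
    terrorist or violent extremist; the content itself is never shared."""
    shared_hash_db.add(fingerprint(media_bytes))

def is_known_content(media_bytes: bytes) -> bool:
    """Another platform checks newly uploaded media against the shared hashes."""
    return fingerprint(media_bytes) in shared_hash_db

# One platform flags an item; any other platform can then detect a re-upload.
original = b"<bytes of a propaganda video>"
report_content(original)
print(is_known_content(original))        # exact re-upload: True
print(is_known_content(b"other media"))  # unrelated file: False
```

Only the fingerprints circulate between companies, which is what lets competitors cooperate on detection without exchanging the underlying material.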

These accounts were sharing official branded Islamic State propaganda. The historic content should already be in the HSD, and new content should be added proactively as it emerges. As highlighted above, users made minimal or no effort to avoid detection: terrorist content was shared in its original form, using well-known branding, logos, and keywords. Other automated tools, such as logo-detection systems and potential AI-based solutions, may also be relevant here, as the content could be easily identified through its branding and logos.

This suggests that the methods used to drive IS from Facebook in the past simply need to be reapplied. The required response is straightforward: the means to disrupt IS content on mainstream platforms such as Facebook have been in place for many years. The resurgence of IS on Facebook is a reminder that, while we are rightly concerned with the evolution of the terrorist threat, we must remain vigilant about the basic foundations of disrupting terrorist content online established over the last decade.


Sean McCafferty is a Marie Skłodowska-Curie Doctoral Fellow at Metropolitan University Prague, contributing to the EU-GLOCTER project. He is also a member of VOX-Pol, and the Conflict Institute at Dublin City University. His research focuses on open-source intelligence, terrorism, propaganda, and technology.