Ideology Alone is Not Enough: The Past, Present, and Future of Terrorist Training

By Daniel E. Levenson

In the mid- to late 19th century, the organizations and ideologues who would form the vanguard of modern terrorism did a remarkable job of leveraging emerging technology for both training and operational purposes. This often took the form of experimentation with new (and often unregulated) materials, such as dynamite and the crude improvised explosive devices (IEDs) known as Orsini bombs; printed pamphlets; and even public lectures and courses on the political utility of dynamite.

Twentieth Century Conflicts, The Internet, and Training Camps

The trends begun by the bomb-throwers seeking to remake Russia, the nascent Italian state, and other polities across Europe continued well into the twentieth century, eventually leading to the creation and spread of printed material such as The Anarchist Cookbook and The Poor Man’s James Bond. Fortunately, these instructional manuals were not perfect, often containing errors or placing greater emphasis on ideology than on technical accuracy, which left would-be terrorists to fill in the gaps on their own. At the same time, this era, like so many before it, offered options for “real-world” experience, and a number of groups and individuals took advantage of this in the United States, Afghanistan, and other places around the globe.

As internet availability and use spread, terrorists kept pace with its expansion, leveraging both the technical nature of the system and its ubiquity in daily life. Among the early adopters in the 1990s were some of the leading proponents of violent extremism, including, somewhat famously, Louis Beam, a prominent figure in white supremacist and anti-government circles. From the late 1990s into the new millennium, activities included sharing philosophical and technical instruction via bulletin boards, then websites, followed by the use of file-sharing sites as a kind of “digital dead drop,” along with social media and online games. While much of the training material remained technically imperfect, in a number of high-profile terrorist attacks the internet served in part as a platform for operational education.

The Future is Now: Digital Learning and Artificial Intelligence

One discernible trend among individuals who effectively gained the skills necessary to carry out a successful attack is that, in many cases, they engaged in a range of different activities and modes of learning. This pattern holds consistently across time, geography, and ideology. Looking backward and forward, it becomes clear that terrorist learning is much more akin to a collage than to a linear, well-bounded narrative.

This need to fill in gaps and combine different modes of learning seems particularly salient and urgent when considering how artificial intelligence (AI) may provide some of the solutions bad actors are seeking to supplement other sources of information. While the notion of a “Terrorist GPT” is often discussed, a more realistic cause for concern is that terrorists will turn to existing AI platforms for answers to questions which, on their own, may seem fairly benign (and are therefore more likely to evade built-in safety measures), but which will allow them to address challenges they face in building IEDs or accomplishing some other technically complex task.

With this in mind, there are several potential points of intersection which warrant close and ongoing observation:

  1. Cyber Coaching – Unlike a stochastic approach to inspiring violent extremism, which may involve simply posting something like a digital copy of Inspire magazine online and seeing who decides to engage both ideologically and operationally, cyber coaching involves the provision of detailed feedback and encouragement between two (or more) individuals. One of the better-known cases was chronicled in an excellent CTC Sentinel article by Andrew Zammit, and there is no reason to think that AI could not act as a kind of force multiplier, increasing the number of “students” a cyber coach can tutor at any given time while improving the quality and response time of the remote training.
  2. Modeling and Simulation – While it is unlikely that digital simulation will completely negate the need for practice in the real world (not to mention the final or near-final steps of making and using weapons), it is not hard to see the appeal of powerful algorithms for virtual modeling and experimentation. A bad actor who wants to get the mixture of precursor chemicals right before attempting to combine them in a basement or garage (and risking the loss of fingers or more) might well find something like AI-enabled augmented reality appealing as a tool for education and practice.
  3. Rapid Search and Access of Digital Resources – Even for experienced researchers, finding technical information on a new topic outside their own domain can prove challenging. While AI-enabled search is arguably still in its infancy and prone to hallucinations and other errors, as the technology advances it is likely that everyone, good and bad actors alike, will be able to use it more effectively to find information that can be put to dual-use, if not outright malicious, purposes.

Just as it would be folly to think we might somehow “uninvent” dynamite or “unpublish” The Anarchist Cookbook, we can hardly rebuild the internet or the fundamentals of computer science with better protections baked in. We can, however, pay close attention to the three areas of concern outlined above, both tactically, for the purposes of early detection and disruption, and strategically, in order to create the kind of evidence-based protective measures that can help prevent these problems in the first place. Doing so will take close cooperation among stakeholders, including technology companies, government regulators, law enforcement, and others willing to contribute to documenting lessons learned and incorporating them into the design of guardrails within AI systems. It may end up being only one small piece of a much larger counter-terrorism effort in the digital world, but given the increasing ubiquity and power of AI in our everyday lives, mitigation in this area has the potential to contribute to the larger mission by making it harder for would-be terrorists to develop and improve their deadly skills.


Daniel E. Levenson is a PhD student in Criminology at Swansea University, where he is focused on the intersection of artificial intelligence (AI) and terrorism. He holds an MA in Security Studies from the University of Massachusetts at Lowell and an MLA in English and American Literature and Language from Harvard University. More information on his writing, research, and related work can be found at www.danielericlevenson.com.

Image rights: Pexels