Since the inception of video games, extremist groups have been able to create, modify, and weaponise this medium for activism and campaigning. More recently, however, gaming-adjacent platforms (most notably Discord, Twitch, and Steam) have become key organisational tools for recruitment and community-building, something that policing communities around the world have been grappling with, especially in regard to the potential for extremist content to produce radicalising effects that lead to political violence. This article explores how policing communities are approaching extremist groups on gaming-adjacent platforms, which strategies they have been using, and the effect this has on extremist activism, both online and, more crucially, offline. Drawing on semi-structured interviews with 13 leading P/CVE practitioners, academic and technology-industry experts, and content moderation teams, the article finds that third-party policing communities are deploying increasingly sophisticated tactics to combat extremist content, but that these efforts are frustrated by the networked nature of extremism and a lack of robust enforcement at the platform level. Looking ahead, this research suggests that greater transparency about terms-of-service enforcement from above, and mitigation of toxic extremist-'adjacent' cultures from below, might help foster resilience against the prevalence of extremism on gaming-adjacent platforms.