Mainstream risk assessment frameworks such as TRAP-18, ERG22+, VERA-2R, and RADAR largely use Structured Professional Judgement to map individuals against four critical factors: ideology, affiliation, grievance, and moral emotions. However, the growing use of online communication platforms by extremists presents a series of opportunities to complement or extend existing risk assessment frameworks. Here, we examine linguistic markers of morality and emotion in ideologically diverse online discussion groups and discuss their relevance to extant risk assessment frameworks. Specifically, we draw on social media data from the Reddit platform collected across a range of community topics. Nine hundred and eighty-eight threads containing 272,298 individual comments were processed before higher-order models of moral emotions were constructed. Emotional and moral linguistic content was then derived from these comments. We then compared linguistic content across mainstream left and right political discourse, anti-Muslim (far-right), Men’s Rights (Incel-like), and a nonviolent apolitical control groups. Results show that a combination of individualising moral communication and high emotionality separates far-right and Incel-like groups from mainstream political discourse and provides an early warning opportunity.
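The derivation of emotional and moral linguistic content from comments is commonly implemented as dictionary-based scoring, where word counts against a moral or emotional lexicon are normalised by comment length. The sketch below illustrates this general technique only; the mini-lexicons and the `score_comment` function are hypothetical stand-ins for resources such as a moral foundations dictionary, not the study's actual materials.

```python
import re
from collections import Counter

# Illustrative mini-lexicons (hypothetical; real studies use validated
# dictionaries with thousands of entries per category).
MORAL_LEXICON = {
    "care": {"harm", "suffer", "protect", "cruel"},
    "fairness": {"fair", "unfair", "cheat", "justice"},
}
EMOTION_LEXICON = {"angry", "hate", "fear", "disgust", "love"}

def score_comment(text: str) -> dict:
    """Return the rate of moral and emotional words per token."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)          # avoid division by zero
    counts = Counter(tokens)
    scores = {
        category: sum(counts[w] for w in words) / n
        for category, words in MORAL_LEXICON.items()
    }
    scores["emotion"] = sum(counts[w] for w in EMOTION_LEXICON) / n
    return scores

print(score_comment("It is unfair and cruel; I hate how they cheat."))
# → {'care': 0.1, 'fairness': 0.2, 'emotion': 0.1}
```

Per-comment scores like these can then be aggregated by discussion group for the between-group comparisons the abstract describes.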