Video | Hearing: Countering the Virtual Caliphate: The State Department’s Performance | 2016 | Royce, E.
Abstract: The United States is losing the information war to terrorists like ISIS and Hezbollah. Earlier this year, the administration rebranded the office responsible for counter-messaging, but little seems to have changed. A strong, effective information offensive to counter the violent ideology being pushed by ISIS and other terrorists is long overdue. This hearing will give members an opportunity to press the administration’s top public diplomacy official on how the U.S. can be more effective.

VOX-Pol Blog | Here’s How Radical Groups Like Islamic State Use Social Media to Attract Recruits | 2016 | Schumann, S.

Journal Article | Hezbollah’s “Virtual Entrepreneurs”: How Hezbollah Is Using the Internet to Incite Violence in Israel | 2019 | Shkolnik, M. and Corbeil, A.
Abstract: In recent years, Hezbollah has used social media to recruit Israeli Arabs and West Bank-based Palestinians to attack Israeli targets. A recent innovation in terrorist tactics has given rise to “virtual entrepreneurs,” who to date have been largely associated with the Islamic State’s online recruitment efforts. Hezbollah’s virtual planners, similar to those in the Islamic State, use social media to establish contact with potential recruits before transitioning to more encrypted communications platforms, transferring funds, and issuing instructions to form cells, conduct surveillance, and carry out terrorist attacks. Online recruitment presents a low-cost option that offers Hezbollah plausible deniability. While every virtual plot led by Hezbollah that targeted Israel has been foiled thus far, Israeli authorities spend time and resources disrupting these schemes at the expense of other, more pressing threats. By digitally recruiting Palestinians to attack Israel, Hezbollah and its patron Iran are seeking to cultivate a new front against Israel amid rising regional hostilities.

Journal Article | Hidden order across online extremist movements can be disrupted by nudging collective chemistry | 2021 | Velásquez, N., Manrique, P., Sear, R., Leahy, R., Restrepo, N.J., Illari, L., Lupu, Y. and Johnson, N.F.
Abstract: Disrupting the emergence and evolution of potentially violent online extremist movements is a crucial challenge. Extremism research has analyzed such movements in detail, focusing on individual- and movement-level characteristics. But are there system-level commonalities in the ways these movements emerge and grow? Here we compare the growth of the Boogaloos, a new and increasingly prominent U.S. extremist movement, to the growth of online support for ISIS, a militant terrorist organization based in the Middle East that follows a radical version of Islam. We show that the early dynamics of these two online movements follow the same mathematical order despite their stark ideological, geographical, and cultural differences. The evolution of both movements, across scales, follows a single shockwave equation that accounts for heterogeneity in online interactions. These scientific properties suggest specific policies to address online extremism and radicalization. We show how actions by social media platforms could disrupt the onset and ‘flatten the curve’ of such online extremism by nudging its collective chemistry. Our results provide a system-level understanding of the emergence of extremist movements that yields fresh insight into their evolution and possible interventions to limit their growth.

Letter | Hidden resilience and adaptive dynamics of the global online hate ecology | 2019 | Johnson, N.F., Leahy, R., Johnson Restrepo, N., Velasquez, N., Zheng, M., Manrique, P., Devkota, P. and Wuchty, S.
Abstract: Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes and an alarming increase in youth suicides that result from social media vitriol; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings; recruitment of extremists, including entrapment and sex-trafficking of girls as fighter brides; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviours such as financial fraud.

Journal Article | Hiding hate speech: political moderation on Facebook | 2020 | Kalsnes, B. and Ihlebæk, K.A.
Abstract: Facebook facilitates more extensive dialogue between citizens and politicians. However, communicating via Facebook has also put pressure on political actors to administer and moderate online debates in order to deal with uncivil comments. Based on a platform analysis of Facebook’s comment moderation functions and interviews with eight political parties’ communication advisors, this study explored how political actors conduct comment moderation. The findings indicate that these actors acknowledge being responsible for moderating debates. Since turning off the comment section is not possible on Facebook, moderators can choose to delete or hide comments, and they tend to use the latter to avoid escalating conflicts. The hide function makes comments invisible to other participants in the comment section, but the hidden text remains visible to the person who made the comment and their network; thus, users are unaware of being moderated. In this paper, we argue that hiding problematic speech without users’ awareness has serious ramifications for public debate, and we examine the ethical challenges associated with the lack of transparency in comment sections and the way moderation is conducted on Facebook.