Journal Article |
Radicalization, the Internet and Cybersecurity: Opportunities and Challenges for HCI
The idea that the internet may enable an individual to become radicalized has been of increasing concern over the last two decades. Indeed, the internet provides individuals with an opportunity to access vast amounts of information and to connect to new people and new groups. Together, these prospects may create a compelling argument that radicalization via the internet is plausible. So, is this really the case? Can viewing ‘radicalizing’ material and interacting with others online actually cause someone to subsequently commit violent and/or extremist acts? In this article, we discuss the potential role of the internet in radicalization and relate this to how cybersecurity and certain HCI ‘affordances’ may support it. We focus on how the design of systems provides opportunities for extremist messages to spread and gain credence, and how an application of HCI and user-centered understanding of online behavior and cybersecurity might be used to counter extremist messages. By drawing upon existing research that may be used to further understand and address internet radicalization, we discuss some future research directions and associated challenges.
|
2017 |
Hinds, J. and Joinson, A. |
|
Journal Article |
A Longitudinal Measurement Study of 4chan’s Politically Incorrect Forum and its Effect on the Web
Although it has been a part of the dark underbelly of the Internet since its inception, recent events have brought the discussion board site 4chan to the forefront of the world’s collective mind. In particular, /pol/, 4chan’s “Politically Incorrect” board, has become a central figure in the outlandish 2016 Presidential election. Even though 4chan has long been viewed as the “final boss of the Internet,” it remains relatively unstudied in the academic literature. In this paper we analyze /pol/ along several axes using a dataset of over 8M posts. We first perform a general characterization that reveals how active posters are, as well as how some unique features of 4chan affect the flow of discussion. We then analyze the content posted to /pol/ with a focus on determining topics of interest and types of media shared, as well as the usage of hate speech and differences in poster demographics. We additionally provide quantitative evidence of /pol/’s collective attacks on other social media platforms. We perform a quantitative case study of /pol/’s attempt to poison anti-trolling machine learning technology by altering the language of hate on social media. Then, via analysis of comments from the tens of thousands of YouTube videos linked on /pol/, we provide a mechanism for detecting attacks from /pol/ threads on third-party social media services.
|
2016 |
Hine, G.E., Onaolapo, J., De Cristofaro, E., Kourtellis, N., Leontiadis, I., Samaras, R., Stringhini, G. and Blackburn, J. |
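As a purely illustrative sketch (not the authors’ actual pipeline) of the kind of signal the detection mechanism above could build on, the following Python snippet flags a jump in the rate of lexicon-matched terms in a video’s comments after it is linked on /pol/. The flagged-term set, the before/after comment lists, and the spike threshold are hypothetical stand-ins.

```python
import re

# Hypothetical flagged-term lexicon (placeholder entries, not a real hate lexicon).
FLAGGED_TERMS = {"exampleterm1", "exampleterm2"}

def flagged_rate(comments):
    """Fraction of word tokens across the comments that appear in the lexicon."""
    tokens = [t for c in comments for t in re.findall(r"[a-z']+", c.lower())]
    if not tokens:
        return 0.0
    return sum(t in FLAGGED_TERMS for t in tokens) / len(tokens)

def looks_like_raid(before_comments, after_comments, ratio=3.0, floor=0.001):
    """Flag a video when the flagged-term rate after the /pol/ link appears
    is several times higher than before it (a crude spike heuristic)."""
    before = max(flagged_rate(before_comments), floor)  # avoid divide-by-zero
    return flagged_rate(after_comments) / before >= ratio
```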
|
Video |
Web as Weapon: Internet as a Tool for Violent Radicalization and Homegrown Terrorism (Part 2 of 2)
Web as Weapon: Internet as a Tool for Violent Radicalization and Homegrown Terrorism (Part 2 of 2). Committee on Homeland Security, Subcommittee on Intelligence, Information Sharing, and Terrorism Risk Assessment. WITNESSES: Dr. Bruce Hoffman, Professor, Georgetown University; Ms. Rita Katz, Director, SITE Institute; Ms. Parry Aftab, Internet Attorney; Mr. Mark Weitzman, Director, Task Force Against Hate, Simon Wiesenthal Center. Video provided by U.S. House of Representatives. Hearing held on 6 November 2007. Originally uploaded by House.Resource.Org on 14 November 2011.
|
2014 |
Hoffman, B. |
|
Video |
Web as Weapon: Internet as a Tool for Violent Radicalization and Homegrown Terrorism (Part 1 of 2)
Web as Weapon: Internet as a Tool for Violent Radicalization and Homegrown Terrorism (Part 1 of 2). Committee on Homeland Security, Subcommittee on Intelligence, Information Sharing, and Terrorism Risk Assessment. WITNESSES: Dr. Bruce Hoffman, Professor, Georgetown University; Ms. Rita Katz, Director, SITE Institute; Ms. Parry Aftab, Internet Attorney; Mr. Mark Weitzman, Director, Task Force Against Hate, Simon Wiesenthal Center. Video provided by U.S. House of Representatives. Hearing held on 6 November 2007. Originally uploaded by House.Resource.Org on 14 November 2011.
|
2014 |
Hoffman, B. |
|
Journal Article |
Analyzing Radical Visuals at Scale: How Far-Right Groups Mobilize on TikTok
Research examining radical visual communication and its manifestation on the trending platform TikTok is limited. This paper presents a novel methodological framework for studying mobilization strategies of far-right groups on TikTok, employing a mixed-method approach that combines manual annotation, unsupervised image classification, and named-entity recognition to analyze the dynamics of radical visuals at scale. Differentiating between internal and external mobilization, we use popularity and engagement cues to investigate far-right mobilization efforts on TikTok within and outside their community. Our findings shed light on the effectiveness of unsupervised image classification when utilized within a broader mixed-method framework, as each observed far-right group employs unique platform characteristics. While Conspiracists flourish in terms of overall popularity and internal mobilization, nationalist and protest content succeeds by using a variety of persuasive visual content to attract and engage external audiences. The study contributes to existing literature by bridging the gap between visual political communication at scale and radicalization research. By offering insights into mobilization strategies of far-right groups, our study provides a foundation for policymakers, researchers, and online platforms to develop proactive measures to address the risks associated with the dissemination of extremist ideologies on social media.
|
2023 |
Hohner, J., Kakavand, A. and Rothut, S. |
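As a purely illustrative sketch of the “unsupervised image classification” step mentioned in the abstract above (not the authors’ code or data), one common approach is to cluster image embeddings from a pretrained vision model so that the resulting clusters can then be manually annotated; the embedding source, cluster count, and stand-in data below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster_visuals(embeddings, n_clusters=20, seed=0):
    """Group visually similar thumbnails/frames into clusters that a coder
    can label afterwards (e.g. flags, memes, protest footage)."""
    X = normalize(np.asarray(embeddings))  # unit-length vectors -> cosine-like distances
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(X)
    return labels, km.cluster_centers_

# Usage with random stand-in vectors; real embeddings could come from any
# pretrained image encoder (e.g. a CLIP model), which is an assumption here.
if __name__ == "__main__":
    fake = np.random.rand(300, 512)
    labels, _ = cluster_visuals(fake, n_clusters=8)
    print(np.bincount(labels))  # cluster sizes for manual review
```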
|
Journal Article |
From solidarity to blame game: A computational approach to comparing far-right and general public Twitter discourse in the aftermath of the Hanau terror attack
Terror attacks are followed by public shock and disorientation. Previous research has found that people use social media to collectively negotiate responses, interpretations, and sense-making in the aftermath of terror attacks. However, the role of ideologically motivated discussions and their relevance to the overall discourse have not been studied. This paper addresses this gap and focuses specifically on the far-right discourse, comparing it to the general public Twitter discourse following the terror attack in Hanau in 2020. A multi-method approach combines network analysis and structural topic modelling to analyse 237,000 tweets. We find responsibility attribution to be one of the central themes: The general discourse primarily voiced sympathy with the victims and attributed responsibility for the attack to far-right terror or activism. In contrast, the far right – in an attempt to reshape the general narrative – raised a plethora of arguments to shift the attribution of responsibility from far-right activism towards the (political) elite and the personal circumstances of the shooter. In terms of information sharing and seeking, we demonstrate that new information was contextualised differentially depending on the ideological stance. The results are situated in the scientific discourse concerning differences in social media communication following terrorist attacks.
|
2022 |
Hohner, J., Schulze, H., Greipl, S. and Rieger, D. |
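The study above pairs network analysis with structural topic modelling (typically done with the R stm package). Purely as a rough illustration, a plain LDA topic model over a toy corpus is sketched below in Python; the covariate-aware, “structural” part of STM is not reproduced, and the example tweets are invented stand-ins for the 237,000 analysed in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented toy tweets echoing the themes described in the abstract.
tweets = [
    "sympathy and solidarity with the victims of the attack",
    "responsibility lies with far-right terror and its networks",
    "the political elite is to blame for failed policies",
    "the shooter acted alone because of personal circumstances",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(tweets)  # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)  # per-tweet topic proportions

# Print the top terms per topic for manual inspection and labelling.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```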
|