Journal |
Histories of Hating
This roundtable discussion presents a dialogue between digital culture scholars on the seemingly increased presence of hating and hate speech online. Revolving primarily around the recent #GamerGate campaign of intensely misogynistic discourse aimed at women in video games, the discussion suggests that the current moment for hate online needs to be situated historically. From the perspective of intersecting cultural histories of hate speech, discrimination, and networked communication, we interrogate the ontological specificity of online hating before going on to explore potential responses to the harmful consequences of hateful speech. Finally, a research agenda for furthering the historical understandings of contemporary online hating is suggested in order to address the urgent need for scholarly interventions into the exclusionary cultures of networked media.
|
2015 |
Shepherd, T. and Harvey, A. |
|
MA Thesis |
Historical Events And Supply Chain Disruption: chemical, biological, radiological and cyber events
In the wake of the attacks of September 11, 2001, terrorism emerged as a legitimate threat not just to society, but to corporations as well. This new threat has challenged old business rules and prompted companies to rethink their supply chain operations. However, the events of September 11th were not the first or the only disruptions that the business world had experienced. This thesis reviews past historical events that simulate the effects of a terrorist attack and extracts lessons that can be applied by today’s corporations to prepare for future attacks or disruptions. The types of events studied include Biological, Chemical, Radiological and Cyber disruptions. Through the analysis and synthesis of each event’s impact, the following generalized recommendations emerged: Prior warnings and events should be acknowledged, studied and utilized. Government intervention may strain operations under disruptive stress. Alternate sourcing should be considered to ease supply issues. Disruptions should be approached in a comprehensive and forthright manner. A security and safety culture should be fostered to prevent disruptions and control their spread. Systems should be prepared to quickly operate in isolation during a disruption. Finally, impact is frequently less severe than initially predicted. Through the events described and these recommendations, this thesis aims to provide lessons for firms to manage their supply chains through future disruptions.
|
2003 |
Lensing, R. P. |
|
Journal Article |
Hierarchical CVAE for Fine-Grained Hate Speech Classification
Existing work on automated hate speech detection typically focuses on binary classification or on differentiating among a small set of categories. In this paper, we propose a novel method on a fine-grained hate speech classification task, which focuses on differentiating among 40 hate groups of 13 different hate group categories. We first explore the Conditional Variational Autoencoder (CVAE) as a discriminative model and then extend it to a hierarchical architecture to utilize the additional hate category information for more accurate prediction. Experimentally, we show that incorporating the hate category information for training can significantly improve the classification performance and our proposed model outperforms commonly-used discriminative models.
|
2018 |
Qian, J., ElSherief, M., Belding, E. and Wang, W.Y. |
|
Journal Article |
Hiding hate speech: political moderation on Facebook
Facebook facilitates more extensive dialogue between citizens and politicians. However, communicating via Facebook has also put pressure on political actors to administrate and moderate online debates in order to deal with uncivil comments. Based on a platform analysis of Facebook’s comment moderation functions and interviews with eight political parties’ communication advisors, this study explored how political actors conduct comment moderation. The findings indicate that these actors acknowledge being responsible for moderating debates. Since turning off the comment section is impossible on Facebook, moderators can choose to delete or hide comments, and these arbiters tend to use the latter in order to avoid an escalation of conflicts. The hide function makes comments invisible to participants in the comment section, but the hidden texts remain visible to those who made the comment and their network. Thus, the users are unaware of being moderated. In this paper, we argue that hiding problematic speech without the users’ awareness has serious ramifications for public debates, and we examine the ethical challenges associated with the lack of transparency in comment sections and the way moderation is conducted on Facebook.
|
2020 |
Kalsnes, B. and Ihlebæk, K.A. |
|
Letter |
Hidden Resilience And Adaptive Dynamics Of The Global Online Hate Ecology
Online hate and extremist narratives have been linked to abhorrent real-world events, including a current surge in hate crimes and an alarming increase in youth suicides that result from social media vitriol; inciting mass shootings such as the 2019 attack in Christchurch, stabbings and bombings; recruitment of extremists, including entrapment and sex-trafficking of girls as fighter brides; threats against public figures, including the 2019 verbal attack against an anti-Brexit politician, and hybrid (racist–anti-women–anti-immigrant) hate threats against a US member of the British royal family; and renewed anti-western hate in the 2019 post-ISIS landscape associated with support for Osama Bin Laden’s son and Al Qaeda. Social media platforms seem to be losing the battle against online hate and urgently need new insights. Here we show that the key to understanding the resilience of online hate lies in its global network-of-network dynamics. Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviours such as financial fraud.
|
2019 |
Johnson, N. F., Leahy, R., Johnson Restrepo, N., Velasquez, N., Zheng, M., Manrique, P., Devkota, P. and Wuchty, S. |
|
Journal Article |
Hidden order across online extremist movements can be disrupted by nudging collective chemistry
Disrupting the emergence and evolution of potentially violent online extremist movements is a crucial challenge. Extremism research has analyzed such movements in detail, focusing on individual- and movement-level characteristics. But are there system-level commonalities in the ways these movements emerge and grow? Here we compare the growth of the Boogaloos, a new and increasingly prominent U.S. extremist movement, to the growth of online support for ISIS, a militant, terrorist organization based in the Middle East that follows a radical version of Islam. We show that the early dynamics of these two online movements follow the same mathematical order despite their stark ideological, geographical, and cultural differences. The evolution of both movements, across scales, follows a single shockwave equation that accounts for heterogeneity in online interactions. These scientific properties suggest specific policies to address online extremism and radicalization. We show how actions by social media platforms could disrupt the onset and ‘flatten the curve’ of such online extremism by nudging its collective chemistry. Our results provide a system-level understanding of the emergence of extremist movements that yields fresh insight into their evolution and possible interventions to limit their growth.
|
2021 |
Velásquez, N., Manrique, P., Sear, R., Leahy, R., Restrepo, N.J., Illari, L., Lupu, Y. and Johnson, N.F. |
|