Journal Article | 2019 | Harwood, E.T.
Terrorism and the Digital Right-Wing
Elizabeth T. Harwood on networks of provocation.

Report | 2019 | Killion, V. L.
Terrorism, Violent Extremism, and the Internet: Free Speech Considerations
Recent acts of terrorism and hate crimes have prompted a renewed focus on the possible links between internet content and offline violence. While some have focused on the role that social media companies play in moderating user-generated content, others have called for Congress to pass laws regulating online content promoting terrorism or violence. Proposals related to government action of this nature raise significant free speech questions, including (1) the reach of the First Amendment’s protections when it comes to foreign nationals posting online content from abroad; (2) the scope of so-called “unprotected” categories of speech developed long before the advent of the internet; and (3) the judicial standards that limit how the government can craft or enforce laws to preserve national security and prevent violence.

Journal Article | 2019 | Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C. and Neumann, K.
Too Dark to See? Explaining Adolescents’ Contact with Online Extremism and Their Ability to Recognize It
Adolescents are considered especially vulnerable to extremists’ online activities because they are ‘always online’ and because they are still in the process of identity formation. However, so far, we know little about (a) how often adolescents encounter extremist content in different online media and (b) how well they are able to recognize extremist messages. In addition, we do not know (c) how individual-level factors derived from radicalization research and (d) media and civic literacy affect extremist encounters and recognition abilities. We address these questions based on a representative face-to-face survey among German adolescents (n = 1,061) and qualitative interviews using a think-aloud method (n = 68). Results show that a large proportion of adolescents encounter extremist messages frequently, but that many others have trouble even identifying extremist content. In addition, factors known from radicalization research (e.g., deprivation, discrimination, specific attitudes) as well as extremism-related media and civic literacy influence the frequency of extremist encounters and recognition abilities.

Journal Article | 2019 | Ozalp, A.S., Williams, M.L., Burnap, P., Liu, H. and Mostafa, M.
Antisemitism on Twitter: Collective efficacy and the role of community organisations in challenging online hate speech
In this paper, we conduct a comprehensive study of online antagonistic content related to Jewish identity posted on Twitter between October 2015 and October 2016 by UK-based users. We trained a scalable supervised machine learning classifier to identify antisemitic content to reveal patterns of online antisemitism perpetration at the source. We built statistical models to analyse the inhibiting and enabling factors of the size (number of retweets) and survival (duration of retweets) of information flows, in addition to the production of online antagonistic content. Despite observing high temporal variability, we found that only a small proportion (0.7%) of the content was antagonistic. We also found that antagonistic content was less likely to disseminate in size or survive for a longer period. Information flows from antisemitic agents on Twitter gained less traction, while information flows emanating from capable and willing counter-speech actors (i.e. Jewish organisations) had significantly higher size and survival rates. This study is the first to demonstrate that Sampson’s classic sociological concept of collective efficacy can be observed on social media (SM). Our findings suggest that when organisations aiming to counter harmful narratives become active on SM platforms, their messages propagate further and achieve greater longevity than antagonistic messages. On SM, counter-speech posted by credible, capable and willing actors can be an effective measure to prevent harmful narratives. Based on our findings, we underline the value of the work by community organisations in reducing the propagation of cyberhate and increasing trust in SM platforms.

Journal Article | 2019 | Scrivens, R., Venkatesh, V., Bérubé, M. and Gaudette, T.
Combating Violent Extremism: Voices of Former Right-Wing Extremists
While it has become increasingly common for researchers, practitioners and policymakers to draw from the insights of former extremists to combat violent extremism, overlooked in this evolving space has been an in-depth look at how formers perceive such efforts. To address this gap, interviews were conducted with 10 Canadian former right-wing extremists based on a series of questions provided by 30 Canadian law enforcement officials and 10 community activists. Overall, formers suggest that combating violent extremism requires a multidimensional response, largely consisting of support from parents and families, teachers and educators, law enforcement officials, and other credible formers.

Journal Article | 2019 | Naseem, U., Razzak, I. and Hameed, I. A.
Deep Context-Aware Embedding for Abusive and Hate Speech Detection on Twitter
Violence now spreads online, much as it has spread offline in the past. With the increasing use of social media, the violence attributed to online hate speech has increased worldwide, resulting in a rise in the number of attacks on immigrants and other minorities. Analysis of such short text posts (e.g. tweets) is valuable for the identification of abusive language and hate speech. In this paper, we present a deep context-aware embedding for the detection of hate speech and abusive language on Twitter. To improve classification performance, we enhanced the quality of the tweets by considering polysemy, syntax, semantics, out-of-vocabulary (OOV) words and sentiment knowledge, and concatenated these features to form the input vector. We used a BiLSTM with attention modelling to identify tweets containing hate speech. Experimental results showed a significant improvement in the classification of tweets.
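
The final entry's model pools per-token BiLSTM hidden states into a single tweet representation via attention before classification. The paper's exact architecture is not reproduced here; as an illustration only, the attention-pooling step can be sketched in pure Python (the function names, dimensions and toy values below are hypothetical, not taken from the paper):

```python
import math

def softmax(scores):
    """Normalise attention scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, attn_vector):
    """Collapse per-token hidden states into one tweet vector.

    Each timestep's hidden state is scored against a learned attention
    vector; the softmax-weighted sum of the hidden states is used as the
    tweet representation fed to the classifier.
    """
    scores = [sum(h_i * w_i for h_i, w_i in zip(h, attn_vector))
              for h in hidden_states]
    alphas = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(a * h[d] for a, h in zip(alphas, hidden_states))
            for d in range(dim)]

# Toy example: three timesteps, 2-dimensional hidden states.
states = [[0.0, 1.0], [2.0, 0.0], [1.0, 1.0]]
pooled = attention_pool(states, attn_vector=[1.0, 0.0])
```

In a full model, `attn_vector` would be learned jointly with the BiLSTM and a final classification layer; here it is fixed only to show the weighting mechanics.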