Journal Article
Echo Chambers Exist! (But They’re Full of Opposing Views)
The theory of echo chambers, which suggests that online political discussions take place in conditions of ideological homogeneity, has recently gained popularity as an explanation for patterns of political polarization and radicalization observed in many democratic countries. However, while micro-level experimental work has shown evidence that individuals may gravitate towards information that supports their beliefs, recent macro-level studies have cast doubt on whether this tendency generates echo chambers in practice, instead suggesting that cross-cutting exposures are a common feature of digital life. In this article, we offer an explanation for these diverging results. Building on cognitive dissonance theory, and making use of observational trace data taken from an online white nationalist website, we explore how individuals in an ideological ‘echo chamber’ engage with opposing viewpoints. We show that this type of exposure, far from being detrimental to radical online discussions, is actually a core feature of such spaces that encourages people to stay engaged. The most common ‘echoes’ in this echo chamber are in fact the sound of opposing viewpoints being undermined and marginalized. Hence echo chambers exist not only in spite of but thanks to the unifying presence of oppositional viewpoints. We conclude with reflections on policy implications of our study for those seeking to promote a more moderate political internet.
2020
Bright, J., Marchal, N., Ganesh, B. and Rudinac, S.
Journal Article
Hate Speech and Covert Discrimination on Social Media: Monitoring the Facebook Pages of Extreme-Right Political Parties in Spain
This study considers the ways that overt hate speech and covert discriminatory practices circulate on Facebook despite its official policy that prohibits hate speech. We argue that hate speech and discriminatory practices are not only explained by users’ motivations and actions, but are also formed by a network of ties between the platform’s policy, its technological affordances, and the communicative acts of its users. Our argument is supported with longitudinal multimodal content and network analyses of data extracted from official Facebook pages of seven extreme-right political parties in Spain between 2009 and 2013. We found that the Spanish extreme-right political parties primarily implicate discrimination, which is then taken up by their followers who use overt hate speech in the comment space.
2016
Ben-David, A. and Matamoros-Fernández, A.
Journal Article
The temporal evolution of a far-right forum
The increased threat of right-wing extremist violence necessitates a better understanding of online extremism. Radical message boards, small-scale social media platforms, and other internet fringes have been reported to fuel hatred. The current paper examines data from the right-wing forum Stormfront between 2001 and 2015. We specifically aim to understand the development of user activity and the use of extremist language. Various time-series models depict posting frequency and the prevalence and intensity of extremist language. Individual user analyses examine whether some super users dominate the forum. The results suggest that structural break models capture the forum evolution better than stationary or linear change models. We observed an increase in forum engagement followed by a decrease towards the end of the time range. However, the proportion of extremist language on the forum increased in a step-wise manner until the early summer of 2011, followed by a decrease. This temporal development suggests that forum rhetoric did not necessarily become more extreme over time. Individual user analysis revealed that super forum users accounted for the vast majority of posts and of extremist language. These users differed from normal users in their evolution of forum engagement.
2020
Kleinberg, B., van der Vegt, I. and Gill, P.
Journal Article
Combating Violent Extremism: Voices of Former Right-Wing Extremists
While it has become increasingly common for researchers, practitioners and policymakers to draw from the insights of former extremists to combat violent extremism, overlooked in this evolving space has been an in-depth look at how formers perceive such efforts. To address this gap, interviews were conducted with 10 Canadian former right-wing extremists based on a series of questions provided by 30 Canadian law enforcement officials and 10 community activists. Overall, formers suggest that combating violent extremism requires a multidimensional response, largely consisting of support from parents and families, teachers and educators, law enforcement officials, and other credible formers.
2019
Scrivens, R., Venkatesh, V., Bérubé, M. and Gaudette, T.
Journal Article
Quarantining Online Hate Speech: Technical and Ethical Perspectives
In this paper we explore quarantining as a more ethical method for delimiting the spread of Hate Speech via online social media platforms. Currently, companies like Facebook, Twitter, and Google generally respond reactively to such material: offensive messages that have already been posted are reviewed by human moderators if complaints from users are received. The offensive posts are only subsequently removed if the complaints are upheld; therefore, they still cause the recipients psychological harm. In addition, this approach has frequently been criticised for delimiting freedom of expression, since it requires the service providers to elaborate and implement censorship regimes. In the last few years, an emerging generation of automatic Hate Speech detection systems has started to offer new strategies for dealing with this particular kind of offensive online material. Anticipating the future efficacy of such systems, the present article advocates an approach to online Hate Speech detection that is analogous to the quarantining of malicious computer software. If a given post is automatically classified as being harmful in a reliable manner, then it can be temporarily quarantined, and the direct recipients can receive an alert, which protects them from the harmful content in the first instance. The quarantining framework is an example of more ethical online safety technology that can be extended to the handling of Hate Speech. Crucially, it provides flexible options for obtaining a more justifiable balance between freedom of expression and appropriate censorship.
2020
Ullmann, S. and Tomalin, M.
Journal Article
Mapping the online presence and activities of the Islamic State’s unofficial propaganda cell: Ahlut-Tawhid Publications
This paper, which takes the form of a case study, aims to contribute to the debate on activities of the Islamic State’s unofficial media bureaus. Based on tools of open source intelligence, as well as a limited content analysis, it maps the online presence and activities of Ahlut-Tawhid Publications (AHP). Its means of distributing pro-Daesh content in the surface web, as well as its general impact, are discussed. It also deliberates on the interconnectedness of AHP with other online propaganda cells supporting the self-proclaimed “Caliphate.” This paper argues that this group was part of the ongoing online campaign of the Islamic State on the World Wide Web in 2018 and 2019. It maintained quite an impressive and long-lasting online presence, combining the potential of the most popular microblogs, hosting services and social media with the flexibility of standalone websites. In contrast to the most recognized propaganda cells of Daesh, such as al-Hayat Media Centre or Amaq News Agency, whose productions have been quickly detected and removed from the mainstream webpages for years, AHP kept a low profile for most of 2018. In effect, it benefited from its relative anonymity and for months operated a network of pro-IS distribution channels throughout Web 1.0 and Web 2.0 environments. This ceased to be the case in 2019, when most of them were incapacitated (banned) by law enforcement or abandoned. It is clear that the attention given to proliferating propaganda through the surface web decreased at this time, probably in favor of the Telegram communication software, as the discovered statistics suggest. The only active (still updated) locations—partially related to Ahlut-Tawhid Publications—belonged to the Bengali Ansar network. It has to be stressed, however, that AHP failed to attract increased attention from Internet users.
2020
Lakomy, M.