Journal Article | Echo Chambers Exist! (But They’re Full of Opposing Views)
The theory of echo chambers, which suggests that online political discussions take place in conditions of ideological homogeneity, has recently gained popularity as an explanation for patterns of political polarization and radicalization observed in many democratic countries. However, while micro-level experimental work has shown evidence that individuals may gravitate towards information that supports their beliefs, recent macro-level studies have cast doubt on whether this tendency generates echo chambers in practice, instead suggesting that cross-cutting exposures are a common feature of digital life. In this article, we offer an explanation for these diverging results. Building on cognitive dissonance theory, and making use of observational trace data taken from an online white nationalist website, we explore how individuals in an ideological ‘echo chamber’ engage with opposing viewpoints. We show that this type of exposure, far from being detrimental to radical online discussions, is actually a core feature of such spaces that encourages people to stay engaged. The most common ‘echoes’ in this echo chamber are in fact the sound of opposing viewpoints being undermined and marginalized. Hence echo chambers exist not only in spite of but thanks to the unifying presence of oppositional viewpoints. We conclude with reflections on policy implications of our study for those seeking to promote a more moderate political internet.
2020 | Bright, J., Marchal, N., Ganesh, B. and Rudinac, S.
Journal Article | The temporal evolution of a far-right forum
The increased threat of right-wing extremist violence necessitates a better understanding of online extremism. Radical message boards, small-scale social media platforms, and other internet fringes have been reported to fuel hatred. The current paper examines data from the right-wing forum Stormfront between 2001 and 2015. We specifically aim to understand the development of user activity and the use of extremist language. Various time-series models depict posting frequency and the prevalence and intensity of extremist language. Individual user analyses examine whether some super users dominate the forum. The results suggest that structural break models capture the forum’s evolution better than stationary or linear change models. We observed an increase in forum engagement followed by a decrease towards the end of the time range. However, the proportion of extremist language on the forum increased in a step-wise manner until the early summer of 2011, followed by a decrease. This temporal development suggests that forum rhetoric did not necessarily become more extreme over time. Individual user analysis revealed that super forum users accounted for the vast majority of posts and of extremist language. These users differed from normal users in their evolution of forum engagement.
2020 | Kleinberg, B., van der Vegt, I. and Gill, P.
Journal Article | Quarantining Online Hate Speech: Technical and Ethical Perspectives
In this paper we explore quarantining as a more ethical method for delimiting the spread of Hate Speech via online social media platforms. Currently, companies like Facebook, Twitter, and Google generally respond reactively to such material: offensive messages that have already been posted are reviewed by human moderators if complaints from users are received. The offensive posts are removed only if the complaints are upheld, by which time they may already have caused the recipients psychological harm. In addition, this approach has frequently been criticised for restricting freedom of expression, since it requires the service providers to elaborate and implement censorship regimes. In the last few years, an emerging generation of automatic Hate Speech detection systems has started to offer new strategies for dealing with this particular kind of offensive online material. Anticipating the future efficacy of such systems, the present article advocates an approach to online Hate Speech detection that is analogous to the quarantining of malicious computer software. If a given post is reliably and automatically classified as harmful, then it can be temporarily quarantined, and the direct recipients can receive an alert that protects them from the harmful content in the first instance. The quarantining framework is an example of more ethical online safety technology that can be extended to the handling of Hate Speech. Crucially, it provides flexible options for obtaining a more justifiable balance between freedom of expression and appropriate censorship.
2020 | Ullmann, S. and Tomalin, M.
Journal Article | Mapping the online presence and activities of the Islamic State’s unofficial propaganda cell: Ahlut-Tawhid Publications
This paper, which takes the form of a case study, aims to contribute to the debate on the activities of the Islamic State’s unofficial media bureaus. Using open source intelligence tools, as well as a limited content analysis, it maps the online presence and activities of Ahlut-Tawhid Publications (AHP). Its means of distributing pro-Daesh content on the surface web, as well as its general impact, are discussed. The paper also considers the interconnectedness of AHP with other online propaganda cells supporting the self-proclaimed “Caliphate.” It argues that this group was part of the Islamic State’s ongoing online campaign on the World Wide Web in 2018 and 2019. AHP maintained an impressive and long-lasting online presence, combining the reach of the most popular microblogs, hosting services and social media with the flexibility of standalone websites. In contrast to the most recognized propaganda cells of Daesh, such as the al-Hayat Media Centre or the Amaq News Agency, whose productions have for years been quickly detected and removed from mainstream webpages, AHP kept a low profile for most of 2018. As a result, it benefited from its relative anonymity and for months operated a network of pro-IS distribution channels across Web 1.0 and Web 2.0 environments. This ceased to be the case in 2019, when most of these channels were incapacitated (banned) by law enforcement or abandoned. The attention given to proliferating propaganda through the surface web clearly decreased at this time, probably in favor of the Telegram communication software, as the collected statistics suggest. The only active (still updated) locations, partially related to Ahlut-Tawhid Publications, belonged to the Bengali Ansar network. It must be stressed, however, that AHP failed to attract significant attention from Internet users.
2020 | Lakomy, M.
Journal Article | Framing War: Visual Propaganda, the Islamic State, and the Battle for East Mosul
This article explores how propaganda can be used to construct counter-factual visual narratives at times of war. Specifically, it examines how the Islamic State communicated its way through the 100-day-long battle for east Mosul, which was launched by the coalition and its allies in October 2016. Drawing on Jacques Ellul’s 1962 theory of propaganda, it uses qualitative content analysis to decipher the 1,261 media products published online by the group during the first phase of its defence of the city. The author contends that, even though it was resoundingly defeated there by January, the global legacy of this battle, which was used as a testing ground for a series of potent innovations in insurgent strategic communication, will endure long into the future.
2020 | Winter, C.
Report | Tackling Insurgent Ideologies 2.0: Rapporteurs’ Report
As the global political barometer shifts increasingly towards insularity, protectionism and propaganda-driven populism across countries, the CVE community faces a varied set of challenges. Whether the question is dealing with returning ISIS foreign terrorist fighters (FTFs) and preventing their movement to different geographical theatres, or combatting majoritarian groups that rally around grievances, race or religion and fuel extreme violence, we need to ask ourselves how much more vulnerable we are today and identify where the fault lines lie. While addressing these challenges, it is equally necessary to ensure that the protection of human rights and fundamental freedoms is balanced against governments’ security priorities. It is with the desire to see more global conversation on the manifold ideologies that drive violence, and on the responsibility of governments, platforms and civil society engaged in CVE initiatives, that the Observer Research Foundation (ORF) organised the second iteration of Tackling Insurgent Ideologies, with the theme “Implementing the Christchurch Call: Towards a Global CVE Agenda.” We brought together a diverse group of policymakers, researchers and practitioners involved in developing strategies to address the proliferation of radicalism and violence, to debate and discuss best practices, lessons learned and a way forward.
2020 | Observer Research Foundation