Journal Article | r/WatchRedditDie and the politics of reddit’s bans and quarantines
The subreddit r/WatchRedditDie was founded in 2015 after reddit started implementing anti-harassment policies, and positions itself as a “fire alarm for reddit” meant to voyeuristically watch reddit’s impending (symbolic) death. As conversations around platform governance, moderation, and the role of platforms in controlling hate speech become more complex, r/WatchRedditDie and its affiliated subreddits are dedicated to maintaining a version of reddit tolerant of any and all speech, excluding other, more vulnerable users from fully participating on the platform. r/WatchRedditDie users advocate for no interference in their activities on the platform, meaning that although they rely on reddit’s infrastructure to sustain their community, they aim to self-govern to uphold a libertarian and often manipulated interpretation of free expression. Responding to reddit’s evolving policies, they find community with one another by positioning the platform itself as their main antagonist. Through the social worlds framework, I examine the r/WatchRedditDie community’s responses to platform change, raising new questions about the possibility of shared governance between platform and user, as well as participatory culture’s promises and perils.
2021 | DeCook, J.R.
Journal Article | Temporal Behavioural Analysis of Extremists on Social Media: A Machine Learning Based Approach
Public opinion is of critical importance to businesses and governments. It represents the collective opinion and prevalent views about a certain topic, policy, or issue. Extreme public opinion consists of extreme views held by individuals who advocate and spread radical ideas for the purpose of radicalizing others. While the proliferation of social media gives unprecedented reach, visibility, and a platform for freely expressing public opinion, social media fora can also be used for spreading extreme views, manipulating public opinion, and radicalizing others. In this work, we leverage data mining and analytics techniques to study extreme public opinion expressed on social media. A dataset of 259,904 tweets posted between 21/02/2016 and 01/05/2021 was collected in relation to extreme nationalism, hate speech, and supremacy. The collected data was analyzed using a variety of techniques, including sentiment analysis, named entity recognition, social circle analysis, and opinion leader identification, and results relating to an American politician and an American right-wing activist were presented. The results obtained are very promising and open the door to monitoring the evolution of extreme views and public opinion online.
2021 | Lutfi, S., Yasin, R., El Barachi, M., Oroumchian, F., Imene, A. and Mathew, S.S.
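Purely as an illustration of the kind of pipeline described in the abstract above, the sketch below runs sentiment analysis and named entity recognition over a couple of made-up tweets. The choice of NLTK's VADER and spaCy, and the sample texts, are assumptions for demonstration, not the study's actual tooling or data.

```python
# Minimal sketch, assuming NLTK's VADER sentiment model and spaCy's small
# English pipeline; the tweets below are invented examples.
# Setup (once): python -m spacy download en_core_web_sm
#               python -c "import nltk; nltk.download('vader_lexicon')"
import spacy
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nlp = spacy.load("en_core_web_sm")
sia = SentimentIntensityAnalyzer()

tweets = [
    "The rally in Washington shows America is finally waking up.",
    "Europe will fall if Brussels keeps ignoring its own people.",
]

for text in tweets:
    compound = sia.polarity_scores(text)["compound"]               # -1 (negative) .. +1 (positive)
    entities = [(ent.text, ent.label_) for ent in nlp(text).ents]  # named entities found in the tweet
    print(f"{compound:+.2f}  {entities}  {text}")
```

From per-tweet scores and entities like these, one could aggregate sentiment over time or group tweets by the people and organisations they mention, which is the sort of temporal, entity-level view the study describes.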
Journal Article | Mechanisms of online radicalisation: how the internet affects the radicalisation of extreme-right lone actor terrorists
How does the internet affect the radicalisation of extreme-right lone actor terrorists? In the absence of an established theoretical model, this article identifies six mechanisms seen as particularly relevant for explaining online radicalisation. Having first reviewed a larger set of relevant lone actor terrorists, the study traces these mechanisms in three selected cases where the internet was reportedly used extensively during radicalisation. The findings show that the internet primarily facilitated radicalisation by providing information, as well as by amplifying group polarisation and legitimising extreme ideology and violence through echoing. In all three cases, radicalisation was also affected considerably by offline push factors whose presence made extreme online messages more impactful. The results challenge the view that offline interaction is necessary for radicalisation to occur, but also the view that online influence by itself is sufficient.
2021 | Mølmen, G.N. and Ravndal, J.A.
Journal Article | Short of Suspension: How Suspension Warnings Can Reduce Hate Speech on Twitter
Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.
2021 | Yildirim, M.M., Nagler, J., Bonneau, R. and Tucker, J.A.
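As a rough, purely illustrative complement to the abstract above, the sketch below shows how a warning effect might be summarised as the difference in post-treatment hateful-tweet rates between a warned group and a control group. The synthetic Poisson data, group sizes, and simple two-sample test are assumptions and do not reproduce the study's pre-registered design or estimates.

```python
# Minimal sketch on synthetic data: compare weekly hateful-tweet counts of
# warned vs. control users. Rates and sample sizes are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

control = rng.poisson(lam=3.0, size=200)   # hypothetical weekly counts, no warning
warned = rng.poisson(lam=2.4, size=200)    # hypothetical weekly counts after a warning

effect = warned.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(warned, control, equal_var=False)
print(f"estimated change: {effect:+.2f} hateful tweets/week (p = {p_value:.3f})")
```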
Report | Transnational Terrorism and the Internet
Does the internet enable the recruitment of transnational terrorists? Using geo-referenced population census data and personnel records from the Islamic State in Iraq and the Levant (a highly tech-savvy terrorist organization), this paper shows that internet access has facilitated the organization’s recruitment of foreign fighters from Tunisia. The positive association between internet access and Daesh recruitment is robust to controlling for a large set of observable and unobservable confounders, as well as to instrumenting internet access rates with the incidence of lightning strikes.
2021 | Do, Q-T., Gomez-Parra, N. and Rijkers, B.
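To make the identification strategy described above concrete, the sketch below hand-rolls a two-stage least squares (2SLS) estimate on synthetic data, instrumenting internet access with lightning-strike incidence. Variable names, effect sizes, and the sign of the instrument's first-stage effect are assumptions, not figures from the paper.

```python
# Minimal 2SLS sketch on synthetic data: an unobserved confounder biases OLS,
# while instrumenting internet access with lightning strikes recovers the
# (assumed) true effect of 0.5.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

lightning = rng.normal(size=n)                 # instrument: lightning-strike incidence
confounder = rng.normal(size=n)                # unobserved driver of both access and recruitment
internet = -0.6 * lightning + 0.8 * confounder + rng.normal(size=n)  # endogenous regressor
recruits = 0.5 * internet + 1.0 * confounder + rng.normal(size=n)    # outcome

ones = np.ones(n)

# Naive OLS: biased because the confounder is omitted.
X = np.column_stack([ones, internet])
ols_slope = np.linalg.lstsq(X, recruits, rcond=None)[0][1]

# Stage 1: regress internet access on the instrument, keep fitted values.
Z = np.column_stack([ones, lightning])
internet_hat = Z @ np.linalg.lstsq(Z, internet, rcond=None)[0]

# Stage 2: regress recruitment on the fitted (exogenous) internet access.
X_hat = np.column_stack([ones, internet_hat])
iv_slope = np.linalg.lstsq(X_hat, recruits, rcond=None)[0][1]

print(f"OLS: {ols_slope:.2f}   2SLS: {iv_slope:.2f}   (true effect set to 0.5)")
```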
Journal Article | Pretending to be States: The Use of Facebook by Armed Groups in Myanmar
Which functions do social media fill for non-state armed groups in countries with internal armed conflict? Building on conflict data, interviews, and media monitoring, we have reviewed the use of social media by Myanmar’s nine most powerful armed groups. The first finding is that they act like states, using social media primarily to communicate with their constituents. Second, they also use social media as a tool of armed struggle, for command and control, intelligence, denunciation of traitors, and attacks against adversaries. Third, social media serves national and international outreach. Like Myanmar’s national army, the armed groups have combined prudent official pages with an underworld of more reckless profiles and closed groups that often breach Facebook’s official community standards. In February 2019, when Facebook excluded four groups from its platform, they lost much of their ability to reach out and act like states. Yet they kept a capacity to communicate with their constituents through closed groups, individual profiles, and sophisticated use of links and shares. Finally, the article affirms that the Facebook company, in the years 2018–2020, took upon itself the role of an arbiter within Myanmar’s internal conflicts, deciding what information was allowed and disallowed.
2021 | Tønnesson, S., Zaw Oo, M. and Aung, N.L.