VOX-Pol Blog |
Tommy Robinson and the UK’s Post-EDL Far Right: How Extremists are Mobilising in Response to Online Restrictions and Developing a New ‘Victimisation’ Narrative
2018 |
Allchorn, W. |
Journal Article |
Too civil to care? How online hate speech against different social groups affects bystander intervention
A large share of online users have already witnessed online hate speech. Because targets tend to interpret bystanders’ lack of reaction as agreement with the hate speech, bystander intervention in online hate speech is crucial, as it can help alleviate its negative consequences. Despite existing evidence on online bystander intervention, however, it remains largely unclear whether bystanders evaluate online hate speech targeting different social groups as equally uncivil and, thereby, as equally worthy of intervention. We therefore conducted an online experiment that systematically varied the type of online hate speech (homophobic, racist, or misogynist). The results demonstrate that, although all three forms were perceived as uncivil, homophobic hate speech was perceived as less uncivil than hate speech against women. Consequently, misogynist hate speech, compared to homophobic hate speech, increased feelings of personal responsibility and, in turn, boosted willingness to confront it.
2023 |
Obermaier, M., Schmid, U.K. and Rieger, D. |
Journal Article |
Too Dark to See? Explaining Adolescents’ Contact With Online Extremism and Their Ability to Recognize It
Adolescents are considered especially vulnerable to extremists’ online activities because they are ‘always online’ and because they are still in the process of identity formation. However, so far, we know little about (a) how often adolescents encounter extremist content in different online media and (b) how well they are able to recognize extremist messages. In addition, we do not know (c) how individual-level factors derived from radicalization research and (d) media and civic literacy affect extremist encounters and recognition abilities. We address these questions based on a representative face-to-face survey among German adolescents (n = 1,061) and qualitative interviews using a think-aloud method (n = 68). Results show that a large proportion of adolescents encounter extremist messages frequently, but that many others have trouble even identifying extremist content. In addition, factors known from radicalization research (e.g., deprivation, discrimination, specific attitudes) as well as extremism-related media and civic literacy influence the frequency of extremist encounters and recognition abilities.
2019 |
Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C. and Neumann, K. |
Journal Article |
Topic-Specific YouTube Crawling to Detect Online Radicalization
Online video-sharing platforms such as YouTube contain many videos and users promoting hate and extremism. Because of the low barrier to publication and the anonymity it affords, YouTube is misused by some users and communities to post videos disseminating hatred against a particular religion, country, or person. We formulate the identification of such malicious videos as a search problem and present a focused-crawler-based approach whose components perform several tasks: search strategy or algorithm, node-similarity computation metric, learning from exemplary profiles serving as training data, stopping criterion, node classifier, and queue manager. We implement two versions of the focused crawler: best-first search and shark search. We conduct a series of experiments varying the seed, the number of n-grams in the language-model-based comparer, and the similarity threshold for the classifier, and present the results using standard Information Retrieval metrics such as precision, recall, and F-measure. The accuracy of the proposed solution on the sample dataset is 69% for best-first search and 74% for shark search. We perform a characterization study (by manual and visual inspection) of the anti-India hate- and extremism-promoting videos retrieved by the focused crawler, based on terms in the video titles, YouTube category, average video length, content focus, and target audience. We also present the results of applying Social Network Analysis measures to extract communities and identify core and influential users.
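The best-first variant the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper’s implementation: the toy graph, the `best_first_crawl` name, and the character-n-gram comparer are assumptions standing in for the YouTube crawl, the trained exemplary profiles, and the paper’s language model.

```python
import heapq
from collections import Counter

def ngrams(text, n=3):
    # Character n-grams as a simple stand-in for a language-model profile.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(a, b):
    # Cosine similarity between two n-gram count vectors.
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def best_first_crawl(seed, neighbours, profile_text, describe,
                     threshold=0.1, limit=20):
    # Best-first focused crawl: always expand the frontier node most
    # similar to the exemplary profile; `limit` is the stopping criterion.
    target = ngrams(profile_text)
    frontier = [(-similarity(ngrams(describe(seed)), target), seed)]
    visited, relevant = set(), []
    while frontier and len(visited) < limit:
        neg_score, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if -neg_score >= threshold:          # node classifier (thresholded)
            relevant.append(node)
            for nxt in neighbours(node):     # queue manager: enqueue children
                if nxt not in visited:
                    score = similarity(ngrams(describe(nxt)), target)
                    heapq.heappush(frontier, (-score, nxt))
    return relevant
```

In a real deployment, `neighbours` would follow related-video and uploader links via the YouTube API and `describe` would return video metadata; shark search differs mainly in also propagating a decayed parent score to children.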
2015 |
Agarwal, S. and Sureka, A. |
Journal Article |
Topological Data Mapping of Online Hate Speech, Misinformation, and General Mental Health: A Large Language Model Based Study
The advent of social media has led to increased concern over its potential to propagate hate speech and misinformation, which, in addition to contributing to prejudice and discrimination, has been suspected of playing a role in increasing social violence and crime in the United States. While the literature shows an association between posting hate speech and misinformation online and certain personality traits of posters, the general relationship and relevance of online hate speech/misinformation to the overall psychological wellbeing of posters remain elusive. One difficulty lies in the lack of data-analytics tools capable of analyzing the massive volume of social media posts to uncover the underlying hidden links. Recent progress in machine learning and large language models such as ChatGPT has made such an analysis possible. In this study, we collected thousands of posts from carefully selected communities on the social media site Reddit. We then used OpenAI’s GPT-3 to derive embeddings of these posts: high-dimensional real-valued vectors that presumably represent the hidden semantics of the posts. We performed various machine-learning classifications based on these embeddings to understand the role of hate speech/misinformation in different communities. Finally, topological data analysis (TDA) was applied to the embeddings to obtain a visual map connecting online hate speech, misinformation, various psychiatric disorders, and general mental health.
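The embed-then-classify step described above can be sketched as follows. This is an illustrative toy, not the study’s pipeline: a hashed bag-of-words featurizer stands in for GPT-3 embeddings (which are dense semantic vectors obtained from an API), and a nearest-centroid rule stands in for the paper’s classifiers; all function names are hypothetical.

```python
import math
import zlib
from collections import Counter

def embed(text, dim=64):
    # Stand-in for an embedding model: hash word counts into a fixed-size
    # dense vector and L2-normalise it, so posts live in a common space.
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(word.encode()) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def train_centroids(posts, labels):
    # One centroid per community/label, averaged in embedding space.
    sums, counts = {}, Counter()
    for post, label in zip(posts, labels):
        e = embed(post)
        if label not in sums:
            sums[label] = [0.0] * len(e)
        sums[label] = [s + v for s, v in zip(sums[label], e)]
        counts[label] += 1
    return {lbl: [s / counts[lbl] for s in vec] for lbl, vec in sums.items()}

def classify(post, centroids):
    # Assign the post to the most similar community centroid.
    return max(centroids, key=lambda lbl: cosine(embed(post), centroids[lbl]))
```

The TDA step would then build a simplicial complex (e.g. via Mapper) over these same vectors to visualise how the hate-speech, misinformation, and mental-health communities connect.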
2023 |
Alexander, A. and Wang, H. |
Report |
Topologies and Tribulations of Gettr
On July 1, 2021, a new social network modeled after Twitter was launched by former Trump spokesman Jason Miller, with assistance and promotion from exiled Chinese businessman Miles Guo, former Trump strategist Steve Bannon, and others. Today, the Stanford Internet Observatory is releasing the first comprehensive analysis of the new platform. We chart the growth of Gettr over its first month, examining the user community, content, structure, and dynamics. We also highlight some of the perils of launching such a network without trust-and-safety measures in place: the proliferation of gratuitous adult content, spam and, unfortunately, child exploitation imagery, all of which could be caught by cursory automated scanning systems.
2021 |
Thiel, D. and McCain, M. |