Journal Article |
Exploring issues of online hate speech against minority religious groups in Bangladesh
Purpose
Online hate speech (OHS) is becoming a serious problem worldwide, including in Bangladesh. This study aims to understand the nature of OHS against minority religious groups and to explore its impact on their social life and mental health.
Design/methodology/approach
A qualitative approach was followed, and 11 in-depth interviews (IDIs) were conducted with selected OHS victims. IDI participants were selected using a semi-structured questionnaire administered via Google Forms.
Findings
This study found that religious minorities in Bangladesh experience hatred online from members of the majority religious group. The OHS took the form of comments on social media posts, hateful memes and private messages using slang that targeted religious identity, religious food habits and ethnic identity. Victims were offended, abused and bullied by strangers as well as by their university friends and colleagues. Victims took no action against the OHS for fear for their safety. After experiencing OHS, victims felt demeaned, helpless and anxious, and they felt more insecure and vulnerable both socially and mentally.
Originality/value
The findings of this study suggest that policymakers should identify the nature of OHS and take appropriate steps to reduce its frequency in Bangladesh. To combat OHS, authorities must enforce the law equally for everyone.
|
2023 |
Rezvi, M.R. and Hossain, M.R. |
|
PhD Thesis |
Iraqi Insurgents’ Use of YouTube as a Strategic Communication Tool: An Exploratory Content Analysis
This dissertation is a baseline investigation into Iraqi insurgents’ use of YouTube as a strategic communication tool. The study used a content analysis of videos, posted between October 28, 2008, and December 1, 2008, that were returned for the search term ‘Iraqi resistance’ on YouTube and met stated criteria. The framing devices and themes found in the collection of videos were examined. While not a random sample, the collection was selected as a representation of the overall population of Iraqi insurgent videos for the time frame examined. Along with a more open interpretation of frames, the study examined those that may be used to recruit and/or disseminate anti-U.S. sentiment. It builds upon previous research in related areas and applies theory, with a focus on Social Identity, Diffusion of Innovation, Cultivation and Framing, in an attempt to explore the phenomenon. The methodological design establishes a baseline for future comparison and study, since Iraqi insurgents’ use of YouTube has yet to be examined extensively in the academic arena. Overall, 54 videos met the set criteria and were examined for this study; most documented attacks. While 28 Iraqi insurgent groups were represented in the videos, only four were identified in five or more videos: Islamic State of Iraq (25.9%, n=14), Iraqi Resistance (24.2%, n=13), Ansar al-Islam (18.5%, n=10) and Jaish al-Mujahideen (13%, n=7). Two of these four groups have a media arm devoted to creating their video content and acting as a media representative to the public and to members of the group. There was not a large difference in quality or in the appeals used between groups with and without a media arm. Analysis of the data suggested that Iraqi insurgent groups are using YouTube to recruit and to disseminate anti-U.S. sentiment. Several framing devices were present, including religious, nationalistic, anti-U.S., intimidation and defensive frames. Overall, videos in the sample depicted a large amount of violence, especially against U.S. military members.
|
2009 |
Rheanna, R. |
|
Journal Article |
Deplatforming Norm-Violating Influencers on Social Media Reduces Overall Online Attention Toward Them
From politicians to podcast hosts, online platforms have systematically banned (“deplatformed”) influential users for breaking platform guidelines. Previous inquiries into the effectiveness of this intervention are inconclusive because 1) they consider only a few deplatforming events; 2) they consider only overt engagement traces (e.g., likes and posts) but not passive engagement (e.g., views); and 3) they do not consider all the potential places to which users impacted by a deplatforming event might migrate. We address these limitations in a longitudinal, quasi-experimental study of 165 deplatforming events targeted at 101 influencers. We collect deplatforming events from Reddit posts and then manually curate the data, ensuring the correctness of a large dataset of deplatforming events. We then link these events to Google Trends and Wikipedia page views, platform-agnostic measures of online attention that capture the general public’s interest in specific influencers. Through a difference-in-differences approach, we find that deplatforming reduces online attention toward influencers: after 12 months, we estimate that online attention toward deplatformed influencers is reduced by 63% (95% CI [-75%, -46%]) on Google and by 43% (95% CI [-57%, -24%]) on Wikipedia. Further, since we study over a hundred deplatforming events, we can analyze in which cases deplatforming is more or less impactful, revealing nuances about the intervention. Notably, we find that both permanent and temporary deplatforming reduce online attention toward influencers. Overall, this work contributes to the ongoing effort to map the effectiveness of content moderation interventions, driving platform governance away from speculation.
|
2024 |
Ribeiro, M.H., Jhaver, S., Reignier-Tayar, M. and West, R. |
|
Journal Article |
Auditing radicalization pathways on YouTube
Non-profits, as well as the media, have hypothesized the existence of a radicalization pipeline on YouTube, claiming that users systematically progress towards more extreme content on the platform. Yet, there is to date no substantial quantitative evidence of this alleged pipeline. To close this gap, we conduct a large-scale audit of user radicalization on YouTube. We analyze 330,925 videos posted on 349 channels, which we broadly classified into four types: Media, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right. According to the aforementioned radicalization hypothesis, channels in the I.D.W. and the Alt-lite serve as gateways to fringe far-right ideology, here represented by Alt-right channels. Processing 72M+ comments, we show that the three channel types indeed increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube’s recommendation algorithm, looking at more than 2M video and channel recommendations collected between May and July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels, while Alt-right videos are reachable only through channel recommendations. Overall, we paint a comprehensive picture of user radicalization on YouTube.
|
2020 |
Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A. and Meira Jr, W. |
|
Journal Article |
‘Like Sheep Among Wolves’: Characterizing Hateful Users on Twitter
Hateful speech in Online Social Networks (OSNs) is a key challenge for companies and governments, as it impacts users and advertisers, and as several countries have strict legislation against the practice. This has motivated work on detecting and characterizing the phenomenon in tweets, social media posts and comments. However, these approaches face several shortcomings due to the noisiness of OSN data, the sparsity of the phenomenon and the subjectivity of the definition of hate speech. This work presents a user-centric view of hate speech, paving the way for better detection methods and understanding. We collect a Twitter dataset of 100,386 users, along with up to 200 tweets from each of their timelines, with a random-walk-based crawler on the retweet graph, and select a subsample of 4,972 users to be manually annotated as hateful or not through crowdsourcing. We examine the differences between hateful and normal users in activity patterns, in the content they disseminate and in network centrality measurements in the sampled graph. Our results show that hateful users have more recent account creation dates and more statuses and followees per day. Additionally, they favorite more tweets, tweet in shorter intervals and are more central in the retweet network, contradicting the “lone wolf” stereotype often associated with such behavior. Hateful users are more negative and more profane, and use fewer words associated with topics such as hate, terrorism, violence and anger. We also identify similarities between hateful/normal users and their 1-neighborhood, suggesting strong homophily.
|
2018 |
Ribeiro, M.H., Calais, P.H., Santos, Y.A., Almeida, A.F. and Meira Jr, W. |
|
Video |
VOX-Pol Guest Lecture Series: Conceptualising Terrorism: Challenges and Implications
VOX-Pol Guest Lecture Series, Autumn 2023
Name: Dr Anthony Richards
Title: Conceptualising Terrorism: Challenges and Implications
Outline: “Why is it important to generate an agreed definition of terrorism? What are the challenges that confront this, and to what extent can we view terrorism as an analytically distinctive concept?”
|
2023 |
Richards, A. |
|