Off the Richter Scale: Tracking Misinformation in the Aftermath of the Kahramanmaras Earthquake – The Failing of Twitter’s Blue Tick Policy

By Ashton Kingdon & Briony Gray

The cost of misinformation can be deadly during a crisis. By undermining public trust, the effectiveness of emergency response, and potentially life-saving activities, misinformation has become an increasingly common feature of the aftermath of natural disasters, spreading like wildfire across global audiences. The expansion in the use of intelligent systems has resulted in classifications, decisions, and predictions frequently being based on algorithmic models ‘trained’ on large datasets of historical and social trends. An unforeseen consequence has been the exposure of citizens to disinformation, fake news, and extremist propaganda. This has become a global socio-technical issue, as growing numbers of people use social media in ways that increase the volume, diversity, and availability of propaganda and disinformation. With developments in the addictiveness and gamification of social media sites, it is now easier than ever for content to go viral and snowball into an online communications crisis.

The Power of Social Media

Modern political and health communication crises – such as the Coronavirus pandemic – illustrate the unprecedented power that social media now holds. The platforms became a catalyst for the politicisation of the pandemic, as conspiracy theorists and peddlers of alternative health facts used them to circulate deliberate and concentrated disinformation campaigns, exploiting an environment of fear and chaos to spread false information and conspiracy theories. The politicisation of what should be factual, scientific information is particularly problematic, as it can lead to long-term problems, fuelling science scepticism among citizens and fostering the spread of misinformation. Crucially, the success of misinformation conventionally depends on a context in which fact-checking and evidence-based reasoning do not always rule, and experts and expertise are dismissed. As content continues to be produced at volumes far beyond human capacity for verification, online policing and fact-checking have become a whole new beast to tackle.

A notorious example of this in recent news is Twitter’s infamous “blue tick” policy. Prior to November 2022, a blue verified badge on Twitter let people know that an account of public interest was authentic: it was common among government officials, news organisations and journalists, other organisations, and activists. Ultimately, it signalled to users that the account warranted more trust in the information it published. The introduction of Twitter Blue – an opt-in, paid subscription that adds a blue checkmark to a user’s account – has caused confusion about which accounts are now considered authentic and, therefore, trustworthy. Users of the platform quickly identified loopholes in the system, creating new verified accounts that impersonated others to gain the public’s trust. A tweet from a verified account posing as the pharmaceutical company Eli Lilly, for example, claimed that insulin was now free – a post with huge ramifications, including a sharp fall in the company’s share price.

The Earthquake Crisis

In the week following February 6th 2023, when a 7.8 magnitude earthquake struck south-eastern Turkey near the Syrian border, Twitter’s revamped blue tick verification once again proved to be a problem. Fake news was identified from multiple accounts posing as news and media outlets, showing footage of a nuclear power plant allegedly exploding as a result of the disaster. Only, Turkey does not actually have any operational nuclear power plants, and the only one under construction – the Akkuyu Nuclear Power Plant in Büyükeceli – remained undamaged (as confirmed by Turkish authorities). The footage was eventually traced back to the largest non-nuclear blast in modern history, which took place on August 4th 2020, in Beirut, Lebanon, after an estimated 2,750 tonnes of unsafely stored ammonium nitrate exploded.

Other videos circulating in the aftermath of the disaster – which has so far killed at least 50,000 people, a number still rising – showed a tsunami hitting the Turkish coast (traced back to the South African city of Durban in 2017) and multiple buildings crumbling (traced to Florida in 2021). In another case, the image of a child crying surrounded by rubble was extensively retweeted as the face of the disaster; in reality, it is a staged stock photo sold online. Cases like these are being debunked by the public using reverse image searches, which allow the original posts to be found and geo-located. Fake tweets of this kind not only accelerate misinformation and uncertainty, but can actively derail modern disaster management methods that process and identify tweets, flagging geo-located areas and hazards for targeting by emergency responders. While such methods include many verification steps to ensure resources are not wasted, misinformation spread at such huge volumes makes identifying real, valuable, and accurate information far more difficult.
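To illustrate one technique behind this kind of debunking, the sketch below uses perceptual hashing to check whether “new” disaster footage matches previously verified imagery. This is a minimal illustration, not the pipeline of any particular fact-checking organisation: it assumes the open-source Pillow and imagehash Python libraries, and the file names and labels are hypothetical placeholders.

```python
# A minimal sketch of recycled-image detection via perceptual hashing,
# one technique behind reverse-image-search style debunking.
# Assumes the Pillow and imagehash libraries; file names and labels
# below are hypothetical placeholders, not real datasets.
from PIL import Image
import imagehash

# Previously verified footage of known events (hypothetical files).
known_events = {
    "beirut_2020.jpg": "Beirut port explosion, 4 August 2020",
    "durban_2017.jpg": "Durban storm footage, 2017",
}

def build_index(reference_images):
    """Perceptual-hash each reference image once, up front."""
    return {imagehash.phash(Image.open(path)): label
            for path, label in reference_images.items()}

def match_candidate(candidate_path, index, max_distance=8):
    """Compare a viral image against the index.

    pHash is robust to re-compression, resizing, and minor crops, so a
    small Hamming distance suggests the 'new' footage is recycled.
    """
    candidate = imagehash.phash(Image.open(candidate_path))
    for ref_hash, label in index.items():
        if candidate - ref_hash <= max_distance:  # Hamming distance
            return label
    return None

index = build_index(known_events)
source = match_candidate("viral_tweet_frame.jpg", index)
print(f"Likely recycled from: {source}" if source else "No match found")
```

In practice, fact-checkers combine matching like this with metadata checks and geolocation of the original post; the hash comparison simply narrows thousands of candidates down to a handful worth human review.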

Misinformation & Twitter’s Blue Tick

Several days after the earthquake – following thousands of reports of fake news and misinformation, and as online criticism mounted over the government’s response – Twitter was made inaccessible in Turkey. Access was restored after the authorities met with the platform to remind it of its obligations on content takedowns and disinformation. Turkey has an extensive history of social media restrictions during national emergencies and safety incidents. In 2014, the government blocked access to Twitter following the leak of politically damaging recordings, and in 2015 over the publication of photographs of Mehmet Selim Kiraz, who had died after being taken hostage. In 2018, the Turkish government cracked down on Twitter users who were voicing criticism of Turkish military operations in northern Syria, claiming they were spreading “terrorist propaganda”. Other countries have also censored Twitter: most notably, during the 2009 Iranian Presidential election, the government blocked the platform for fear of protests being organised. In February 2022, during the invasion of Ukraine, Russia began restricting access to Twitter, with the global internet monitor NetBlocks observing that the censorship measure was in effect across multiple providers. Twitter was also made inaccessible in Egypt in 2011 during the Arab Spring – a revolutionary wave of demonstrations and protests across the Middle East and North Africa in 2010 and 2011 that saw several countries defy and dismantle their authoritarian governments.

Since the change to the platform’s blue tick verification, however, Twitter’s potential for crisis response needs to be re-evaluated, as verified accounts no longer produce measurably trustworthy information. Tackling this would require additional verification steps, which would extend the processing time of automatic classification methods for disaster response, leading to slower responses during crises. Another option would be to use crowdsourcing methods to identify fake tweets. Processes of this kind are generally run and maintained by governments, aid organisations, response agencies, NGOs, humanitarian organisations, and research institutes, meaning that re-evaluation and testing is needed internally. However, there is also a responsibility for social media platforms themselves to do more about the content they allow, and to step up verification methods and policing.
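As a rough illustration of the automatic classification step mentioned above, the Python sketch below trains a baseline text classifier to triage incoming tweets. It assumes the scikit-learn library; the tiny labelled dataset is entirely hypothetical, and a real disaster-response pipeline would train on thousands of annotated tweets and route low-confidence cases to human or crowdsourced verification rather than act on them automatically.

```python
# A minimal sketch of the tweet-triage classification step used in
# disaster-response pipelines. The labelled examples are hypothetical;
# real systems train on large annotated corpora and layer verification
# stages (account history, geolocation, crowdsourced checks) on top.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels: 1 = plausible actionable report, 0 = noise/suspect.
tweets = [
    "Building collapsed on the main street, people trapped, send help",
    "Aftershock just hit, the road to the hospital is blocked",
    "BREAKING: nuclear plant explodes in Turkey (video)",
    "Giving away free aid money, click this link to claim",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a common, fast baseline.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(tweets, labels)

# Score a new tweet; anything near the decision boundary would be
# routed to human reviewers instead of triggering a response.
incoming = "Tsunami hitting the coast right now, mass evacuation"
prob = classifier.predict_proba([incoming])[0][1]
print(f"P(actionable) = {prob:.2f}")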

In response to globally emerging misinformation, Twitter has been put under pressure to resolve the verification issues with the blue tick. In April 2023, the organisation announced that its old “legacy” verification would begin to be phased out. The platform has also removed the credibility criteria for Twitter Blue: it no longer checks for activeness, notability, or authenticity. Instead, to address fake accounts that may be impersonating others, accounts that alter their profiles temporarily lose their check mark until the Twitter team reviews their subscription and deems them verified again. This may affect future disaster management methods by impacting how accounts build trust over time – the sources users naturally turn to during crises. For example, if a go-to news account’s verified tick comes and goes, users may think twice about trusting the information it tweets.

Twitter Blue subscriptions additionally include early access to new features as well as the blue tick, including an increased tweet length of up to 4,000 characters (compared with the standard 280). While this may entice users to subscribe, it may cause further problems for misinformation – especially during emergencies and disasters – where tweet content needs human verification to avoid the spread of fake news (as was the case in Turkey). This may have further implications for disaster management processes by reducing the timeliness and credibility of information used for resource allocation, information sharing, and geolocation services (the longer the tweet, the more time needed to process and verify its content). This may further reduce Twitter’s versatility and usefulness during emergencies, and will be interesting to observe in future crises.

Conclusions

The unfolding events following the earthquake in Turkey represent another cause for concern in a modern world that is increasingly connected and online. They show that, much like other global health, political, and communication crises in previous years, social media is developing ever more global influence and power. In response, there is now a pressing need for social media platforms themselves to step up their verification and policing. The events also highlight the power of the public, or the “crowd”, in identifying fake news and misinformation. With the increasing impacts of climate change, humanitarian crises, unstable political structures, and unprecedented levels of insecurity over resources, the need to respond to emergencies in an effective and timely way is paramount. The question is: how can we keep reaping the benefits of social media platforms in this new paradigm of media, while limiting their flaws?


Dr Ashton Kingdon is a lecturer in Criminology at the University of Southampton. Her work is interdisciplinary, combining criminology, history, and computer science to examine the ways in which extremists utilise technology for recruitment and radicalisation. Additionally, her expertise lies in analysing the relationship between climate change and terrorism.

Dr Briony Gray is a senior researcher specialising in building community resilience through disasters, emergencies, and hazards. She has worked with international governments, humanitarian organisations, and academic institutes, facilitating community voices to improve risk, response, and resilience.

Image Credit: Freepik
