By Samuel Olaniran
In global discussions about online extremism, messaging apps such as WhatsApp often receive less attention than platforms like Facebook, X, or TikTok. Yet WhatsApp has become one of the most powerful political tools in the Global South. Its encrypted design, ubiquity, and integration into everyday social life make it a particularly attractive space for the spread of disinformation and hate. In a new chapter in WhatsApp in the World, titled ‘Beyond Algorithms: How Politicians Use Human Infrastructure to Spread Disinformation and Hate Speech on WhatsApp in Nigeria’, I argue that the drivers of political disinformation in Nigeria are not primarily technological but human. Rather than bots or algorithms, it is networks of party officials, volunteers, influencers, and loyal supporters who deliberately mobilise the app to circulate harmful and divisive content. This post outlines that argument and summarises the key findings of the study underpinning the chapter.
Building human infrastructures of disinformation
The chapter shows how Nigerian political actors built an elaborate “human infrastructure” during the 2023 presidential election. Campaign teams established WhatsApp groups for each state and cascaded messages through coordinators and subgroups to reach the grassroots. These groups were not limited to sharing manifestos or organising rallies. They were also used to disseminate fabricated stories, doctored videos, and emotive voice notes designed to inflame suspicion and resentment. Messages painted the ruling All Progressives Congress (APC) ticket of Bola Tinubu and Kashim Shettima as part of an Islamist plot to “Islamise” the country, while Labour Party candidate Peter Obi was framed as sympathetic to the proscribed Indigenous People of Biafra (IPOB). Such claims blurred the line between ordinary electoral competition and narratives of extremism and terrorism.
The potency of this tactic lies in WhatsApp’s affordances. Unlike open platforms where content gains visibility through algorithms, WhatsApp thrives on interpersonal trust. Messages are forwarded through personal networks (family groups, professional circles, or religious communities) where the fact that a trusted contact has shared a message often overrides doubts about its accuracy. Disinformation in this setting becomes socially validated rather than algorithmically boosted. This distinction matters for regulation: because the app is end-to-end encrypted, there is no public feed to monitor or recommendation algorithm to adjust, making socially validated disinformation far harder to detect and moderate. As past research has shown, hateful disinformation is deeply entangled with extremist mobilisation.
From hate speech to extremist narratives
What stands out in the Nigerian case is how disinformation was infused with hate. These were not simply misleading claims about policies or candidates but deliberate efforts to stigmatise communities, inflame ethnic and religious divides, and brand opponents as extremists or terrorists. This aligns with work on “hate spin”, which describes the manufacturing of outrage as a political tactic, and with other research underscoring how online “hate speech often overlaps with extremist recruitment and radicalisation”. In the Nigerian context, the spread of false claims about Islamist plots or Biafran sympathies weaponised identity politics in ways that echo extremist propaganda globally.
Fear was a recurring theme. When public trust in democratic institutions is weak, as has long been the case with the Independent National Electoral Commission (INEC) in Nigeria, rumours of rigging or voter suppression find ready audiences. Disinformation targeting the Commission itself further eroded trust in the electoral process. Such narratives not only delegitimise elections but also provide fertile ground for political violence. Research published in MediaWell has highlighted how disinformation primes populations for unrest, while a study by George (2024) has shown how WhatsApp-driven rumours can escalate into civil conflict in fragile democracies. By embedding these dynamics within electoral competition, political parties risk normalising the logic of extremism: that opponents are not merely rivals but existential threats.
The chapter also shows how disinformation travels across platforms. WhatsApp groups incubated content that was then seeded onto Twitter (now X), ensuring wider circulation. False allegations that Tinubu was implicated in drug trafficking, or that he was terminally ill and unfit for office, migrated rapidly from intimate WhatsApp groups to large public conversations. Here again, the pattern resonates with extremist online ecosystems. As Jackson and Berger note in “The Dangers of Generative AI and Extremism,” harmful narratives are increasingly sustained by human coordination even as new technologies open additional pathways for disinformation and radicalisation.
These findings complicate the assumption that technology alone explains the spread of extremist and hateful content. In Nigeria, human networks outmanoeuvred platform restrictions by adapting their tactics. Platform updates that curtailed bulk messaging forced parties to rely more heavily on volunteers, influencers, and word-of-mouth mobilisation. In some cases, canvassers were dispatched to rural areas with merchandise and talking points, coordinated via WhatsApp. What emerges is a portrait of a highly adaptive ecosystem in which technological changes do not eliminate disinformation but shift the burden back onto human actors. This underlines the conclusion that combating extremist messaging cannot rely solely on algorithmic solutions but must account for the social infrastructures that sustain disinformation and hate.
Beyond Nigeria: Global lessons on disinformation and extremism
The implications extend well beyond Nigeria. Across the Global South, WhatsApp has become an essential political communication tool. Similar dynamics have been observed in India, Brazil, and South Africa, where disinformation and hate speech thrive on the app’s intimacy and trust networks. A 2025 GNET study shows that extremists now operate not in isolated pockets but across 26 platforms, guiding audiences on complex digital journeys from hate to mobilised support. Recent research has further demonstrated how online hate speech increasingly functions as a precursor to extremist violence, reinforcing the view that disinformation actively enables radicalisation. Reports such as Control Risks’ Rising Political Violence (2025) likewise warn that disinformation and conspiracy narratives are fuelling violent incidents in diverse political contexts. The spillover of these narratives into broader ecosystems is also explored in “From TikTok to Terrorism?”, which shows how hate and disinformation migrate seamlessly across private and public platforms.
WhatsApp in Nigeria illustrates how disinformation, hate, and extremist framings are woven together within human infrastructures that operate beyond algorithms. Politicians and their networks exploit these infrastructures to galvanise supporters, undermine trust in institutions, and stigmatise opponents as terrorists or extremists. What begins as electoral manoeuvring risks seeding radical narratives that persist long after the ballots are counted. The chapter therefore calls for an approach that goes beyond technological fixes to understand disinformation as a deeply human practice, inseparable from hate and extremism, and from the offline risks of violence and terrorism.
Samuel Olaniran is a Lecturer in the Media Studies Department at the University of the Witwatersrand (Wits), South Africa.
Image Credit: Grant Davies/Unsplash