By Sophia Rothut & Brigitte Naderer
Political content does not spread solely organically: paid advertising campaigns on platforms such as Facebook, YouTube, and Instagram allow parties, organisations, activists, and other political actors to purchase reach for their messaging. Advertisers thereby aim to expand their support and influence among those receiving the ad. Targeting options based on personal data tracking and analysis help them reach audiences that might be particularly receptive to their issues, views, and claims. Initial evidence indicates that repeated exposure to a political party’s online ads can have a persuasive effect, particularly on those with lower political knowledge and digital privacy literacy: consuming these ads increases both the propensity to vote for that party and the actual vote choice.
Beyond democratic campaign opportunities, these advertising functionalities may be exploited to distribute hate-based messages, disinformation, and election-influencing content via paid promotion. During the 2021 German communal elections, a local branch of the Neo-Nazi party NPD ran a Facebook ad showing a man in Middle Eastern headwear, accompanied by the line “We wish you a safe journey home”. The case was revealed by Hope not Hate. Two days after the ad’s release – by which point it had reached around 2,000-3,000 people at a cost of less than 100 Euros – Meta acknowledged the hate-speech violation and removed it. Investigative tests by civil society organisations have likewise probed ad approval mechanisms, submitting excerpts of a terrorist manifesto and AI-manipulated content spreading disinformation and inciting religious violence, and reporting swift ad approval.
The EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation and responses by tech platforms
These examples illustrate the importance of transparent and effective advertising guidelines and processes, particularly given their growing significance and increasingly cross-border nature. The EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation, most provisions of which apply from October 10 2025, aims to ensure this. Its main objectives are to create transparency, so that audiences are aware of being targeted with political advertising, and to harmonise existing national differences in political advertising regulations to ensure fairness, particularly in the online market. The regulation also responds to concerns about how (micro-)targeting based on the tracking of user data could violate data protection rights. Furthermore, the EU is concerned about the potential spread of disinformation through these messages and the possible influence of foreign actors on EU-based elections.
The TTPA Regulation therefore seeks to increase transparency for target groups about when and why they are addressed by political advertising and who is responsible for the message. This should be achieved through the following regulatory features: (1) The clear labelling of political ads; (2) the establishment of a publicly accessible database for political ads, ensuring transparency about the content, aim and sponsor of political ads; (3) the restriction of political microtargeting; and (4) sanctioning in case of non-compliance.
The regulation specifically addresses online advertising and the role of very large online platforms (VLOPs). As in the Digital Services Act, it requires them to take responsibility for the content shared on their platforms. Several large platforms have announced material changes to political advertising in the EU ahead of the TTPA’s application. For instance, Meta and Google announced that they will end political, electoral, and social advertising in the EU in response to the TTPA; TikTok had already banned political advertising in 2019. Organic political posts will remain possible, and this greater emphasis on organic content is likely to have implications for several types of political actors. The changes may reduce avenues for covert influence and data-driven micro-targeting. At the same time, the shift toward organic communication creates new oversight challenges that platforms will need to address.
(How) Can this reshape public political online discourse, and what are the implications for extremist online content and actors?
We assume that ending political, electoral, and social advertising can change how political and potentially manipulative online content appears, influence the way political discourse takes place online, and affect the extent to which extremist content can profit from it. In the following, we outline four potential dynamics in relation to extremist accounts:
Dynamic 1: Extreme content disproportionately benefits from organic growth
A move away from political ads means that political actors are once again more reliant on organic growth. Extreme content and accounts could benefit disproportionately, as highly arousing content aligns well with the logic of the attention economy: populist, negative, and fearmongering posts, as well as sensational and polarising content, have been shown to generate higher engagement. These content features lie at the core of politically extreme content. The online attention economy can thus disproportionately incentivise the provocative content spread by extremist actors – at the cost of attention for informative, constructive, and less divisive political contributions.
Dynamic 2: Growing importance of influential, individual accounts
Without the option to buy reach, influential accounts with many followers and – even more importantly – close bonds with their audience gain relevance, including famous politicians and social media influencers. Studies document the emergence of an alternative influence network on mainstream platforms such as YouTube and alternative platforms such as Telegram. These actors use typical creator techniques to attract attention, persuade, and mainstream reactionary views (for insights, see a VOX-Pol blog post dedicated to this topic). Discussions about establishing a far-right influencer agency, uncovered by Correctiv, already indicate that more resources will likely be invested in the creator economy and in native formats with weak or inconsistent disclosures – probably even more so now that political advertising will be restricted. This dynamic places greater importance on a platform’s content moderation practices to ensure that organic posts and discussions adhere to the law and community guidelines.
Dynamic 3: Rising post-moderation demands as pre-moderation and approval mechanisms fall away
Ad libraries and sponsor verification processes were tools to check the sponsor of a political ad in advance. Without political ads, political content does not disappear; it will most likely shift into organic posts. While the ban on political ads removes the transparency obligations attached to them, platform moderation duties and legal requirements based on, for instance, the DSA remain and may change in two respects: First, disinformation as an illegitimate content form is likely to demand more proactive identification, as pre-checks and approval mechanisms typically do not apply to organic content. Second, undisclosed advertising can take the form of covert political campaigns. This increases the complexity of moderation, because platforms must better distinguish between genuine expression and covert political advertising.
Dynamic 4: Attempts to disguise political content as apolitical advertising
Manipulative actors can be expected to look for loopholes to circumvent regulation and exploit grey areas. A comparable case is already known, as the BBC reported: after the account of a British far-right group was banned on Facebook, another account with a more generic name purchased an ad calling on users to sign the group’s petition against a mosque. The TTPA’s broader definitions and traceability obligations aim to close such gaps; nonetheless, actors may try to disguise their political ads as apolitical and run them as regular ads. This circumvention technique is conceivable and seems far more likely among fringe actors than among established democratic organisations.
Sophia Rothut is a predoctoral researcher at the Department of Media and Communication of Ludwig-Maximilians-University Munich. Her research focuses on online radicalisation, mainstreaming of radical ideas, and political/far-right influencers, as well as (regulatory) approaches to counter harmful online content.
Brigitte Naderer is a postdoctoral researcher at the Center for Public Health, Medical University of Vienna. Her research focuses on media effects on children and adolescents, online radicalisation, and media literacy.