Microtargeted Propaganda by Foreign Actors

Commentary by Ronan Ó Fathaigh, Frederik Zuiderveen Borgesius and James Shires

Microtargeting involves collecting information about people and using that information to show them targeted political advertisements. Such microtargeting enables advertisers to tailor ads to specific groups of people, for instance people who visit certain websites or share specific characteristics. Microtargeted propaganda can be more effective, more efficient and more hidden than traditional propaganda. Consequently, its use, especially by foreign actors, poses significant risks to democratic politics. EU lawmakers have several options for countering microtargeted propaganda by foreign actors, ranging from prohibition to more nuanced limits and transparency requirements. Any regulation of microtargeted propaganda by foreign actors must respect the right to freedom of expression.

Fears of foreign interference have arisen at key democratic moments across Europe over the last few years. These moments include the Brexit referendum on the UK leaving the EU in 2016; the French presidential elections in 2017; and subsequent polls in Germany, the Netherlands and elsewhere – as well as elections for the European Parliament itself in 2019. Foreign interference can take many forms, including hacking, leaking, and openly spreading disinformation and confusion. The latter includes creation and use of microtargeted propaganda: the collection of information about people and use of that information to show them targeted political advertisements.

Microtargeted propaganda was used by actors associated with the Russian government in a notorious attempt to influence the 2016 US Presidential election. Since then, it has become a regular part of scenario planning by officials, academics and security experts seeking to counter future attempts at foreign interference, including over the current crisis in Ukraine. Traditional propaganda was widespread in the Ukraine conflict in 2014 – during which the OSCE Representative on Freedom of the Media issued a Communiqué on propaganda in time of conflict – and we might now expect such propaganda to be carefully microtargeted.

However, the use of microtargeted propaganda by foreign actors has received scant attention in the legal literature, especially in a European context. This post – based on a longer, recently published article – explores if and how microtargeted propaganda differs from other foreign propaganda, and goes on to identify how lawmakers in Europe might mitigate the risks of microtargeted propaganda.

Microtargeted Propaganda: What’s New?

We define propaganda as: (i) disseminating information (ii) which is designed to (iii) mislead a population; (iv) interfere with the public’s right to know and the right of individuals to seek and receive, as well as to impart, information and ideas of all kinds; and (v) undermine the public’s trust in information, the media or public institutions. Compared to traditional propaganda, microtargeted propaganda has three novel aspects. 

First, microtargeting can be more effective. The sender does not have to spend money and time on reaching people who are not susceptible to its messages. And microtargeting enables the sender to adapt messages to the interests or fears of certain groups in the receiving country. For example, foreign actors could send a person who is concerned about immigration a series of targeted ‘news articles’ describing the approaching arrival of tens of thousands of immigrants. Because such messages are personally relevant, people are more likely to be influenced. Some studies have found that microtargeting people with political messages on specific issues increased support for the candidate sending the message and decreased support for the opponent.

Second, microtargeting can be more efficient. Microtargeting enables propaganda to be honed gradually using A/B testing (comparing the impact of multiple similar ads) before the most effective ads are served to larger populations. A/B testing usually involves automated changes to the advertisement and use of online ad auctions to make nearly instantaneous judgements about its ideal audience. Consequently, the ease with which messages are switched and replaced is qualitatively different to previous forms of political advertising.

Importantly, this concept of efficiency applies only to the ease with which customised propaganda narratives can be changed, duplicated or replaced. It does not seek to address whether microtargeted propaganda is efficient in the broader but equally important sense of return on investment: for a given euro or dollar, how much influence do you purchase with microtargeted propaganda compared to traditional means? The shift in the advertising industry towards microtargeted online advertising suggests that this broader concept of efficiency is also met by microtargeting – but we do not yet have sufficient data to confirm that this is the case.
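The A/B-testing loop described above can be illustrated with a minimal sketch. The function, variant names and engagement rates below are invented for illustration; real ad platforms measure engagement from live responses rather than the simulated probabilities used here.

```python
import random

def run_ab_test(variants, audience_size, rng):
    """Serve each ad variant to a small sample and measure engagement.
    `variants` maps a variant name to a hypothetical true engagement
    rate; real systems would observe clicks rather than simulate them."""
    results = {}
    for name, engagement_rate in variants.items():
        engaged = sum(1 for _ in range(audience_size)
                      if rng.random() < engagement_rate)
        results[name] = engaged / audience_size
    # The best-performing variant would then be served at scale.
    return max(results, key=results.get), results

rng = random.Random(42)  # fixed seed so the simulation is repeatable
variants = {"ad_a": 0.02, "ad_b": 0.05}  # hypothetical engagement rates
winner, observed_rates = run_ab_test(variants, audience_size=10_000, rng=rng)
```

In practice this comparison happens nearly instantaneously inside ad-auction infrastructure, which is what makes the rapid switching and replacement of messages qualitatively different from earlier political advertising.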

Third, microtargeted propaganda can remain more hidden than traditional propaganda. Attribution of online foreign interference is a highly complex task. Specific pieces of content must be traced back to (often false) identities on social networks, then these identities are grouped together to identify what Facebook calls ‘coordinated inauthentic behavior’. Finally, both ‘within network’ and wider indicators, such as financial transfers, are used to identify the original actor. For example, apparent Ghanaian NGO activity in 2020 was eventually traced to companies connected to the same Russian individual associated with the Internet Research Agency that interfered with the 2016 US Presidential election. Microtargeted propaganda inherits all the difficulties of attribution associated with disinformation and cybersecurity more broadly.
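The grouping step of the attribution process described above – linking identities that share infrastructure into candidate clusters of coordinated behaviour – can be sketched as follows. All account names and indicator values below are invented for illustration; real attribution combines far richer within-network and external signals, such as financial transfers.

```python
from collections import defaultdict

def group_accounts(accounts):
    """Cluster accounts that share any indicator (e.g. an IP address or
    payment method) into candidate coordinated groups, using union-find."""
    by_indicator = defaultdict(list)
    for name, indicators in accounts.items():
        for ind in indicators:
            by_indicator[ind].append(name)

    parent = {name: name for name in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # Link every pair of accounts that share an indicator.
    for linked in by_indicator.values():
        for other in linked[1:]:
            parent[find(other)] = find(linked[0])

    clusters = defaultdict(set)
    for name in accounts:
        clusters[find(name)].add(name)
    # Only multi-account clusters suggest coordination.
    return [c for c in clusters.values() if len(c) > 1]

accounts = {  # hypothetical accounts and shared infrastructure
    "acct1": {"ip:203.0.113.7", "pay:card-11"},
    "acct2": {"ip:203.0.113.7"},
    "acct3": {"pay:card-11", "ip:198.51.100.2"},
    "acct4": {"ip:192.0.2.9"},
}
clusters = group_accounts(accounts)
```

Here acct1, acct2 and acct3 end up in one cluster because they are chained together by a shared IP address and payment card, while acct4 stands alone; the final attribution step – tying such a cluster to an original actor – remains the hardest part.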

Furthermore, it is also more difficult to monitor microtargeted propaganda. While researchers or observers could investigate targeted propaganda in the pre-social-media era by analysing newspapers or television broadcasts, on proprietary social networks only the targeted groups see the messages – they are not served to anyone else. Publicly available advertisement (ad) libraries might be part of the solution. However, these libraries are not always accurate and provide limited information about their selection criteria, meaning that foreign actors looking to spread propaganda are still likely to profit from the opacity of microtargeting to avoid detection, journalistic scrutiny and countermeasures.

What could European Lawmakers Do?

Existing data protection law, such as the General Data Protection Regulation (GDPR) in Europe, helps mitigate the threat of microtargeted foreign propaganda. The GDPR imposes obligations on organisations that use personal data (‘data controllers’) and gives rights to people whose personal data are used (‘data subjects’). The GDPR applies as soon as personal data (‘information relating to an identified or identifiable natural person’) are processed. The GDPR defines processing broadly; almost everything that can be done with personal data falls within the processing definition.

The GDPR makes microtargeting more difficult and more expensive in the EU than it is in, for instance, the US, which does not have a GDPR-like statute. In the US, many files on individuals can be bought from data brokers and party affiliations are often public, whereas in most EU countries it is very hard to buy voter files. For countries without GDPR-like rules on the books, adopting such rules would be an essential step towards a regulatory environment that can protect their society from microtargeted propaganda.

Propagandists can, however, target advertising without collecting their own data, for instance by advertising via Facebook. Facebook regularly faces legal challenges in Europe under data protection law, but enforcement of the GDPR against Facebook could be stronger. In December 2020, the European Commission published a proposal for a new Digital Services Act (DSA) that would place a raft of new obligations on online platforms in hopes of ensuring a ‘safe, predictable and trusted online environment’ and that fundamental rights are ‘effectively protected’ online. The added effect of the DSA is yet to be seen, as amendments by the European Parliament to the Commission’s proposal are still under consideration at the time of writing. More generally, it is well documented that legal requirements of transparency and consent have only limited effect, so other options should also be considered.

The most draconian option would be to ban microtargeted propaganda, by both domestic and foreign actors, outright. Such a ban could be framed along the lines of the current wave of false information laws sweeping the world. For example, under Singapore’s Protection from Online Falsehoods and Manipulation Act 2019, it is an offence to communicate a false statement that is ‘likely’ to influence the outcome of an election or a referendum; to ‘incite feelings of enmity, hatred or ill-will’ between different groups of people; or to ‘diminish public confidence’ in the performance of the Singapore government or any government agency. However, such a policy option raises acute freedom of expression issues, as discussed below.

A more targeted form of regulation could be framed along the lines of Canada’s Elections Modernization Act 2018, which sought to prohibit online political advertising by certain foreign actors. Under the 2018 act, it is illegal to sell advertising space to a foreign political party, a foreign government or an agent or mandatary of a foreign government ‘for the purpose of enabling that person or entity to transmit an election advertising message or to cause an election advertising message to be transmitted’.

This type of regulation would also raise problems in the context of EU Member States, in that it would prohibit (for example) a Dutch political party from buying online ads in Belgium or France to support a similarly aligned party. A related approach might be to prohibit foreign political microtargeting from outside the EU, but allow political microtargeting between EU Member States. However, all EU Member States are also members of the Council of Europe, a larger 47-country international organisation, which includes non-EU countries such as Russia, Ukraine and the United Kingdom. This means that all EU member states are subject to the European Convention on Human Rights, which guarantees freedom of expression across borders between its member states. The ECHR guarantees the right to ‘receive and impart information and ideas without interference by public authority and regardless of frontiers’. The European Court of Human Rights has held that the right to freedom to receive information prohibits a government from restricting a person from receiving information that others wish to impart to him or her. Any restriction on the privileged category of political speech – which includes advertising – must therefore be ‘narrowly interpreted’, and its necessity ‘convincingly established’ by the relevant government.

As an alternative to legislation, EU countries could adopt voluntary codes of conduct between governments and online platforms. In 2018, several platform providers, including Facebook, Google and Twitter, signed the EU Code of Practice on Disinformation (2018). In March 2021, in the run-up to the Dutch parliamentary elections, a voluntary code of conduct on transparency in online political advertisements was agreed to by the Dutch Ministry of the Interior and Kingdom Relations, a number of online platforms, Dutch political parties and the International Institute for Democracy and Electoral Assistance (an intergovernmental organisation to support and strengthen democratic institutions). The major drawbacks of such codes of conduct are that they lack transparency in terms of actual implementation and delegate regulation to platforms.

An additional approach would seek to bring transparency to microtargeted propaganda, allowing journalists, regulators and others to see who is paying for ads from abroad. Regulation could, for instance, impose an obligation on those purchasing ads to include a disclaimer about their identity, or oblige online platforms to verify the identity of those purchasing ads. The French 2018 Law on Manipulation of Information obliges online platforms to provide users with ‘fair, clear and transparent’ information on the identity of the person or group which has paid for promoted content relating to a ‘debate of general interest’.

Overall, when regulating advertising and microtargeted propaganda, lawmakers must respect the right to freedom of expression. While lawmakers can, in specific situations and under certain circumstances, limit or even prohibit certain types or sources of microtargeted propaganda, measures to improve transparency may be the best option in the short term.

This blog post is based on a new paper (open access) in the Maastricht Journal of European and Comparative Law (available here).



About the Authors

Ronan Ó Fathaigh

Ronan Ó Fathaigh is a Senior Researcher at the Institute for Information Law (IViR), University of Amsterdam, and specialises in fundamental rights, in particular freedom of expression and privacy. He is a member of the Digital Transformation of Decision-Making (DTDM) research initiative at the Amsterdam Law School, and a member of the Digital Services Act (DSA) Observatory at IViR.


Frederik Zuiderveen Borgesius

Frederik Zuiderveen Borgesius is ICT and Law Professor at Radboud University Nijmegen, where he is affiliated with the iHub, Radboud's interdisciplinary research hub on digitalisation and society. His research mainly concerns human rights such as the right to privacy, to the protection of personal data, and to non-discrimination in the context of new technologies.

James Shires

James Shires is an Assistant Professor in Cybersecurity Governance at the Institute of Security and Global Affairs, University of Leiden, and a Fellow with The Hague Program for International Cyber Security and the Cyber Statecraft Initiative at the Atlantic Council. He has written widely on issues of cybersecurity and international politics, including cybersecurity expertise, digital authoritarianism, spyware regulation, and hack-and-leak operations.
