This contribution discusses the challenges posed by AI-generated deepfakes in elections and the efforts being made to combat them, in particular the need for collective action and collaboration among governments, civil society and industry to safeguard democracy in the face of rapidly advancing technology.
2024 is an extraordinary year for both democracy and technology. More countries and people will participate in voting for their elected leaders than ever before in human history. Simultaneously, Artificial Intelligence (AI) development is advancing rapidly, offering remarkable benefits but also enabling malicious actors to create realistic deepfakes. The contrast between the promise and peril of new technology has never been more pronounced.
New generative AI tools make it possible to create realistic and convincing audio, video and images that simulate or alter the appearance, voice or actions of real people. The costs of creation are low, and the results are stunning. AI introduces a new and potentially more dangerous form of the online manipulation – from fake websites to social media bots – that the world has already been facing for over a decade. The public has watched this problem expand quickly in recent months as the risks to our elections have become apparent. Ahead of the New Hampshire primary, US voters received robocalls that used AI to fake the voice and words of President Biden. This followed the documented release, beginning in December, of multiple deepfake videos of UK Prime Minister Rishi Sunak.
The challenges we see emerging demand collective action from all who value democracy: governments, civil society and industry. Recently, at the Munich Security Conference, the tech sector took an important step forward by announcing the Tech Accord to Combat Deceptive Use of AI in the 2024 Elections. Over 20 companies committed to a clear goal: to counter video, audio and images that manipulate the appearance, voice or actions of political candidates, election officials and other stakeholders.
The Accord outlines eight specific commitments. Each deserves careful consideration, but three critical aspects stand out:
- Addressing deepfake creation: The commitments will make it harder for malicious actors to misuse legitimate tools to create deepfakes. In part, this requires content generation companies to fortify safety measures in their AI services. Implementing the commitments will involve risk assessments, the introduction of robust controls, red-team analysis, pre-emptive classifiers and automated testing. Companies will also focus on the authenticity of content by advancing content provenance and watermarking, making it easier to identify what is real and what has been manipulated; a simplified sketch of the provenance idea follows this list.
- Responding to deepfakes: The reality is that bad actors will invest in creating deceptive election-related content, and understanding where they are deploying this content is essential. Microsoft is already driving implementation of these commitments by leveraging our AI for Good Lab and Microsoft Threat Intelligence Center teams for better detection. We have also launched a web page where political candidates can report deepfake concerns. Additionally, we have committed to expanding the Digital Safety Unit to address abusive online content, including deepfakes, while preserving free expression.
- Transparency and resilience: The final three commitments within the Accord address the need to foster greater general awareness of the challenges we face. We believe enhanced corporate transparency is an important element of that; Microsoft, for example, will document its findings in an annual transparency report on how we apply our policies. We are also committed to ongoing collaboration with global civil society organisations, academics and subject matter experts, who have long worked to advance democratic rights. Finally, public awareness of AI is essential to creating a more resilient society, which is why the Tech Accord signatories committed to furthering public education about the fundamentals and risks of AI.
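To make the provenance idea above more concrete, here is a minimal sketch in Python of the core mechanism: a publisher signs a cryptographic hash of the media at creation time, so anyone can later verify that the bytes have not been altered. This is an illustration under simplifying assumptions, not how any Accord signatory actually implements provenance; real standards such as C2PA embed far richer signed manifests in the media itself, and the function names here, along with the use of the third-party cryptography package, are ours.

```python
# Illustrative provenance check: sign a hash of the media so that any later
# alteration of the bytes invalidates the signature. (A simplified stand-in
# for C2PA-style manifests; uses the third-party 'cryptography' package.)
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(media: bytes, key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign a SHA-256 digest of the media at creation time."""
    return key.sign(hashlib.sha256(media).digest())


def verify_media(media: bytes, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Platform side: True only if the bytes still match the signed digest."""
    try:
        pub.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False


key = Ed25519PrivateKey.generate()
original = b"...image or video bytes..."
sig = sign_media(original, key)

assert verify_media(original, sig, key.public_key())             # authentic
assert not verify_media(original + b"x", sig, key.public_key())  # tampered
```

Watermarking approaches the same goal from the opposite direction: rather than a detached signature, the signal is embedded in the pixels or audio itself so that it can survive re-encoding. The two techniques are complementary, which is why the Accord commits signatories to advancing both.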
All in all, the Accord helps move the tech sector further and faster to address this pivotal issue at a unique point in time for elections around the world. However, the challenge is immense, requiring collective ambition and adaptability. Bad actors will be innovating as well, and the underlying technology is continuing to change quickly. We’ll need to continue to learn, innovate and adapt.
With that in mind, while the Accord represents a significant step, it cannot be the sole solution to safeguarding elections. Voluntary initiatives by the technology sector cannot be the only line of defence. The Accord builds on established government initiatives, such as the voluntary White House commitments; the European Union’s Digital Services Act, with its focus on the integrity of electoral processes; the Code of Practice on Disinformation; and the upcoming Artificial Intelligence Act, which will introduce additional transparency requirements for providers and deployers of AI systems that generate or manipulate content.
These are important efforts, but they will not be enough. Further collaboration with and among elected leaders and democratic institutions will be crucial if we are to succeed. This is why it was exciting to hear the British Deputy Prime Minister announce plans to work on a government compact in this space at this year’s Summit for Democracy in Seoul. Only by working together can we preserve timeless values and democratic principles in a time of enormous technological change.
Thumbnail image: credit to @johnnyhammer on Unsplash