
TED Webinar “Safeguarding Democracy and Elections in the Age of AI”: Key Takeaways

AI’s Nuanced Role in Democracy 

Artificial intelligence is transforming democratic processes, both empowering citizens and enabling manipulation. A recent webinar hosted by the Team Europe Democracy Secretariat, with experts from International IDEA, Article 19, Safer Internet Lab, and the German Federal Ministry for Economic Cooperation and Development (BMZ), explored AI’s impact on elections and civic life. The event was moderated by Julia Keutgen, Programme Manager at International IDEA, and opened with remarks by Jakob Rieken, Senior Policy Officer for Governance at the BMZ.

Empowerment, Exploitation, and Information Pollution

AI’s potential to enhance democracy is real: In Taiwan, civic tech initiatives use online deliberation tools to help citizens discuss and provide input on legislation. In Kenya, civic tech projects use digital tools for election monitoring and legislative tracking. Translation and speech technologies can help make political discourse more inclusive by bridging language barriers, particularly in multilingual societies. Yet AI also amplifies threats. The rise of information pollution, where citizens are bombarded with a mix of truth, half-truths, and outright falsehoods, makes it nearly impossible to discern fact from fiction. This phenomenon, analysed in International IDEA’s recent report and observed in 2024 elections from Bangladesh to South Africa, corrodes trust in democratic institutions, depresses voter participation, and fuels polarization. In Romania, Indonesia, and Mexico, AI-generated or AI-amplified disinformation has been reported around elections, eroding public trust. While disinformation does not need AI to be effective, AI raises the stakes and supercharges the sophistication of campaigns. Authoritarian regimes use AI for mass surveillance and repression, while women and minorities face targeted harassment and abuse. The “liars’ dividend”, whereby genuine information is dismissed as AI-generated, further undermines trust.

Global Power and Governance Gaps

Starting the conversation, the German Federal Ministry for Economic Cooperation and Development (BMZ) emphasized the pressing need to update the “guardrails of democracy” in light of AI’s impact on politics and fundamental rights. BMZ called for a rights-based, people-centred approach consistent with international standards. Barbora Bukovská, Senior Director for Law and Policy at Article 19, shared insights from a recent BMZ-commissioned report and highlighted that the AI landscape is shaped by geopolitical competition, with the US, China, and EU leading development. This rivalry often sidelines democratic oversight, leaving the Global South vulnerable. A handful of tech companies dominate AI, prioritizing profit over public interest and sometimes compromising ethical research. For many countries, AI governance frameworks from the West are ill-fitted. Local expertise and inclusive, context-sensitive regulations are essential to prevent exploitation and ensure accountability. 

Case Studies: Lessons from Around the World

Indonesia - Disinformation and Online Manipulation: Domestic “buzzer” networks use automated and coordinated accounts to spread propaganda, often targeting women and minorities. Alia Yofira Karunian from the Safer Internet Lab and the PurpleCode Collective highlighted how viral cases, such as the “ibu berjilbab pink” (“woman in the pink hijab”) protester, illustrate the gendered nature of online attacks and the spread of manipulated content. The case also demonstrates that many AI detection tools, trained primarily on English data, perform poorly in Indonesia’s multilingual online space, raising the risk of false positives and over-blocking.

Mexico and South Africa - Collaboration in Action: Thijs Heinmaa from International IDEA highlighted that Mexico’s electoral authorities partnered with civil society and tech platforms to create fact-checking hubs and the INE WhatsApp bot “Inés.” South Africa’s Electoral Commission (IEC) signed cooperation agreements with Google, Meta, TikTok, and civil society groups to combat disinformation through initiatives like the Real411 reporting platform. While promising, these initiatives reveal the limits of voluntary agreements with tech platforms, underscoring the need for binding regulations. 

Solutions: Regulation, Education, and Collaboration 

  1. Regulation and Oversight: Effective rules must be developed through multi-stakeholder processes involving civil society and independent institutions. Clear guidelines for AI use in elections are critical, such as mandatory disclosure of AI-generated content and bans on surveillance that violates privacy.
  2. Education and Media Literacy: Investing in digital literacy helps citizens critically engage with information. Supporting independent journalism is vital for maintaining public trust and information integrity.
  3. International Cooperation: Global standards rooted in human rights can mitigate cross-border risks. Strengthening the capacity of election bodies, law enforcement, and civil society is crucial, especially in resource-limited regions. 

The Way Forward 

Safeguarding democracy in the age of AI requires a balanced approach: harnessing AI’s potential for good while mitigating its risks. This means ethical deployment, robust regulation, digital literacy, and global cooperation. The goal is not just to protect democracy from AI’s dangers, but to shape its development in ways that empower citizens and strengthen democratic resilience. Read more about the webinar here: Democratic Resilience in the Age of Artificial Intelligence–TED joint webinar | Capacity4dev

About the author

Alisa Schaible
Programme Assistant