
Lessons on AI from Francophone and Lusophone Africa to the world #6

AI for Electoral Actors
Digital spaces have become key arenas for citizens to receive and share information, turning social media platforms into primary sources of political communications.

In this digital era, millions of voters can easily access civic education, but are also exposed to waves of misinformation and disinformation. With the rapid rise of generative artificial intelligence (AI), the challenge has become even more complex: deepfakes, automated propaganda, and biased content curation now threaten to distort democratic discourse and undermine public trust in elections.

Against this backdrop, International IDEA hosted the fifth and final workshop in its AI for Electoral Actors series on 30 September – 1 October in Dakar, Senegal. Organized in collaboration with the Direction Générale des Élections du Sénégal (DGE) and supported by Microsoft and OpenAI, the event gathered over 40 participants representing Electoral Management Bodies (EMBs) and civil society organizations (CSOs) from West Africa and other francophone and lusophone African countries.

Building on insights from earlier regional workshops in Kuala Lumpur, Tirana, Johannesburg, and Panama City, the Dakar meeting turned its focus to one of the most pressing dimensions of AI’s influence on democracy: content moderation and the regulation of AI-generated content during elections.

AI, Social Media, and the New Battleground for Electoral Integrity

Participants examined how political information reaches voters through AI-powered social media algorithms, which shape the content they receive. Driven by financial incentives, these platforms rely heavily on engagement, often amplifying content that triggers powerful emotional responses and is therefore more likely to prompt users to like, comment, or share a post. This creates a negative spiral: the more eye-catching or sensationalist content is, the more it is pushed onto users. Engagement-based algorithms inherently nurture misinformation and polarizing narratives, as they tend to reward content that oversimplifies or dramatizes otherwise nuanced and complex information in ways that resonate instinctively with users.

Workshop speakers noted that, in many West African countries, these algorithms have effectively replaced traditional newsrooms as the gatekeepers of political discourse. This “attention economy,” as discussed during the workshop, rewards virality over veracity.

Participants agreed that EMBs face two key obstacles: they must understand how AI-driven systems influence information dissemination, and they must ensure trustworthy information is accessible to all. “The most engaging content is not always the most democratic,” one participant remarked, a sentiment that echoed across sessions.

A recurring concern throughout the discussions was the growing prevalence of AI-generated deepfakes and their disproportionate impact on women in politics. Women are overwhelmingly targeted by deepfakes, often through non-consensual or sexually explicit imagery intended to humiliate, discredit, and silence them. This trend is particularly alarming in the context of women’s political participation. Due to persistent gender norms that extend into the online sphere, female politicians are attacked in fundamentally different ways than their male counterparts. While men tend to face criticism focused on their political views or actions, women are more often targeted for their appearance, family life, or personal choices. Deepfakes amplify these gendered forms of abuse, making the attacks deeply personal and psychologically damaging, ultimately deterring women from engaging in politics and infringing upon their fundamental rights.

Pillar 5: AI Content Curation and Moderation

In February 2025, a deepfake audio clip surfaced on TikTok and WhatsApp, falsely attributed to RFI and France 24. The four-minute recording, presented as a news commentary, appeared to question a report by Senegal’s Cour des Comptes, a key accountability institution, implying misconduct in the handling of public funds. The clip quickly went viral across multiple platforms and messaging groups. Discussing this scenario, CSOs called for stronger public-private partnerships between fact-checkers, regulators, and EMBs to develop rapid-response systems capable of identifying and countering such harmful content. Participants echoed these views, agreeing that such incidents directly threaten fundamental rights that must be protected.

The discussions in Dakar highlighted the urgent need for context-sensitive and adaptable content regulation frameworks that safeguard democratic values without stifling free expression. Participants debated proportional and graduated approaches to regulation, emphasizing transparency, accountability, and open communication with the public.

Panelists pointed out that disinformation campaigns often exploit gaps in institutional capacity. Strengthening local collaboration between EMBs, civil society, and media actors was therefore identified as key to building resilience and addressing these gaps through a whole-of-society approach.

The workshop also stressed the importance of digital literacy and gender-responsive policies, urging EMBs to invest in training programs that equip electoral officials to recognize AI-generated content and respond effectively to online manipulation.

However, participants also underscored a structural challenge that complicates these efforts: the limited access of Electoral Management Bodies to content moderation and curation mechanisms within major social media platforms. Because most platforms operate as closed ecosystems, their algorithms, data-sharing protocols, and moderation tools remain largely opaque to external actors. This lack of transparency prevents EMBs from understanding how electoral content is promoted, suppressed, or misclassified online. Without structured partnerships and data-sharing agreements with Very Large Platform Holders (VLPHs), it becomes nearly impossible for EMBs to track or respond to coordinated disinformation campaigns in real time. As several participants noted, meaningful progress in countering AI-driven electoral disinformation will require not only stronger national regulation but also deeper cooperation with these global technology companies.

While the conversation around AI regulation is gaining global momentum, with instruments such as the EU AI Act offering potential models, participants in Dakar emphasized that African realities must shape African solutions. The African Union’s Continental AI Strategy, adopted in July 2024, reflects this ideal by promoting a “Local First” approach, ensuring that AI development is grounded in Africa’s own contexts, values, and priorities. The strategy envisions a prosperous and integrated Africa where responsible and ethical AI drives inclusive growth, social resilience, and cultural renaissance.

Rather than importing frameworks designed for other regions, the AU’s approach calls for context-sensitive governance, rooted in human rights, inclusion, and transparency, in service of Agenda 2063’s vision of “an integrated, prosperous and peaceful Africa.” Its governance model promotes a multi-tiered regulatory ecosystem linking continental, regional, and national institutions to ensure AI systems are safe, fair, and accountable. It also calls on member states to adapt existing laws (from data protection and consumer rights to gender inclusion and anti-discrimination) to cover AI-related risks such as bias, misinformation, and digital exploitation.

As discussed throughout the series, imported regulatory frameworks may fail to capture the region’s linguistic diversity, informal communication networks, and deep-rooted mistrust of institutions. The imported nature of AI systems only worsens these concerns, as they may have been trained on biased datasets that do not reflect the lived experiences of citizens in West Africa. This is why the AU strategy highlights that regulation alone is insufficient without investment in infrastructure, education, and data sovereignty. It urges African countries to develop home-grown AI strategies that strengthen datasets, computing capacity, and skills development, while ensuring equitable access to technology. Regional cooperation is central: the AU envisions shared data exchange frameworks, joint research platforms, and AI ethics boards to monitor emerging risks, including deepfakes and algorithmic bias.

In this spirit, EMBs and policymakers were encouraged to develop home-grown strategies that blend technological innovation, civic education, and community-based oversight. Only through such inclusive approaches can AI be harnessed as a force for democratic strengthening rather than disruption.

As the final stop in the series, the Dakar workshop marked another step in an ongoing learning process. Over the course of these five regional meetings, participants from across Africa, Asia, Latin America, and Europe have collectively shaped a vision for responsible, transparent, and equitable AI integration in electoral processes.

The conversations in Senegal reaffirmed a central truth: protecting electoral integrity in the age of AI requires collective vigilance, informed regulation, and cross-sectoral, multi-stakeholder cooperation.

In the words of one participant, 

“Disinformation thrives where trust is weak. Building resilience means rebuilding trust – between institutions, citizens, and technology itself.”

Catch up on the article series so far by reading the first article to get the full picture of a democratic AI foundation, the second article on AI literacy and the workshop in Kuala Lumpur, the third article on AI ethics and the workshop in Tirana, the fourth article on AI in electoral management and the workshop in Johannesburg, and the fifth article on AI in electoral management and the workshop in Panama City. Also, make sure to read the bonus article on the EU AI Act and AI literacy.

About the authors

Enzo Martino
Programme Assistant