
What does electoral AI look like in practice? #4


While much of the world still speaks of Artificial Intelligence (AI) as a glimpse into the future, in African elections that future has already arrived. From the Bimodal Voter Accreditation System (BVAS) in Nigeria to WhatsApp-based voter education chatbots developed by Kenya’s Independent Electoral and Boundaries Commission, electoral management bodies (EMBs) across the continent are actively integrating AI into their operations, transforming how elections are administered in real time.

The implementation of this technology is not without risks. Overreliance on algorithmic systems without due diligence can create blind spots that pose serious risks to the freedom, fairness, and integrity of elections. At the same time, AI’s potential to improve the efficiency of electoral administration is undeniable. AI is here to stay in the realm of elections; all actors should engage in finding safeguards and mitigation strategies for its negative impacts, and plan for how to harness its potential. The question that remains is: what can we learn about electoral use cases of AI and their implications to ensure that they satisfy essential democratic values?

Even before AI entered everyday conversation, EMBs were already applying its forerunners to electoral administration. Automated systems were quietly working behind the scenes, flagging anomalies in voter rolls, safeguarding online portals from phishing attacks, and digitizing identity verification through biometric systems. These supporting technologies paved the way for the current wave of AI integration, although there is still substantial progress to be made in fully realizing the potential of electoral technology and AI.

Recent AI advancements have expanded EMBs’ options to automate tasks across the entire electoral cycle. According to an International IDEA survey, EMBs in Africa, Asia-Pacific and the Balkans already report using AI for multilingual voter education through generative chatbots, voter authentication, social media monitoring and data analysis, and election results management. While these uses are still limited, many EMBs express high interest in expanding the use of AI in their work. While AI offers the potential for more accessible, efficient, and responsive elections, it also carries risks that must be carefully assessed.

As outlined in International IDEA’s AI for Electoral Management report, successful AI integration requires robust digital foundations, including internal infrastructure, staff digital literacy, strong cybersecurity protocols, and inclusive datasets. Deploying AI models without these prerequisites significantly raises the risk of failure or harm, and many EMBs still have a long way to go in establishing the conditions needed to implement AI safely and productively. Before embracing AI, electoral institutions must first assess whether essential preceding steps have been completed, from training staff on how AI functions and where it falls short, to establishing standard operating procedures for cybersecurity breaches and auditing voter data for gaps or vulnerabilities.

Beyond the technical, EMBs must also undertake risk analyses for each AI use case, as some applications are inherently riskier than others. For instance, using AI for signature matching is generally more predictable than deploying EMB chatbots trained on flawed data, which may unintentionally respond to voter queries with misleading information about the election. Fortunately, frameworks like the EU AI Act offer useful guidance for categorizing AI tools by risk level, distinguishing high-risk applications (e.g., facial recognition for voter ID) from lower-risk tools (e.g., limited FAQ chatbots). Encouraging the context-sensitive use of these technologies can help safeguard democratic integrity while leveraging the benefits of innovation.

Pillar #3: AI to improve electoral management

At the third regional workshop hosted in Johannesburg, South Africa, anglophone EMBs and civil society organizations from across the continent engaged in contemplative conversations about the current and potential future roles of AI in their operations. A recurring and essential question emerged: when does using AI solve a problem or improve a process?

This question prompted deeper reflection on the appropriate use of AI and when traditional methods may still be the better choice. Amid the rapid rise of AI, there has been little time to reflect on its growing implications. While the technology has immense potential, it is important to remember that it is not a panacea. Sometimes existing tools or simpler solutions are better suited to the context, particularly when digital infrastructure is limited, data is incomplete, risks cannot be mitigated, or the use case requires human judgement that technology cannot yet reliably replicate.

This reflection is imperative for EMBs, which are frequently approached by commercial actors offering AI solutions as one-size-fits-all products. These solutions are often conceived with insufficient understanding of local electoral environments, overlooking or misjudging the specific needs and challenges of each context. Tools that perform well in test environments may underperform or even cause harm when deployed at scale, particularly in low-resource or multilingual contexts.

In a case study on AI vendors during the workshop, participants emphasized the need to internally assess whether a proposed technology addresses a genuine need, and to thoroughly vet service providers for demonstrable experience with elections. Before deployment, participants also stressed the importance of understanding the reliability and limits of any prospective system, whether the service includes human oversight and validation, and whether it complies with local data protection and management laws.

Healthy skepticism of AI emerged as a common theme during the workshop. While participants acknowledged AI’s potential to enhance electoral administration, they also voiced concerns about institutional readiness. Several EMBs were already conducting internal needs analyses or investing in building digital capacity, but many still felt ill-equipped to respond to the emergence of AI, particularly regarding its use by non-EMB actors seeking to influence elections.

Participants raised several instances in which they had encountered attempts to destabilize the integrity of their electoral environments, and several expressed concern about their limited capacity to counter these threats alone. This highlights the dual challenge for EMBs: not only must they consider how to safely deploy AI tools internally, but they must also develop strategies to defend against disruptive uses of the same technology by others. In this respect, civil society is a key vanguard supporting EMBs in identifying and mitigating electoral interference by external actors.

The strength of cross-sectoral, multistakeholder networks in responding to these challenges was echoed throughout the workshop. By connecting the capacities and diverse expertise of civil society, academia, technology experts and the media, EMBs can build more resilient systems to counteract threats to electoral integrity. Such collective efforts help distribute the responsibility of safeguarding elections while also fostering transparency, trust, and public accountability: cornerstones of a democratic foundation for electoral AI.

Ultimately, the stakes are high. As AI becomes more deeply embedded throughout the electoral cycle, the associated risks grow as well. Yet the opportunities to support fairer, more inclusive, and more efficient elections are equally significant. Striking the right balance requires ongoing learning, open dialogue, and a strong commitment to ethical, context-sensitive innovation.

The workshop in Johannesburg made it clear: African EMBs are not passively adopting AI; they are actively shaping its use through thoughtful reflection, collaboration, and vigilance, setting a global example. The path ahead requires a multistakeholder approach to develop institutional AI frameworks with identifiable mechanisms for accountability and transparency. As one participant aptly put it: “The currency of EMBs is trust. If trust is lost, it breaks the process.”

The upcoming installment of the article series will tackle the next pillar of democratic electoral AI: the evolving landscape of AI regulation and legislation. It will examine how governance frameworks must be context-sensitive and grounded in human rights to effectively address electoral AI. The discussion will draw on insights from the fourth AI for Electoral Actors workshop, which convened representatives from Latin America in Panama City in late May 2025.

Catch up on the article series so far by reading the first article to get the full picture of a democratic AI foundation, the second article on AI Literacy and the workshop in Kuala Lumpur, and the third article on AI ethics and the workshop in Tirana.

About the authors

Cecilia Hammar
Programme Assistant, Digitalization and Democracy
Enzo Martino
Intern, Digitalization and Democracy