
What to Expect: Digital Technology and Democracy in Europe in 2020

February 13, 2020 • By Thomas Heinmaa
Image Credit: Geralt@Pixabay
Disclaimer: Views expressed in this commentary are those of the staff member. This commentary is independent of specific national or political interests. Views expressed do not necessarily represent the institutional position of International IDEA, its Board of Advisers or its Council of Member States.


The issue of digital privacy is increasingly on people’s minds, both in terms of the use of our personal data in elections and its role in the future of Artificial Intelligence (AI). While the previous European Commission made significant progress in passing the General Data Protection Regulation (GDPR), the new Commission is investing in the development and publication of ethical AI principles. Quite unexpectedly, this could prove useful in the Commission’s efforts to prevent democratic backsliding among its Member States and beyond.

Microtargeting, a technique at the intersection of AI and data protection, is increasingly used in political campaigning. When applied in elections, the method uses personal data to campaign directly to individual voters, enabling political parties and candidates to tailor advertisements far more precisely than before. While there are possible benefits for party communication, such as providing more relevant information to voters, there are significant drawbacks, particularly for political discourse. Public debate is an essential foundation of democracy that is put at risk by the misuse of digital microtargeting: sophisticated algorithms segment populations and limit the messages they see, enabling the dissemination of disinformation and polarising messages without public scrutiny. Voters lose the chance to evaluate and discuss party information, and may even be deprived of the freedom to form their own opinions. Such issues of privacy and public discourse were brought to the forefront of public debate by the Brexit campaign’s use of Cambridge Analytica, whereby large amounts of personal data were harvested from social media accounts for campaigning purposes.

Personal data use is sure to affect important upcoming elections in Europe: while all eyes are on the 2020 US Presidential Elections, presidential elections will also take place this year in Poland, and parliamentary elections in Romania. In Poland, the ruling Law and Justice Party (PiS) secured another majority in the lower house last year but lost control of the Senate to the opposition. PiS has become increasingly active online, and nationalist and populist groups in Poland make extensive use of bot accounts and fake social media profiles. Given this result, and the importance of the President’s cooperation in adopting the internationally criticised rule of law reforms pursued by PiS, even more effort is likely to be put into online political advertising.

Elections in Romania follow substantial losses by the previously ruling Social Democratic Party (PSD), giving the now governing National Liberal Party (PNL) a chance to strengthen its mandate. Loopholes have been identified in Romania’s application of the GDPR, most notably that it allows political parties to process personal data from special categories without explicit consent and without effective safeguards. Special categories cover data that could significantly affect an individual’s rights, such as race and ethnic origin, religious beliefs, political opinions or health data. In previous elections, disinformation disseminated over social media sought to stoke fears of new taxes and pension cuts. This time, the ability to target individuals on deeply personal matters could create opportunities for manipulation.

However, the EU is making decisive moves to better regulate the use of personal data: building on the achievements of the GDPR, the European Commission has increasingly focused on developing ethical AI. AI fuels microtargeting by using increasingly sophisticated algorithms to process large amounts of personal data and make decisions based on it. Publications released over the course of 2018 and 2019 set out the first guidelines and recommendations for the ethical use of AI. While the principles in these publications are not legally binding, they have been translated into voluntary commitments and numerous public initiatives throughout the region. The EU worked with volunteer stakeholders in the second half of 2019 to pilot the new rules, and a revised version of the ethical assessment guidelines is expected later this year. This work has laid the foundations for the debate on AI to shift towards a human-centric approach to AI governance.

The focus of these guidelines is “on ensuring sustainability, growth, competitiveness and inclusion while empowering, benefiting and protecting individuals”. The EU is expected to continue increasing its investment in research on the ethical use of AI over the next decade. Since 2018, the EU has also stepped up efforts to prevent or slow the democratic backsliding observed in several of its Member States. Given that they would limit the use of AI tools to manipulate voters and spread disinformation, the standards set over the course of 2020 have the potential to contribute to a powerful future instrument for upholding democratic values.

About the author

Thomas Heinmaa
Research Assistant