Disinformation & AI: The influence of artificial intelligence on elections

In recent years, artificial intelligence (AI) has made enormous progress and developed into a powerful tool that can have both positive and negative effects on democratic processes. While AI has the potential to strengthen democracy and promote political participation, it also poses significant risks, especially in the context of elections.

Influencing elections through AI – not science fiction

The use of artificial intelligence (AI) has increased significantly in election campaigns worldwide in recent years. In particular, deepfakes and voice cloning are used to promote candidates, attack political opponents or defame specific candidates in the interests of foreign actors. Such manipulative content is often disseminated shortly before election day in order to limit fact-checkers' ability to respond.

The use of audio deepfakes is particularly problematic, as they are harder to detect than manipulated images or video and can therefore deceive users more easily. The rapid spread of these technologies and inadequate regulation in many countries pose a serious threat to the integrity of electoral processes. As a result, comprehensive measures to detect and combat such disinformation are necessary to preserve trust in democratic institutions.

Opportunities for AI in elections

The use of AI can improve political debate and encourage wider political participation. It can inform voters about electoral processes, analyze large amounts of data to identify trends and potential irregularities, and strengthen cyber security by detecting threats early and protecting sensitive election data:

  1. Increased voter participation and engagement: AI-powered chatbots and virtual assistants can inform voters about electoral processes, answer questions and even help with decision-making. This can increase voter participation and ensure that citizens are well informed.
  2. Data analysis and decision making: AI algorithms can analyze large amounts of data to identify voter trends and detect potential irregularities; a simple sketch of such outlier detection follows this list. This helps electoral authorities make data-based decisions and ensure the integrity of the electoral process.
  3. Security and cyber protection: AI can be used to strengthen cyber security measures, detect and respond to threats and protect sensitive election data. This is particularly important to prevent tampering and cyber-attacks on election infrastructure.
  4. Promoting political discourse: AI can improve political discourse by moderating large-scale political conversations in chat rooms, summarizing participants' opinions, and identifying and defusing tensions.
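To make the data-analysis point concrete, here is a deliberately simple outlier-detection sketch in Python: it flags precincts whose turnout deviates strongly from the mean. All precinct names and figures are invented for illustration; real irregularity detection would use far richer statistical models and data.

```python
# Minimal sketch: flagging statistically unusual turnout figures with a z-score.
# All precinct names and numbers below are hypothetical.
import numpy as np

turnout = {
    "precinct-01": 0.62, "precinct-02": 0.58, "precinct-03": 0.64,
    "precinct-04": 0.61, "precinct-05": 0.97,  # implausible outlier
    "precinct-06": 0.59, "precinct-07": 0.63, "precinct-08": 0.60,
}

values = np.array(list(turnout.values()))
z_scores = (values - values.mean()) / values.std()

for (name, rate), z in zip(turnout.items(), z_scores):
    flag = "  <-- review manually" if abs(z) > 2 else ""
    print(f"{name}: turnout {rate:.0%}, z = {z:+.2f}{flag}")
```

Only precinct-05 exceeds the (arbitrary) threshold of two standard deviations and would be routed to a human reviewer; the point is decision support, not automated judgments.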

Risks of AI in elections

The use of AI in democratic elections also harbors considerable risks: disinformation spread via deepfakes and fake audio files can undermine trust in the electoral process, precise micro-targeting can manipulate voter groups, and election infrastructure faces a heightened risk of cyber-attacks. These risks require comprehensive protective measures to ensure the integrity of elections:

  1. Disinformation and manipulation: AI can be used to spread misinformation and disinformation. For example, deepfake videos can be created that look deceptively real and spread false information. Such content can undermine trust in information and destabilize the election process.
  2. Micro-targeting and polarization: By analyzing social networks, AI can target specific groups of voters very precisely. This can lead to the manipulation of voter groups by providing them with targeted information that confirms and reinforces their existing views, leading to further polarization of society.
  3. Automated manipulation: AI can be used to run astroturfing campaigns, in which a small group simulates a grassroots movement to influence public opinion. This can give politicians a false picture of public opinion and lead them to act accordingly.
  4. Vulnerabilities and cyberattacks: AI systems themselves can be vulnerable to cyberattacks, which can lead to manipulation of election results or data leaks. Data-poisoning attacks, in which the training data of AI systems is manipulated, are another risk that can jeopardize the integrity of the electoral process; a toy illustration follows this list.
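To illustrate the data-poisoning risk in point 4, the following toy sketch (synthetic data, scikit-learn, an arbitrary 30% flip rate) shows how manipulating training labels degrades a simple classifier. Nothing here refers to a real election system; it only demonstrates the mechanism.

```python
# Toy data-poisoning demo: flipping a share of training labels
# measurably degrades a classifier trained on them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simple ground-truth rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression().fit(X_tr, y_tr)

# Attacker silently flips 30% of the training labels (data poisoning)
y_poisoned = y_tr.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_model = LogisticRegression().fit(X_tr, y_poisoned)

print("accuracy, clean training data:   ", clean_model.score(X_te, y_te))
print("accuracy, poisoned training data:", poisoned_model.score(X_te, y_te))
```

Applied to, say, a model that screens election-related content, the same mechanism would quietly bias its decisions, which is why the provenance and integrity of training data matter.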

Risk mitigation measures

What can be done to identify or mitigate these risks?

  1. Detection of AI-generated content: To curb the spread of AI-generated misinformation, automated detection tools such as GPTZero, OpenAI's Classifier and DetectGPT can be used. However, these tools are not completely reliable and can be circumvented; the first sketch after this list shows the underlying idea.
  2. Watermarking and metadata labeling: Another technique for labeling AI-generated content is the insertion of watermarks or the marking of metadata; the second sketch after this list shows a minimal example. Google, for instance, can add a note under AI-generated images to indicate that the content was created by an AI.
  3. Legal framework: The EU has adapted its legal framework to address the dangers posed by AI and to promote the use of trustworthy, transparent and responsible AI systems. This includes the Digital Services Act and the AI Act (discussed below), which sets specific regulatory requirements for high-risk AI systems.
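Many text detectors build on the observation that machine-generated text tends to be unusually predictable to a language model. The sketch below shows that perplexity heuristic in Python, assuming the Hugging Face transformers and torch packages; the scoring model (GPT-2) and the threshold are illustrative choices, and, as noted above, such heuristics can be circumvented and must not be treated as proof.

```python
# Hedged sketch of perplexity-based AI-text detection: low perplexity under
# a language model is a (weak) signal that text may be machine-generated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

sample = "The integrity of electoral processes depends on public trust."
ppl = perplexity(sample)
# The threshold of 40 is arbitrary; real tools combine many signals and
# still produce false positives and false negatives.
verdict = "possibly AI-generated" if ppl < 40 else "likely human-written"
print(f"perplexity = {ppl:.1f} -> {verdict}")
```

Tools like DetectGPT refine this idea by comparing a text's likelihood against perturbed rewrites of it rather than applying a fixed threshold.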
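The metadata side of point 2 can be as simple as writing a provenance note into the image file itself. The sketch below uses Pillow's PNG text chunks; note that metadata is trivially stripped, which is why production systems (e.g. Google's SynthID) additionally embed robust, invisible watermarks in the pixels.

```python
# Minimal metadata-labeling sketch with Pillow: record provenance in
# PNG text chunks. Easily stripped, so only one layer of a real solution.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), "white")  # stand-in for a generated image

meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name
img.save("labeled.png", pnginfo=meta)

# A platform can read the label back before deciding how to display the image
with Image.open("labeled.png") as im:
    print(im.text.get("ai-generated"))  # -> "true"
```

Standardized provenance formats such as C2PA pursue the same goal with cryptographically signed metadata, which makes tampering detectable rather than merely inconvenient.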

Main election targets through AI-generated content

AI-generated images

AI-generated images have already been disseminated in various (EU) countries to spread disinformation. Examples include manipulated images of protests, misrepresentations of politicians and emotionally charged images designed to provoke negative reactions. Such images can currently often still be recognized by unrealistic details, which helps experts and users to expose them, but this is likely to change in the near future as the technology continues to advance.

AI-generated audio

AI-generated audio is widely considered the biggest threat at the moment, as such content is very difficult to recognize as artificial. Examples include fake phone calls between politicians discussing election meddling or fake messages from celebrities. These recordings can be distributed widely and are often difficult to debunk, especially for average users. The ability to create and spread them can be used to discredit politicians, promote conspiracy theories and undermine democratic participation.

AI-generated videos

The technology for creating AI-generated videos is developing rapidly, although the most advanced tools are not yet publicly available. So far, most AI-generated videos are not realistic enough to pass as real footage. However, videos in which the audio track is altered and the video is only slightly adjusted (e.g. lip movements and facial expressions synchronized to the new audio) are expected to pose a greater threat. Such videos can be used to spread false information and undermine trust in the democratic process.

EU regulations to protect against disinformation

The European Union has expanded its legal framework to better address the risks posed by artificial intelligence (AI) and to promote the use of trustworthy, transparent and responsible AI systems. The legal measures outlined below illustrate that the EU has recognized the need to carefully regulate the use of AI in the context of elections and at the same time exploit its positive potential. They are intended to ensure that both transparency and responsibility are guaranteed in the use of AI systems in order to protect and strengthen democratic processes.

A key initiative in this context is the Digital Services Act (DSA), which came into force in November 2022. This piece of legislation requires very large online platforms to take a risk-based approach and conduct independent audits of their risk management systems to prevent misuse of their systems, such as through disinformation. In addition, these platforms must take measures to mitigate the risks identified, including moderating the content displayed on their platforms. The DSA also aims to increase transparency in relation to political advertising by prohibiting both the targeting of minors and the use of sensitive personal data for targeted advertising.

The EU Commission has published the first DSA guidelines for elections. These guidelines aim to formulate clear expectations of large online platforms and search engines in order to translate their obligations into concrete measures and thus protect democratic processes.

Another important part of the EU’s legal framework is the Code of Practice on Disinformation. This voluntary commitment by the industry, which includes major online platforms, was launched in 2018 and revised in 2022. The code aims to strengthen transparency in political advertising and empower users to recognize and report disinformation. It calls for the introduction of stronger transparency measures for political advertising, including more efficient labeling and the provision of information on sponsors and spending on advertising.

Furthermore, the AI Act, which has now been passed, will play a central role in regulating the risks arising from the use of AI. The regulation takes a risk-based approach and sets out specific regulatory requirements for high-risk AI systems; AI systems used to influence voters in political campaigns were classified as high-risk in the European Parliament's position on the Act. In addition, providers of generative AI models must ensure that their systems are robust against the generation of illegal content and must make it clear when content has been generated by an AI rather than by a human.

Understanding disinformation comprehensively

Disinformation is not a purely digital issue!

Disinformation is no longer just a digital phenomenon limited to social media and manipulated news. The targeted influencing of public opinion by foreign actors is increasingly manifesting itself in physical acts of sabotage deliberately intended to steer media and social debates in the wrong direction. The recent series of sabotage incidents in Germany, in which suspected Russian actors disabled hundreds of cars with construction foam and created the appearance of a radical climate protest, shows how disinformation is carried into the physical world through real acts.

This strategy combines classic espionage and deception techniques with modern narratives. While fake news and deepfakes were once mainly used to manipulate opinion, today’s actors rely on hybrid methods: a mixture of digital campaigns and real disruptive actions. The aim is to deepen existing social divisions and sow political distrust. In Germany, the attacks were apparently intended to fuel public resentment towards the Greens and their candidate for chancellor – a perfidious way of exerting political influence.

The German Federal Constitutional Court (1 BvR 1743/16, 1 BvR 2539/16) has already determined that cyber attacks on critical infrastructure can pose a threat comparable to armed attacks. If disinformation campaigns are additionally supplemented by physical acts of sabotage, this threat situation is exacerbated. Companies, authorities and political institutions must therefore learn to understand disinformation not merely as a problem of the digital world, but as a multi-layered challenge with real consequences that must be combated accordingly.

Overall impression of the influence of AI on elections

AI has the potential to significantly improve the democratic process, but also to jeopardize it. It is crucial that appropriate safeguards and transparency rules are implemented to maximize the benefits of AI and minimize its risks. This is the only way to ensure that elections are fair and secure and that citizens' trust in democracy is strengthened. A recent communication from OpenAI shows just how important this is:

OpenAI says it has uncovered and stopped five covert influence operations (IOs) in the last three months that attempted to use its models for deceptive activities online. These operations included:

  1. Bad Grammar: A previously unreported Russian operation on Telegram targeting Ukraine, Moldova, the Baltic states and the US. The actors used the models to debug code for Telegram bots and to create political commentary.
  2. Doppelgänger: A Russian operation that generated comments in various languages (including English, French and German) and published them on platforms such as X and 9GAG. They also translated and edited articles published on related websites.
  3. Spamouflage: A Chinese network that researched public social media activity and generated texts in multiple languages that were published on platforms such as X, Medium and Blogspot. The actors also debugged code for database and website management.
  4. International Union of Virtual Media (IUVM): An Iranian operation that generated long articles, headlines and website tags and published them on an associated website.
  5. Zero Zeno: An operation by an Israeli company that generated comments and articles and published them on various platforms such as Instagram and Facebook.

The campaigns targeted issues such as the Russian invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the US, and criticism of the Chinese government. Despite these efforts, the operations did not achieve significant reach or authentic engagement through the use of OpenAI services.


EU measures

The need for a coordinated and comprehensive EU response to hybrid threats, particularly in the area of disinformation during elections, is obvious. The EU has therefore developed a variety of tools to detect and respond to these threats at an early stage.

By introducing Hybrid Rapid Response Teams and continuously improving the EU Hybrid Toolbox, the EU is demonstrating its determination to protect the integrity of democratic processes and strengthen resilience against disinformation. Cooperation with international partners and the use of new technologies are key elements of the EU strategy.

EU Hybrid Toolbox

Back in 2022, the Council of the European Union presented a framework for a coordinated EU response to hybrid campaigns. This framework, known as the “EU Hybrid Toolbox”, aims to identify hybrid threats at an early stage and respond with a variety of instruments. These include preventive, cooperative, stability-promoting and restrictive measures. The toolbox also includes the “Foreign Information Manipulation and Interference Toolbox” (FIMI), which is specifically aimed at detecting, analyzing and responding to disinformation.

The focus is on increasing the resilience of the EU and its Member States by tackling hybrid threats through a holistic approach. This includes improving cybersecurity through the NIS-2 Directive and the Critical Entities Resilience Directive, as well as implementing the Transparency and Targeted Political Advertising Regulation, the Digital Services Act (DSA) and the Anti-Coercion Instrument (ACI). In addition, cooperation with international partners, such as NATO, is emphasized in order to effectively combat hybrid threats.

Hybrid Rapid Response Teams

In 2024, these measures were further expanded with the introduction of the Hybrid Rapid Response Teams. These teams are a central element of the EU Hybrid Toolbox and are designed to support Member States and partner countries in countering hybrid threats. The Rapid Response Teams provide tailored and targeted short-term support, particularly in the fight against disinformation campaigns and cyber attacks.

The EU’s strategic direction for combating hybrid threats was set out in the “Strategic Compass”, which provides for the development of new instruments and the strengthening of existing measures. This includes the systematic collection and analysis of data on disinformation incidents and the promotion of international cooperation to create standards to counter hybrid threats.

Take a number: Who is lining up to influence elections?

Among the most significant international actors attempting to influence elections in Europe are state actors from Russia, China and Iran. These countries use various tactics to promote their geopolitical interests and undermine the stability of European democracies.

In addition to the main actors named below, there are also other countries and non-state actors that attempt to influence elections in Europe. These include, for example, groups acting on behalf of governments or in their own interests to advance certain political agendas. These actors use a variety of methods, including cyberattacks, disinformation, economic pressure and diplomatic maneuvers to achieve their goals. The European Union and its Member States face the challenge of recognizing and countering these threats in order to protect the integrity of their democratic processes.

Russia

Russia is known for its extensive disinformation campaigns and cyberattacks aimed at weakening trust in democratic processes. Some of the best-known examples include influencing the 2016 US elections and attempts to influence the Brexit vote. Russian actors often use social media platforms to spread false information and deepen social divisions.

China

China is increasingly relying on cyberattacks and disinformation campaigns to expand its influence in Europe. Chinese hacker groups are known for conducting industrial espionage and stealing sensitive information that can then be used to influence political decisions. China is also trying to manipulate public opinion in Europe by spreading pro-Chinese narratives in the media.

Iran

Iranian actors also use disinformation campaigns and cyberattacks to pursue their geopolitical goals. These campaigns are often aimed at destabilizing the policies of the US and its allies in Europe. Iranian hacker groups use similar techniques to their Russian and Chinese counterparts.

North Korea

North Korea is another international actor attempting to influence elections and political processes worldwide, including in Europe, through cyber activities. While North Korea receives less attention than Russia, China and Iran, significant activity still emanates from North Korean actors, who likewise use disinformation to further the regime's geopolitical goals and foment political unrest. Although there are fewer documented cases of direct election interference by North Korea, the regime uses cyber operations to exert political pressure and protect its interests, for example by publishing compromising information about political candidates or spreading propaganda.


Code of Practice on Disinformation

The Code of Practice on Disinformation has been signed by major online platforms such as Google, Meta, Microsoft and TikTok. The aim is to combat disinformation, particularly in view of the upcoming European elections in June. The measures already implemented or planned include:

  • Labeling of digitally created or modified content: Advertisements and content that has been digitally edited must be clearly labeled.
  • Collaboration with fact-checking organizations: Platforms work with independent fact-checking organizations to reduce the spread of disinformation.
  • Promotion of high-quality information: Authoritative and trustworthy information is given greater prominence for users.
  • Media literacy and prevention campaigns: Targeted media literacy campaigns and prevention work are carried out to raise awareness of disinformation.

Outlook

  • The reports can be viewed at the Transparency Center (https://disinfocode.eu/).
  • The EU Commission plans to recognize the Code of Practice as a code of conduct under the Digital Services Act.
  • Future reports are expected to document and improve the ongoing efforts to secure the elections.

Technical security of EU elections

The EU has taken various measures to make elections technically more secure and to strengthen resilience to cyber threats. These measures include both preventive and reactive strategies, which are described in detail in an updated compendium on electoral cybersecurity and resilience.

Threat landscape and cyber security

The threat landscape has intensified since the last EU elections in 2019. The current threats include:

  • Ransomware and wiperware attacks: These can paralyze voting systems by encrypting or deleting important data.
  • DDoS attacks: These attacks can disrupt the transmission and display of election results and undermine public confidence in the electoral process.
  • Phishing and social engineering: These methods allow access to sensitive information that can be used for disinformation campaigns.
  • Supply chain attacks: These affect the entire election infrastructure by carrying out attacks via trusted third-party providers.
  • Manipulated information: Cyberattacks can be used to spread fake information and shake confidence in the electoral process.

Technological advances and measures

Electoral processes have advanced technologically, which in turn requires new security measures. These include:

  • Information exchange and cooperation: The establishment of national election networks that include all relevant actors in order to identify and combat threats at an early stage.
  • Awareness and training: Cyber threat awareness campaigns and training for all stakeholders, including election officials, IT staff and candidates.
  • Risk management and crisis management: Conducting risk analyses and creating crisis management plans, including preparation for large-scale incidents.
  • Exercises and training: Conducting scenario-based exercises and technical training to test and improve cyber incident response capabilities.

Specific measures for EU elections

The elections to the European Parliament require special security precautions due to their cross-border nature. These include:

  • Ensuring the confidentiality and integrity of election data: Implementation of security concepts for the secure transmission of provisional election results; a minimal signing sketch follows this list.
  • Enhanced cooperation and information exchange: Use of networks such as the European Cooperation Network on Elections (ECNE) and the EU Cyber Crisis Liaison Organization Network (EU-CyCLONe) to coordinate and manage cyber incidents.
  • Application of new technologies: Using AI to improve security, detect cyber threats and support voter interaction.
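One building block for the integrity of transmitted provisional results is a digital signature: the election authority signs each tally so that recipients can detect tampering in transit. The following is a hedged sketch using Ed25519 signatures from the Python cryptography package; the payload format and in-code key handling are invented for illustration (a real deployment would keep keys in an HSM and define the format in its security concept).

```python
# Hedged sketch: signing and verifying a provisional results payload.
# District name, counts and key handling are illustrative only.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Election authority's key pair (in practice stored in an HSM, never in code)
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

results = {"district": "EX-01", "ballots_counted": 12345, "timestamp": "2024-06-09T20:00Z"}
payload = json.dumps(results, sort_keys=True).encode()

signature = private_key.sign(payload)

# The receiving side verifies before publication; any modification in
# transit makes verification fail with InvalidSignature.
try:
    public_key.verify(signature, payload)
    print("results verified as authentic")
except InvalidSignature:
    print("rejected: payload was modified in transit")
```

Confidentiality would be layered on top (e.g. TLS for the transport channel); the signature addresses the integrity and authenticity of the published figures.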

German Lawyer Jens Ferner (Criminal Defense & IT-Law)