2024 was the year artificial intelligence arrived en masse in electoral contests around the world. Deepfake videos of politicians, AI-generated fake news, synthetic voice messages, and algorithmic micro-targeting of voters fundamentally altered the way democratic processes are conducted — and undermined. From the US to India, from France to Indonesia, no electoral system remained unaffected.
Deepfakes: The New Electoral Threat
Deepfakes — videos, images, and audio created or modified with AI — are now the top threat to elections. Unlike earlier disinformation techniques, deepfakes require minimal technical skills and can spread instantly through social media.
According to research published in Nature Communications in July 2025, LLM-generated messages were rated as more persuasive than human-written ones on political issues — scoring 2.91 versus 2.80 — because they were more technical and analytical.
World Map: AI in Elections 2023-2025
No country was spared. Here are the most striking examples by continent.
🇺🇸 USA 2024: The Artificial Intelligence Election
According to ODNI and FBI officials, Russia, Iran, and China used generative AI tools to create fake text, images, video, and audio aimed at fostering anti-Americanism. AI was described as an “accelerant” rather than a revolutionary change in influence efforts.
The Donald Trump campaign used deepfake videos of political opponents and fake images showing Trump with Black supporters. A consultant for Dean Phillips admitted to commissioning an AI robocall that imitated Biden's voice to discourage voters in New Hampshire.
🇮🇳 India 2024: Dead Politicians Revived
In the 2024 Indian elections, parties used deepfakes of deceased politicians. Muthuvel Karunanidhi, who died in 2018, “appeared” in campaign videos, while Jayalalithaa, who died in 2016, “spoke” in an audio clip. Meanwhile, AI was used positively — for real-time speech translation across dozens of Indian languages.
🇫🇷 France 2024: Fake Families
In the French elections, deepfake videos presented young women as "nieces" of Marine Le Pen — women who did not actually exist. The videos garnered over 2 million views. In another deepfake, a fake France24 broadcast reported that Ukraine had tried to assassinate Macron.
🇮🇩 Indonesia 2024: The Adorable Strongman
Prabowo Subianto made extensive use of AI-generated art in his campaign — images of himself as an adorable child and cartoonish avatars — to soften his harsh image. Indonesia's Child Protection Commission condemned the ads as misuse. All presidential candidates were targeted with deepfakes.
🌍 Rest of the World
Election Deepfakes 2023-2025
- Argentina 2023: Milei's team spread AI images of rival Massa — 3 million views
- Pakistan 2024: Imprisoned Imran Khan used AI voice at virtual rally
- Bangladesh 2024: Deepfake videos of female opposition politicians in compromising scenarios
- Ghana 2024: 171 fake accounts with ChatGPT-generated posts — first clandestine partisan AI network
- South Africa 2024: Fake videos of Biden, Trump and Eminem endorsing parties — 158,000+ views
- Taiwan 2024: Deepfake Xi Jinping supporting candidates, fake US congressman Rob Wittman
- South Korea 2024: 129 deepfake violations in 2 weeks before elections
- UK 2024: Deepfake Sunak conscripting 18-year-olds (400,000+ views), deepfake Starmer
- Canada 2025: China and Russia expected to use AI disinformation against voters
AI Micro-targeting: The Invisible Manipulator
Beyond deepfakes, AI is being used for voter micro-targeting at unprecedented scale. Large Language Models can create personalized messages for each individual voter based on demographics, online behavior, and political preferences.
According to research in PNAS Nexus (January 2024), mass production of microtargeted messages via LLMs does not violate OpenAI's terms of service, as rephrasing messages isn't considered abuse. This means politicians have a mass persuasion tool without meaningful restrictions.
Generative AI also boosts fundraising efficiency — analyzing donor data, identifying potential funders, and creating targeted fundraising content.
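Why micro-targeting scales so cheaply is easy to see in code. The sketch below (all voter data, field names, and message frames are hypothetical) fills a message template per voter profile; a campaign would substitute an LLM call for the template to generate fluent, individually tailored text at the same per-voter granularity:

```python
# Minimal sketch of per-voter message tailoring (all data hypothetical).
# An LLM-based pipeline replaces the static template with generated text,
# but the per-profile loop — one message variant per voter — is the same.

VOTER_PROFILES = [
    {"name": "A", "age_group": "18-29", "top_issue": "housing costs"},
    {"name": "B", "age_group": "65+", "top_issue": "healthcare"},
]

ISSUE_FRAMES = {
    "housing costs": "making rent affordable for young families",
    "healthcare": "protecting the care you have earned",
}

def tailor_message(profile: dict) -> str:
    """Produce one message variant keyed to a single voter's profile."""
    frame = ISSUE_FRAMES[profile["top_issue"]]
    return f"As a voter in the {profile['age_group']} group, you know {frame} matters most."

messages = [tailor_message(p) for p in VOTER_PROFILES]
```

The marginal cost of each additional variant is near zero, which is exactly why the PNAS Nexus finding about unrestricted mass production matters.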
Foreign Interference: Russia, China, Iran
AI has become a central tool for foreign election interference:
Russia
Russia was the most active nation targeting the 2024 US elections, spreading synthetic images, video, audio, and text online. The Russian government also circulated a deepfake video of Moldova's president Maia Sandu appearing to "support" a pro-Russian party.
China
China runs broader influence operations that amplify divisive US issues — immigration, drugs, abortion. The Spamouflage network increasingly uses generative AI, and AI-generated news anchors deliver Chinese propaganda.
Iran
Iran creates fake social media posts targeting audiences across the political spectrum with polarizing issues during presidential elections.
Smaller states have been targeted too: Slovak officials faced fake audio clips of a liberal party leader discussing "vote rigging and raising the price of beer," while in Taiwan a fake video of a US congressman promised increased military support if the ruling party's candidates won.
Softfakes: The Most Insidious Form
Beyond obvious deepfakes, "softfakes" are proliferating — images, video, or audio lightly edited, often by campaign teams themselves, to make a candidate more appealing. It's not “fake” in the classic sense, but it remains manipulation.
Case in point: in Indonesia's elections, the winner created and promoted cartoonish avatars of himself to rebrand his public image — a new form of political branding through AI.
"A lot of the questions we're asking about AI are the same questions we've asked about rhetoric and persuasion for thousands of years."
— George Washington University, research on AI in political campaigns, 2024
Regulation: Who's Fighting Back?
AI regulation in elections moves slowly — but some countries are leading the way:
AI & Election Legislation
| Country/State | Measure | Status |
|---|---|---|
| 🇵🇭 Philippines | Mandatory AI disclosure in campaign materials (COMELEC) | Active 2025 |
| 🇺🇸 Oregon | SB 1571: Required AI disclosure in campaign communications | Law 2024 |
| 🇺🇸 California | Deepfake ban against political opponents within 60 days of elections | Law 2024 |
| 🇪🇺 EU AI Act | Classification of AI as high-risk in electoral contexts | In effect |
| 🇺🇸 Federal | No federal rules for AI in elections yet | Pending |
Self-Regulation by AI Companies
Midjourney blocks generation of images of presidential candidates. However, research by the Center for Countering Digital Hate found that image generators like Midjourney, ChatGPT Plus, DreamStudio, and Microsoft Image Creator produced election disinformation in 41% of test prompts. OpenAI implemented digital provenance credentials for image origin and a classifier for detecting AI-generated images.
How to Protect Yourself as a Voter
1. Check the Source
Don't share political videos or images without verifying they come from a reliable outlet. If it seems "too good" or "too bad" to be true, it probably isn't true.
2. Watch for Signs
Unnatural lip movements, strange backgrounds, inconsistent lighting, distorted fingers — deepfakes still leave traces.
3. Multiple Sources
If a politician's “statement” appears only on social media and nowhere on reliable news outlets, it's almost certainly fake.
4. Use Detection Tools
Services like the Deepfake Analysis Unit (India) or AI detection tools help identify manipulated content.
"AI Steve": When AI Becomes the Candidate
In the 2024 UK elections, entrepreneur Steve Endacott created "AI Steve" — an AI avatar as the face of his parliamentary candidacy. In South Korea in 2022, presidential candidate Yoon Suk Yeol launched "AI Yoon Seok-yeol", an avatar that campaigned in places where the candidate couldn't go in person.
The question arises: if a politician is represented by AI, are we voting for the person or the algorithm?
The Ethical Dimension
AI use in elections raises profound ethical questions:
- Trust: How do we maintain public trust in democracy when nothing can be verified as authentic?
- Mental security: According to academics, AI proliferation in campaigns creates enormous pressures on voters' “mental security”
- Borders: AI combined with globalization enables more “universalized” content that transcends national boundaries
- Free expression: AI-powered online platforms can significantly impact freedom of expression
- Democratic reasoning: AI can interfere with people's reasoning processes, fostering behaviors that erode society's capacity for critical thought
What 2026 and Beyond Will Bring
The 2024 elections were just the beginning. By 2026:
What to Expect
- More realistic deepfakes: Text-to-video models (Sora, Runway) will make it nearly impossible to distinguish fake from real
- Real-time AI manipulation: Ability to alter live video feeds during broadcast
- AI-powered bot armies: Autonomous networks flooding social media with targeted propaganda
- Stricter legislation: More countries will follow the Philippines/California examples
- Digital identity: Potential adoption of cryptographic verification for authentic political content
- Detection AI: Evolving deepfake detection tools — but always one step behind
Conclusion
AI is no longer a future threat to democracy — it's today's reality. From realistic deepfakes to mass micro-targeting, artificial intelligence is reshaping how politicians communicate, nations interfere, and voters decide. The solution isn't banning the technology — but transparency, citizen education, and legislation that moves at least as fast as the technology itself.
