14 January 2026 / Mehić

Summary
Artificial intelligence is increasingly shaping political communication and democratic processes. Generative AI technologies, including deepfakes, automated text generation, and AI-supported political campaigning tools, are transforming how political information is produced, disseminated, and perceived. The screencast “AI and Political Dis- and Misinformation” examined these developments from a comparative perspective, combining current research, international case studies, and exploratory empirical observations.
This landing page summarizes the key arguments and insights presented in the screencast and discussed during the session.
Screencast
The following screencast provides an overview of current research on AI-driven political misinformation, comparative case studies from different national contexts, and exploratory empirical findings discussed during the conference.
▶ Access the screencast (YouTube)
Research Context: AI, Politics, and Misinformation
Drawing on recent research in political science and media studies, the screencast situated AI-driven misinformation within broader sociopolitical debates about power, manipulation, and democratic stability. Scholars emphasize that AI does not merely accelerate existing forms of disinformation, but qualitatively transforms political propaganda by increasing its scale, personalization, and plausibility (Gaborit, 2024; Romanishyn et al., 2025).
Particular concern has been raised about the capacity of generative AI to undermine trust in political information ecosystems, especially during election periods. Public anxieties around AI and misinformation are often shaped not only by concrete incidents, but also by media narratives and broader fears about democratic erosion (Yan et al., 2025).
Comparative Case Studies
To ground these debates empirically, the screencast presented three case studies focusing on recent electoral contexts in the United States, India, and Germany.
United States
The U.S. case focused on the 2024 presidential election and the origins of public concern about AI “supercharging” political misinformation. Research suggests that fears of AI-driven manipulation were strongly influenced by public discourse and media coverage, even in cases where documented AI misuse remained limited (Yan et al., 2025). This highlights the importance of perception, trust, and anticipatory regulation in democratic contexts.
India
The Indian case demonstrated more direct forms of AI misuse in political communication. Generative AI tools, including deepfakes and manipulated audiovisual content, were actively deployed during the 2024 elections, contributing to misinformation, voter confusion, and political polarization. These developments illustrate both the technological possibilities and democratic risks of AI in highly mediated political environments (Dhanuraj et al., 2024).
Germany
The German case examined AI-based voter information tools developed ahead of the 2025 federal elections. Although intended to provide neutral political guidance, these systems sometimes produced biased or misleading outputs. This case serves as a cautionary example of how “neutrally” informative AI tools can unintentionally become sources of political misinformation, raising questions about transparency, accountability, and design assumptions (Dormuth et al., 2025).
Together, the case studies show that AI-driven political misinformation manifests differently across national contexts, but consistently challenges democratic trust and decision-making.
Exploratory Survey Insights
In addition to the case studies, the screencast presented findings from an exploratory online survey conducted in November 2025 with 108 participants from India, the United States, and Germany. The survey examined awareness of AI-generated political content, experiences with political misinformation, trust in political information on social media, and attitudes toward regulation and responsibility.
Across all three countries, respondents reported frequent exposure to political misinformation and relatively low confidence in their ability to identify AI-generated fake content. Trust in political information shared on social media platforms was generally low. These findings are tentative and illustrative, and primarily serve to contextualize the case studies rather than to provide representative conclusions.
Discussion and Ethical Implications
The discussion following the screencast addressed key normative and practical tensions. A central debate concerned prevention versus detection: whether democratic responses should focus on restricting and labeling AI-generated political content or prioritize detection mechanisms and citizen awareness.
Closely related was the question of regulation and education. Participants emphasized that regulation and AI literacy should not be understood as mutually exclusive, but rather as complementary strategies. Given the increasing sophistication of generative AI systems, even digitally literate users remain vulnerable, underscoring the need for combined policy and educational approaches.
Further discussions addressed platform and developer responsibility, the risk of bias in AI systems, and the limits of existing legal frameworks in addressing AI-driven electoral manipulation.
Conclusion
The session and screencast demonstrated that AI-driven political misinformation represents a serious and evolving challenge for democratic societies. While national contexts differ, the underlying issues of trust, transparency, and accountability are shared across political systems.
Addressing these challenges requires interdisciplinary responses that combine regulatory frameworks, responsible AI design, platform accountability, and public AI literacy. As generative AI continues to develop, proactive and ethically informed strategies will be essential to safeguard democratic communication.
References
Dhanuraj, D., Harilal, S., & Solomon, N. (2024). Generative AI and its influence on India’s 2024 elections: Prospects and challenges in the democratic process. Friedrich Naumann Foundation for Freedom.
Dormuth, I., Franke, S., Hafer, M., Katzke, T., Marx, A., Müller, E., & Rutinowski, J. (2025). A cautionary tale about “neutrally” informative AI tools ahead of the 2025 federal elections in Germany. In Proceedings of the World Conference on Explainable Artificial Intelligence (pp. 64–85). Springer Nature Switzerland.
Gaborit, P. (2024). A sociopolitical approach to disinformation and AI: Concerns, responses and challenges. Journal of Political Science and International Relations, 7(4), 75–88.
Romanishyn, A., Malytska, O., & Goncharuk, V. (2025). AI-driven misinformation: Policy recommendations for democratic resilience. Frontiers in Artificial Intelligence, 8, 1569115.
Yan, H. Y., Morrow, G., Yang, K. C., & Wihbey, J. (2025). The origin of public concerns over AI supercharging misinformation in the 2024 US presidential election. Harvard Kennedy School Misinformation Review.