Conference Session Report: 6th IPILM-Conference on 12.12.2024

Image credit: cyano66/iStock, Audience following fake news on television. Uploaded on March 12, 2024, Italy. 1

Summary

The presentation explored the dual impact of AI on democracy, highlighting its transformative potential as well as significant risks. While AI drives innovation, it also enables disinformation and misinformation through deepfakes and fabricated content, undermining trust in institutions and distorting public opinion. A self-conducted survey of 40 students found that the impact on political opinion is limited, but that trust in democratic systems is slightly weakened.

It also indicated that public awareness of AI-generated disinformation is low, highlighting the urgent need for education and regulation. Case studies, including the US and EU elections, showed how difficult it is to detect disinformation and assess its impact. The presentation concluded with a call for ethical AI practices, stronger regulation and public awareness efforts to preserve democratic integrity in the face of evolving technological challenges.

Discussion

Following the presentation, the discussion addressed several questions on AI-driven disinformation, including the following:

An audience member asked how to raise awareness of the impact of AI-driven disinformation on public opinion and trust in democracy. The group emphasized the importance of education through public campaigns and media literacy programs, as well as providing tools that help users better assess digital content, such as the European Digital Media Observatory's checklist for detecting AI-generated content and the Deepfake Detection Challenge from the MIT Media Lab.

“Disinformation is a long-standing problem, but given that AI techniques present in the digital ecosystem create new possibilities to manipulate individuals effectively and at scale, multiple ethical concerns arise or are exacerbated.”
(Bontridder & Poullet 2021: 5)2

Another question focused on how initiatives like IPILM could tackle the issue in communities with varying levels of digital literacy. The group emphasized the need for tailored approaches, recommending advanced tools for digitally literate audiences and more user-friendly methods for those with lower digital skills. Partnerships with local organizations and community leaders can help ensure accessibility and cultural relevance.

The final question addressed how social media platforms can label AI-generated content without infringing on freedom of expression. The group stressed the need for clear, standardized practices that prioritize transparency and education over restrictions, ensuring that labeling informs rather than controls user behavior. The AI Regulation for Public Service Media Analysis by the European Broadcasting Union provides valuable insights into regulatory approaches that balance transparency and accountability with the protection of free expression.

Overall, the group pointed to a balance of awareness, tailored strategies, and ethical standards as crucial to managing the complex influence of AI-driven disinformation.


Our Video

https://www.youtube.com/watch?v=SCZnniTrBDw

Further reading & information

European Digital Media Observatory. (2024, April 5). Tips for users to detect AI-generated content. Retrieved from https://edmo.eu/publications/tips-for-users-to-detect-ai-generated-content/ (last access: 15/01/2025).

MIT Media Lab. (n.d.). Detect DeepFakes: How to counteract misinformation created by AI. Retrieved from https://www.media.mit.edu/projects/detect-fakes/overview/ (last access: 15/01/2025).

Wistehube, S. (2024, September 13). AI regulation: Are public service media’s needs being met? European Broadcasting Union. Retrieved from https://www.ebu.ch/guides/open/report/ai-regulation-public-service-media-analysis (last access: 15/01/2025).

  1. Image credit: cyano66/iStock, Audience following fake news on television. Uploaded on March 12, 2024, Italy. https://www.istockphoto.com/de/foto/audience-following-fake-news-on-television-gm2062674413-564020853 ↩︎
  2. Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3, e32. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/7C4BF6CA35184F149143DE968FC4C3B6/S2632324921000201a.pdf/the-role-of-artificial-intelligence-in-disinformation.pdf (last access: 15/01/2025). ↩︎