The conference report from our last online conference on 11 December 2025 is now available. You can find it on our conference page.
The slides for the keynote “Mindful Metaliteracy in the Age of Generative AI: Attention, Reflection, and Human Agency” (Nicola Marae Allain) from our last online conference are now available. You can find them here.
Welcome to the page!
The central aspects of our presentation will give you an introduction to AI use cases, stakeholder interests, and the potential benefits AI promises.
Watch the full video here, or click the play button below:
Don’t have time to watch? No problem!
Session report and key aspects:
The presentation started with a brief introduction to AI and generative AI, followed by a section on the roles of AI, which introduced three potential application areas: customer experience, talent management and productivity, and risk management and governance.

Particular attention was given to introducing the stakeholder groups:
individual, organizational, and national and international stakeholders. The interests and power structures of each group were presented, showing their connectivity and interdependence.

The perceived value of AI for individual stakeholders depends on life stage and occupational status, aligning with the distinct priorities of each group (UN Human Development Report 2025).
Used in the right way, AI may offer an opportunity to expand human capabilities. Institutional and social choices can enable this expansion of people’s capabilities and agency, as illustrated through AI’s applications for people with disabilities, care systems, and gender equality, as well as in conceptualizing and mitigating AI bias. The following section on value propositions presented these potential positive values.

Part of the presentation also addressed several risks connected to AI. With a critical eye, we highlighted algorithmic bias, AI’s environmental impact, data privacy breaches, lack of transparency, and cognitive debt.

Discussion
This section addresses the key questions raised during the conference and summarizes the main points of discussion among participants.
The group was asked about their individual views on the personal value of AI tools. Answers varied: while some participants did not view AI as particularly valuable, others saw great personal value in using the tools. Depending on the use case and task, the majority saw positive value.
Referencing the keynote speech on mindfulness, an open-ended question asked participants whether they regarded AI as inevitable and whether humans would lose innate skills through its use. The question sparked a discussion with different directions and focal points; the art of photography was named as an example touching on all of these issues and prompted a particularly dynamic exchange.
Another question asked participants whether AI literacy should be introduced at an earlier age. Participants noted that other literacy skills are introduced during childhood to prepare children early on and to continually build up their skill set, and the group concluded that AI literacy education should follow a similar path.
If you want to read more:
Additional resources
11 January 2026 / Neele Baumgart
Conference Session Report: 7th IPILM Conference, 11 December 2025

Original illustration, AI-generated with ChatGPT (OpenAI).
What happens when artificial intelligence becomes part of mental health care, and how should we deal with its risks and responsibilities?
Key Focus of the Session
- AI as a support tool in mental health
- Benefits: Early detection, accessibility, continuous support
- Risks: Data protection, bias, transparency
- Cultural and social contexts shaping perceptions, use, and risks of AI
- Importance of information literacy and metaliteracy
Building on these focal points, the session examined the potential and limitations of artificial intelligence in the field of mental health from an information literacy and metaliteracy perspective. Drawing on a systematic literature review and concept mapping, it showed that AI-based applications can offer advantages, particularly with regard to early detection, continuous support, and low-threshold accessibility.
These findings were largely consistent across the reviewed literature and were primarily informed by two key studies that shaped the session.
Scientific evidence on the effectiveness and acceptance of AI-based mental health applications was mainly drawn from Dehbozorgi et al. (2025 – Read more).
In contrast, ethical, cultural, and epistemic risks, such as data protection concerns, algorithmic bias, and limited transparency, were largely informed by the ethical review of Saeidnia et al. (2024 – Read more).
The international survey presented in the session largely reflected and reinforced the risks discussed in these studies, while also illustrating how these issues are perceived in practice. Overall, the findings emphasized that AI in mental health contexts should primarily be understood as a complementary tool to human expertise and that well-developed information literacy and metaliteracy are essential for responsible use.
❗️Below are a few selected examples of mental health services that incorporate AI-based support tools.
Therapeak; VIA HealthTech; Wysa & Woebot
The examples illustrate current applications of AI in mental health and are not intended as recommendations.
Cultural and Ethical Considerations
Mental health is deeply shaped by cultural norms, social stigma, and structural inequalities, an aspect that was central to the intercultural perspective of the session and the conference as a whole.
These factors also influence how AI-based systems are developed and used. AI applications risk reinforcing existing disparities through biased data, data poverty, and predominantly Western-centered models of mental health. Ethical challenges such as privacy, autonomy, and emotional adequacy are therefore particularly intensified for vulnerable and marginalized groups, highlighting the need for culturally sensitive and ethically grounded AI design.
Discussion
The discussion focused in particular on questions of responsibility. A majority of participants attributed responsibility for potentially harmful or misleading AI-based advice primarily to the providing companies, indicating a strong demand for institutional safeguards while simultaneously raising questions about the role of user responsibility. From an information literacy and metaliteracy perspective, this highlights the importance of enabling users to critically assess AI-based systems, understand their limitations, and recognize potential risks.
At the same time, individual awareness alone cannot replace structural responsibility, especially in light of asymmetrical power and knowledge relations between providers and users, as well as the vulnerability of mental health contexts.
Another key point concerned the ambivalent level of trust in AI within mental health applications. Although many participants expressed general openness toward the use of AI, trust remained limited due to concerns about data protection, reliability, and the quality of AI-generated advice. Increasing trust was found to depend on transparent system design, strong data protection measures, explainable decision-making processes, and the clear integration of AI into human-supported care structures.
Overall, the discussion suggests that trust in AI is shaped less by technological performance alone than by ethical design, cultural sensitivity, and informed and reflective practices of use.
Key Takeaways
- AI can meaningfully support mental health care, but its value depends on ethical design, cultural sensitivity, and human oversight.
- Users tend to view AI as a supportive tool rather than a replacement for professional care, while concerns about privacy and trust remain strong.
- Cultural context plays a significant role in shaping how AI-based mental health services are perceived and used.
- Strong information literacy and metaliteracy are essential for enabling critical, informed, and responsible engagement with AI in mental health contexts.
Our Screencast
The screencast summarizing our session and key findings is available on YouTube:
🎬 Watch the Screencast on YouTube
Further Reading
The following publications provide further insights into the scientific, ethical, and informational dimensions of AI in mental health contexts.
Dehbozorgi, R., Zangeneh, S., Khooshab, E., et al. (2025). The application of artificial intelligence in the field of mental health: A systematic review. BMC Psychiatry, 25, 132. https://doi.org/10.1186/s12888-025-06483-2
Li, H., Zhang, R., Lee, Y. C., Kraut, R. E., & Mohr, D. C. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6(1), 236. https://doi.org/10.1038/s41746-023-00979-5
Pellert, M., Lechner, C. M., Wagner, C., Rammstedt, B., & Strohmaier, M. (2024). AI Psychometrics: Assessing the Psychological Profiles of Large Language Models Through Psychometric Inventories. Perspectives on Psychological Science, 19(5), 808–826. https://doi.org/10.1177/17456916231214460
Saeidnia, H. R., Hashemi Fotami, S. G., Lund, B., & Ghiasi, N. (2024). Ethical Considerations in Artificial Intelligence Interventions for Mental Health and Well-Being: Ensuring Responsible Implementation and Impact. Social Sciences, 13(7), 381. https://doi.org/10.3390/socsci13070381
Key Takeaways
- AI can provide emotional comfort and a sense of social presence.
- Users often perceive AI as non-judgmental and constantly available.
- Emotional attachment to AI is possible, but true reciprocity is missing.
- AI tends to replace functional roles rather than deep emotional relationships.
- AI complements human relationships but does not replace them.

AI-generated image created using ChatGPT (DALL·E), 2026.
Introduction
This session aimed to provide an initial overview of the question of whether artificial intelligence can function as a substitute for human relationships. With AI tools increasingly used not only for information retrieval but also for emotional and social interaction, this topic has gained growing relevance in both academic research and everyday life.
The presentation combined a theoretical perspective, based on key findings from current scientific literature, with a practical approach. While the theoretical part outlined how human–AI relationships are conceptualized and evaluated in research, the practical component presented an exploratory survey to illustrate a possible research approach and to identify early tendencies in user experiences.
Summary
The presentation focused on two key academic studies and an exploratory survey to examine how AI may take on social and emotional roles traditionally associated with human relationships. Brandtzaeg et al. (2022) showed that users can develop friend-like attachments to social chatbots, perceiving them as safe and non-judgmental, while emphasizing the lack of reciprocity and emotional depth. Smith et al. (2025) further highlighted that although generative AI can convincingly simulate emotional responsiveness, it lacks key psychological components of genuine human connection, such as mutuality, shared experience, and emotional depth, which limits its ability to fully replicate human relationships.
In addition, an exploratory online survey was conducted to demonstrate a possible research approach and to identify initial tendencies, such as emotional comfort, functional role substitution, and perceived non-judgment. Further details on the survey design, sample characteristics, and key findings are presented in the screencast linked below.
Discussion: Questions, Answers, and Reflections
During the discussion, a few questions focused on the methodology of the survey and the validity of its results. Participants critically addressed the small and non-representative sample. In response, it was emphasized that the survey was intended as an exploratory approach rather than a source of generalizable conclusions. Its purpose was to illustrate how human–AI relationships can be empirically examined and to reveal early tendencies that may guide future research. These included the frequent use of AI for emotional comfort, the perception of AI as less judgmental than humans, and the limited replacement of human roles.
Another discussion point concerned whether and how emotionally responsive AI systems should be regulated. It was debated whether emotional support provided by AI should be restricted and, if so, how “too emotional” AI could be defined. While arguments for regulation often focus on preventing emotional dependence, potential benefits are also emphasized, particularly AI’s role as a low-threshold form of support for individuals experiencing loneliness or social anxiety.
Finally, the discussion addressed broader opportunities and risks. Opportunities included availability, emotional relief, and reduced social pressure, whereas risks centered on privacy concerns, emotional dependence, and the potential weakening of real-life social relationships. Overall, the discussion underscored the need for continued critical reflection and interdisciplinary research on emotional AI.
Screencast
Below you can find the screencast of our presentation “AI as a Substitute for Human Relationships”, which summarizes the theoretical background and the practical insights discussed during the session.
Further Reading & Resources
Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI friend: How users of a social chatbot understand their human–AI friendship. Human Communication Research, 48(3), 404–429. https://doi.org/10.1093/hcr/hqac008
Smith, M. G., Bradbury, T. N., & Karney, B. R. (2025). Can generative AI chatbots emulate human connection? A relationship science perspective. Perspectives on Psychological Science, 20(6), 1081–1099. https://doi.org/10.1177/17456916251351306
Hohenstein, J., Kizilcec, R. F., DiFranzo, D., Aghajari, Z., Mieczkowski, H., Levy, K., & Jung, M. F. (2023). Artificial intelligence in communication impacts language and social relationships. Scientific Reports, 13, 5487. https://doi.org/10.1038/s41598-023-32354-5
Malfacini, K. (2025). The impacts of companion AI on human relationships: Risks, benefits, and design considerations. AI & Society. https://doi.org/10.1007/s00146-025-02318-6
Zimmerman, A., Janhonen, J., & Beer, E. (2024). Human/AI relationships: Challenges, downsides, and impacts on human/human relationships. AI and Ethics, 4, 1555–1567. https://doi.org/10.1007/s43681-023-00348-8
14 January 2026 / Mehić

Summary
Artificial intelligence is increasingly shaping political communication and democratic processes. Generative AI applications such as deepfakes, automated text generation, and AI-supported political campaigning are transforming how political information is produced, disseminated, and perceived. The screencast “AI and Political Dis- and Misinformation” examined these developments from a comparative perspective, combining current research, international case studies, and exploratory empirical observations.
This landing page summarizes the key arguments and insights presented in the screencast and discussed during the session.
Screencast
The following screencast provides an overview of current research on AI-driven political misinformation, comparative case studies from different national contexts, and exploratory empirical findings discussed during the conference.
▶ Access the screencast (YouTube)
Research Context: AI, Politics and Misinformation
Drawing on recent research in political science and media studies, the screencast situated AI-driven misinformation within broader sociopolitical debates about power, manipulation, and democratic stability. Scholars emphasize that AI does not merely accelerate existing forms of disinformation, but qualitatively transforms political propaganda by increasing its scale, personalization, and plausibility (Gaborit, 2024; Romanishyn et al., 2025).
Particular concern has been raised about the capacity of generative AI to undermine trust in political information ecosystems, especially during election periods. Public anxieties around AI and misinformation are often shaped not only by concrete incidents, but also by media narratives and broader fears about democratic erosion (Yan et al., 2025).
Comparative Case Studies
To ground these debates empirically, the screencast presented three case studies focusing on recent electoral contexts in the United States, India, and Germany.
United States
The U.S. case focused on the 2024 presidential election and the origins of public concern about AI “supercharging” political misinformation. Research suggests that fears of AI-driven manipulation were strongly influenced by public discourse and media coverage, even in cases where documented AI misuse remained limited (Yan et al., 2025). This highlights the importance of perception, trust, and anticipatory regulation in democratic contexts.
India
The Indian case demonstrated more direct forms of AI misuse in political communication. Generative AI tools, including deepfakes and manipulated audiovisual content, were actively deployed during the 2024 elections, contributing to misinformation, voter confusion, and political polarization. These developments illustrate both the technological possibilities and democratic risks of AI in highly mediated political environments (Dhanuraj et al., 2024).
Germany
The German case examined AI-based voter information tools developed ahead of the 2025 federal elections. Although intended to provide neutral political guidance, these systems sometimes produced biased or misleading outputs. This case serves as a cautionary example of how “neutrally” informative AI tools can unintentionally become sources of political misinformation, raising questions about transparency, accountability, and design assumptions (Dormuth et al., 2025).
Together, the case studies show that AI-driven political misinformation manifests differently across national contexts, but consistently challenges democratic trust and decision-making.
Exploratory Survey Insights
In addition to the case studies, the screencast presented findings from an exploratory online survey conducted in November 2025 with 108 participants from India, the United States, and Germany. The survey examined awareness of AI-generated political content, experiences with political misinformation, trust in political information on social media, and attitudes toward regulation and responsibility.
Across all three countries, respondents reported frequent exposure to political misinformation and relatively low confidence in their ability to identify AI-generated fake content. Trust in political information shared on social media platforms was generally low. These findings are tentative and illustrative, and primarily serve to contextualize the case studies rather than to provide representative conclusions.
Discussion and Ethical Implications
The discussion following the screencast addressed key normative and practical tensions. A central debate concerned prevention versus detection: whether democratic responses should focus on restricting and labeling AI-generated political content or prioritize detection mechanisms and citizen awareness.
Closely related was the question of regulation and education. Participants emphasized that regulation and AI literacy should not be understood as mutually exclusive, but rather as complementary strategies. Given the increasing sophistication of generative AI systems, even digitally literate users remain vulnerable, underscoring the need for combined policy and educational approaches.
Further discussions addressed platform and developer responsibility, the risk of bias in AI systems, and the limits of existing legal frameworks in addressing AI-driven electoral manipulation.
Conclusion
The session and screencast demonstrated that AI-driven political misinformation represents a serious and evolving challenge for democratic societies. While national contexts differ, the underlying issues of trust, transparency, and accountability are shared across political systems.
Addressing these challenges requires interdisciplinary responses that combine regulatory frameworks, responsible AI design, platform accountability, and public AI literacy. As generative AI continues to develop, proactive and ethically informed strategies will be essential to safeguard democratic communication.
References
Dhanuraj, D., Harilal, S., & Solomon, N. (2024). Generative AI and its influence on India’s 2024 elections: Prospects and challenges in the democratic process. Friedrich Naumann Foundation for Freedom.
Dormuth, I., Franke, S., Hafer, M., Katzke, T., Marx, A., Müller, E., & Rutinowski, J. (2025). A cautionary tale about “neutrally” informative AI tools ahead of the 2025 federal elections in Germany. In Proceedings of the World Conference on Explainable Artificial Intelligence (pp. 64–85). Springer Nature Switzerland.
Gaborit, P. (2024). A sociopolitical approach to disinformation and AI: Concerns, responses and challenges. Journal of Political Science and International Relations, 7(4), 75–88.
kjpargeter. (n.d.). 3D ballot boxes election day render [Image]. Freepik. https://de.freepik.com/fotos-kostenlos/3d-urnen-wahltag-uebertragen-von_958104.htm
Romanishyn, A., Malytska, O., & Goncharuk, V. (2025). AI-driven misinformation: Policy recommendations for democratic resilience. Frontiers in Artificial Intelligence, 8, 1569115.
Yan, H. Y., Morrow, G., Yang, K. C., & Wihbey, J. (2025). The origin of public concerns over AI supercharging misinformation in the 2024 US presidential election. Harvard Kennedy School Misinformation Review.