Intercultural perspectives on information literacy and Metaliteracy (IPILM)

IPILM is a learning environment that promotes collaborative knowledge construction among students from diverse cultural backgrounds. Educators and learners from various countries take part in an intercultural learning endeavor.


AI and the Ethics of Producing Digital Media

📌 Overview

This session was part of the course Intercultural Perspectives on Information Literacy and Metaliteracy (IPILM, Winter Semester 2025/26) and was presented at the IPILM Conference on December 19, 2025.
It examined how artificial intelligence is transforming the production of digital media and which ethical challenges arise from this development. The focus was on authorship, responsibility, bias, trust and the future role of human creativity.

🎥 Screencast

The contribution was presented as a screencast designed as an Open Educational Resource (OER).
The screencast combines scientific research with practical insights from artists and case studies.

https://youtu.be/CjLz9Lz54Wc?si=1t2BMr5Ld0mPS3Iz

📚 Scientific Foundations

The analysis builds on interdisciplinary research on AI and ethics in digital media:

  • Generative AI challenges traditional ideas of creativity, authorship, and originality (Das & Kundu, 2024).
  • Current copyright law struggles with AI-generated works because it is based on human authorship (U.S. Copyright Office, 2025).
  • AI systems can reproduce social bias, rely on large-scale data extraction, and raise privacy concerns (Al-kfairy et al., 2024).
  • Deepfakes and synthetic media threaten trust, journalism, and democratic processes (Karnouskos, 2020).
  • AI can widen existing digital inequalities while simultaneously lowering barriers to creative production (Lutz, 2019).

🖼️ Case Study: Théâtre d’Opéra Spatial

The artwork Théâtre d’Opéra Spatial by Jason Allen was created using Midjourney and won first prize in the digital arts category at the 2022 Colorado State Fair.

  • The artwork sparked global debate about authorship, originality, and fairness.
  • Copyright protection was later denied due to insufficient human authorship.
  • The case illustrates the gap between technological innovation and existing legal frameworks.
Image source: U.S. Copyright Office. (2025). Copyright and artificial intelligence, part 2: Copyrightability.

🎤 Qualitative Interviews with Artists

Semi-structured qualitative interviews were conducted with artists from different disciplines and regions.

Key insights include:

  • AI is widely perceived as a tool rather than a replacement for human creativity.
  • Ethical responsibility depends strongly on how AI is used, not solely on the technology itself.
  • Major concerns include misrepresentation, bias, copyright uncertainty, and deepfakes.
  • Some artists view AI as an additive creative partner, while others emphasise the irreplaceable value of the human hand.

The full interviews are included at the end of the following screencast:

https://youtu.be/h6WaOaOH5kc?si=KVB8roCpMjJYvWcA

💬 Discussion Highlights

The discussion focused on the future of art in the age of AI:

  • Will AI replace human-made art or function as an additive tool, similar to photography?
  • Who bears responsibility for ethical AI use: artists, developers, platforms, or regulators?
  • How can audiences distinguish AI-generated from human-created content?

Emerging technical solutions such as C2PA (Coalition for Content Provenance and Authenticity) standards were referenced as potential mechanisms to support transparency and verifiable content provenance.
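To make the idea of verifiable content provenance concrete, the sketch below shows the core mechanism in miniature: a signed manifest that binds a content hash to creator and tool metadata, so any later modification of the content invalidates the record. This is a deliberately simplified illustration of the principle, not the actual C2PA manifest format (which uses certificate-based signatures and a richer claim structure); the field names and the demo key are hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing certificate

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Attach a signed provenance record to a piece of content."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "tool": tool,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the record is authentic and the content unmodified."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and record["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...raw image bytes..."
m = make_manifest(image, creator="Jane Artist", tool="Midjourney v6")
print(verify_manifest(image, m))          # True: content and record intact
print(verify_manifest(image + b"x", m))   # False: content was altered
```

The point the discussion raised carries over directly: if platforms attach and check such records, audiences gain a mechanical way to learn whether a work was AI-generated and whether it has been tampered with since.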

📄 Session Report

A detailed written session report is available here:

🔗 References

  • Al-kfairy, M., et al. (2024). Ethical challenges and solutions of generative AI.
  • Das, S., & Kundu, R. (2024). The ethics of artificial intelligence in creative arts.
  • Karnouskos, S. (2020). Artificial intelligence in digital media: The era of deepfakes.
  • Lutz, C. (2019). Digital inequalities in the age of artificial intelligence.
  • U.S. Copyright Office. (2025). Copyright and artificial intelligence, part 2: Copyrightability.

    AI and Access, e.g. Education, Job Market

    Landing page for the conference session on the IPILM blog: 7th IPILM Conference

    AI-generated with ChatGPT

    Quick facts and insights

    Role of AI
    • AI broadens access via adaptive tutoring, predictive analytics, and multilingual support (Yeo & Lansford, 2025).
    • ITS (Intelligent Tutoring Systems) demonstrate measurable improvements in learning outcomes in diverse contexts (Holmes et al., 2019).
    • UNESCO (2024) stresses AI’s potential in low‑resource environments when supported by policy.
    What are important questions?
    • What is AI’s role in improving access to education for diverse learners?
    • How does AI help in personalised learning, and why is it important for inclusive education?
    • In what ways is AI transforming the job market and creating new opportunities?
    • What skills do learners need to stay relevant in an AI-driven job environment?
    • How can AI support equal access to job information and career guidance?

    Summary of the topic

    This blog post examined the role of artificial intelligence in improving access to education and its broader implications for the job market. A key focus was on how AI can support inclusive education through personalized learning, intelligent tutoring systems, adaptive feedback, and multilingual support, thereby addressing diverse learning needs. At the same time, the presentation critically discussed structural and ethical challenges, including algorithmic bias, data protection and privacy risks, limited transparency of AI systems, and generally low levels of AI literacy among users. In addition, the presentation highlighted global inequalities in access to AI, emphasizing that countries with stronger digital infrastructure and higher AI preparedness benefit more from AI adoption, while others risk being left behind. Two empirical case studies were used to support these points: one analyzing teachers’ trust in AI in education across different countries, and another examining the impact of generative AI on employment, skill requirements, and labor market inequalities. Overall, the contribution emphasized that AI offers significant opportunities, but only if implemented responsibly, ethically, and with equal access in mind.

    AI-generated with ChatGPT

    https://www.youtube.com/watch?v=l6d_0PB0Pbg
    The video shows why job losses are occurring in some areas, which professions are particularly affected, and which skills will be crucial in the future. It also discusses how to develop strategically in your current job, the continuing role of education, and why a strong personal positioning is becoming increasingly important in the age of AI. Finally, a clearly structured three-step approach for remaining professionally relevant in the long term is presented.

    🟢Advantages of AI in Education

    (UNESCO, 2024; Yeo & Lansford, 2025; Holmes et al., 2019)

    Healthcare

    AI simulations allow students to practise surgeries and diagnoses safely.

    Predictive models help students understand real-world medical decision-making.

    Multilingual virtual assistants support global medical learners.

    Finance

    AI financial modelling tools prepare students for real-market scenarios.

    Risk-assessment simulations improve practical decision-making skills.

    Adaptive learning helps students strengthen weak conceptual areas.

    Education

    Personalised learning using reinforcement-learning models.

    ITS improves learning outcomes across diverse learners.

    AI improves accessibility for learners with disabilities through speech-to-text, translation, and similar tools.
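The idea of personalised learning with reinforcement-learning-style models can be made concrete with a minimal sketch: an epsilon-greedy bandit that steers a learner toward the topic where their current success rate is lowest, while occasionally exploring other topics. This is an illustrative toy, not drawn from any of the cited systems; the class and topic names are invented for the example.

```python
import random

class ExerciseSelector:
    """Epsilon-greedy bandit: usually pick the topic where the learner
    currently succeeds least often (their weakest area), but with
    probability epsilon pick a random topic to keep exploring."""

    def __init__(self, topics, epsilon=0.1):
        self.epsilon = epsilon
        self.attempts = {t: 0 for t in topics}
        self.correct = {t: 0 for t in topics}

    def success_rate(self, topic):
        if self.attempts[topic] == 0:
            return 0.0  # unseen topics are treated as weakest
        return self.correct[topic] / self.attempts[topic]

    def next_topic(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.attempts))   # explore
        return min(self.attempts, key=self.success_rate)  # exploit: weakest topic

    def record(self, topic, was_correct):
        """Update the learner model after each answered exercise."""
        self.attempts[topic] += 1
        self.correct[topic] += int(was_correct)

selector = ExerciseSelector(["fractions", "algebra", "geometry"])
selector.record("fractions", True)
selector.record("algebra", False)
```

Real intelligent tutoring systems layer far richer learner models on top of this loop, but the feedback cycle — observe performance, update an estimate, choose the next item — is the same basic mechanism.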

    https://www.youtube.com/watch?v=hJP5GqnTrNo
    Sal Khan, founder of Khan Academy, is convinced that artificial intelligence can greatly improve the education system. He shows how AI can support students through personalized learning assistance and teachers through digital assistance systems, and introduces new features of the educational chatbot Khanmigo.

    🔴Disadvantages of AI in Education

    (Marín et al., 2025; Sahar & Munawaroh, 2025; Al-Zahrani & Alasmari, 2024)

    Healthcare

    In the healthcare sector, bias, data protection issues, and a lack of transparency can lead to incorrect or unfair AI decisions, while a lack of human contact and low AI literacy further complicate care.

    Finance

    In the financial sector, bias, data protection risks, and non-transparent AI models have a significant impact on fairness and trust, especially when professionals lack AI expertise.

    Education

    In education, the disadvantages of AI mainly concern academic integrity, data protection, algorithmic fairness, lack of human support, and generally low AI literacy.


    Key findings from two relevant case studies

    Case Study 1: Job Market
    Gen-AI: Artificial Intelligence and the Future of Work
    (Cazzaniga et al., 2024)
    • Highly skilled jobs are most affected by AI, but also benefit the most (increased productivity, better wages).
    • Low-skilled and older workers are at greatest risk of being disadvantaged by AI.
    • Women and knowledge workers are particularly exposed to AI.
    • AI can exacerbate inequalities, especially in countries with poor digital preparedness.
    • The US/UK are well prepared, emerging markets less so, resulting in large global differences.
    Case Study 2: Education
    What Explains Teachers’ Trust in AI in Education Across Six Countries? (Viberg et al., 2025)
    • Perceived benefits ↑ → Trust ↑; Concerns ↑ → Trust ↓. These two were the strongest predictors of trust.
    • AI self-efficacy & AI understanding strongly increased perceived benefits and reduced concerns — indirectly boosting trust.
    • Demographics (age, gender, education) did not significantly influence trust.
    • Cultural values mattered: High uncertainty avoidance, collectivism, and masculinity were associated with differences in trust and concerns.
    • Cross-country variation: Brazil, Israel, and Japan showed higher trust; Norway, Sweden, and USA showed lower trust after adjustments.

    Watch our Screencast here🔽

    https://www.youtube.com/watch?v=6FHxBVBWicE


    📖References

    • Al-Zahrani, A. M., & Alasmari, T. M. (2024): Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications. Humanities and Social Sciences Communications, 11, 912. https://doi.org/10.1057/s41599-024-03432-4
    • Cazzaniga et al. (2024): Gen-AI: Artificial Intelligence and the Future of Work. IMF Staff Discussion Note SDN2024/001, International Monetary Fund, Washington, DC. https://doi.org/10.5089/9798400262548.006
    • Holmes, W., Bialik, M., & Fadel, C. (2019): Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. https://curriculumredesign.org/wp-content/uploads/AIED-Book-Excerpt-CCR.pdf
    • Marín, Y. R., Caro, O. C., Rituay, A. M. C., Llanos, K. A. G., Perez, D. T., Bardales, E. S., Tuesta, J. N. A., & Santos, R. C. (2025): Ethical challenges associated with the use of artificial intelligence in university education. Journal of Academic Ethics, 23(4), 2443–2467. https://doi.org/10.1007/s10805-025-09660-w
    • Sahar, R., & Munawaroh, M. (2025): Artificial intelligence in higher education with bibliometric and content analysis for future research agenda. Discover Sustainability, 6(1). https://doi.org/10.1007/s43621-025-01086-z
    • UNESCO. (2024): AI and inclusive education: Policy guidance for promoting equity. United Nations Educational, Scientific and Cultural Organization. https://doi.org/10.54675/PCSP7350
    • Viberg, O., Cukurova, M., Feldman-Maggor, Y., Alexandron, G., Shirai, S., Kanemune, S., Wasson, B., Tømte, C., Spikol, D., Milrad, M., Coelho, R., & Kizilcec, R. F. (2025): What explains teachers’ trust in AI in education across six countries? International Journal of Artificial Intelligence in Education, 35, 1288–1316. https://doi.org/10.1007/s40593-024-00433-x
    • Yeo, G., & Lansford, J. E. (2025): Effects of artificial intelligence on educational functioning: A review and meta-analysis. Educational Psychology Review, 37(4). https://doi.org/10.1007/s10648-025-10085-5

    Conference evaluation

    Dear students, instructors and visitors,

    Thank you for taking part in the 7th Conference on “Intercultural Perspectives on Information Literacy and Metaliteracy”.

    As mentioned, we have created a survey to gather your opinions and thoughts on the conference. It is completely anonymous and takes about 3–5 minutes. We would appreciate your participation so that we can make future conferences better meet your expectations and interests: https://survey.academiccloud.de/index.php/919939?lang=en

    – Your IPILM Team

    Keynote on IPILM Conference 2025

    We’re happy to announce Dr. Nicola Marae Allain as the keynote speaker at the 7th online conference on “Intercultural Perspectives on Information Literacy and Metaliteracy”. She will talk about

    Mindful Metaliteracy in the Age of Generative AI: Attention, Reflection, and Human Agency

    Abstract
    Generative AI is reshaping how we create, interpret, and communicate knowledge, making reflective judgment and intentional meaning-making more essential than ever. This keynote explores how a metaliterate emphasis on metacognition, authorship, and ethical participation aligns with Ellen Langer’s scholarly work on mindfulness as active, context-sensitive awareness and orientation to learning. Drawing briefly on a classical story from the Zhuangzi on the relationship between mind and machine, the talk highlights how learners can approach AI-generated texts and imagery with greater attention, flexibility, and creative autonomy. The SUNY FACT2 AI Guides and evaluation instrument and Allain and Mackey’s forthcoming book on AI and Metaliteracy are introduced as practical tools for supporting ethical and intentional adoption. Through examples from visual research and interdisciplinary practice, the session demonstrates how mindful metaliteracy can cultivate more thoughtful, creative, and human-centered engagement with generative AI.

    You can find the whole presentation here.


    Dr. Nicola Marae Allain

    Dr. Nicola Marae Allain is Professor of Arts and Media at SUNY Empire State University and co-editor of AI and Metaliteracy: Empowering Learners for the Generative Revolution (Allain & Mackey, Bloomsbury Publishing, 2025). She co-authored the 2nd and 3rd editions of the SUNY FACT² Guide to Optimizing AI in Higher Education and co-chairs SUNY FACT² committees on AI for Teaching, Learning, and Accessibility. Her research and creative practice span digital media arts, visual pedagogy, and emerging technologies, exploring how the arts, culture, and innovation intersect to shape reflective and ethical learning in digital environments. Dr. Allain brings a global, interdisciplinary, and multilingual perspective. She is dedicated to empowering learners to engage critically, creatively, and ethically in an evolving technological world.

    The online conference takes place on 11 December 2025, 14.00–17.45 CET, via BigBlueButton: https://meet.gwdg.de/b/joa-fwe-eor-dys.

    Online conference in December 2025

    On December 11th, 2025, the 7th online conference on “Intercultural Perspectives on Information Literacy and Metaliteracy” will take place.

    Students from Austria, Germany, India, Poland, and the US will present the results of their research on six topics concerning the influence of AI in various areas. You can find all information, time slots, and the link for participation on the conference poster below:

    IPILM project presented at IVEC 2025

    The International Virtual Exchange Conference (IVEC) took place from 14 to 17 October 2025 in Heraklion, Greece. The facilitated discussion on

    The influence of Artificial intelligence on relational dynamics in Virtual Exchange (VE)

    focused on two key questions:

    1. How is AI currently used in Virtual Exchange to enhance relational dynamics and support intercultural collaboration?

    2. What areas of AI application could further strengthen collaboration, and what ethical considerations must be addressed?

    The team leading the session consisted of instructors from two projects that have experienced an increased use of AI among participating students to enhance task-based collaboration: our project Intercultural Perspectives on Information Literacy and Metaliteracy (IPILM) and the Global Case Study Challenge.

    Participants of the discussion brainstormed AI-driven solutions to promote more effective communication and cultural sensitivity and to support conflict resolution, while considering ethical challenges such as bias and over-reliance on technology. The results should provide a foundation for future research and the practical implementation of AI tools to deepen collaboration in Virtual Exchange endeavors.

    You can find the presentation slides here.

    Speakers of the session on the IVEC 2025: Joachim Griesbaum, Eithne Knappitsch and Stefan Dreisiebner