Intercultural perspectives on information literacy and Metaliteracy (IPILM)

IPILM is a learning environment that promotes collaborative knowledge construction among students from diverse cultural backgrounds. Educators and learners from various countries take part in an intercultural learning endeavor.


IPILM 2025 Summer Workshop Information

Who should participate?

Students who are eager to learn about topics related to AI, information literacy, and Metaliteracy in an intercultural learning scenario.

Course content and schedule

Start: July 21, 2025

End: August 1, 2025

The scenario: This transnational online course explores intercultural perspectives on information literacy and metaliteracy in a connected world increasingly shaped by artificial intelligence. The course prepares students from multiple disciplines, including the arts and digital arts, to analyze and produce digital media in collaborative information environments.

The learning tasks

  • AI and warfare
  • AI impact on local culture: Language
  • AI literacy for students and their learning processes
  • Ethics of producing digital media art with GenAI
  • AI information literacy
  • AI impact on information security & data privacy
  • AI and Climate Change

Learning requirements

✓ Most importantly, be curious and open-minded

✓ English language proficiency: The course is held entirely in English

✓ A substantial time commitment is expected

✓ Authentic and respectful way of communicating

Participation is not guaranteed, as the number of places is limited.

Participating institutions

Carinthia University of Applied Sciences; Empire State University, SUNY (State University of New York); Kalinga Institute of Industrial Technology; Symbiosis College of Arts & Commerce, Pune; University of Hildesheim; SWPS University Warsaw; University of Sarajevo

Course poster with schedule, learning scenario and tasks, requirements and participating institutions

“Seekers of Truth in a Digital World” – Alumni reflection on IPILM course

Gabrielle Lerner, an alumna of an IPILM course, shares her reflections on metaliteracy.org, recounting her transformative learning journey.

Reflecting on her experience, she writes:

“About three years ago, I had the incredible opportunity to participate in this course, and it remains one of the most profound educational experiences of my life. The course, Intercultural Perspectives on Information Literacy and Metaliteracy (IPILM), brought together a diverse team of students from around the world, including Germany, Bosnia-Herzegovina, Austria, Poland, India, and the United States. Our differences enriched our collaboration, weaving a vibrant tapestry of perspectives as we explored complex questions about truth, bias, and the framing of information. I am deeply grateful for this experience and for the opportunity to engage with such a talented and thoughtful group of individuals.”

Gabrielle’s reflection highlights the unique intercultural dynamics and thought-provoking challenges that make the IPILM course an exceptional learning opportunity.

Ethics of producing digital art

"Art Gallery" by Ryan McGuire/ CC0 1.0Art Gallery” by Ryan McGuire/ CC0 1.0

Technology changes the world, especially the art world

Artificial intelligence is becoming a vital part of our society, driving innovation and simplifying tasks across many areas of life. However, it also brings challenges such as privacy concerns and the potential for misuse. As AI-generated content becomes more common, it raises crucial ethical questions about authorship, ownership, and creativity. These issues demand thoughtful discussion on regulation and ethical use. Join us in exploring the responsibilities and implications of AI, from creative industries to everyday applications.


Welcome to the world of digital art!

As AI tools revolutionize digital art, they raise crucial ethical questions about authorship and creativity. This is just one of the many engaging topics presented at this year’s IPILM conference. The conference emphasized information literacy in handling AI-generated media, focusing on the responsible creation, dissemination, and access to these media at all levels. Understanding and navigating these digital landscapes is vital. Join us as we explore these critical issues and more.


Summary

This year’s IPILM conference focused on information literacy in dealing with AI tools: in particular, the creation, dissemination, and accessibility of AI-generated media in digital spaces, and how such media are handled at the individual, social, and institutional levels. The presentation addressed the effects of AI-generated digital art, discussing copyright, availability, and the impact on the art world. It also raised the question of how art is defined and whether that understanding needs to change. A survey of 13 respondents from different cultural and academic backgrounds showed that people in the design industry are distrustful of AI because they fear being replaced. At the same time, the innovative potential of AI is recognized, though it can only serve as a tool and cannot replace the artist. The importance of regulation and ethical responsibility in the use of AI tools was also emphasized. The presentation concluded that GenAI is a complex topic not only for the art scene but for society as a whole.


Discussion

The topic received positive feedback, reflected in numerous questions post-presentation. Key discussion points included:

  • Copyrights in creating digital art with GenAI tools and proper attribution.
  • Responsibility in using GenAI and who bears it.
  • Definition of art and whether artificial creations qualify.
  • How viewers can identify artificially generated content and the role of information literacy.
  • Responsibility of consumers and users of digital content.

Specific outcomes of the discussion highlighted that managing AI-generated digital content requires media and information literacy skills. Both providers and users of GenAI tools were recognized as bearing ethical and social responsibilities. Additionally, it was emphasized that consumers must critically evaluate their digital media consumption habits. The project seminar’s aim of teaching information literacy was clearly reflected in the discussion outcomes.


Further content

https://www.youtube.com/watch?v=uA70ZGCC1f4

https://www.artshub.com.au/news/opinions-analysis/exploring-the-ethics-of-artificial-intelligence-in-art-2694121

AI Literacy for Teacher Education

Image: a human hand and a machine hand (symbolizing AI) reaching toward each other

Conference Session – Report Group 5


Introduction

The conference session focused specifically on AI Literacy for Teacher Education, emphasizing the importance of equipping educators with the knowledge and tools to use AI effectively and responsibly in their classrooms. The discussion highlighted the role of AI literacy in teacher education and how it helps teachers navigate ethical challenges while integrating AI tools into their teaching practices.

The session included insights from an international survey of 33 educators from seven countries, exploring their experiences with AI tools like ChatGPT and Kahoot. While the survey offered some valuable insights, it should be noted that the findings were exploratory and not fully conclusive. These results were therefore complemented by relevant research literature, which provides a more comprehensive understanding of the role of AI literacy in teacher education.


Discussion: Questions and Answers

The discussion focused on the central question of how AI can support teacher education. AI literacy was recognized as essential in helping teachers understand and navigate AI tools. It was emphasized that AI can provide personalized training materials and learning experiences tailored to individual teachers’ knowledge and needs. However, it was also made clear that AI should not be seen as a replacement for teachers but rather as a tool to enhance their teaching practices.

Further points of discussion centered on balancing the use of AI with the need to avoid over-reliance on technology. It was stressed that continuous reflection and ethical evaluation are essential to ensure that AI remains a valuable pedagogical tool. Additionally, the availability of training programs and resources, such as online courses and workshops, was recognized as critical for fostering AI literacy among educators.

Recent studies reinforce the need for AI literacy in teacher education. For example, tools like ChatGPT have been shown to help educators draft lesson plans and assessments, yet studies also point to limitations such as the potential for AI to generate incorrect or biased information. The research advocates for integrating AI literacy into teacher training programs, focusing on ethical use and aligning AI tools with educational goals.

In conclusion, the importance of AI literacy in teacher education cannot be overstated. Equipping educators with the necessary skills to use AI responsibly and effectively will help them enhance their teaching practices while ensuring ethical standards are upheld. The goal should always be to empower educators in the use of AI, fostering critical thinking and responsible technology use within teacher education programs.


Our Literature

Owan, V. J., Abang, K. B., Idika, D. O., Etta, E. O., & Bassey, B. A. (2023). Exploring the potential of artificial intelligence tools in educational measurement and assessment. Eurasia Journal of Mathematics, Science and Technology Education, 19(8), em2307. https://www.ejmste.com/article/exploring-the-potential-of-artificial-intelligence-tools-in-educational-measurement-and-assessment-13428

Whalen, J., & Mouza, C. (2023). ChatGPT: challenges, opportunities, and implications for teacher education. Contemporary Issues in Technology and Teacher Education, 23(1), 1-23. https://www.learntechlib.org/p/222408/

Our Video:

Further Literature:

Sperling, K., Stenberg, C. J., McGrath, C., Åkerfeldt, A., Heintz, F., & Stenliden, L. (2024). In search of artificial intelligence (AI) literacy in Teacher Education: A scoping review. Computers and Education Open, 100169. https://www.sciencedirect.com/science/article/pii/S2666557324000107

Ng, D. T. K., Leung, J. K. L., Chu, K. W. S., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504-509. https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/pra2.487

Ding, A. C. E., Shi, L., Yang, H., & Choi, I. (2024). Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Computers and Education Open, 6, 100178. https://www.sciencedirect.com/science/article/pii/S2666557324000193

Further Video about AI and Education:

https://www.youtube.com/watch?v=sD8lcbebp7Q

Free Workshop for Teachers as an example: 

https://www.aiforeducation.io/ai-course

AI impact on democracy: Mis- and Disinformation


Conference Session Report: 6th IPILM Conference, December 12, 2024

Image credit: cyano66/iStock, Audience following fake news on television. Uploaded on March 12, 2024, Italy.¹

Summary

The presentation explored the dual impact of AI on democracy, highlighting its transformative potential but also significant risks. While AI drives innovation, it also enables mis- and disinformation through deepfakes and fabricated content, undermining trust in institutions and public opinion. A self-conducted survey of 40 students found that the impact on political opinion is limited, but trust in democratic systems is slightly weakened.

It also indicated that public awareness of AI-generated disinformation is low, highlighting the urgent need for education and regulation. Case studies, including the US and EU elections, showed how difficult it is to detect disinformation and assess its impact. The presentation concluded with a call for ethical AI practices, stronger regulation and public awareness efforts to preserve democratic integrity in the face of evolving technological challenges.

Discussion

Following the presentation, the discussion addressed several questions on AI-driven disinformation, including the following:

An audience member asked how to enhance awareness of its impact on public opinions and trust in democracy. The group emphasized the importance of education through public campaigns and media literacy programs, as well as providing tools to help users better assess digital content, such as the checklist from the European Digital Media Observatory on detecting AI-generated content and the Deepfake Detection Challenge from MIT Media Lab.

Another question focused on how initiatives like IPILM could tackle the issue in communities with varying levels of digital literacy. The group emphasized the need for tailored approaches, recommending advanced tools for digitally literate audiences and more user-friendly methods for those with lower digital skills, supported by partnerships with local organizations and community leaders to ensure accessibility and cultural relevance.

The final question addressed how social media platforms can label AI-generated content without infringing on freedom of expression. The group stressed the need for clear, standardized practices that prioritize transparency and education over restrictions, ensuring that labeling informs rather than controls user behavior. The AI Regulation for Public Service Media Analysis by the European Broadcasting Union provides valuable insights into regulatory approaches that balance transparency and accountability with the protection of free expression.

Overall, the group pointed to a balance of awareness, tailored strategies, and ethical standards as crucial to managing the complex influence of AI-driven disinformation.


Our Video

https://www.youtube.com/watch?v=SCZnniTrBDw

Further reading & information

European Digital Media Observatory. (2024, April 5). Tips for users to detect AI-generated content. Retrieved from https://edmo.eu/publications/tips-for-users-to-detect-ai-generated-content/ (last access: 15/01/2025).

MIT Media Lab. (n.d.). Detect DeepFakes: How to counteract misinformation created by AI. Retrieved from https://www.media.mit.edu/projects/detect-fakes/overview/ (last access: 15/01/2025).

Wistehube, S. (2024, September 13). AI regulation: Are public service media’s needs being met? European Broadcasting Union. Retrieved from https://www.ebu.ch/guides/open/report/ai-regulation-public-service-media-analysis (last access: 15/01/2025).

  1. Image credit: cyano66/iStock, Audience following fake news on television. Uploaded on March 12, 2024, Italy. https://www.istockphoto.com/de/foto/audience-following-fake-news-on-television-gm2062674413-564020853 ↩︎
  2. Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation.
    Data & Policy, 3, e32. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/7C4BF6CA35184F149143DE968FC4C3B6/S2632324921000201a.pdf/the-role-of-artificial-intelligence-in-disinformation.pdf (last access: 15/01/2025). ↩︎
