IPILM is a learning environment that promotes collaborative knowledge construction among students from diverse cultural backgrounds. Educators and learners from various countries take part in an intercultural learning endeavor.
Gabrielle Lerner, an alumna of an IPILM course, shares her reflections on metaliteracy.org, recounting her transformative learning journey.
Reflecting on her experience, she writes:
“About three years ago, I had the incredible opportunity to participate in this course, and it remains one of the most profound educational experiences of my life. The course, Intercultural Perspectives on Information Literacy and Metaliteracy (IPILM), brought together a diverse team of students from around the world, including Germany, Bosnia-Herzegovina, Austria, Poland, India, and the United States. Our differences enriched our collaboration, weaving a vibrant tapestry of perspectives as we explored complex questions about truth, bias, and the framing of information. I am deeply grateful for this experience and for the opportunity to engage with such a talented and thoughtful group of individuals.”
Gabrielle’s reflection highlights the unique intercultural dynamics and thought-provoking challenges that make the IPILM course an exceptional learning opportunity.
Technology changes the world, especially the art world
Artificial intelligence is becoming a vital part of our society, driving innovation and simplifying tasks across many areas of life. However, it also brings challenges such as privacy concerns and the potential for misuse. As AI-generated content becomes more common, it raises crucial ethical questions about authorship, ownership, and creativity. These issues demand thoughtful discussion on regulation and ethical use. Join us in exploring the responsibilities and implications of AI, from creative industries to everyday applications.
Welcome to the world of digital art!
As AI tools revolutionize digital art, they raise crucial ethical questions about authorship and creativity. This is just one of the many engaging topics presented at this year’s IPILM conference. The conference emphasized information literacy in handling AI-generated media, focusing on the responsible creation and dissemination of, and access to, these media at all levels. Understanding and navigating these digital landscapes is vital. Join us as we explore these critical issues and more.
Summary
This year’s IPILM conference focused on information literacy in dealing with AI tools, in particular the mediation and dissemination of, and access to, AI-generated media in the digital space, and how they are handled at the individual, social, and institutional levels. The presentation addressed the effects of AI-generated digital art, discussing copyright, distribution, and the consequences for the art world. It also raised the question of how art is defined and whether that understanding needs to change. A survey of 13 respondents from different cultural and academic backgrounds showed that people in the design industry are distrustful of AI because they fear being replaced. At the same time, AI’s innovative potential is recognized, but respondents saw it as a tool that cannot replace the artist. The importance of regulation and ethical responsibility in the use of AI tools was also emphasized. The presentation concluded that GenAI is a complex topic not only for the art scene but for society as a whole.
Discussion
The topic received positive feedback, reflected in numerous questions post-presentation. Key discussion points included:
Copyrights in creating digital art with GenAI tools and proper attribution.
Responsibility in using GenAI and who bears it.
Definition of art and whether artificial creations qualify.
How viewers can identify artificially generated content and the role of information literacy.
Responsibility of consumers and users of digital content.
Specific outcomes of the discussion highlighted that managing generated digital content requires media and information literacy skills. Both providers and users of GenAI tools were recognized as bearing ethical and social responsibilities. Additionally, it was emphasized that consumers must critically evaluate their digital media consumption habits. The project seminar’s aim of teaching information literacy was clearly reflected in the discussion outcomes.
The conference session focused specifically on AI Literacy for Teacher Education, emphasizing the importance of equipping educators with the knowledge and tools to use AI effectively and responsibly in their classrooms. The discussion highlighted the role of AI literacy in teacher education and how it helps teachers navigate ethical challenges while integrating AI tools into their teaching practices.
The session included insights from an international survey of 33 educators from seven countries, exploring their experiences with AI tools like ChatGPT and Kahoot. While the survey offered some valuable insights, it should be noted that the findings were exploratory and not fully conclusive. These results were therefore complemented by relevant research literature, which provides a more comprehensive understanding of the role of AI literacy in teacher education.
Discussion: Questions and Answers
The discussion focused on the central question of how AI can support teacher education. AI literacy was recognized as essential in helping teachers understand and navigate AI tools. It was emphasized that AI can provide personalized training materials and learning experiences tailored to individual teachers’ knowledge and needs. However, it was also made clear that AI should not be seen as a replacement for teachers but rather as a tool to enhance their teaching practices.
Further points of discussion centered on balancing the use of AI with the need to avoid over-reliance on technology. It was stressed that continuous reflection and ethical evaluation are essential to ensure that AI remains a valuable pedagogical tool. Additionally, the availability of training programs and resources, such as online courses and workshops, was recognized as critical for fostering AI literacy among educators.
Recent studies reinforce the need for AI literacy in teacher education. For example, tools like ChatGPT have been shown to help educators draft lesson plans and assessments, yet studies also point to limitations such as the potential for AI to generate incorrect or biased information. The research advocates for integrating AI literacy into teacher training programs, focusing on ethical use and aligning AI tools with educational goals.
In conclusion, the importance of AI literacy in teacher education cannot be overstated. Equipping educators with the necessary skills to use AI responsibly and effectively will help them enhance their teaching practices while ensuring ethical standards are upheld. The goal should always be to empower educators in the use of AI, fostering critical thinking and responsible technology use within teacher education programs.
Our Literature
Owan, V. J., Abang, K. B., Idika, D. O., Etta, E. O., & Bassey, B. A. (2023). Exploring the potential of artificial intelligence tools in educational measurement and assessment. Eurasia Journal of Mathematics, Science and Technology Education, 19(8), em2307. https://www.ejmste.com/article/exploring-the-potential-of-artificial-intelligence-tools-in-educational-measurement-and-assessment-13428
Whalen, J., & Mouza, C. (2023). ChatGPT: challenges, opportunities, and implications for teacher education. Contemporary Issues in Technology and Teacher Education, 23(1), 1-23. https://www.learntechlib.org/p/222408/
Sperling, K., Stenberg, C. J., McGrath, C., Åkerfeldt, A., Heintz, F., & Stenliden, L. (2024). In search of artificial intelligence (AI) literacy in teacher education: A scoping review. Computers and Education Open, 6, 100169. https://www.sciencedirect.com/science/article/pii/S2666557324000107
Ding, A. C. E., Shi, L., Yang, H., & Choi, I. (2024). Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Computers and Education Open, 6, 100178. https://www.sciencedirect.com/science/article/pii/S2666557324000193
Conference Session Report: 6th IPILM Conference on December 12, 2024
Image credit: cyano66/iStock, Audience following fake news on television. Uploaded on March 12, 2024, Italy.
Summary
The presentation explored the dual impact of AI on democracy, highlighting its transformative potential as well as significant risks. While AI drives innovation, it also enables disinformation and misinformation through deepfakes and fabricated content, undermining trust in institutions and distorting public opinion. A self-conducted survey of 40 students found that the impact on political opinion is limited, but that trust in democratic systems is slightly weakened.
It also indicated that public awareness of AI-generated disinformation is low, highlighting the urgent need for education and regulation. Case studies, including the US and EU elections, showed how difficult it is to detect disinformation and assess its impact. The presentation concluded with a call for ethical AI practices, stronger regulation and public awareness efforts to preserve democratic integrity in the face of evolving technological challenges.
Discussion
Following the presentation, the discussion addressed several questions on AI-driven disinformation, including the following:
An audience member asked how to enhance awareness of its impact on public opinions and trust in democracy. The group emphasized the importance of education through public campaigns and media literacy programs, as well as providing tools to help users better assess digital content, such as the checklist from the European Digital Media Observatory on detecting AI-generated content and the Deepfake Detection Challenge from MIT Media Lab.
“Disinformation is a long-standing problem, but given that AI techniques present in the digital ecosystem create new possibilities to manipulate individuals effectively and at scale, multiple ethical concerns arise or are exacerbated.” (Bontridder & Poullet, 2021, p. 5)
Another question focused on how initiatives like IPILM could tackle the issue in communities with varying levels of digital literacy. The group emphasized the need for tailored approaches, recommending advanced tools for digitally literate audiences and more user-friendly methods for those with lower digital skills, supported by partnerships with local organizations and community leaders to ensure accessibility and cultural relevance.
The final question addressed how social media platforms can label AI-generated content without infringing on freedom of expression. The group stressed the need for clear, standardized practices that prioritize transparency and education over restrictions, ensuring that labeling informs rather than controls user behavior. The AI Regulation for Public Service Media Analysis by the European Broadcasting Union provides valuable insights into regulatory approaches that balance transparency and accountability with the protection of free expression.
Overall, the group pointed to a balance of awareness, tailored strategies, and ethical standards as crucial to managing the complex influence of AI-driven disinformation.
“Cherokee Syllabary”, Kaldari, CC0, via Wikimedia Commons
Group 1: Session report on the IPILM conference
Introduction
The presentation explored the impact of AI on local culture, focusing on the aspect of language. It provided an overview of Natural Language Processing (NLP) and its applications in language technology, including machine translation, speech synthesis and speech recognition.
The presentation highlighted the underrepresentation of minority languages and the dependence on English-language texts. It focused on the role of artificial intelligence in language preservation and documentation, particularly for endangered languages like Cherokee. Alongside technological advances, challenges such as ethical considerations and cultural biases were addressed.
The case study on the Cherokee language showed practical approaches to language revitalization and the importance of balancing technological innovation with cultural authenticity.
Discussion
The discussion addressed critical questions about making AI tools accessible to underrepresented communities, beyond researchers and developers. The group emphasized the need for user-friendly platforms that cater to different levels of technological expertise. Efforts should also be made to make these tools available in different languages, particularly for communities without advanced digital infrastructure.
Next, the participants explored how technology aids in documenting and revitalizing lesser-known languages. The group emphasized that AI, especially natural language processing (NLP), was important for documenting and revitalizing lesser-known languages by analyzing linguistic data to create digital archives, educational resources, and language learning apps.
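As a concrete illustration of the kind of corpus processing the group described, the sketch below builds a tiny word-frequency list and concordance from transcribed sentences, the raw material for a digital archive or learning app. It is a minimal, hypothetical example: the `build_archive` function is invented for illustration, the sample strings are romanized placeholders rather than real Cherokee, and real documentation pipelines rely on far richer linguistic annotation.

```python
from collections import Counter, defaultdict

def build_archive(transcriptions):
    """Build a tiny 'digital archive' from transcribed sentences:
    a word-frequency list plus a concordance mapping each word to
    the sentences in which it appears."""
    freq = Counter()
    concordance = defaultdict(list)
    for sentence in transcriptions:
        for word in sentence.lower().split():
            freq[word] += 1
            concordance[word].append(sentence)
    return freq, dict(concordance)

# Invented sample transcriptions (placeholders, not real Cherokee)
sentences = ["osiyo nigada", "osiyo dohitsu"]
freq, conc = build_archive(sentences)
print(freq.most_common(1))  # [('osiyo', 2)]
```

Even this toy frequency list hints at why such tools matter: the most common words are natural starting points for dictionaries and beginner lessons.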
A specific question focused on the role of the Cherokee community in developing AI tools for their language. In our opinion, the Cherokee community should have led the development of AI tools to preserve the cultural and linguistic integrity of their language, ensuring the decolonization of research.
Another important question delved into whether AI efforts in language preservation also represented different dialects and regional variations, to avoid promoting a homogenized culture. AI efforts in language preservation should recognize and represent regional dialects and variations to avoid creating a homogenized culture. This can be achieved by training AI models on diverse linguistic data that includes various dialects, regional accents, and cultural nuances.
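One practical safeguard against the homogenization discussed above is to balance the training corpus across dialect labels before any model is trained, so that no single variant dominates. The sketch below illustrates the idea under stated assumptions: the `balanced_sample` helper, the dialect labels, and the sentences are all invented for illustration, and a real corpus would carry much richer metadata.

```python
import random
from collections import defaultdict

def balanced_sample(corpus, per_dialect, seed=0):
    """Draw an equal number of sentences per dialect so that no
    single variant dominates the training data."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_dialect = defaultdict(list)
    for dialect, sentence in corpus:
        by_dialect[dialect].append(sentence)
    sample = []
    for dialect, sentences in sorted(by_dialect.items()):
        k = min(per_dialect, len(sentences))
        sample.extend((dialect, s) for s in rng.sample(sentences, k))
    return sample

# Invented corpus, deliberately skewed toward one dialect
corpus = [
    ("eastern", "sentence e1"), ("eastern", "sentence e2"),
    ("eastern", "sentence e3"),
    ("western", "sentence w1"), ("western", "sentence w2"),
]
sample = balanced_sample(corpus, per_dialect=2)
# yields 2 eastern + 2 western items, regardless of corpus skew
```

Capping each dialect at the same count trades away some raw data for representativeness, the same balance between coverage and cultural nuance the discussion called for.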
Further information
If you are interested in learning more about Cherokee, you can visit the Cherokee Nation website and watch the following video:
Zhang, Kexun; Choi, Yee; Song, Zhenqiao; He, Taiqi; Wang, William; Li, Lei (2024): Hire a Linguist!: Learning Endangered Languages in LLMs with In-Context Linguistic Descriptions. In: Findings of the Association for Computational Linguistics: ACL 2024, 15654-15669.
This study explores how the innovative LINGOLLM project enables large language models (LLMs) to work with endangered languages by incorporating linguistic descriptions such as dictionaries and grammar books. This approach enhances AI’s translation and processing capabilities for rare and endangered languages.
Zaki, Muhammad Zayyanu; Ahmed, Umar (2024): Bridging linguistic divides: The impact of AI-powered translation systems on communication equity and inclusion. In: Journal of Translation and Language Studies 5(2), 20-30.
This article explores how AI-powered translation systems influence communication equity and inclusion, addressing challenges such as linguistic bias and accessibility for marginalized communities.
Zhang, Shiyue; Frey, Ben; Bansal, Mohit (2022): How can NLP Help Revitalize Endangered Languages? A Case Study and Roadmap for the Cherokee Language. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 1529-1541.
The study explores how Natural Language Processing (NLP) can aid in revitalizing endangered languages, with a focus on Cherokee. It examines the language’s current state, resources, and challenges, and suggests strategies for collaboration with indigenous communities.
Our presentation
Below, you’ll find the screencast of the presentation on “AI Impact on Culture: Language”:
The conference session of group 2 focused on AI’s impact on local culture, specifically cultural perceptions of AI regulation and the ways AI influences cultural values and ethics, with an emphasis on regulatory and governance challenges. The presentation gave an overview of the governance challenges posed by the rapid evolution of AI, along with its ethical and regulatory implications, and examined how governance frameworks can be adapted to align with local cultural perceptions. It emphasized that AI systems must be developed and deployed according to principles such as transparency, accountability, and fairness to address these challenges effectively.
Insights from a survey conducted across Germany, Poland, Bosnia-Herzegovina, and India illustrated the complex relationship between AI’s global development and its local cultural impacts, underscoring the importance of integrating cultural values into regulatory frameworks.
Discussion: Question and answers
The discussion addressed several questions about approaches to AI regulation, as well as questions about the survey conducted for the presentation. One question asked for the group’s opinion on a universal approach to AI regulation and its challenges. The group emphasized that creating a universal framework is highly complex, perhaps even impossible, because each culture has its own criteria to consider when developing such regulations. For example, South American countries might take the EU AI Act as a model but would have to adapt that framework to their own cultural landscape.
Another question asked whether the survey produced any surprising results. The group noted that, with regard to their own countries, many answers were expected rather than surprising; German participants, for example, placed a strong focus on data privacy.
Our screencast
Below, you can find the screencast that group 2 created for this topic.
Further information about AI’s impact on local culture: Cultural perception of AI
For more information, group 2 gathered a few resources for you:
Academic paper: Managing the race to the moon: Global policy and governance in artificial intelligence regulation—A contemporary overview and an analysis of socioeconomic consequences (2024) by Walter
This paper covers additional regions, their cultural perceptions of AI, and current developments in AI regulation.
Ge, X., Xu, C., Misaki, D., Markus, H. R., & Tsai, J. L. (2024). How culture shapes what people want from AI. Association for Computing Machinery.
Walter, Y. (2024). Managing the race to the moon: Global policy and governance in artificial intelligence regulation—A contemporary overview and an analysis of socioeconomic consequences. Journal of Global Policy and Technology, 5(2), 45-67.
YouTube video: The EU’s AI Act Explained
The EU has rolled out the world’s first AI regulation, classifying AI into four risk levels with tailored rules for each. This video explains the EU’s AI Act in a short and compact way.
(DataCamp. (2024, May 16). Understanding AI Regulations Across the World | Experts Share All [Video]. YouTube. Retrieved January 16, 2025, from https://www.youtube.com/watch?v=JCFao04h1ZM)