Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.
AI Literacy and Building Resilience to Misinformation
As an associate director of a college library, I’ve watched artificial intelligence technologies become commonplace in society. They help shape our media. They influence our social interactions.
And they’re reshaping education.
Through conversations with colleagues and students, I discovered an urgent need: a course that demystifies AI and provides students with tools to navigate a rapidly evolving digital landscape.
That need is especially pressing given the growing prevalence of online misinformation.
AI-driven social media algorithms – used by Facebook and TikTok, for example – and content generation tools like ChatGPT can amplify certain voices while obscuring others.
Those using AI tools maliciously can also create entirely false content, such as deepfake videos or misleading AI-generated news articles. By understanding these dynamics, students can become more discerning consumers and responsible users of information.
I worked with faculty member Michael Griffin and associate director of academic technology Tamatha Perlman to design a course that introduces students to several AI fields.
They include machine learning – how computer systems learn patterns from data rather than following explicit instructions – and deep learning, which uses artificial neural networks to learn from large amounts of data.
We also delve into generative AI – a type of AI that can produce text, images, videos and other forms of data – and prompt engineering, the practice of crafting inputs that guide AI models toward useful outputs.
The course explores two themes: AI literacy and building resilience to misinformation.
Students learn AI technologies such as natural language processing, which allows machines to understand and generate human language, and generative AI. They explore how these tools influence the ways information is created, shared and interpreted.
We then delve into the ethical implications of AI, from data privacy to bias and algorithmic transparency – the principle of making AI decision-making processes understandable and open for review.
The idea is to foster a nuanced understanding of AI’s potential benefits. One example is AI tools that personalize educational content by adapting lessons to a student’s learning pace and style.
We also examine its potential pitfalls. Some AI hiring tools, for example, have discriminated against specific demographic groups, such as systems that disproportionately rejected women’s resumes for technical jobs.
The course also explores cognitive biases, or systematic patterns of deviation from rationality in judgment, which can make people more susceptible to misinformation.
We look at confirmation bias, the inclination to search for information that supports one’s preexisting beliefs. We also examine the recency effect, the tendency to give more weight to recent information than to earlier data.
Students experiment with AI tools such as ChatGPT, Gemini and NotebookLM. They do so to examine misinformation case studies and participate in discussions on some complex questions.
They include: When does AI assist in learning? When does it hinder learning? How can AI be used more responsibly? How can we tell when information is being manipulated?
AI tools are increasingly embedded in social media and news content. This makes it critical for students to discern credible sources from misleading content.
As AI technologies evolve, so too do the methods for spreading misinformation.
They include AI-generated images and synthetic media, which is digitally created or altered content designed to appear authentic.
All of these technologies can be difficult to identify and authenticate. This course gives students the tools to make informed decisions in a digital age.
Many students are surprised to learn that AI-powered platforms tailor content to match their interests.
For example, watching a series of videos on a particular topic can lead to being shown increasingly similar content, reinforcing existing beliefs. This, in turn, can shape perceptions and distort reality.
To address this, we introduce students to practical techniques for broadening their information sources. They also learn to cross-reference facts and scrutinize AI-curated content.
For instance, we practice a technique called “lateral reading,” where students verify information by checking what multiple independent sources say about a claim, rather than evaluating a single page in isolation.
Our syllabus was inspired by UNESCO’s Media and Information Literacy Curriculum – E-version.
Besides academic journal articles, we draw extensively from articles and videos published by The New York Times, The Washington Post and other major news outlets to analyze misinformation stories. These sources offer ample real-life examples, enabling students to engage with timely and relevant case studies.
We also review the AI Competency Framework for Students and the AI Competency Framework for Teachers, launched by UNESCO in September 2024. These frameworks provide valuable insights into fostering AI literacy and ethical engagement with AI technologies.
The goal is to empower students to approach digital information with a critical and informed mindset. This will position them as responsible citizens in a world increasingly shaped by AI.
The course will also help students feel more confident when identifying credible sources, cross-checking information and making sense of AI-powered content. These skills will serve students well in their academic and personal lives.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Mozhdeh Khodarahmi, Macalester College
Mozhdeh Khodarahmi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.