Ahead of AI’s misinformation curve: let’s talk visual AI literacy

This exciting and timely initiative addresses recent advancements in visual generative AI, which allow anyone to create highly realistic deepfakes, leading to misinformation, personal and social harms, and the cementing of stereotypes and representational biases. Led by Adi Kuntsman and Jessica Elias from DISC (Digital Society Research Group), in collaboration with AI artist and researcher Dr Sam Martin, the project explores these issues in dialogue with national and international experts in digital literacy, misinformation and technological challenges, and shares AI literacy tools with local communities across Manchester and beyond.

With recent advances in visual artificial intelligence (AI), it’s now possible for anyone to create highly realistic images and videos—sometimes called “deepfakes”—that can be very hard to tell apart from real ones. While these tools offer exciting new ways to create art, tell stories, and make information more engaging, they also carry serious risks. Deepfakes and other AI-generated visuals can be used to spread false information, create fear or confusion, and even stir up division between communities, often without people realising that the images are AI-generated rather than authentic. As it becomes harder to tell what is real and what is fake, communities may struggle to trust what they see online, making it easier for misinformation to spread and harder for people to find common ground.

In response, we are launching a community-focused project across Manchester to support a clearer understanding of what visual AI is, how it works, and what it means for our everyday lives. Through open conversations and interactive workshops, we’ll explore both the positive uses of this technology, such as creating accessible educational materials, and the potential harms it can bring, including misinformation and bias. We are also developing a practical, easy-to-use online toolkit that community members can use to spot manipulated content, promote digital responsibility, and share trusted information. Participants will learn how to recognise AI-generated content, ask critical questions, and respond thoughtfully when faced with unfamiliar or misleading visuals.

Our aim is to ensure that people feel informed and empowered, rather than afraid, as these technologies spread through our communities and online spaces. The project draws on research carried out in our Digital Society (DISC) research group and through our new “AI Literacy” initiative. It is led by Dr Adi Kuntsman and Ms Jessica Elias, who both have substantial research records in the social and cultural impacts of new technologies, together with AI artist and data analyst Dr Sam Martin, who has extensive experience with generative AI technologies, AI literacy, and the ethical use of AI in social research.

Public Panel Online

Friday 30 May 2025, 1:00–2:30pm

Other Events

Launch of online visual AI literacy tool

Wednesday 25 June 2025, time TBC

Community engagement events with Manchester Cathedral

June-July 2025