As our team at Adelphi University Libraries develops an AI roadmap, one recurring theme keeps surfacing: how do we ensure that artificial intelligence literacy—often framed around productivity, efficiency, and tool fluency—doesn’t eclipse our deeper mission of fostering critical inquiry and information literacy?
Our community, like many others, is increasingly using generative AI tools (often without realizing it). While this creates opportunities for instruction in prompt design, citation, and ethical use, it also raises questions about what kind of learning higher education should be promoting.
In libraries, we’ve long championed the values of accuracy, authority, privacy, intellectual freedom, and digital literacy—all of which are foundational to “AI literacy” as well. But I worry that current models risk reducing AI literacy to “how to use ChatGPT effectively” rather than how to think critically about what AI produces, how it shapes knowledge, and how it fits into human reasoning and research ethics. We are exploring how to put our goal into practice: not just teaching students how to use AI, but helping them ask why, when, and whether it should be used at all.
So, we would love to hear from others about how you are framing “AI literacy” within the broader context of information literacy or critical digital literacy.
Replies
You bring up some interesting points, Kimberly! Be sure to check out this week's guest speaker, Reed Hepler, who will be talking about some of the issues you've mentioned here. The recording will be available Thursday!
I agree with you that "elevating the conversation" around GenAI will serve students better than teaching them how to use specific tools, which will continue to change over time.