AI challenges to information literacy

The world is now embroiled in intense discussions about ChatGPT. Canadian librarian Bill Badke adds his comments, from a library, information literacy and educator perspective. He looks at unproductive approaches to students' use of AI tools, worries about the future of research projects in academia and notes how AI-generated text could benefit international students.

Already, major news outlets are marvelling at ChatGPT’s ability to take a few textual prompts and turn them into an essay, poem, email, short story, research paper or whatever. You can ask it questions and it answers, all without any human input on its side. Information professionals know, of course, that the app is crunching through masses of digital knowledge and using text-matching in an amazing way to generate “intelligent” content, yet it seems almost human. And even more such tools are coming soon.

AI as a generator of knowledge has been creeping up on us for a long time. Think Siri or Alexa. Think bots that spew out disinformation on Twitter. Some people view the evolution of knowledge AI as a great boon that will free us from the many hours required to write things. Others see it as the death of the university essay, a potential path to the spread of even more shabby information production, or even a security risk in the hands of cybercriminals.

Dismissing unproductive approaches

It’s all too easy to go down useless paths when looking at implications for information literacy. Let’s first get some of those unproductive approaches out of the way:

  • Content generation AI is still quite primitive and prone to error.

Sure, it is. But when an app can create a paper that looks like the work of a first- or second-year college student, there is certainly a high risk that the traditional higher education essay assignment will disappear. Agreed, the technology is still basic, but look how far it has come in the past 20 years. The products of ChatGPT and similar apps are remarkable precisely because they come so close to human writing. We can’t dismiss AI just because it is undeveloped. Such tools will keep improving until you are hard pressed to distinguish an AI project from a human one.

  • Plagiarism detectors will do the screening for us.

AI-generated content is plagiarism (now being called AIgiarism), so let’s run it by our vetting tools. Sadly, if you do, you’ll quickly discover that your efforts will almost always fail. AI generation doesn’t steal text from existing sources. It creates its own from the immensely huge body of words, phrases, and ideas living on the web. So it’s more akin to the “I got my friend to write my essay” kind of plagiarism than to “I copied a website.” As such, it is difficult to detect, though Turnitin claims it can do so, and there are other apps attempting to catch AI writing, including a “text classifier” from OpenAI itself. But since AI gets better over time, it will become increasingly less detectable. Indeed, OpenAI has quietly taken down its own detector due to low accuracy.

  • We educators should just ignore it. How many students are up on AI tools anyway?

Classic error. Students find everything, especially if it has promise to make their research and writing tasks easier. With the publicity these days about ChatGPT, it’s foolhardy to believe that students won’t discover text-generation AI. They already have.

  • Students know better than to shortcut their education with tools that do the work for them.

This one is ridiculous. Let me tell you what I know from over three decades of experience with students in higher education. There are keeners, there are slackers, and there is a vast host of students in between who are just trying to survive long enough to get a degree. If an undetectable tool can help them produce some content, why not use it?

You see, most students see research and writing as tasks lacking much educational purpose. Why? Because if such activity were educationally valuable, professors would spend more time training and guiding them to do it well. Instead, it’s a game. The way you play the game is to figure out what the Prof wants and deliver that. All you need is a product that passes muster, so a tool that can help you do that could become fair game for the struggling or desperate.

The demise of the research project?

Some educators over past decades have argued that student research projects should be scrapped because students do them poorly and there are better ways to further education. Now, if AI is going to write these projects for our students, then research assignments as an educational tool could be dead. We can’t fight the apps that do an end run around any semblance of compelling our students to think through a research problem, find the resources, and write up a cogent, critical presentation.

But I don’t want to go down the road of abandoning research essays. The student research project offers a near-perfect opportunity to define a problem, gather evidence, interact within the “conversation,” make data-driven conclusions, and persuade a reader that your arguments are sound. It develops student ability to clarify issues, encounter opposing views, wrestle through the numerous difficulties, and finish strongly. Those skills are the essence of much of life, and the loss of them in education would be a terrible thing indeed.

How do we reckon with AI?

We can’t take advice from the Terminator movies and simply wipe out the robots. AI is here to stay. What is more, it will likely get good enough to fool most of us. We can try to ban it, but in the world of the Internet, containment is not an option.

We could ramp up our tools to detect AI text, much as cybersecurity has done with hackers and ransomware. But this means we will always be playing catch-up, never able to conquer the constant development of better AI. If you think there is a way to thwart computer text generation, you are not really paying attention to how far this has advanced. How do you determine whether AI wrote something like this?

In order to help Arctic foxes adapt to climate change, conservation efforts are needed to reduce their vulnerability. This includes protecting Arctic fox habitat, regulating hunting and trapping, and reducing pollution in their habitats. (Do I cite ChatGPT? Not sure.)

Let me suggest a different approach that could revolutionize the way we conceive of the student research project.

It’s not really about the content

I believe we’ve missed the educational point of research assignments. How many student projects will ever be publishable or have a significant impact on scholarship? Research assignments are not just about the content, as if writing a good one meant you had achieved a high educational goal. The goal is not producing good content but enabling students to become skilled researchers and critical thinkers. Surely there are ways to do this while limiting the ability of students to deceive us with AI.

In my view, the best option is to turn research projects into training exercises, breaking them down into several chunks, each of which focusses on particular research skills and becomes the means for professors to provide training through comments and rewrites. I’ve written about this often (see my Teaching Research Processes: The Faculty Role in the Development of Skilled Student Researchers, 2nd ed., EnRoute, 2021). It has also been the focus of my undergraduate and graduate credit research courses.

A faceted or scaffolded research project design works. How do I know? Because I’ve been teaching it for 38 years, and I’ve seen thousands of students move from basic skill levels to intermediate or advanced within one semester. If such a restructuring of the research assignment were common through the curriculum, we would be producing students who are savvy researchers, able to identify problems, tackle them with evidence, and write persuasively.

It's not really about a polished final product that a bot can deliver. It’s about the skills students develop and enlist in creating that product.

But what about the content?

Recently, one of my professor colleagues, whom I admire greatly, wrote me an email:

“Therefore, it seems inevitable that within a very few years some level of ChatGPT-like capabilities—probably considerably better than even what we see today—will be built directly into Microsoft Word. It will become a native function of the standard writing tool that students, indeed all of us, will use routinely, available at the click of a mouse.”

His particular interest is with international students who, despite good ideas and solid research, are limited in their ability to express their research results in clearly flowing English. For him, an AI tool that can clean up halting language and make it flow is just a leg-up to put such students on a par with their native-English fellow students. It’s also possible for a student to write in a native language and then use AI to translate it.

Should AI provide a leg-up to students? It seems reasonable to say, “Yes,” but we all know that language cleanup can all too easily become an opportunity for the bot to do the content generation for us. Certainly, AI tools will be built into our word processors and will enable students, whether local or international, to fix their writing in dramatic ways. But again, it’s not really about the content. It’s about the educational value of walking through the research process.

The bottom line

We have a grand opportunity in a world of AI text generation to take our students down to basics, where they can learn, through faculty-mentored and faceted research assignments, how to develop their arguments, using evidence effectively and arguing cases with due regard to multiple points of view. This may become a golden age for renewed information literacy if we can empower faculty to turn student research projects into training vehicles. It’s about the process and far less about the product.

This will be a tough go, however. Busy faculty members, who generally do not give a lot of thought to student research skill development, may resist the curricular changes and the additional work of mentoring students through their research. Yet many of my faculty colleagues have been startled, even frightened, by ChatGPT. They are open to guidance. It’s time for librarians to get in there and offer support for a new way of doing things. If we hesitate, the opportunity may disappear.

William Badke is associate librarian at Trinity Western University and the author of Research Strategies: Finding Your Way Through the Information Fog, 7th Edition (2021). This article is adapted from his InfoLit Land column, published in the April 2023 issue of Computers in Libraries.