AI, LLMs and Libraries, Oh My

Investigations into how AI and LLMs are affecting libraries, librarians, and their users provide important insights for internet librarians. 


Several presentations during the 2025 ASIS&T conference, its 88th, held in Crystal City, Virginia, USA, focussed on how the various forms of artificial intelligence (AI) and Large Language Models (LLMs) demonstrate the technologies’ impact on libraries worldwide. A very international gathering with almost 600 delegates from 23 countries, the conference reflected the global spread of AI in its papers: over 60 had AI in the title.

In a session titled "The AI Revolution in Libraries", Ognenere (Gabriel) Salubi looked at AI misinformation, which he termed information disorder. AI accelerates all forms of information disorder; examples include deepfakes, impersonations, and psychological manipulation. Libraries sit at the frontline of trust, but we need a new framework that goes beyond traditional information literacy. With the constantly, and rapidly, evolving nature of AI technologies, trying to deal with all the new challenges this presents resembles Whack-A-Mole.

Salubi’s study of news sources revealed six themes related to misinformation:

  1. Fake news and synthetic narratives
  2. Deepfakes and synthetic media
  3. Impersonations and identity fraud
  4. Amplification and bot-driven misinformation
  5. Liar’s dividend and erosion of trust
  6. Political propaganda and narrative control

To cope with these themes, librarians must develop new skills around AI literacy and ethical evaluation, and concentrate on becoming verification partners, not simply curators.

AI Revolutions in Regulation and Reference

Ian Y. Song spoke about putting information professionals on the map of human-centred AI. Concentrating on the digital records management and digital preservation roles of info pros, Song and his co-author Sherry L. Xie framed their discussion in terms of the 2024 EU Artificial Intelligence Act (AIA), but noted that our profession has a history of weak policy engagement and no clear rationale for involvement in AI lawmaking, making human-centred AI an ephemeral goal.

What about GenAI’s effect on reference services? That was the topic of a talk by José Aguiñaga. He and co-authors Norman Mooradian, Souvick Ghosh, and Darra Hofman found new possibilities for personalized, virtual reference services and the potential for enhancing community building. Conversational assistants could replicate many of the interactions currently performed by human reference librarians. Of course, the integration of GenAI into reference services requires an awareness of ethical issues and the possibility of misinformation from hallucinations.

Libraries and Language Models

As a component of AI, LLMs came in for some scrutiny but not to the level that GenAI did. What interested me was the diversity in how researchers considered the uses of language models. Several discussed using LLMs to do content analyses for very specific topic areas, such as TikTok videos on palliative care and motivated reasoning on climate change.

As a productivity enhancer, Hannah Moutran, Devon Murphy, Karina Sánchez, Willem Borkgren, Katie Pierce Meyer, and Josh Conrad reported on their assessment of seven LLMs for metadata creation for, and transcription of, an architectural archive. They used a prompt design framework and Python scripts calling the models’ APIs. In working with archival information, they confronted the issue of harmful and offensive language that was acceptable when the records were created but is not today. Ideally, the model flags such language for review.
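To make the flag-for-review idea concrete, here is a minimal sketch of a post-processing pass over LLM-generated metadata or transcriptions. The term list, record identifier, and sample text are illustrative placeholders, not taken from the project described above; real workflows would draw on community-maintained vocabularies of outdated terminology, and the flags would go to a human reviewer rather than being rewritten automatically.

```python
# Hypothetical sketch: after an LLM transcribes or describes archival
# records, scan the output for terms that were acceptable when the
# records were created but are considered harmful today, and queue
# each occurrence for human review. The term list is a placeholder.

import re

REVIEW_TERMS = {"negro", "crippled", "eskimo"}  # illustrative only

def flag_for_review(record_id: str, text: str) -> list[dict]:
    """Return one flag per occurrence of a review-list term."""
    flags = []
    for match in re.finditer(r"[A-Za-z']+", text):
        if match.group().lower() in REVIEW_TERMS:
            flags.append({
                "record": record_id,
                "term": match.group(),
                "offset": match.start(),
            })
    return flags

flags = flag_for_review("box-12/folder-3",
                        "Drawings for the Negro school annex")
# Each flag identifies the record, term, and position for a reviewer;
# the original text itself is left untouched.
```

The design choice worth noting is that nothing is deleted or rewritten: the historical record stays intact, and human judgment decides how each flagged passage is handled in the public-facing description.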

Another productivity enhancer is the ability of LLMs to generate abstracts for research publications. The whole idea of a well-written abstract is to capture the important key points of a manuscript in the hope that it will entice scholars to read the entire paper. Yumi Kim, Jongwook Lee, and Seungwon Yang analyzed abstracts from last year’s ASIS&T conference and found that AI-generated abstracts came out ahead. That made me wonder how many of this year’s abstracts were written by chatbots.

Perceptions, Practices and the Future of Scholarly Work

A panel discussion on Libraries in the Age of LLMs: Perceptions, Practices, and the Future of Scholarly Work yielded insights from panellists Yuan Li, Haihua Chen, Brady Lund, Rongqian Ma, Le Yang, and Miriam Sweeney as well as a very vocal audience. The three major topics were how AI is shifting what librarians do, what users do, and how ethics and critical engagement play into these shifts.

Comments I found thought-provoking included:

  • "GenAI is the new Clippy"
  • "There’s a public distrust of information and a destabilization of libraries"
  • "The U.S. situation is different from other countries"
  • "Engaging with technology is political work"
  • "Why aren’t librarians building language models?"

Uses of AI and LLMs included writing internal documents such as performance reviews and analysis of spreadsheet data. One panelist said he uses it to plan personal travel—but checks to make sure the hotels the chatbot recommends are real. We hear a lot about student use of AI, yet not all of them are entranced by the technology. Library school students seem particularly reluctant to adopt AI. One librarian in the audience commented that students at her institution were afraid to use it because it might be considered cheating and get them expelled.

We are not accustomed to computers that give us different answers each time we ask the same question. Yet the same thing happens with people. One suggestion was to ask multiple LLMs and compare the results.

Some words of caution: Recognize that engagement with different tools requires different techniques. We need training on prompting well and we need to train others in using AI responsibly. AI technologies are changing human behaviours broadly, not just librarians’, so flexibility is key.

The Proceedings of the 2025 ASIS&T annual conference have been published online by Wiley. They are free to read, but not available for downloading or printing.