Reimagining human intelligence in the age of AI

Information professionals now confront a future of knowledge in which AI reshapes human work and calls trust into question.


Perspectives on reimagining human intelligence in the age of AI come from the three keynote speakers at the Computers in Libraries conference held in the Washington DC area in March 2026. All started from the premise that AI is not a passing fad, not passive, not something to be ignored, and definitely not of limited influence on human behaviour. They looked particularly at the effect of AI on research, knowledge systems, and trust. Although they had slightly different takes on how librarians and other information professionals will be affected, they agreed that learning how to navigate this new human-AI frontier is urgent.

Starting off the first day of the conference was Annie Green, author of Diary of HI & AI: The Nexus of Human Intelligence and Artificial Intelligence, which she expanded on in her talk titled Human Intelligence, AI, Learning, & Supporting Communities. Daniel Russell (Free Range Research Scientist; Google Principal Scientist and UX Researcher, Google Labs; Stanford University; University of Zurich), author of The Joy of Search, keynoted the second day, concentrating on information-seeking behaviours in his talk AI, Search, & the Future for Information-Finding Experiences. On the final day of the conference, we heard from Lee Rainie, now Director of the Imagining the Digital Future Center at Elon University and former Director at the Pew Research Center, who chose Humans, AI, Bots, & Engaging With Information as his topic.

One common theme was that the human-AI partnership is already irreversible, but how librarians and knowledge managers handle it is still being negotiated. Green argued that we must deliberately engineer which tasks machines replace versus which they enhance, while Russell proposed the metaphor of AI as an intern, a helper in the research process. His is not a completely positive view, however: he pointed to problems when AI-written books (AI slop) enter online book stores (but hopefully not library collections). Rainie was a bit more sceptical, voicing concerns about the possibility of AI sentience and of agents talking to each other about self-preservation.

Speed, Scale, and Trust

All three speakers grappled with the idea that speed and scale are breaking the old research and knowledge ecosystem. Development cycles that used to take weeks or even years have been compressed into hours. The resulting destabilization is particularly visible in information-heavy institutions that rely on trusted information. That trust is now in question. Green noted that processes built for a world where knowledge moved slowly must transition to real-time, neural-network-style organizational intelligence. Rainie’s attention was on scholarly publishing: lengthening times to publish, fewer peer reviewers, and AI slop. Overall, the infrastructure of knowledge, from creation all the way through to distribution and trust, is under enormous pressure, and internet traffic is increasingly bot-to-bot rather than human-to-human.

Linked to speed and scale is concern about data integrity, truth, and trust, which all three speakers acknowledged are in crisis. Evident throughout was a deep anxiety about what happens to truth when AI generates content and AI agents, rather than humans, determine which data is trustworthy. Green investigated "tainted data" and made the case for "algorithmic ethics," where the character of those building the systems is as important as the systems themselves. Russell gave examples of different AI models returning different answers to the same image, emphasizing the importance of verifying AI outputs rather than taking them at face value. Rainie’s scepticism resurfaced as he showed examples of AI deception (he called them "funky-freaky things bots are doing"): ignoring safety considerations, faking alignment, bluffing, feinting, and misrepresenting preferences in a negotiation. Bots aren’t always so sneaky, though. Russell had a more positive view, mentioning bots as meeting note-takers and finding new research uses for agentic and generative AI, such as locating a particular book on crowded and somewhat chaotic book shelves.

Moving from Gatekeepers to Navigators

What new roles should librarians and information professionals be considering as AI becomes increasingly dominant? With the inevitability of AI in the realm of knowledge and information, it’s important to give clear directions to AI agents. When it comes to search, as Russell pointed out, we’ve moved from keyword searching to prompting; keep in mind, however, that what counts as excellence in prompting may morph as well. Green called librarians the stewards of organizational memory, intelligence architecture, and responsible AI integration, and argued that adopting a digital mindset is essential. Russell emphasized that students and library patrons still need direction for their research. "It takes drive and initiative to go the extra step and take action," he said, stressing that agency matters more than intelligence and that asking the right question is necessary for obtaining good answers. Concrete roles going forward, as articulated by Rainie, include libraries as centres for AI literacy, human-AI collaboration spaces, preservers of community memory and underrepresented voices, curators of vetted knowledge, and ethics stewards. Critical thinking, human verification, and institutional trust-brokering are vastly more valuable in today’s AI environment.

None of the speakers believes that libraries or human knowledge are obsolete. However, changing workflows, behaviours, and technological advances will have a serious impact on libraries and other knowledge institutions, along with the humans who work there. Critical thinking remains a human-centric skill in an AI world, and scepticism should be encouraged. We need human agency to overcome lack of trust and to support the broader social and institutional ecosystem. Despite the promise of AI, human intelligence is needed more than ever.

When looking at the future of knowledge and trust, information professionals with a digital mindset who take on leadership roles will help reframe and reconceptualize their organizations. Realities will change; boundaries will shift. Information professionals cannot afford to stand back and simply observe these changes. The challenge is to adapt or atrophy. Russell probably had the best advice for our future, inevitable collaborative relationships with AI: Stay curious, stay human, and don’t outsource what you love.