Looking for a new gig? How about becoming a highly paid prompt engineer?

Librarians' skill sets align perfectly with the job requirements for prompt engineering


The sudden entry of generative AI onto the public stage, with its conversational approach to search, has created a new job category: prompt engineering. No degree programme prepares people for this job, nor is there a certification path, although Coursera offers at least one course on it. Yet librarians seem supremely qualified to fill the positions.

Anthropic caught everyone’s attention when it advertised for a “prompt engineer and librarian” with a salary range of $175,000 to $335,000 (€160,000 to €308,000). What does a prompt engineer actually do? Bloomberg called it being an “AI Whisperer” (https://www.bloomberg.com/news/articles/2023-03-29/ai-chatgpt-related-prompt-engineer-jobs-pay-up-to-335-000#xj4y7vzkg).

Anthropic, in its job ad, is not terribly specific. It lists representative projects as:

“• Discover, test, and document best practices for a wide range of tasks relevant to our customers.
  • Build up a library of high quality prompts or prompt chains to accomplish a variety of tasks, with an easy guide to help users search for the one that meets their needs.
  • Build a set of tutorials and interactive tools that teach the art of prompt engineering to our customers.
  • Work with large enterprise customers on their prompting strategies.”

Wikipedia defines prompt engineering as “a concept in artificial intelligence, particularly natural language processing. In prompt engineering, the description of the task that the AI is supposed to accomplish is embedded in the input, e.g. as a question, instead of it being explicitly given. Prompt engineering typically works by converting one or more tasks to a prompt-based dataset and training a language model with what has been called "prompt-based learning" or just "prompt learning".”

From a librarian’s point of view, it seems much like an accelerated version of a reference interview, although instead of interviewing one specific person it means anticipating the terms many different people might enter when they use generative AI, combined with the creation of a LibGuide. It is playing with potential words and sentences that can cause the Large Language Model (LLM) to deliver the best answers to queries. If ChatGPT, as some say, is autocorrect on steroids, then librarians are the humans who oversee the autocorrect, making sure it doesn’t autocorrect incorrectly.

Creating good prompts

Even librarians uninterested in applying for jobs as prompt engineers are experimenting with prompts. They may want inspiration for writing a library brochure, a blog post, a programme announcement, or a new service announcement. They could also consult ChatGPT for help writing internal memos or annual reports. They can turn to ChatGPT for explanations of just about any topic under the sun, from apples to Zeus. Librarians who code can use the ChatGPT interface to LLMs as a check on their coding or to write new code. And, of course, librarians are confronting false citations and AI-written student papers.

Librarians can be at the forefront of explaining how to create excellent prompts that will return optimal results from an LLM. They might incorporate it into bibliographic instruction or offer a seminar on the topic. Cheat sheets, guides, and tips are abundant, as a quick internet search reveals. KDnuggets has one, as do Wired, E-StoryEra, and many others. Singapore-based librarian Aaron Tay has a lengthy blog post on prompt engineering and librarians that is well worth reading.

Some of the suggestions include (see the sketch after the list for how they might be combined in practice):

  • Define exactly what you want the LLM to do, such as write an essay, brainstorm a topic, summarize an article, or explain a concept to a child
  • Set restrictions, such as a formal tone, basic English, short sentences, scientific style, or creative tone
  • Specify a role, such as journalist, teacher, developer, or analyst
  • Specify a format for results, such as bullet points, code, summary, or table
  • Define the scope and dates to be covered, bearing in mind the model’s training-data cutoff (ChatGPT’s knowledge, for example, largely ends in 2021)
  • Set word count limit or number of paragraphs if appropriate
  • Specify the audience, such as children, students, entrepreneurs, musicians, or library management (Wired has a particularly humorous response to the prompt “give me a short paragraph on the potential of AI as if you were an excitable teenager”)
  • Give whatever context is necessary, such as positive or negative opinions, advanced or novice level, or country/language
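
To make the list concrete, here is a minimal sketch of how those elements (role, audience, task, scope, restrictions, format, and length) might be combined into a single prompt and sent to a model. It assumes the openai Python package (1.x interface), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; the point is the structure of the prompt, not the particular API. The same structured wording works just as well typed straight into the ChatGPT interface.

```python
# A minimal sketch combining the prompt-writing tips above into one request.
# Assumes the openai Python package (1.x interface) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Act as a librarian writing for library management (role and audience). "
    "Summarise the potential of generative AI for reference services in a "
    "formal tone, using basic English and short sentences (task, scope, and "
    "restrictions). Present the answer as five bullet points of no more than "
    "20 words each (format and length limit)."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; substitute whatever model is available
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```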

Remember that content is generated afresh each time, so take advantage of the ability to regenerate results and to converse with the chatbot to refine them further. This is also a rapidly changing area, so what works in prompt engineering today may be different tomorrow.
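
To illustrate refining by conversation, the fragment below continues the earlier sketch (same assumed client, model name, and prompt): each reply is appended to the running message list so a follow-up instruction can tighten the previous answer rather than start over.

```python
# Continuing the sketch above: keep the running conversation and send
# follow-up instructions so the model refines its own earlier answer.
messages = [{"role": "user", "content": prompt}]

first = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# A follow-up turn that narrows the result instead of starting from scratch.
messages.append({
    "role": "user",
    "content": "Shorten each bullet point to ten words and add one example to each.",
})

second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```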

As school librarian and consultant Elizabeth Hutchinson emphasises: “It's important, however, to remember that AI and ChatGPT are just tools, and need human curation and interaction to be used effectively so no it can't do it all, yet!”