Artificial Intelligence, the EU, Libraries and You

The European Union has proposed new rules and actions regarding Artificial Intelligence, in line with its desire to make "Europe fit for the Digital Age". Although most commentary regarding the proposed AI regulations has centred on "Big Tech", the library community is also affected. Library organisations, particularly IFLA, have been interested in issues surrounding new technologies such as AI for several years.


The proposed EU regulations establish risk-based levels for AI systems to foster an environment of trustworthy AI and ensure the safety and rights of EU citizens. The risk levels are:

  • Unacceptable risk: Systems that manipulate human behaviour against people's will and pose a clear threat to their safety, livelihoods and rights will be banned. Examples include toys that use voice assistance to encourage minors toward dangerous behaviour and 'social scoring' by governments.
  • High risk: Systems involved with critical infrastructures; educational or vocational training; safety components of products; essential private and public services; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes. These will need to meet strict obligations before they can be put on the market.
  • Limited risk: Systems subject to specific transparency obligations, such as chatbots, which must make it obvious that people are interacting with a machine.
  • Minimal risk: The vast majority of AI systems fall into this category and the EU does not intend to intervene with these.

The EU's proposal supports the creation of a European Artificial Intelligence Board to help with governance.

Reactions to regulating AI

The Brookings Institution, a U.S. think tank, agrees with the need to curb problems with AI, stating "the need to adopt a legal framework on artificial intelligence appears crucial. Indeed, AI systems have shown in several cases to have severe limitations, such as an Amazon recruiting system that discriminated against women, or a recent accident involving a Tesla car driving in Autopilot mode that caused the death of two men."

AlgorithmWatch applauds the EU action but criticizes it for not including "a ban on biometric mass surveillance practices as part of the Reclaim Your Face campaign". It also finds deficiencies in the classification and assessment of high-risk AI practices.

Mozilla, writing in its Open Policy and Advocacy blog, welcomes the initiative to rein in the potential harms caused by AI but is waiting for further clarification: "We are therefore encouraged by the introduction of user-facing transparency obligations – for example for chatbots or so-called deepfakes – as well as a public register for high-risk AI systems in the European Commission’s proposal".

The information industry and libraries

Outsell, Inc. director and lead analyst Hugh Logue notes that the EU is not alone in its concern about AI technologies ("EU Proposes New Regulations on Artificial Intelligence", 6 May 2021). The U.S. government and several individual U.S. states are also passing laws on consumer protection and restricting or banning facial recognition. OECD member nations and six non-member nations have endorsed an intergovernmental standard on AI. Logue points out some difficulties with regulating AI. He writes, "When regulating a moving target such as AI technology, it is important that regulations be agile and able to be revised quickly to respond to new threats." He worries that information industry companies could become "collateral damage" and recommends that companies developing AI products ensure those products have no attributes that could be construed as risky under the EU regulations.

One concern of librarians is how AI and related technologies, such as Machine Learning, affect library work. An IFLA Statement on Libraries and Artificial Intelligence (ifla.org/publications/node/193397), published in October 2020, recognized the "deeply transformative capabilities" of AI, noting that these technologies are being incorporated into everyday life without much oversight or thought. Libraries can play a role in educating users about AI and should have a strong voice in supporting ethical AI research. Among the recommendations in the IFLA statement are to include text and data mining exceptions in copyright frameworks, ensure that regulation of AI products protects privacy and equity principles, act as forums to exchange best practices on ethical use of AI technologies in libraries, and promote digital literacies.

Even before the 2020 statement, IFLA had raised concerns about privacy and transparency related to technology. Its 2013 Trend Report warned about technology and privacy before the world had even heard of deepfakes! It also questioned whether results from search engines could actually be trusted, given that they are driven by algorithms developed by commercial entities. How will algorithms that determine relevance be classified within the range of risks: high, limited, or minimal? The question applies not only to web search engines, which are most probably what regulators have in mind, but also to library subscription databases, particularly given EBSCO's tie-in with Expert.ai. That will be interesting to watch.

The EU regulations are only proposals; they have not yet been enacted into law. It is important for librarians, library associations, publishers, database producers, search platform developers, and other interested parties in the information industry to pay attention to these proposals and help shape the outcome.

 https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682