Bibliometrics, higher education, and information professionals

Writing about university research rankings may not be everyone's cup of tea, but during the pandemic, it fit international librarian Ruth Pagell to a T. Rankings are relevant to librarians because almost all of them include at least one indicator based on bibliometrics from Clarivate or Elsevier. What a great opportunity for librarians to show off their expertise with different metrics!


I’m sure I surprised people when I told them that, during Covid restrictions, I spent my time writing about university research rankings. They rolled their eyes and changed the topic. I currently live in Hawaii, where people are happy their university is ranked at all; they are not overly concerned that Cambridge is now ranked higher than Oxford in the most recent QS rankings, even though that was big news in the United Kingdom.

I became an expert in university rankings by default, based on years of writing articles about rankings. The first, co-authored with Michael Halperin, appeared in the April 1987 issue of Database (“Company Rankings: Whose Top 20?”, pp. 29-34). I also had years of experience using what are now Clarivate products (known over the years as ISI, Thomson, and Thomson Reuters) in the U.S. and Singapore.

In 2008, when the professor Clarivate usually sponsored to speak at the Consortium on Core Electronic Resources in Taiwan (CONCERT) could not attend, I was asked to speak in his place. The topic was bibliometrics and rankings. I also met with HEEACT (Higher Education Evaluation and Accreditation Council of Taiwan), which had just initiated its own ranking (known as the National Taiwan University Ranking since 2012).

Ruth’s Rankings

Six years later I began “Ruth’s Rankings” in the Access newsletter, with a narrow focus on the bibliometrics used by the leading ranking organizations and a target audience in Asia-Pacific.

My editor thought there would be fewer than ten articles in the series. He was so wrong: there have been over 50 articles and another 50 updates. The focus has broadened to include a variety of ranking sources. I have covered specific Asia/Pacific countries, journal metrics and predatory journals, and geopolitics. The two most recent articles are on gender, featuring women as authors and women’s universities; the third in that series will be on gender identity.

“Ruth’s Rankings” articles usually begin with a question, such as “What do the University of Oxford, Western Sydney University, and the Medical University of Bialystok have in common?”

Answer: All three were ranked number one in a 2022 ranking: Oxford in the THE World University Rankings 2022, Western Sydney in the THE Impact Rankings, and Bialystok in CWTS Leiden’s rankings when gender is selected as the type of indicator.

There are four things I try to accomplish:

1 – Answering the question of who is number one

2 – Understanding the strengths and weaknesses of the methodology

3 – Finding new metrics that may showcase universities that will never be in the top 100 but are serving their target audiences

4 – Examining the rankings within the broader landscape of higher education

Global university rankings are relevant to librarians and our information partners because almost all of the rankings include at least one indicator based on bibliometrics from Clarivate or Elsevier. In universities, the library is usually the home of bibliometric sources, and librarians are, one hopes, knowledgeable about the different metrics.

Who does rankings?

The IREG Observatory on Academic Ranking and Excellence provides a list of rankings that meet its criteria; my articles have covered all of them. Three rankings are the most influential. The first global ranking appeared in 2003 and is now called ShanghaiRanking’s Academic Ranking of World Universities (ARWU); it uses Clarivate data. The first edition ranked 500 universities; it now ranks 1,000. Harvard was number one in 2003 and is number one in 2022. A year later, Times Higher Education (THE) and Quacquarelli Symonds Ltd (QS) issued a joint ranking of 200 universities. They split in 2010, but both use a similar mix of reputation, internationalisation, and Elsevier citation metrics. In last year’s release, THE had 1,661 ranked universities and another 451 called reporters, which submitted data but did not meet the ranking criteria. QS’s release this year has 1,462.

When writing this article for ILI365 eNews, I was already in the middle of my summer update. I have included a sample table with four rankers that released rankings over the past few months. ARWU and QS provide composite scores, using different sets of metrics. ARWU uses bibliometrics plus Nobel and other prize winners, which account for 30% of its score. Fifty percent of QS’s score is based on academic and employer reputation surveys. CWTS Leiden, based on Web of Science bibliometrics, is the only ranking that provides access to the underlying data. Nature Index uses 82 science journals selected, on the basis of their reputation, by a panel of active scientists; universities are ranked on their output of primary research articles.
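
To make the weighting arithmetic concrete, here is a minimal sketch of how a composite score can be computed from indicator scores. The indicator names, values, and weights below are purely illustrative assumptions, not the actual ARWU or QS schemes, and real rankers also normalise each indicator before weighting.

    # Minimal sketch of a weighted composite score.
    # Indicator names, scores, and weights are illustrative only,
    # not an actual ARWU or QS methodology.
    universities = {
        "University A": {"publications": 82.0, "citations": 74.0, "reputation": 91.0},
        "University B": {"publications": 90.0, "citations": 88.0, "reputation": 60.0},
    }

    # Hypothetical weights that sum to 1.0.
    weights = {"publications": 0.3, "citations": 0.3, "reputation": 0.4}

    def composite_score(indicators):
        """Weighted sum of indicator scores (each already scaled 0-100)."""
        return sum(weights[name] * value for name, value in indicators.items())

    for name, indicators in universities.items():
        print(f"{name}: {composite_score(indicators):.1f}")
    # University A: 83.2, University B: 77.4

With these made-up weights, University A finishes ahead even though University B has the stronger bibliometric scores, which is exactly why the weighting in each methodology deserves scrutiny.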

Understanding the methodology

I try to emphasize the importance of reading the methodology. I prefer rankings that give users the ability to re-rank on the metric that is most important to their universities, and I put little value on a ranking that gives just a rank and not even a score. Access through Emory University to Clarivate’s InCites and to Elsevier’s SciVal allows me to play with the data myself. Having been born a sceptic, I never believe the news releases put out by the rankers.
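
As a small illustration of what re-ranking on a single indicator looks like, here is a sketch using pandas; the institutions, scores, and column names are hypothetical and are not taken from any ranker’s data files.

    import pandas as pd

    # Hypothetical scores; institutions and values are made up for illustration.
    df = pd.DataFrame({
        "university": ["Univ W", "Univ X", "Univ Y", "Univ Z"],
        "overall_score": [78.2, 85.5, 71.9, 90.1],
        "citations_per_paper": [88.4, 70.3, 95.0, 65.2],
    })

    # Default view: ordered by the composite overall score.
    print(df.sort_values("overall_score", ascending=False))

    # Re-ranked on the one indicator a particular university cares about most.
    print(df.sort_values("citations_per_paper", ascending=False))

The ordering changes: the institution at the bottom of the composite list tops the citations-per-paper view, which is the whole point of being able to re-rank.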

The rankings are overemphasized and misinterpreted. Despite many complaints, they are here to stay. In the current environment, organizations are looking beyond the standard rankings and the SDG rankings toward equality and diversity. The question should not be which ranking is right or wrong, but which best fits the needs of the different user groups.

Drivers for bibliometrics and rankings

  • Research publications and citations (Clarivate and Elsevier)
  • Faculty comparisons—hiring, promotion, tenure
  • Accountability—faculty to institution / institution to funding bodies
  • Institutional benchmarking
  • National policy initiatives

Commercial aggregators used by rankers

  • Clarivate
    • Web of Science (WOS), InCites, Essential Science Indicators (ESI)
    • Journal Citation Reports (JCR)
  • Elsevier
    • Scopus, SciVal
  • Other aggregators
    • Dimensions, Google Scholar

Players and stakeholders

  • Authors, researchers, bibliometricians, scientometricians
  • Institutions
  • Publishers
  • Rankers
  • Information Professionals
  • Governments, Quality Assurance agencies
  • Students and parents
  • Employers

Factors impacting rankings

  • Institutional size and age; institution name disambiguation (no standardized list of how institutions’ names are displayed; system vs. individual units)
  • Research disciplines
  • Publication types (articles, reviews, books, proceedings); output measures (total output, output per faculty or per paper, articles in the top N%)
  • Geographic coverage
  • Author name disambiguation: matching authors to publications, institutions, and countries (Elsevier does a better job than Clarivate)
  • Full counting or fractional counting of co-authored publications (see the sketch after this list)
  • Manipulation of data and rankings interfaces
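
The difference between full and fractional counting is easiest to see with a tiny example. The following is a minimal sketch over made-up papers: under full counting every contributing institution receives a whole credit for a paper, while under fractional counting the credit is split among the contributing institutions.

    from collections import defaultdict

    # Each paper lists the institutions of its co-authors
    # (hypothetical data, one entry per institution per paper).
    papers = [
        ["Univ A", "Univ B"],
        ["Univ A", "Univ B", "Univ C"],
        ["Univ A"],
    ]

    full = defaultdict(float)
    fractional = defaultdict(float)

    for institutions in papers:
        share = 1.0 / len(institutions)   # split one credit across institutions
        for inst in institutions:
            full[inst] += 1.0             # full counting: a whole credit each
            fractional[inst] += share     # fractional counting: a proportional share

    print(dict(full))        # {'Univ A': 3.0, 'Univ B': 2.0, 'Univ C': 1.0}
    print(dict(fractional))  # roughly {'Univ A': 1.83, 'Univ B': 0.83, 'Univ C': 0.33}

Which method a ranker uses can noticeably shift the position of institutions that publish heavily in large collaborations, which is one more reason to read the methodology notes.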

Ranking options and filters

  • Geography—Asia, Oceania, Middle East, individual country
  • Subject/field—Five broad fields to over 200 individual disciplines
  • Time—Ability to view earlier rankings
  • Re-ranking by indicator
  • Ranking and score

Click here for a complete list of “Ruth’s Rankings”. “Ruth’s Rankings” is a product of LibraryLearningSpace, an iGroup (Asia Pacific) website and the home of ACCESS: Asia’s Newspaper. Ruth A. Pagell is Founding Librarian, Singapore Management University, and Emeritus Faculty Librarian, Emory University, Atlanta, GA. She has held adjunct faculty status at the University of Hawaii LIS Department, Drexel University, Nanyang Technological University, and the Wharton School.