Bibliometrics is the quantitative analysis of research literature, based on citations. It can be used to evaluate the impact of a research paper, an individual researcher, a research group or institution, or a journal on the academic community.
The most commonly used bibliometric indicators are journal metrics and researcher metrics, and alternative metrics are now increasingly being used alongside them.
For further information on understanding your research metrics see the MyRI (Measure your Research Impact) tutorial, a free online bibliometrics tutorial managed and maintained by three Irish libraries: Dublin City University, Maynooth University and University College Dublin.
- The Impact Factor is probably the most widely recognised journal metric. The citation data is taken from the Web of Science database and published by Thomson Reuters as the Journal Citation Reports (JCR). There are separate editions for Science and for Social Sciences. JCR enables you to find information on leading journal titles in a subject field and compare the impact of individual titles. There is a brief explanation of the Thomson Reuters Impact Factor, including how to use it wisely, here.
- The Eigenfactor, developed at the University of Washington and based on citation data in JCR, measures the number of times articles from a journal published in the past five years have been cited in the JCR year. It includes subject coverage in both the Sciences and Social Sciences.
- SCImago Journal & Country Rank (SJR) is based on data in Elsevier's Scopus database from 1996 onwards and, unlike JCR, covers some Arts and Humanities subjects. See more about SJR here.
- Source Normalized Impact per Paper (SNIP), also published by Elsevier and based on citation data in Scopus, measures contextual citation impact by weighting citations according to the total number of citations in a subject field, so disciplines with lower publication and citation rates can be compared with those with higher rates. It is defined as the ratio of a journal's citation count per paper to the citation potential (the average length of reference lists) in the journal's subject field. See more about SNIP here.
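The ratio behind SNIP can be sketched with hypothetical figures; the numbers below are illustrative only and are not real SNIP data:

```python
def snip(citations_per_paper, citation_potential):
    """SNIP = a journal's raw citations per paper divided by the
    citation potential (average reference-list length) of its field."""
    return citations_per_paper / citation_potential

# Illustrative only: a journal averaging 2 citations per paper in a
# field whose reference lists average 20 entries scores the same as a
# journal averaging 5 citations per paper in a field averaging 50.
print(snip(2, 20))  # 0.1
print(snip(5, 50))  # 0.1
```

This is how the normalisation lets journals in low-citation disciplines be compared with those in high-citation ones.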
- The h-index (also known as the Hirsch index) was proposed in 2005 by Jorge E. Hirsch as a way to characterise the scientific output of a researcher. It attempts to measure both the productivity and impact of a researcher and is based on their most highly cited papers.
A researcher has index h if h of his or her Np papers have at least h citations each and the other (Np – h) papers have no more than h citations each (Np = number of papers).
E.g. an h-index of 20 means there are 20 published papers each with at least 20 citations.
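The definition above translates directly into a short calculation: sort a researcher's citation counts in descending order and find the largest rank h at which the paper at that rank still has at least h citations. A minimal sketch (the citation counts are made up for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # highest-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # this paper still has >= its rank in citations
            h = rank
        else:
            break
    return h

# Hypothetical researcher with six papers:
print(h_index([25, 21, 20, 20, 3, 2]))  # 4
```

Here four papers have at least 4 citations each, but only five papers have 3 or more, so the h-index is 4.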
Some of the limitations of the h-index include:
- A highly-cited paper is counted regardless of why it’s being referenced e.g. for negative reasons.
- Variations in average number of publications and citations in various fields (some traditionally publish and cite less than others) are not accounted for.
- The number and position of authors on a paper are ignored.
- The h-index is bounded by a researcher's total number of publications, so those with shorter careers are at a disadvantage.
There will also be variations in the h-index according to the bibliographic source or search engine used to calculate it, as each source only gathers information from the journals it indexes. For example, Web of Science indexes over 12,000 journals and conference proceedings, while Scopus indexes over 20,000 journals and conference papers, with citation data primarily from 1996 onwards.
- Citation analysis - find out how many times your publications have been cited on journal and citation indexes such as on Scopus and Web of Science.
Google Scholar Citations is a free service from Google that collates your work into one profile, which can be kept private or made public. Its benefits include a simple graphic showing your h-index.
Publish or Perish retrieves and analyses academic citations to measure performance of individuals. It uses Google Scholar raw citations, then analyses and presents results according to indicators such as the h-index.
Citation counts alone should not be used as the only measure of research quality. Different sources can produce different figures, and data should be used with caution, especially if comparing across disciplines.
 Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572. http://doi.org/10.1073/pnas.0507655102
- Alternative metrics, or altmetrics, are an alternative to more traditional impact metrics, with an emphasis on measuring the individual article rather than the journal in which it is published. They count data such as article views, downloads, mentions in social media and coverage in news items. These metrics are generated by a variety of audiences, including those outside academia, and are considered representative of the level of social engagement with a work. One advantage of altmetrics is speed: because they use public APIs to gather data, they can be compiled in days or weeks rather than the years that traditional metrics take to accumulate.
Some of the limitations of altmetrics include:
- A score by itself does not directly tell you anything about the quality or impact of the paper; some papers with high scores may simply be controversial and therefore discussed more than other papers.
- More recent papers may have an advantage because of the uptake of social media in recent years and because papers are generally mentioned more at the time of publication.
Some alternative metric tools:
- Altmetric.com maintains a cluster of servers that watch social media sites, newspapers, government policy documents and other sources for mentions of scholarly articles, bringing this attention together to compile article-level metrics. It offers free services for individual researchers, including a free bookmarklet that can be added to your bookmarks toolbar and used to get altmetrics on articles.
- Impact Story is an open-source, web-based tool that helps scientists explore and share the diverse impacts of all of their research, from journal articles to blog posts, datasets, and software.
Altmetrics serve as a supplement to traditional metrics. It is always a good idea to use a range of metrics appropriate to your discipline and relevant to the context in which they are being used.