Measuring Research Impact

Using Metrics Responsibly

Guidance on the responsible use of quantitative indicators in research assessment

Research assessment is an important and challenging task, and many institutions work hard to grapple with its complexities. Nevertheless, the tendency to fall back on quantitative indicators (or metrics), which are often assumed to provide a measure of objectivity, remains widespread. While indicators have great utility in the fields of bibliometrics and scientometrics (e.g., tracking the growth or decline of different subfields), they are inherently reductive, so their use in the assessment of individual researchers or research projects requires careful contextualization.

The Declaration on Research Assessment (DORA) is best known for being critical of the misuse of the Journal Impact Factor (JIF) in research evaluation. As a result, DORA is often asked for its views on other indicators.

In this briefing note we therefore aim to explain how the principles underlying DORA apply to other quantitative indicators that are sometimes used in the evaluation of research and researchers. 

Coalition for Advancing Research Assessment (COARA)

The process of drafting an Agreement on reforming research assessment was initiated in January 2022. More than 350 organisations from over 40 countries were involved. Organisations involved included public and private research funders, universities, research centres, institutes and infrastructures, associations and alliances thereof, national and regional authorities, accreditation and evaluation agencies, learned societies and associations of researchers, and other relevant organisations, representing a broad diversity of views and perspectives.

Using metrics responsibly

"Some of the most precious qualities of academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and plurality of our research" - responsiblemetrics.org/about/

Limitations of metrics

Metrics can be a useful tool to help track the attention received by research outputs. Citations and online attention are relatively easy to record and measure, and provide a reasonably quick and simple way to compare research.

However, metrics on their own are not sufficient to assess research fairly. Research can affect the world in any number of ways, many of which are difficult to measure or quantify, and metrics are only part of the picture.

A controversial or fraudulent paper might attract a large number of citations made in criticism rather than endorsement. Albert Einstein's h-index is much lower than that of many contemporary researchers, but that doesn't make him a bad scientist. Metrics can also reflect bias within the scholarly community: for example, female researchers receive fewer citations on average than their male counterparts. You should therefore exercise caution when using metrics.
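To see why the h-index can undervalue a researcher like Einstein, it helps to look at how it is computed: a researcher has index h if h of their papers each have at least h citations. A small body of enormously influential papers therefore yields a low h, because the index is capped by paper count. A minimal sketch (the function name and example citation counts are illustrative, not real data):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has >= its rank in citations
        else:
            break
    return h

# A handful of massively cited papers still gives a small h:
print(h_index([15000, 12000, 9000]))   # 3
print(h_index([10, 8, 5, 4, 3]))       # 4
```

Note that the second researcher, with far fewer total citations, outscores the first on h-index; this is the kind of distortion that makes expert judgement essential alongside any single number.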

Golden rules

  • What question are you trying to answer? Is the metric you are using appropriate? What aspect of research performance do you want to explore, and why? Can this be measured, and if so, how? Find out what each metric can tell you, and what it can't. If you're using a metric as a proxy for something that is not directly measurable, you should, at a minimum, be explicit about this in your analyses.
  • Always use quantitative metric-based input alongside qualitative opinion-based input. Like all statistics, metrics can be misleading without context. Metrics can be a useful tool, but they are no replacement for expert opinion.
  • Get the big picture. Each metrics tool takes its data from different sources and calculates its metrics in different ways. Ensure that the quantitative, metrics-based part of your assessment always relies on at least two metrics to reduce bias. Relying on a single measure may also encourage people to change their behaviour to game that particular measure.

Adapted from Library Connect Quick Reference Cards for Research Impact Metrics

Good practices

The Metric Tide

The Metric Tide review, commissioned by HEFCE to examine the role of metrics in research assessment and management, identified five dimensions of responsible metrics. For more information, see the Responsible Metrics blog and the Responsible Metrics forum.