On the pathway to improved impact assessment

We are on the right track to understanding how to assess impact but the task continues to be imperfect, messy and burdensome

09.04.2020

Research impact assessment can prompt highly polarised opinions. Some see it as helping research to be politically engaged and focused on social justice. Others fear that it represents another tightening of a neo-liberal, managerial, audit culture that they see as infesting universities.

It seems fitting to consider this topic now, as we bear down on increasingly complex local and global challenges: COVID-19, recessions, and environmental crises. We need to understand whether and how research can make a difference, not only to justify public spending, but also to realise its transformative potential in society.

The front-line reality of realising this potential means that researchers, institutions, and research partners increasingly need resources and support for impact assessment. Two timely contributions to understanding best practices highlight the state of learning. Each recognises the importance of case studies, while discussing their lack of objectivity. Taken together, they tell us that national impact assessment exercises are worthwhile.

Budtz Pedersen, Grønvad, and Hvidtfeldt undertake the weighty task of reviewing how the impact agenda has taken shape in social sciences and humanities disciplines. They centre their analysis on several research assessment frameworks and models that have emerged from the ongoing evidence revolution, and the multiple and diverse ways for measuring impact. By cross-examining these lines of evidence, they set up five signposts to provide direction within the modern impact assessment landscape:

  1. Research impact assessment is a dynamic, cyclical, and complex process.
  2. Mixed methods impact assessments are being used to minimise the respective limitations of quantitative or qualitative methods alone.
  3. Impact assessment does not only occur after research projects have been conducted, but also before projects have begun and during the research process.
  4. Case-based impact assessments offer a useful alternative to metric-based assessments, particularly for capturing “what is significant and interesting in real-world cases” (p. 15), but best practices likely combine impact case studies with metrics and indicators.
  5. Impact is not value neutral—it can be positive or negative depending on the circumstances of specific research projects.

Meanwhile, Edwards and Meagher present an impact evaluation framework that boils down to three questions: what changed, why, and so what? Using 12 case studies of research projects associated with the UK agency Forest Research, they show how these modest questions provide “a means to transform informal deliberations about impact generation into a process that considers the full range of impact types and causal factors, in a format that supports internal learning and external communication”. The detailed framework behind their questions provides the building blocks for credible and compelling impact narratives.

The value of case studies

Viewing the articles together, we see that case studies or narrative approaches have become a core method in impact assessment, owing to their ability to capture diverse types of impacts using a variety of complementary methods. But constructing powerful impact narratives is not a trivial exercise, with guidelines like those featured in the UK’s Research Excellence Framework leaving a considerable amount to interpretation. On this issue Edwards and Meagher’s framework offers pragmatic guidance. By reflecting on and answering their questions, impact narratives can be nuanced and information rich without succumbing to the constraints of linear models for templating impact.

Objectivity is hard to achieve

In response to Budtz Pedersen and colleagues’ criticism that impact case studies lack objectivity, it is time to recognise that no impact narrative will ever be, as Edwards and Meagher observed, “a complete and objective representation of all causes and consequences of a project.” Impact is always “uncertain, iterative, contingent, and highly social” as well as wrapped up in interests and values. While the attribution issues of research impact need to be taken seriously, the generic suite of causal factors forwarded by Edwards and Meagher encourages acceptance that objectivity in impact assessment is an ideal rather than an attainable goal.

We are on the right path

Finally, there is still a great deal we don’t know about the nature and breadth of impacts. National impact assessment exercises, for all the consternation they rouse, offer major opportunities to learn about how impact can be supported across all levels of research systems. At the same time, we cannot learn from stories untold. Progressing research impact needs to begin with more coordinated efforts to share lessons being learned, while staying aware of how burdensome the development of impact case studies can be. These pieces offer some assurance that we are on the right path.

Stephen MacGregor is a PhD candidate at Queen's University, Canada.
