Evaluating the impacts of science on policy and practice is inherently challenging. Impacts can take a variety of forms, occur over protracted timeframes and often involve subtle and hard-to-track changes. In response to these challenges, there has been an increased effort among academics and practitioners across disciplines to develop more useful frameworks to guide the evaluation of impacts at the intersection of science, policy, and practice.
These frameworks have emerged from different domains and disciplines, are framed and described using complementary but often different terminology, and approach evaluation from different founding assumptions. In this rapidly developing field, it can be hard to make sense of them, and to know what works in what contexts and why.
In a recent paper we sought to help overcome this challenge by undertaking a synthesis of the frameworks that are currently available for guiding the evaluation of impacts at the interface of environmental science and policy, examining core assumptions, similarities and differences.
We found that the differences in evaluation frameworks can often be traced back to the underlying epistemology of a specific research project. Our review surfaced a spectrum of understandings of knowledge, ranging from more positivist epistemologies, where knowledge is certain, fixed, and able to be passed along through the push and pull of knowledge needs and supply, to more constructivist epistemologies, where knowledge is conceptualized as always mediated through culture and worldviews, and co-created through various subjectivities. Constructivist definitions of knowledge hold that sustainability science is primarily not about delivering new information to decision-makers; rather, it is about opening up and reframing problems and possibilities. In this understanding, work at the science-policy interface aims not to supply missing information or knowledge, but to co-create new understandings. These various understandings of knowledge then shape what is meant by impact.
From our analysis of this diverse literature, we identify four ‘rules of thumb’ to help guide the selection of evaluation frameworks for application within a specific context.
Four ‘Rules of Thumb’ for selecting evaluation frameworks
Be clear about underlying assumptions of knowledge production and definitions of impact: Clarifying from the start how research activities are intended to achieve impact is an important precursor to designing an evaluation. Furthermore, defining what you mean by impact is an important first step in selecting indicators to know whether you have achieved it. For example, a research organization should be clear up front whether changes in attitude, problem framing, or relationships count as impact. Clarifying why certain activities are expected to contribute to impact will help evaluation design, and doing so means being open to a wide range of types of impact. For example, if it is assumed that interactions between stakeholders lead to improved relationships, indicators can usefully be developed to evaluate the nature, frequency, and quality of those interactions. This epistemological clarity helps define what counts as impact, and what counts as robust evidence of that impact.
Attempt to measure intermediate and process-related impacts: Whether this means expanding the definition of impact, evaluating quality, or assessing 'contribution to impact,' select indicators that capture nuanced changes in problem framing, understanding, or mindsets. Our review shows that evaluations should at least partially attempt to capture the 'below the tip of the iceberg' knowledge co-production activities. This could be done by focusing at least part of an evaluation on measuring participants' perspectives (via interview or survey) on changes such as increased capacity, changes in expertise and knowledge, and shifts in how a problem is understood or framed. Attention to such intermediate impacts is important because they may serve as building blocks for end-of-process outcomes, and they also enable the evaluation of 'progress markers' along a theory of change to identify whether a project is tracking towards intended outcomes.
Balance emergent and expected outcomes: While it is important to be clear about expectations and aspirations, evaluations should have at least some open-ended component that captures unexpected outcomes, both positive and negative. This could be implemented by crafting at least part of an evaluation in an open-ended manner. For example, rather than applying rubrics with pre-determined criteria, ask instead: What changed? Who changed? How do you know? Such an open-ended approach allows unexpected outcomes to surface.
Balance indicators that capture nuance and those that simplify: Evaluations that assign numerical scores to impact may be extremely useful for project managers and large research organizations. However, aggregated scores can sometimes overshadow conceptual changes in the way a problem is framed, or subtle changes resulting from knowledge co-production. Over-emphasis on simple evaluations can also lead to 'gaming the indicators' and provide perverse incentives to tailor research to meet the indicators. While indicators that can be quantitatively scored (for a hypothetical example, assigning 1-10 scores on dimensions such as suitable context, legitimacy and relevance, or project outputs) may be easy to use, especially for comparing different research projects, such an approach might not register why or how changes occurred. The same is true for the number of indicators: fewer indicators may make evaluation simpler and more convenient, whereas more indicators may deliver more detailed information. This tension must be considered when designing an evaluation, as illustrated in the sketch below.
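To make the trade-off concrete, here is a minimal sketch (not from our paper) using entirely hypothetical dimension names and 1-10 scores. It shows how two projects with very different profiles can receive the same aggregate score, which is exactly the kind of nuance an aggregate-only evaluation would not register.

```python
# Hypothetical scoring illustration: dimension names and scores are assumptions,
# not indicators proposed in the paper.
projects = {
    "Project A": {"context": 9, "legitimacy": 3, "relevance": 6},
    "Project B": {"context": 6, "legitimacy": 6, "relevance": 6},
}

for name, scores in projects.items():
    total = sum(scores.values())
    print(f"{name}: aggregate = {total}, profile = {scores}")

# Both projects print an aggregate of 18, yet Project A scores poorly on
# legitimacy, a difference that disappears once the scores are summed.
```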
Elena Louder is a PhD student in the department of Geography, Development and Environment at the University of Arizona, Tucson, USA. Her research interests include political ecology, the politics of renewable energy development, knowledge co-production, and biodiversity conservation.
Carina Wyborn is an interdisciplinary social scientist with background in science and technology studies, and human ecology. She works on the science and politics of environmental futures at the Institute for Water Futures, Australian National University (ANU).
Chris Cvitanovic is a transdisciplinary marine scientist working to improve the relationship between science, policy and practice at the Australian National University (ANU).
Angela Bednarek is Project Director of Environmental Science at Pew Charitable Trusts. She develops strategies to support and communicate scientific research that explains emerging issues, informs policy, and advances solutions to conservation problems.