To understand evidence use, understand the goals of decision makers

In this re-post from Evidence & Policy, Justin Parkhurst introduces the 'programmatic approach' to understanding evidence use.

17.06.2021

What does it mean to use evidence in policymaking? This seemingly simple question has been remarkably under-defined in all the calls for increased use of evidence. Indeed, many of those who champion ‘evidence-based policymaking’ do little to explain what it means for a policy to be evidence-based, and have trouble explaining what evidence use actually means when decision makers have multiple competing goals and social concerns. Evidence is simply seen as a good thing – and more use is better – without really considering what that means or what happens when there is disagreement around which evidence to use for what goals.

Policy scholars who study evidence, on the other hand, have approached the issue from the perspective that ‘evidence use’ can mean any number of things within a policy setting. The literature can, therefore, appear divided into two extremes: either evidence use is taken for granted to be a known (assumed to be good) thing, with little consideration of political realities, or alternatively it is seen as multidimensional, the form of which is constructed by the nature of policy ideas, processes, and interactions.

Ultimately, this makes it difficult to discuss two important questions: which evidence is appropriate to particular policy decisions, and which ways of using evidence are most relevant to a policy environment.

We develop what is termed a ‘programmatic approach’ to evidence use to help get beyond this impasse. This approach essentially argues that evidence use should be understood in relation to the goals pursued by policy actors. It therefore begins by asking policy decision makers about their specific programmatic goals and associated tasks. From there, it is then possible to explore the following aspects of evidence use in relation to programme goals:

  • the forms of evidence – representing the types of data or information needed for the task;
  • the sources of evidence – representing judgements on who would be the most useful providers of evidence;
  • the features of evidence – representing aspects of evidence that help it achieve the task at hand or make it more useful to that task; and
  • the targets of evidence – representing any stakeholders to whom provision of evidence would be important to achieving the task.

We developed this approach by analysing the use of data and evidence within national malaria control programmes in Africa. While this disease is quite specific to certain countries, the features of a technical bureaucratic agency working towards a social goal under a specific government mandate are in many ways typical of a wide range of bureaucratic bodies enacting public policy.


Our malaria-specific findings identify how the key tasks of these agencies each carry specific incentives and logics for using evidence in different ways. Such tasks included the need to: advocate for funding, allocate funding, develop standards and guidelines, and identify gaps in knowledge. Each of these tasks shaped in different ways what might be considered appropriate forms and applications of evidence – or, ultimately, what it meant to ‘use’ evidence.

The programmatic approach developed allows us to recognise that evidence use occurs in service to other goals; it is not some perfect or external concept that decision makers themselves strive to achieve. This recognition allows us to understand when decision makers use evidence in ways that we might not ourselves, or with which we might disagree, as it brings to the fore how evidence is used in service to specific goals.


This also allows for moral debates about whether a particular use of evidence is actually a good thing from a social perspective. While most people agree that malaria control is important, decision makers may nonetheless use evidence to pursue self-serving or even corrupt goals. Politicians’ cherry-picking of data to help their re-election chances can be understood as a specific ‘use’ of evidence driven by their goals. The debate can then move away from shouting that they are not ‘evidence based’ towards a more useful one about whether that use of evidence, for that specific goal, is a good thing, whether there should be repercussions for it, or whether systems should exist to prevent it.

This research is being published during the global COVID-19 pandemic, when accusations of misuse of evidence are surfacing within political arenas in multiple countries. Instead of simply yelling that science is being ignored, the concepts developed here can help us to think about why evidence is being used in different ways, and when it is used well or badly. Does manipulation or rejection of COVID evidence serve particular political goals? If so, this should be expected – and if expected, we can be better prepared for it as well.


Justin Parkhurst is Chair of the Editorial Board of Evidence & Policy and Associate Professor of Global Health Policy at the London School of Economics and Political Science (the LSE), UK.

This blog was originally published in Evidence & Policy here. You can also read the original research below:

Parkhurst, J., Ghilardi, L., Webster, J., Hoyt, J., Hill, J. and Lynch, C. A. (2020) Understanding evidence use from a programmatic perspective: conceptual development and empirical insights from national malaria control programmes, Evidence & Policy, DOI: 10.1332/174426420X15967828803210. [Open Access]
