Written by:
Deelan Maru
David Halpern
Drawing on a new report, Deelan Maru & David Halpern outline the extent of under-evidenced public policy interventions and suggest how collaborative international approaches to building evidence architecture for policymaking could bridge this gap.
Governments spend trillions. Yet very little of this expenditure, or the programmes and practices it funds, is based on robust evidence. In our recent report, we sought not only to quantify this gap, but also to show how governments could address it: in particular, how they might collaborate to build and share the evidence they have.
The core issue is not new, and can't just be pinned on politicians. There has been a systematic failure over decades in the generation and translation of policy-relevant research. Government R&D spending in areas outside health and defence is incredibly low, often averaging less than 0.5% of expenditure in the area. The implied research gap across the US, UK, Australia and Canada is c.$100 billion per annum.
Alongside low spend, governments are not evaluating as often or as robustly as they should. A 2019 report found that only 8% of UK government spending on major projects (£35 billion of £432 billion) had a robust evaluation plan in place.
Conducting more primary research is evidently a key priority that governments should act on immediately. Collating the available evidence is equally important. More vital still is ensuring that good-quality evidence is put into the hands of policymakers and presented in a way that lets them quickly identify the most appropriate action. Of course, these recommendations only address the supply side. Without sufficient demand for evidence, academics and policymakers will remain misaligned.
So, how do we fix these gaps in primary and secondary research production, and encourage adoption by policymakers?
In addition to governments simply increasing national R&D spend, a smart play would be for countries to collaborate on improving the evidence architecture. There are overlapping common interests and questions that apply across borders (e.g. how best to screen for cancer, how to teach a child to read and write, and how to reduce recidivism). Of course, caution is needed when translating results across contexts, but surely an intervention that has worked in one country is a better starting point than nothing?
To increase the number of evaluations being conducted across countries, a shared international evaluation fund should be established. From this fund, evaluation 'paratroopers' could be deployed around the world to evaluate promising new interventions. The fund could operate in both responsive and proactive modes. In responsive mode, policymakers and practitioners could apply for financial and technical support for a robust experimental or quasi-experimental evaluation. In proactive mode, fund managers would actively identify innovative programmes and offer funding and support to get evaluations in place.
Alongside the fund, standardising reporting and publication protocols would unlock further collation and comparison of evidence. To synthesise the available evidence, you first need to understand what evidence is out there; here, evidence gap maps can play an important role.
Crucially, synthesis needs transforming. A promising avenue is living evidence reviews: systematic reviews that are continuously updated. A living evidence review for every area of social policy would allow governments to rapidly assess what works in an area and adapt public services based on the latest evidence.
Living evidence reviews are gaining traction. The Economic and Social Research Council in the UK has just announced £11.5m to fund improvements to evidence synthesis using AI. Wellcome has also announced £45m to develop new data and tools for accelerating living evidence synthesis.
Another consistent finding from policymakers around the world is that research often fails to answer the question they actually have. Understanding whether 'scared straight' programmes work can be useful, but policymakers also want to know what the best alternative interventions are. To do this, evidence synthesis needs to operate at a higher level. Meta-living evidence reviews, collating all evaluation evidence within a specific policy area, are feasible but would require significant investment. The Education Endowment Foundation toolkit is akin to this, providing an overarching, continuously updated summary of the evidence in education. A version of this toolkit for every policy area, at the second level of the Classification of the Functions of Government (COFOG), would allow policymakers to easily access the best evidence on a topic and make more informed decisions.
From a UK perspective, surely one of the most impactful ways to make progress on the new government's five Missions would be to assemble the evidence we have, and don't have, for each Mission. Evidence gap maps and meta-living evidence reviews should be commissioned for each of them as soon as possible.
Boosting evidence adoption is laborious and requires organisational change. In the short term, we should enable more cross-national sharing of knowledge. Medicine is particularly good at this, with societies, journals and mechanisms that accelerate the diffusion of best practice. There are examples in other areas of policy, such as the Evidence for Education Network and the Society for Evidence Based Policing, but we could be much more systematic in collaboratively convening these networks and providing them with the funding they need to become self-sustaining.
Finally, we can take steps to better understand the impact of research itself, i.e. building the evidence base for research (sometimes known as 'metascience'). Further research can help us understand, for example: the returns to changing research expenditure; where best to direct funding; and how best to use AI in research. It can also help advance the diffusion of evidence, e.g. finding the right balance between human-centred dissemination and toolkits or dashboards. A basic review of the efficacy of different approaches to applied research, translation and adoption should be undertaken.
There are nascent efforts to improve the global evidence ecosystem, and funding such as that put forward by the ESRC will start to create transformational change. More can always be done. Equity is important: we have started with a coalition of the willing, largely anglophone high-income countries, and we need to ensure that others are brought into the fold. The UN SDG Synthesis Coalition is a starting point. Variance is our friend: the more countries that collaborate, the more we can identify novel interventions that could translate to other contexts. We see three broad pathways through which this international approach might be advanced: loose collaboration, existing institutions, and novel institutions. While existing institutions should be used wherever possible, there may be a role for novel governance layers or institutions to plug gaps.
Our report, available here, sets out the arguments above in detail. It is gratifying that, within weeks of its publication, c.$70m has already been committed to funding living evidence reviews. These are important developments for the better use of evidence in government, and for research translation and impact more generally. We hope the report is the starting point for significant change in embedding evidence-based policymaking within governments.
Deelan Maru is Senior Policy Advisor to the Chief of Innovation & Partnerships at the Behavioural Insights Team.
David Halpern CBE is President Emeritus and Founder at the Behavioural Insights Team.
A version of this post is also published on the LSE Impact Blog.