What’s happened to the relationship between evidence and policy over the last thirty years?
Rethinking Policy Impact
Published: 15/06/2022
Author: Professor Sir Geoff Mulgan
Governments have always used evidence of some kind, and not just for war and espionage. Traditions of the systematic use of evidence for domestic policy date back 2,000 years in China, and to the 18th century in the UK and elsewhere in Europe.

The 19th century brought a surge of interest in statistics and the rise of professions and social policy, paving the way for the more recent waves of interest, which were partly inspired by the rise of evidence-based medicine in the 1970s. In the late 1990s and 2000s, for example, much was invested in evaluations and in the use of experiments, including randomized controlled trials (RCTs), in fields such as welfare. In the late 1990s, NICE started to provide evidence-based guidance for health commissioning, sitting alongside the worldwide Cochrane (and later Campbell) Collaborations. In the 2010s a group of ‘what works’ centres were set up, partly inspired by NICE, including the Education Endowment Foundation (EEF), the Early Intervention Foundation (EIF) and centres for children’s social care, ageing, wellbeing and policing.[i]
These thrived in the relatively un-ideological conditions of the 1990s and 2000s, and similar approaches were encouraged by global bodies such as the OECD and the World Bank, which were relatively insulated from politics.[ii] Conversely, such approaches tend to wane when more ideological conditions return (though, paradoxically, the US Foundations for Evidence-Based Policymaking Act was passed in 2018, at the peak of the Trump presidency) or in conditions of acute stress.
While the accumulation of evidence continues, there has been a steady shift away from a focus on supply (repositories and collections of various kinds) towards a greater focus on the conditions of demand and use, with more emphasis on demand-led approaches, conversation and engagement rather than reliance on text and big evaluations. This approach has guided the work of the International Public Policy Observatory (IPPO), which I help lead, and which has tried to combine rigorous and comprehensive evidence synthesis with active engagement with decision-makers, using conversation and roundtables as much as written reports to convey evidence to busy decision-makers.
In what follows I make seven observations on what might have been learned about the conditions for effective use of evidence.

This blog is part of the research project Rethinking Policy Impact, a UK-wide conversation on the principles, goals and approaches that should guide the policy impact agenda in higher education.
1. Many types of knowledge are relevant to decisions – and there is no obvious hierarchy
Science prioritises knowledge that has been verified and tested through rigorous processes of experiment and peer review, and that is therefore, at least in principle, generalisable. But it doesn’t necessarily follow that apparently scientific evidence is superior to other forms of knowledge in terms of its usefulness for policy. What works in one context may not work in another, with a different culture, social structure, institutions and capabilities (and there have been many failed attempts to transplant policies and interventions, for example from the US to the UK).
Indeed, the complexity of relevant knowledge is a first, and important, lesson from decades of attempts to base policy more on evidence, and from recent experiences such as the COVID-19 pandemic. In practice, policy-makers have to draw on multiple kinds of knowledge, including:
- Statistical knowledge (for example, unemployment rises in the crisis)
- Policy knowledge (for example, what works in stimulus packages)
- Scientific knowledge (for example, antibody testing)
- Professional knowledge (for example, treatment options)
- Public opinion (for example, quantitative poll data and qualitative data)
- Practitioner views and insights (for example, police experience in handling breaches of the new rules)
- Political knowledge (for example, when parliament might revolt)
- Economic knowledge (for example, which sectors are likely to contract most)
- ‘Classic’ intelligence (for example, how global organised crime might be exploiting the crisis)
- Ethical knowledge about what’s right (for example, on vaccinating children who may face relatively little risk from a disease)
- Technical and engineering knowledge (for example, how to design an effective tracing system or build a new high-speed rail line)
- Futures knowledge (foresight, simulations and scenarios, for example about the recovery of city centres)
- Knowledge from lived experience (the testimony and experiences of citizens, usually shared as stories, for example about experiences of the pandemic)
Different parts of academia can contribute to different parts of this list. The crucial point is that there is no meta-theory to tell you which to pay most attention to at which time. Faced with an epidemic, for example, it is wise to lean on the scientists. But they cannot tell you whether it will turn out to be socially acceptable to ban human contact, close the schools or arrest people for leaving exclusion zones, and in many cases the different types of knowledge will point in conflicting directions.
2. Synthesis is vital – but often missing
Action is usually simpler than analysis, so there is little point in having strong inputs of evidence without the capacity to synthesise them. Unfortunately, few governments appear to have good methods for doing this synthesis well. Universities, too, often lack either a theory or a practice of synthesis, for example to guide how to balance priorities for net zero against priorities for jobs or welfare. This is different from evidence synthesis, which usually covers only a small proportion of the relevant inputs that feed into a synthesis for action.
So any government badly needs the integrative intelligence of phronesis, or wisdom, and a space to have an open democratic dialogue about whether the judgements are right. That means being fluent in many frameworks and models and having the experience and judgement to apply the right ones, or combine them, to fit the context. It also requires skills in prioritisation and handling trade-offs, which scientists often find very difficult.
Addressing this topic should be part of the work of universities, and is particularly relevant to handling big transitions (such as net zero) which involve complex, interconnected systems combining engineering, economics, psychology and much more.
3. Digital technology helps – but only to an extent
The shift towards a focus on evidence has been both helped and hindered by the ubiquity of digital platforms. On the one hand, search tools make it far simpler to find material, and thousands of platforms aim to aggregate and curate evidence of different kinds, making it easier to scan for global knowledge and experience. For example, in the days of the UK government’s Strategy Unit, each new project would begin with a global scan of evidence and best practice, which would then be published.
On the other hand, social media have often spread misinformation and lies more easily than evidence, and it has turned out to be much harder than many expected to create useful platforms. Most repositories go largely unused, and few contain exactly the knowledge a policy-maker needs. Moreover, governments have made surprisingly little progress with knowledge management, particularly across departmental boundaries (and shared memory may have deteriorated with digitisation). Overall, externalising this function to independent bodies has tended to work better than internal knowledge management.
4. How to organise evidence: should it be separated from policy and practice?
This relates to the question of how to organise evidence within government. For several decades, the answer lay in the ‘Rothschild principle’ that researchers should work on commission for government clients. Later, the ‘new public management’ approach encouraged the view that policy should be separated from delivery. In the late 1990s, both were rejected on the grounds that they tended to make evidence less used and less useful. Instead, for a period, experts were embedded in the same teams as policy-makers and practitioners, with policy and implementation seen as an end-to-end process. My view is that this was right, and points to the need to integrate researchers, at least for periods of time, into policy teams.
5. Types of field determine how evidence can be used
How evidence will be used depends very much on the type of field being considered. In some fields knowledge is reasonably settled. The theoretical foundations are strong, governments broadly know what works, there is a strong evidence base and most new knowledge is incremental. Research focuses on filling in the gaps, and refining insights. Pilots can be designed relatively easily to isolate the key factors. In these examples – macro and some microeconomics, labour market policy, some curative and preventive health – the field is closer to a normal science. The professional bodies and leading experts can generally be relied on to give good advice, systematic reviews can generate clear-cut conclusions, benchmarking is straightforward, and good innovations spread fairly quickly through formal networks.
Very different are fields in flux. In these fields there is argument about what is known and about which categories or theories are relevant. People may agree that policies which once worked are no longer working, but they cannot agree on either the diagnosis or the solutions. In these areas, which include a fair amount of education, some environmental policy, crime and the organisation of public services, there is often a great deal of fertility and experimentation. Evidence is patchy and is more likely to reveal the weaknesses of policy than to provide convincing evidence about what will work in the future. The most promising innovations are as likely to come from the margins as from the mainstream. In these areas, other mechanisms are often needed to make use of knowledge: the collaboratives once used in UK health are one example, bringing together a diagonal slice of practitioners, researchers and decision-makers to consider what works, and valuing direct experience alongside formal research evidence.
Finally, there are genuinely new areas whose very newness precludes a strong evidence base: the regulation of biotechnology, the regulation of artificial intelligence and the design of carbon taxes are all examples. In all three types of field, evidence has a critical role to play. But only in the first is it meaningful to talk of policy as based on evidence rather than informed by it.
6. Evidence can inform but not guide innovation and imagination
By definition, evidence is knowledge of the past whereas innovation and imagination have to point to the future. It is useful for innovators to be aware of the available evidence but if they are too constrained by it, they will fail to innovate. I discuss this issue in much more detail in my new book ‘Another World is Possible’.[iii]
7. Limits
Much is known about the limits of evidence and its use. First, the evidence on evidence shows very clearly that who it comes from, and how it is communicated, matter as much as the content. The idea that good evidence is persuasive on its own has turned out to be almost wholly wrong. For example, doctors are far more likely to believe new evidence if it comes from other doctors, and politicians are much more likely to believe evidence if it is validated by the media they trust.
Second, time imposes severe limits on evidence. Governments often have to act far in advance of sufficient evidence, and not just in crises such as pandemics.
Third, democracy sometimes overrides evidence. In the past, experts have often been wrong. So, while politicians today have no right to be ignorant of the evidence, they have every right to ignore it.
These considerations should help to guide the interaction of research and policy, which needs to be plural, attuned to the conditions of use, aware of its own limits, and conscious of the need for synthesis and systems thinking in its application.
[i] I wrote an overview of their work here: https://www.nesta.org.uk/blog/celebrating-five-years-of-the-uk-what-works-centres/
[ii] For a full overview, see chapter 5 of my book ‘The Art of Public Strategy’, Oxford University Press, 2009
[iii] ‘Another World is Possible: how to reignite social and political imagination’, Hurst, 2022
Professor Sir Geoff Mulgan, Professor of Collective Intelligence, Public Policy and Social Innovation, University College London, International Public Policy Observatory (IPPO)
The RSE’s blog series offers personal views on a variety of issues. These views are not those of the RSE and are intended to offer different perspectives on a range of current issues.