How different governments draw on expertise and evidence and why it matters for impact
- Rethinking Policy Impact
- Dr Will McDowall
Higher education in different countries engages with very different systems of policy advice. In recent work for the Institute for Government (IfG), a colleague and I explored how different advisory systems work for energy policy. Our findings highlight some of the ways in which advisory systems differ, and why such differences might matter for understanding ‘research impact’.
My work at IfG was prompted by a concern that the UK was making what sports commentators might call ‘unforced errors’ in energy policy: stalled energy efficiency policies, expensive (and late) nuclear power plants, and fiascos like Northern Ireland’s ‘cash-for-ash’ renewable heat scandal. It is tempting to think that, if only government were better at using evidence, such policy failures could be avoided. And perhaps other countries do it better. I was asked to examine whether the UK was using evidence effectively, and whether it could improve by learning from how other countries draw on evidence to inform energy policy.
We found that the UK has a pretty good story to tell. The main energy department—the Department for Business, Energy and Industrial Strategy (BEIS)—has a strong cadre of analysts, clear internal processes ensuring that evidence is gathered and considered in policy design, and internationally impressive capacity in key analytic areas such as energy modelling. But while the UK has strong analytic capacity in government, we also found that the department risks being too insular: there is lots of internal analysis, but too little willingness to seek out diverse viewpoints, creating a risk of ‘groupthink’. Civil servants too rarely reach out to expertise beyond government, in business and academia.
A very different system is found in the Netherlands, which relies much less on internal expertise, and much more on external bodies. The Dutch have built an elaborate array of expert panels, advisory councils, and—the jewels in the evidence crown—the ‘planning bureaus’. Close to government but fiercely independent, these bodies provide ostensibly neutral, expert advice. It is these expert bodies, rather than civil servants within mainstream government departments, that produce and digest much of the evidence that informs policymaking. Civil servants working in the energy policy department characterised themselves as generalists and ‘process managers’ rather than as energy experts.
Like the Dutch, the German system is also more reliant than the British on external expertise. But unlike in the Netherlands, in Germany we came across widespread scepticism towards the idea that evidence for policy is ever impartial, purely expert or non-political. Germany’s political system involves many veto players, and a process of consensus-building and negotiation between government departments (often, in coalition, led by different parties), and between federal states and the federal government. While evidence is a key ingredient in this messy process, different players are seen as coming to the table with ‘their’ evidence. The result appears to be a much more explicitly political process of negotiation and contestation, in which many perspectives—and diverse evidence—are brought to bear on policy questions.
These brief caricatures illustrate that each country has a different ‘policy advisory system’, each with strengths and apparent weaknesses. The UK’s system—with its reliance on internal expertise—ensures that policy teams can work hand-in-hand with analysts to question evidence and test their assumptions. But the UK system lacks the diversity of perspectives that can come through the externalisation of evidence-building that we saw, in very different ways, in the Netherlands and Germany. As a result it risks taking too narrow a view.
What does this international diversity of policy advisory systems mean for the impact agenda? First, and most obviously, we should be cautious about simple comparisons between countries’ approaches to rewarding and encouraging ‘policy impact’ from higher education. Such comparisons focus only on the ‘supply side’ of evidence from academia. But each country’s policy advisory system is different—and differing demands for evidence from academia will play an important role in shaping the routes to policy impact.
Second, different policy advisory systems may influence how difficult it is to assess policy impact. When much of the work of synthesising evidence and applying it to policy problems is conducted inside government departments, typically in unpublished internal analysis, it is difficult to demonstrate that any particular body of academic work has had an impact. This is especially true when that impact is intangible, providing government policy officials and analysts with a greater understanding of the policy problems and possible solutions, rather than providing evidence that specifically informs a particular decision (a distinction highlighted over forty years ago by Carol Weiss). In contrast, academics sitting on formal external advisory bodies, of the kind relied upon in the Netherlands, can show more readily that they have influenced policy.
Hustedt, T., & Veit, S. (2017). Policy advisory systems: Change dynamics and sources of variation. Policy Sciences, 50(1), 41-46.
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426-431.
Dr Will McDowall is an Associate Professor at the Energy Institute and Institute for Sustainable Resources at University College London
The RSE’s blog series offers personal views on a variety of issues. These views are not those of the RSE and are intended to offer different perspectives on a range of current issues.