What policy learning tells us about impact’s blind spots
- Tertiary Education Futures Blogs
- Professor Claire Dunlop
Using the policy learning literature in the social sciences as a lens on impact is revealing. Here, I sketch three blind spots of impact which learning accounts bring into focus. There are more, no doubt. But I think the combination of these three is central to the challenge of rethinking impact. That impact has its own narrow telos focussed on change, that it is success-oriented, and that it reifies an expert/beneficiary dyad are all in tension with the reality of how social learning operates and what change can be.
Before I get started, I should say these blind spots are not new. They have been there since the start of the impact experiment in UK higher education fifteen years ago. What is new is that now we have a sense of how the tensions they produce are playing out empirically and the policy economy of impact – who gets funded, what gets reported and what doesn’t – that has been created.
This blog is part of the research project Rethinking Policy Impact
A UK-wide conversation on the principles, goals and approaches that should guide the policy impact agenda in higher education.
1. Change and the problem of impact’s telos
Conceptualisations of policy learning were, for a long time, based on two restrictive assumptions: learning is a binary thing and the policy change flowing from it is measurable. Thus, the analytical tendency was to direct energy toward isolating learning variables from other factors (interests, institutions, events etc) and tracking their effects through counterfactual reasoning and so on. These days, we think less in terms of the presence or absence of learning and instead assume learning in public policy is always present in some shape or form. Allied to that, greater recognition of non-linear conceptions of the policy process means learning analysis is now able to offer more honest, holistic and messy views of what combines when knowledge shows up in outcomes. In short, these days we attend to the ‘mode’ of learning – what actors (plural) are generating knowledge in an area, how the problems to which lessons are applied are framed and what solutions lessons are directed toward (Dunlop and Radaelli 2013). Analysis focusses on the variety and combination of causal mechanisms at play rather than trying to artificially isolate a single moment where learning matters.
That old binary and mono-causal thinking underpins the current conception of impact in UK higher education. Note, I am interested in how impact operates on the ground in our universities. While the REF case study guidance does include a fairly expansive definition of impacts, its interpretation on the ground is defensive and risk averse. Rules in form are one thing; their implementation in use is often quite another (Crawford and Ostrom 2005). And so it is with impact. What results are narrow and conservative visions of what academic research can do and actually does for the policy world. In its current form, as a term, impact operates by drawing a comparison. ‘That’s not impact, it’s X’ (the blank is often filled with ‘dissemination’ or ‘engagement’) or ‘Not enough has changed for this to be impact’ are corrections many of us have heard (and given) when offering up what we thought could be an impact possibility. To posit impact, we must ask: compared to what? Impact involves an interpretation that: a) here we have something different from the more familiar academic activities of dissemination, engagement, esteem, research, teaching etc; and b) demonstrable changes in something tangible can be linked directly back to academic research (or an academic). Yet, judging the boundaries of these is a subjective business. Each academic, discipline, sub-discipline, etc will understand them in different ways. In the face of this ambiguity, audit guidelines have drawn definitional boundaries which, by and large, have created a tight landing strip for impact. These demarcations imply impact has a telos – a correct form and direction – which sets it apart from alternatives like dissemination, and where change in policy is the ultimate prize.
The result is a conservative way of thinking about impact in universities and a homogeneity in impact case studies (Dunlop 2018). To illustrate, in REF2014, politics and international studies scholars mainly informed powerful institutional actors – decision-makers, governments and Parliaments across the UK – on bread-and-butter policy issues. So, at least, we have some degree of confidence that academic research is informing policy. But, when we examine what is being affected it is not policy itself but rather the wider climate of policy. Moreover, studies addressing complex social challenges – poverty, sustainable development etc – which are less amenable to simplistic conceptions of causation and change are thin on the ground (Dunlop 2018, 2019).
With all this focus on satisfying a tight definition, we might well wonder about the intellectual losses racked up as a result, given the possible impact stories that go untold and lessons not learned. Ironically, losses of possible benefits to society and polity are also incurred, since investments (from IAAs, impact funds etc) in researchers’ time and resources are often not forthcoming for activities which don’t meet the narrow definition of impact.
After over a decade of the impact experiment, the worrying possibility is that impact only exists if we can operate within this normative course of action. For the researcher, sometimes working closely with the ‘beneficiary’ and sometimes not, this means using and sometimes crafting knowledge that works toward a policy output. All of this assumes that we, academic and / or beneficiary, already know how policy in an area should function. We are back, then, to binary ways of thinking and within-paradigm working. What remains hidden and stymied is the broader and transformational potential of academic research.
2. Impact is not always a ‘good thing’
The normative assumptions embedded in impact share some similarities with those of learning approaches in policy studies. Only very recently have we begun to acknowledge the normative neutrality of policy learning (Dunlop 2017), leaving behind old ideas that assume policy effectiveness results from learning (Etheredge and Short 1983). That knowledge utilisation in policy-making may not always be a ‘good thing’ is something long known in policy evaluation scholarship. After all, Carol Weiss and Michael Bucuvalas (1980) famously spoke about ‘policy endarkenment’ over four decades ago. This penny has finally dropped in policy theories, where there is a burgeoning literature on learning and policy failure pointing to: problems of assuming knowledge is up-to-date, impacts of cognitive biases, problems of analogical reasoning, dangers of researchers becoming guns for hire etc.
This darker side of knowledge and policy is another blind spot when it comes to impact. In their current form, impact case studies are success stories where impact is always a good thing. Yet, being sure of research quality when assessing impact is not straightforward in the REF system as it stands (where the research quality threshold is asserted in the submission and not formally assessed). I should say, this is not a plea for more assessment! Rather, it’s a nod to the curiosity noted long ago by Dunleavy (2011) – whereby (apparently) 2* research can underpin 4* impact. This is, in part, what Weiss and Bucuvalas had in mind when they spoke of endarkenment.
Even where we are assured of quality, bodies of research may be cherry-picked to support pre-existing preferences (such motivated reasoning approaches might fairly lead an impact assessor to conclude ‘that was going to happen anyway’). Perhaps more darkly, in their discussion of ‘grimpact’, Derrick and Benneworth (2019) point to possible pathologies where academic research is used to support policies which are socially harmful. Here again, how change is defined creates this blind spot. By linking impact to change which is countable, the broader nature of that change for society and polity is ignored. Within our academic communities, acknowledgement of the darker side of impact and exploration of our ethical boundaries are needed (see Cane 2019 for a thoughtful contribution on this).
3. It takes a village … and it often takes that village a long time
Finally, one of the enduring lessons in learning research is that where change happens at the deep level of policy outcomes, it is frequently the result of a community of actors puzzling and interacting over extended periods (Dewey 1927; Hall 1993; Heclo 1974). Certainly, transformative change is rarely the result of a burst of epistemic learning removed from wider social forces. In fact, this relationship is empirically less prevalent than those involving broader configurations of stakeholders, citizens, policy advocates, officials and (yes) academics and their research.
Impact, then, is an intensely social process (see Smith et al 2020 for empirical exemplifications of this point). And, while part of the experience of ‘doing’ impact is being part of these complex interactions, the business of reporting impact is a stripped-back fiction of experts and beneficiaries where impact is a moment in a prescribed window of time. Even if we assume that impact is just about epistemic learning, the presumption of the expert as ‘sufficient’ and the beneficiary as ‘in deficit’ is simplistic to say the least (but undergirds many impact case study narratives). Moreover, familiar but unwelcome academic and social hierarchies are baked into impact. Under-represented groups in academia face particular challenges when doing impact (Dunlop 2018; Savigny 2019) which are yet to be acknowledged and unpacked in impact policies (for example, the issue is absent from UKRI’s 2020 big review on EDI and research). These disadvantages are revealed in who gets to say they do impact and who benefits from it. For example, the outdated model of the solo (often white male) hero academic is repeated in REF case studies despite everything we know about academic research bearing the imprints of many researchers over time.
Yet there is still the more fundamental problem that, when it comes to learning, the expert/beneficiary dyad is just one relationship among many. Impact is multi-actor and multi-causal. While a stripped-down dyadic account may score highly in the assessment system, we diminish our social understandings by focussing on only one link in the chain. By calling something impact, as it is currently conceived, we risk over-determining the significance of (some) academics in policy-making (see Lindblom and Cohen 1979 for the classic and trenchant account of this). What we obscure are processes of social learning and transformation which involve a wide range of actors interacting and co-creating ideas unfolding at a slow pace. Opening up impact reporting in ways that make room for context and involve this wider range of social actors would produce more insightful understandings of whose ideas matter and why. It would also result in more adventurous and realistic accounts of the importance of academic research in our world.
Claire Dunlop is Professor of Politics and Public Policy at the University of Exeter. She is currently Vice-Chair of Political Studies Association of the UK (PSA).
The RSE’s blog series offers personal views on a variety of issues. These views are not those of the RSE and are intended to offer different perspectives on a range of current issues.