By Chris Barnett

In this blog, the second in our series on ‘hot debates in impact evaluation’, we focus on innovation and learning around impact methodology. For over a decade, ‘evaluating impact’ has been dominated by strong advocates of a narrow set of quasi-experimental and experimental methods – and yet such methods are not appropriate for all situations, and social science offers many other robust ways of assessing causality and change. At the Centre for Development Impact (CDI) we are interested in innovating and learning from a range of designs and methods in order to evaluate impact appropriately. For us, innovation could mean emergent methods, but it could just as easily mean applying established methods in new fields, new sectors, and new locations.

Here are some examples of our recent work:

Using adaptive programming and complexity science to achieve impact

Ben Ramalingam’s book ‘Aid on the Edge of Chaos’ highlights how aid agencies ignore complexity at their peril. International development may certainly be complex (with multiple stakeholders, multiple pathways, multiple policy interventions, and so on), but what does this mean for evaluators? How do insights from complexity science and systems thinking change the way we think about evaluation and about achieving impact? At CDI we have begun to explore the potential and limitations of such approaches (see our recent IDS Bulletin ‘Towards Systemic Approaches to Evaluation and Impact’).

Testing social science methods in ‘live’ evaluations

Since Elliot Stern and his team’s landmark paper on ‘Broadening the range of designs and methods for impact evaluation’, there has been much interest in emergent methodologies – including methods drawn from a range of other social science disciplines. Methods like ‘Qualitative Comparative Analysis’ and ‘Process Tracing’ have recently risen up the agenda in development circles – liberally (and in my view, often poorly) sprinkled across evaluation designs. And yet, while some of these methodologies have a long history (and literature) in the social sciences, less is known about how best to use them for evaluation in international development. Our work at CDI draws upon recent and ongoing evaluations, where we can learn lessons about the real-life challenges and limitations – such as through recent workshops on Process Tracing and Realist Evaluation, as well as papers on ‘When N=1’ and ‘Natural Experiments’.

Mixed (and other hybrid) evaluation designs

Of course, evaluating impact involves more than merely choosing methods. As the recent BOND paper makes clear, there is much more to think about when designing an evaluation. While mixed methods are in vogue, there are few examples of thoughtful mixed designs – i.e., design being not simply a collection of different methods, but rather a careful consideration of the evaluation questions, the characteristics of the intervention, and the context, and how these inform an appropriate design. For example, it may be that two or three theories of change are operating simultaneously (e.g. one for a grant fund, another for particular grants), or that some elements of an intervention can be assessed using a counterfactual logic while others cannot. At CDI, we are making our contribution to thinking in this area, such as how best to frame different design choices, with examples of mixed designs in social protection, empowerment and accountability, and mobile phones and nutrition.

In summary, CDI aims to play its part in broadening our understanding of the ‘new science’ of evaluating impact. Do you have insights on designs and methods to share?

Chris Barnett is Director of the Centre for Development Impact, and Director of Itad.
