This report highlights the paradox within impact investing: the prioritisation of ‘social impact’ without prioritising ‘impact evidence’.
Policies and interventions aimed at influencing complex, multi-actor and dynamic processes of change (democratic reform, peacebuilding, market transformations, equity and gender reforms, climate change, etc.) require new evaluative approaches and methodologies. There is a largely unrealised potential for robust impact evaluation designs, whether by adapting emergent methodologies or by drawing on proven ones from other disciplines. New combinations of mixed, hybrid and nested designs are possible, and approaches can be derived from best international evaluation practice and successful multi-disciplinary research.
The focus of the research is on:
- Exploring how complexity science can contribute to evaluating impact.
- Developing emergent evaluation methodologies from the social and natural sciences.
- Developing ways to assess impact where control groups and large-n data are not possible.
- Learning how best to evaluate the impact of complex interventions in different contexts.
- Furthering the debate around the application of systems thinking and complexity to evaluation.
- Learning how systems-based approaches are applied in international development.
- Sharing understanding of the strengths and limitations of systems-based approaches.
This report explains the methodology used to analyse the demand for evidence and accountability within the impact investment market.
This paper by Adinda Van Hemelrijck and Irene Guijt explores how impact evaluation can live up to standards broader than statistical rigour in ways that address challenges of complexity and enable stakeholders to engage meaningfully.
Latest blog posts
At the European Evaluation Society 2016 conference (EES2016), I found myself attending almost exclusively sessions related to Qualitative Comparative Analysis (QCA). Two years ago, QCA was still the newbie at EES, but with seven sessions on the topic at this year’s conference it’s clear that the approach has taken a prominent role in European evaluation discourse and practice.
The term ‘evaluability assessment (EA)’ is hardly one to set the mind racing and the heart beating. And if ‘institutionalising within monitoring and evaluation frameworks’ is added, readers’ eyes probably glaze over very quickly. This all sounds like yet more jargon brewed up by the evaluation profession. But the newly published CDI Practice Paper, ‘Building Evaluability Assessments into Institutional Monitoring and Evaluation (M&E) Frameworks’, fits nicely with the developing work on assessing ‘complexity in practice’.