By Richard Longhurst, Peter Wichmand, Burt Perrin

The term 'evaluability assessment (EA)' is hardly one to start the mind racing and the heart beating. And if 'institutionalising within monitoring and evaluation frameworks' is added, readers' eyes probably glaze over very quickly. This all sounds like yet more jargon brewed up by the evaluation profession. But the newly published CDI Practice Paper entitled 'Building Evaluability Assessments into Institutional Monitoring and Evaluation (M&E) Frameworks' fits in nicely with the developing work on assessing 'complexity in practice'. It is difficult for large international organisations to take 'complexity' on board as they design, modify and evaluate programmes. This Practice Paper documents some experiences in trying to do just that.

The CDI Practice Paper draws on the last five years of experience in the International Labour Organization's International Programme on the Elimination of Child Labour (ILO-IPEC), backed up by 15 years of methodological development. The paper documents programme experiences in El Salvador, Ghana, Peru, the Philippines and Thailand. Interventions by ILO-IPEC are certainly complex: efforts to reduce child labour need to address its multiple causes, leading to a range of interventions, from work on the enabling environment at national and sub-national levels to targeted direct actions with children, families and communities.

Benefits of an institutionalised evaluability assessment

What is needed is a means of keeping all stakeholders constantly informed and committed, working to the same Theory of Change and having the space to propose and implement changes, thereby clarifying the choices among future evaluation methods. Doing this through a separate EA exercise, often run by external consultants, can disrupt that dynamic and does not always come at the time when stakeholders are ready. This is where an 'institutionalised' EA can help.

EAs are having their day in the sun, back in vogue after a flurry of activity in the 1990s. Evaluability is defined as 'the extent to which an activity can be evaluated in a reliable and credible fashion'. EAs aim to guide the planning, design, implementation and communication of evaluation activities. They should be integrated into existing monitoring and evaluation (M&E) structures, strengthened to focus more squarely on outcomes, to assess the influence of context- and intervention-related factors, and to measure the direct and indirect effects of outside interventions, whether implemented by IPEC or by other organisations. A tall order indeed if it is to enable better decisions on the choice of evaluation approaches, and hence an approach well geared to the resources of large organisations.

Institutionalising EA: Experiences from ILO-IPEC

In ILO-IPEC, the use of comprehensive monitoring and evaluation strategies has allowed sufficient focus and resources to be devoted to M&E to institutionalise the EA within ongoing M&E.

In all of the country examples, each involving projects of over $5m, key decisions were made by securing stakeholder support for the outcomes to be achieved, the agreed level of credibility of the forthcoming evaluation (including for any specific impact evaluation components), its timing and its potential for use.

  1. In Ghana, the four-year Cocoa Communities' Project aimed to eliminate child labour by strengthening community action and social surveillance, and by improving households' livelihoods and children's access to education. The EA assessed the required level of credibility and the choice of data, and so worked as a learning tool: it resolved competing ideas about the use of the baseline study for impact evaluation and identified the combination of evaluation methods to be used.
  2. In El Salvador, the project focused on improving livelihoods, providing direct support to schools and sensitising them to the dangers of child labour. After stakeholder discussion at the design stage, the EA provided the justification for key decisions and for the overall framing questions about which interventions the evaluation should cover.
  3. In Thailand, with the donor pressing for a specific type of impact evaluation, the EA led to a redesign of the baseline, which instead became the documentation of the incidence of child labour that the government required. The EA addressed whether sample sizes were sufficient to make the required type of impact evaluation tenable, concluded that they were not, and so informed decisions on the feasibility of the impact evaluation. Prolonged back-and-forth negotiations with stakeholders over this issue showed that an integrated EA (rather than a stand-alone one) was best suited to the situation.
  4. Finally, in Peru and the Philippines, where ILO-IPEC provided technical M&E advice to partners implementing the projects, the EA was built into the comprehensive M&E strategy. It identified the impact evaluation methods to be used, the feasibility of establishing a clear Theory of Change, and the extent to which observable change could be expected within the time frame of the evaluation design. An impact evaluation was also required to expand the donor's knowledge base, a requirement that proved vital in influencing the design.

With the popularity of EAs now expanding (particularly through their use in the Sustainable Development Goals process), their integration within existing institutional set-ups improves the Theory of Change and stakeholder engagement, and deals more effectively with complexity. It also allows early planning of evaluations, making stakeholders more 'evaluation aware'. Coming up soon, the April meeting of the UN Evaluation Group in Geneva will focus on evaluability assessments of the Sustainable Development Goals.

Richard Longhurst is an IDS Research Associate; Peter Wichmand is Senior Evaluation Officer, Evaluation Office, International Labour Organization; and Burt Perrin is an Independent Consultant, Vissec, France.

Image: Rice Terraces, David Fleming

Partner(s): Itad, Institute of Development Studies