This blog is written with the evaluation commissioner (the client) in mind, at the stage where they have to assess whether a proposed design is going to be successful — but the advice is also relevant for those designing evaluations and those drafting terms of reference for evaluations!
It might seem like an obvious thing: good evaluation design. But I have recently come across a number of evaluation commissioners who have not always understood what they should be looking for when it comes to evaluation design, either in the tenders or proposals they have to review or in the designs presented by contracted evaluators. Sometimes evaluators will try to bamboozle the client into accepting their favourite evaluation approach, and the client is none the wiser.
So next time you are faced with having to choose a proposal for the evaluation that you need to commission, consider whether it has the following characteristics:
- The design is aligned to your Theory of Change and Log Frame or strategic framework: In most cases you want to know whether you’ve delivered against your Theory of Change and whether your interventions have achieved or contributed towards the change that you are working towards. So it makes sense that the evaluation design takes into account what you are trying to achieve and puts forward approaches, methods and tools that are geared towards finding out whether you have indeed made a difference;
- The key evaluation questions are aligned to your Theory of Change and Log Frame or strategic framework: This does sound obvious, but I’ve often seen terms of reference that pre-determine what the key evaluation questions (all 27 million of them) will be, and that they will conform to the OECD DAC criteria, the DCED criteria, some other criteria, or worse: someone’s pet obsession. Why hire an evaluator if you’re going to ask all the questions before they come on board? You need to ensure that a small number of focused questions actually link into your theory of change, organisational performance or whatever it is you are evaluating. If a donor wants you to take the OECD DAC criteria into account, for example, then you can do that at a later stage in the evaluation process, rather than force the design into the criteria model;
- The design is participatory, putting key stakeholders at the centre: The people who know whether or not your intervention has worked are the people affected by your intervention – be it beneficiaries, their families or households, the community leaders or community organisations working with you, or your programme team. All these groups have to be included in delivering the evaluation in some way. Where possible, ensure beneficiaries can feed into the data collection process, either by providing data or by participating in the data collection process. Ensure all stakeholders are consulted or included in some shape or form;
- The evaluator will want to include you and your team in the design process: The evaluation recommendations are going to be used by you and your team to make changes and improvements to the programme or intervention. So you should be involved in the evaluation design, informing the focus of the evaluation question(s) or the design and application of tools and methods. This is a great opportunity to build your team’s capacity in evaluation methods and tools, and it should be seen as much as a learning opportunity for you as a requirement from the donor or senior management;
- The design is costed and within the cost envelope: Sounds obvious, I know, but this point suggests two things: 1) you will advertise what the budget or budget envelope is when you send out the terms of reference, and 2) evaluators can demonstrate that they can put a decent budget together, with well-thought-through costings that appear reasonable and allow some flexibility for unforeseen costs. You don’t just want an evaluation; you want one that is value for money and represents an appropriate use of the allocated funds. So make sure the evaluator can do their sums.
There are many different methods and approaches to evaluation, and you will notice that I’ve not spoken about whether it should be an experimental, quasi-experimental, process, outcome, impact, formative or summative evaluation – that’s going to depend on the specific situation: what point you’re at in the intervention, what the purpose of the evaluation is, how much money and time you’ve got, and so on. The key thing to look out for is whether the methods proposed by the evaluator fit your needs. Don’t worry if you didn’t understand some of the terms I used a few lines up; a good evaluator will either avoid jargon or explain it upfront if it really matters.
Do these characteristics chime with your experience? What characteristics would you use when looking for good evaluation design?