Oh yes it is! I know you’re thinking that sounds weird: evaluation isn’t about management, is it? It’s about the programme or project, about reporting to donors and the board, and about generating evidence, isn’t it? No, not really.

Let me clarify: yes, evaluations are used for all of those purposes, but they are also an incredibly useful and powerful tool for programme design, organisational development and even change management. Or at least they should be. There is a growing international consensus around evaluation and the standards it should adhere to. I have already mentioned how I use the OECD DAC criteria during my evaluation process in my blog about my golden rules for evaluations. There are also four standards for good evaluations: Utility, Feasibility, Propriety and Accuracy. These have evolved as evaluation associations (yes, they do exist), bilateral funders, donors, NGOs and others have come together to try to standardise evaluation practice. The first of these standards, Utility, is what I’m focused on in this blog: evaluations have to be useful. And they are, more so than most NGOs are prepared to consider. Two things in particular make them useful to management: improved programme design and the recommendations made in evaluation reports.

Programme Design

It is true that many NGOs view evaluation as an activity that happens only at the end of the project cycle. But increasing numbers of NGOs are realising that in order to ask appropriate evaluation questions at the end of a project, you need to build evaluation into the design of the project in the first place, to ensure you generate appropriate results and outcomes that the evaluators can use. Of course an evaluator will generate their own data anyway to test your results, but the process is more useful, and the recommendations from the evaluation more usable, if the original design has been informed by evaluation from the outset. So evaluators are becoming more involved in advising on programme design.

Evaluation Recommendations

Almost every evaluation makes recommendations as a result of the evaluation process. Unlike monitoring reports, which focus on the tweaks needed to keep a programme on course, these recommendations tend to focus on substantial changes to the programme, the organisation or the theory of change. Evaluation recommendations are usually quite robust and, in some cases, quite challenging to the status quo. Evaluators don’t often get to see whether their recommendations are implemented (we don’t always get to see the management response that gets submitted to donors). I did have an experience quite recently where I revisited an NGO that I had made quite strong organisational recommendations to on a previous project. None of the recommendations had been implemented and the existing problems had simply got worse.

Of course, evaluators cannot be let off the hook here. We need to make sure that evaluation reports and data are presented in a way that is accessible and useful, which can be a challenge when some of the data is complex and quite dry. The results and evidence generated by evaluations are useful tools for driving organisational development and improvement. After all, we all want to be better at what we do, so that the people and communities we help enjoy better outcomes, and that is what we are in this business for, isn’t it?

How are evaluation reports used in your organisation? Do you think they could be used more effectively, or presented better by evaluators, so that they are more useful? Have you ever seen an evaluation report or its recommendations used proactively by management?

