Evaluation should be about testing models, not organisations
Who wants to shout about something that’s gone wrong?
We have a mission at Nominet Trust to understand how the internet can be used as a force to disrupt social challenges and create positive change (as I write this, it does vaguely sound like it should also be the motto of a lesser-known, slightly geeky comic book hero whose secret power is wifi).
A lot of our work is about looking at new models for change. However, by definition, not every new model will impact a social challenge as deeply as another, and even if a model has worked in one place there's no guarantee it will work in another. Scaling and replication is not a simple business. So, to even have a chance of understanding how these different models of change work in different arenas, we need to ensure that we capture the learning from projects that haven't gone as planned, as much as from those that have.
But there's the rub: no one wants to shout about things that have gone wrong. It's not easy to report that internally, let alone to a funder or the wider sector. So how do you systematically go about establishing a culture that welcomes information on what didn't go so well as much as on what did?
Firstly, I think it's important to separate project monitoring from evaluation. For me, monitoring is to do with project management: checking everything is running as it was intended to (and that someone hasn't run off to Hawaii with the investment). Evaluation, however, is about testing a model, not an organisation. It should test our assumptions about a theory of change and the extent to which the model achieves its outcomes.

As part of this, a project needs to set out an evaluation strategy that identifies its sought outcomes, with appropriate and realistic measures and indicators to capture progress against them. These outcome indicators need to resonate with the project delivery team (and potentially other stakeholders, like the beneficiaries) and reflect what they think the project is trying to do; otherwise the evaluation will just be a paper exercise. The strategy also needs to highlight the assumptions that underpin the project, so it can reflect on how these affected the extent to which the project achieved its outcomes. It's only by testing these assumptions that we can begin to think about how the model could work in a different environment.
Lastly, the learning about the effectiveness of the model needs to be shared with the wider community; that way, others can learn about what works directly from project organisations. To be honest, we're still working out how to do that at NT, and if you have any thoughts they would be greatly appreciated. We know it will be the role of the knowledge centre, but openly sharing information is a complex area. It's hard enough to get people to report to a funder on things that didn't work as well as they'd hoped, let alone to a wider audience.