Evaluation...lost in translation?
On: 30th January 2012
In my last post, The quest for the holy metric?, I argued that the heart of the shared measurement issue in social development is not an evaluation problem but a translation problem. What we really want to know is: how can I translate the social value that I have captured into a language that you understand, so that you value the work too? That way we keep the integrity of bespoke evaluation measures, but the value can be commonly understood.
One approach currently taken in the sector (take SROI, for example) is to use proxies to translate value into a currency that we all understand. The problem is that the currencies or proxies being used do not always translate very well. It takes a lot of steps to attach a proxy to a measured outcome in the evaluation process, and even more for someone else to translate it back from the proxy into their own value system.
A second approach is to create a shared umbrella that we can all align our values under, such as a common outcomes framework. There are some attempts to do this, and personally I wonder if the well-being approach might be one way to achieve it. However, using common frameworks or measures always risks us ending up with projects that are expected to fit standardised frameworks and measures. This means we can end up with Bernard Shaw's ill-fitting suit. There is also the ever-present danger that it will make people perform to a measure, as articulated by Goodhart's Law. http://en.wikipedia.org/wiki/Goodhart's_law
So if not these, what’s the other option? This is where we come to translation. I suggest an alternative way is to take a captured change that is meaningful for us and allow other people to view it in a way that is meaningful for them.
The exciting bit is that, firstly, we are already doing this in the sector and, secondly, with the availability of large data sets it will become more and more possible.
Here's a brief example of this type of translation that is probably happening already. A young person at risk of exclusion is engaged in a programme that aims to improve 'soft' skill development. The organisation running the programme will articulate the value they have created by reporting how the young person has increased in a set of 'soft' skills. They may also think about how to 'translate' this into language that the school values: for example, increased engagement in school, increased academic achievement and fewer exclusions. Correlating the young person's relative progress in both areas is how that value can be translated and communicated.
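To make that translation step concrete, here is a minimal sketch of what "correlating progress in both areas" could look like in code. All of the scores below are invented for illustration: the programme's own 'soft' skill measure and the school's attendance figures are hypothetical, not real data from any project.

```python
# A hedged sketch: correlating a young person's 'soft' skill scores
# with a school-valued outcome over the same terms. All numbers are
# hypothetical, purely to illustrate the translation step.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Termly scores on the programme's own measure (e.g. confidence, 1-10)...
soft_skill = [3, 4, 6, 7, 8]
# ...and the school's measure (attendance %) over the same five terms.
attendance = [72, 75, 83, 88, 91]

r = pearson(soft_skill, attendance)
print(f"correlation between soft-skill score and attendance: r = {r:.2f}")
```

A strong correlation here doesn't prove the programme caused the attendance change, but it does let the organisation present its bespoke measure alongside one the school already values.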
However, it's not always that clear, or that easy, to get the information that helps us translate movement against our measures into movement against other people's.
This is where the power of contextual data comes in: data drawn from a lot of different sources, including open data. For the first time ever we have access to masses of data drawn from government and myriad other sources, available for us to contextualise our evaluation findings. Further still, we can get at the data that underlies others' analysis and presentations of data. (This is not to say that others' analysis and presentation of data is not useful; it's just that we might want to use the raw data for a different reason.) Tim Davies introduced me to these models, which help demonstrate this idea.
Diagram 1 - the ‘traditional approach’
Diagram 2 - a Power of Information model
Power of information Task Force Report (2009) accessed 16th Jan 2012 at http://webarchive.nationalarchives.gov.uk/20100413152047/http://poit.cabinetoffice.gov.uk/poit/wp-content/uploads/2009/03/poit-report-final-pdf.pdf
It shows that rather than only looking at people's interpretations of data, we can actually go to the raw information itself. So, for the first time in history, we have access to a wide variety of sources against which to compare our evaluation findings, and with which to find ways to translate value.
If you want to know more about how to get stuck into this kind of work, Tim Davies worked with the Nominet Trust on an ‘Open Data Day’ and wrote up an excellent description of what we did on his blog. It contains a lot of valuable links and guidance, including pointers to the type of tools that can help you to work with open data. http://www.timdavies.org.uk/2012/01/10/exploring-open-charity-data-with-nominet-trust/
Of course there are a few things to note before leaping on contextual data as the answer to the common measurement problem.

Firstly, a challenge of using contextual data to support evaluation is not falling into the 'correlation = causation' trap. Just because there's a trend in contextual data doesn't mean you are responsible for it. Say, for example, you are running a youth offending project with 12 participants. A drop in reoffending rates in the local borough does not mean you can claim this as a result of your project! However, you could take average reoffending stats for the demographic you are working with, and check whether participants on your project have lower reoffending rates than would be expected (i.e. using appropriate contextual data as a baseline measure).

Secondly, the data itself is not all naive and innocent. It is already representative of values and laden with assumptions. For example, stats that relate to 'getting people online' could mean anything from picking up a mouse to integrating technology into daily life. We always need to interrogate where the data came from.

Lastly, we should always be using contextual data to support our understanding of our evaluation findings, not the other way round. Otherwise we could get caught in the trap of looking at what other people are capturing and thinking that's the only way to measure things. This is the equivalent of having common measures which, as we've established, may lead to an inappropriate evaluation strategy that doesn't capture the change we want to measure.
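The baseline idea in the reoffending example can be sketched in a few lines of code. Everything here is invented: the 40% borough rate and the project figures are stand-ins, chosen only to show how contextual data can set an expectation without supporting a causal claim.

```python
# A hedged sketch of using contextual data as a baseline, not as a
# causal claim. The borough rate and project figures are invented.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p) — exact, fine for small n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

borough_rate = 0.40   # hypothetical reoffending rate for the matched demographic
participants = 12
reoffended = 2        # hypothetical count among project participants

expected = borough_rate * participants
print(f"expected reoffenders at the borough rate: {expected:.1f}, observed: {reoffended}")

# How surprising is 2 or fewer, if participants behaved like the baseline?
p_low = binom_cdf(reoffended, participants, borough_rate)
print(f"P(<= {reoffended} reoffenders by chance) = {p_low:.3f}")
# With only 12 participants this probability is suggestive at best —
# exactly the small-sample caution raised above.
```

The point of the sketch is the direction of the comparison: the open data sets the expectation, and the project's own evaluation data is read against it, never the other way round.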
So while contextual data is not the holy grail, it does throw light on a different approach to shared measurement. That is, rather than focussing on how we get everyone to use the same tools, we think about how we can translate between them. Some people are already using contextual data in their evaluations. The problem is that it has been time-consuming to source contextual data, and often we don't get the original figures; rather, we get the interpretations that someone else has layered on top. Now, for the first time, we have access to vast sources of this data which we can interpret with simple software tools (e.g. Google Refine). That means we can find more languages to translate our own findings into, and better communicate our social value.
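As a final sketch of what "interpreting the data ourselves" can look like, here is a minimal example of joining a project's own figures to an open contextual dataset by a shared key. Both CSVs are invented stand-ins (the ward names, counts and rates are hypothetical), and only the standard library is used.

```python
# A hedged sketch of contextualising evaluation findings with open data.
# Both CSVs below are invented stand-ins: one for a project's own figures,
# one for an open government dataset keyed by the same geography.
import csv
import io

project_csv = """ward,participants,positive_outcomes
Northgate,12,9
Riverside,8,5
"""

open_data_csv = """ward,youth_population,local_positive_rate
Northgate,1450,0.52
Riverside,980,0.48
"""

# Index the open data by ward so we can look up a local baseline.
open_rows = {r["ward"]: r for r in csv.DictReader(io.StringIO(open_data_csv))}

for row in csv.DictReader(io.StringIO(project_csv)):
    ward = row["ward"]
    project_rate = int(row["positive_outcomes"]) / int(row["participants"])
    baseline = float(open_rows[ward]["local_positive_rate"])
    print(f"{ward}: project {project_rate:.0%} vs local baseline {baseline:.0%}")
```

In practice a tool like Google Refine does this kind of cleaning and matching interactively, but the underlying move is the same: line your own findings up against the raw contextual figures, keyed on something both datasets share.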