
Evaluating social innovation - the what and the when

On: 12th April 2013

People argue a lot about evaluation and ‘social impact analysis’ (see, they even argue about what it should be called).  They especially love to argue about the ‘best’ way to do it.  Like whether you should or shouldn’t have an external evaluator, or whether you should or shouldn’t do a Randomised Controlled Trial (RCT).  Frankly, it can get a bit confusing – so how do you know the best way to ‘do’ an evaluation?  Especially if you're working in social innovation. The answer is that there is no ‘best’ way. Different types of evaluation are appropriate at different stages of an intervention.  

So how do you know what is most appropriate? Read on. I want to share two ways you can decide what type of evaluation approach is appropriate at which stage of an intervention or social venture.
 
[Figure: evaluation scale drawing]
 
1. Stage of Development 
Firstly, it can be useful to think about a scale showing the stage of development of your project. On the left-hand side of the scale are early-stage or pilot projects. At this stage you are still developing, or possibly even beginning to create, your model and approach. You are probably still not sure of the effects of your work. (You may think you know about the effects, but the reality is that any social intervention never quite works out how you think it will. It’s unpredictable, messy and will have a range of intended and unintended consequences.)
 
So the closer to the left you are, the more you need to consider reflective evaluation approaches that enable you to gather data and understand the types of effects you are having. Cast the net wide; ask open and diverse questions about the different areas your work could affect. If you only test for specific things (and go for rigid outcome measures) you will miss unforeseen information about the other effects your work is having. At this stage you also want to be more focussed on communicating the effects of the work internally, so you can better improve your practice and delivery.
 
At the extreme right of the scale are developed interventions. Here you’ve got an established, replicable model. Having developed the model over time, you should have had, or taken, the time to explore the intended and unintended consequences of the work. As such, you should know which specific effects are best to capture in order to articulate the value of your work. This won’t be everything. You will still be interested in delving a bit deeper, but the focus is on capturing the desired outcomes specifically.
 
This sort of evaluation is frequently outward facing, as it’s concerned with capturing and articulating the social value of your work to others. Here you begin to consider rigorous, specific measures like Randomised Controlled Trials (RCTs). If you are developing a social innovation, which by its nature deals with an unknown space, you can’t possibly start here. Though, in looking to scale and replicate, you should certainly be aiming to get here over time.
 
2. The Cynefin Approach to Impact Assessment 
This all makes sense and it’s a good place to start.  But over the last few months I’ve begun to realise that we can generate a much richer understanding of what evaluation approaches are appropriate at different stages, and why, using the Cynefin framework from the lovely chaps at Cognitive Edge.
 
 
[Figure: the Cynefin framework - http://en.wikipedia.org/wiki/Cynefin]
 
Of particular importance are the complex, complicated and simple spaces.  These all relate to how much you know about the environment in which you’re working.  
 
Simple Space. When we have a significant amount of research about the context and effects of an intervention, we are in the simple space. That is, we are pretty sure that doing x will have effect y. (Nothing’s ever guaranteed of course, but it’s a good bet.) This is ‘best practice’. Here you can focus most of your measurement on what you think will happen. You can consider using approaches like RCTs, because you know the effects you’re looking for, and you need the most robust method to detect them. The purpose of the evaluation in this space is mostly outward looking, about capturing and articulating the impact of the work. Less effort needs to go into internal reflection on the work, as most of this will have been done already.
 
Complicated Space. However, sometimes you know the context you’re working in quite well, but there still has to be some debate and decision about the best way forward. You’ve got several different possible paths to address a social challenge, all of which seem feasible. This is the complicated space, and where ‘good practice’ applies. Here again, you focus on measuring what you think will happen, so you measure the outcomes you are seeking. However, you also need to capture data on the efficacy of the different approaches. You might consider A/B testing (where you run two slightly different versions of an intervention and see which gives the best results). Also, you still need to be mindful of the unintended consequences of your work, or those things that don’t show up with established measures, so you need to balance this with open data collection.
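For the more quantitatively minded, here is a minimal sketch in Python of how an A/B comparison like this might be checked. The function name and the participant figures are hypothetical, made up purely for illustration; it simply applies the standard pooled two-proportion z-test to see whether the difference between two variants is likely to be more than chance.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Compare outcome rates for two variants of an intervention.

    Uses the pooled two-proportion z-test for independent samples and
    returns the z statistic and a two-sided p-value.
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures: variant A helped 48 of 200 participants,
# variant B helped 74 of 210.
z, p = two_proportion_ztest(48, 200, 74, 210)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value would suggest the two variants really do differ; a large one means you can’t yet tell them apart, which is itself useful information in the complicated space.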
 
Complex space. Here we have places where, frankly, we’re really not quite sure what’s going on. We know what we’d like to happen, but we have no strong evidence that an intervention can cause the effects that we want, or more importantly, how it can. This is the complex space, and is where social innovation happens. (After all, if we knew what was going to happen, it wouldn’t be innovative.) Here, if you spend all your time measuring what you think will happen, you will miss the forest for the trees. You have to have evaluation approaches that can detect the unforeseen and unexpected effects of your work. Without these you will not build a complete picture of how change occurs for different players in the environment. Here you need to be thinking about open data captures that are not restricted to looking for change according to specific outcomes. There are myriad techniques, but at the moment the Cognitive Edge approach is, to my mind, the most effective (read more about our experiments with it here).
 
What’s crucial is to recognise that no one evaluation approach is better than another. It’s about picking the right evaluation approach for the stage of development of your intervention. And that is the approach that can offer you the information you most need at that time. Early-stage ideas need more open data captures, because you are still informing the development of your intervention and its specific effects. Later-stage projects should already have this information, so are more interested in testing for the specific effects of the work.
 
By the way, the chaotic space? It’s important not to ignore it, but for now, just notice that it’s next to the simple space. The little squiggle at the bottom represents a cliff edge. If you carry on treating interventions as simple and only using simple measures, you will not be able to notice changes in the environment you are working in. Do this for too long and your measures will be out of date, people will game them, and then you’ll have no idea what’s actually going on. There’s only one way that can go...
 
A version of this blog first appeared in the Guardian Voluntary Network on 9th April 2013.