I love evidence-based interventions. In fact, if I'm arguing with my girlfriend about something other than the washing up, one of the first things I do is ask: what's the evidence for that? (Which I'm sure makes me an absolute joy to live with.)
So if you work in evaluation, or do evaluations, and everyone is asking you for evidence of social impact, you begin to live in fear of that moment when someone taps you on the shoulder and says, “so why aren’t we doing a Randomised Controlled Trial then? Surely that’s the only way we’re going to know if this works?” At this point you either shout about how unsuitable Randomised Controlled Trials (RCTs) are for social interventions (often on principle), or wince and slope off back to your desk, feeling like you’re being called a charlatan and wondering how the hell you’re going to conduct an RCT on a tiny budget when you’re still having trouble getting people to make a distinction between outputs and outcomes.
Well, thanks to some amazing chats with Ed Mitchell (Transition Network), Marc Maxson (Global Giving) and Matt Baumann (NESTA), I’ve realised that I no longer need to have that fear (and nor should you), and realised some important things that move the RCT debate past the ‘should it or shouldn’t it be used’ argument.
RCTs are great if you know exactly what you’re testing: you have a strong model, you’ve trialled it, and you’re looking to move to the next stage, such as a large-scale roll-out. After all, RCTs are expensive, so you have to have a very clear purpose and reason for doing one.
But not all of us are in that place, and in fact most of the programmes we deal with aren’t at that stage at all. The reality is that if you’re experimenting with new models of how technology can address social change, you’ve really got very little chance of evaluating them with an RCT. There are so many variables at play and adaptations to be made along the way that, frankly, you probably don’t even know how you would replicate the model in another setting, let alone set up a robust RCT. An RCT (valuable as it might be) would be the wrong thing to do at that stage of iterative development of a model of intervention. Sure, if you’ve piloted a project, established how it can be replicated, have a distinct model and are looking to scale, go for it. But in reality, because our work is by definition about looking for new and disruptive approaches to social challenges, we rarely see a project like that. And to be honest, that makes me glad. Too much of the funding world has asked projects to decide exactly what they are going to do before they’ve got out into the field to actually do it.
So we can all breathe a sigh of relief: not doing an RCT doesn’t mean we’re somehow betraying the great evaluation gods. Quite the contrary, and if you need any extra ammunition to fight your cause, have a look at the Medical Research Council's guidelines on developing and evaluating complex interventions, published in 2008. While acknowledging that RCTs are still very important, they identify that they are not always realistic for complex interventions.
So what evaluation structures are most appropriate for the type of work we do, where the work develops as it’s delivered? Well, I’ve already talked about the role of project evaluation in communication and comparison here, and about the benefits of using traditional evaluation techniques like logic models and theory of change here. But there’s something else: the role of evaluation in creating feedback loops. This is for performance management, or as many of us like to call it, ‘knowing how well you are doing’. This is where the wonderful work Marc Maxson at Global Giving is doing on storytelling comes in. Come back in a few days to find out more about that here. In the meantime, I’d be interested in hearing more about the needs you have for evaluation in your work. Is it something you’d want to use an RCT to evaluate?