We all know capturing social impact is difficult, especially when you’re developing new models and trying new approaches. That’s how we all justify spending so long talking about it ;-)
If you’re working with a consistent cohort of individuals over a significant period of time, then you can roll out all the nice traditional pre- and post-measures to track their progress. But real life’s not generally like that. If you’re working with hard-to-reach clients sporadically or intermittently, how do you go about capturing and assessing the impact of the work?
We faced three key challenges in our work with UK online centres (read more about the project here). If they sound familiar, read on, as we may have a solution for you.
- Challenge 1. How do you capture and evaluate the impact of a project when you’re working with hard-to-reach individuals, you may work with them for minutes rather than months, and the last thing you want to do is hassle them with paperwork?
- Challenge 2. How can you understand the different ways that people articulate the social value of your work, whether as a staff member, volunteer or participant?
- Challenge 3. How can you avoid people doing lots of extra work?
Answer – build an outreach monitor (What the hell is this? More detail below)
That’s the problem we faced in capturing the impact of the UK online centres outreach project. Here the different centres look like the set list from EastEnders, including everything from pubs to chip shops and most things in between. Some people would be working with the trainers for several weeks, others for 10 minutes while they waited for their chips.
Inspired by Podnosh’s award-winning work with Social Media Surgery reporting, we worked with them to build a tool to try and capture all the work that was happening.
The key was to find a way to capture what happens in the moment of learning or teaching, and to make that reflection and capture an integral part of the exchange. This is something you’d do in a learning context anyway, to cement what has been learnt with a reflective summary. In our case it was as simple as saying, “so what have you learnt how to do?”
The trainer or participant (depending on the participant’s comfort with typing) then enters the information into a box on the screen. It asks for a couple of details (their rough age and work status) and that’s it.
This simple exchange (which resulted in perhaps a sentence or two), multiplied over the thousands of exchanges that took place during the project, has generated a massive set of qualitative and quantitative data with which to understand the impact of the work. Each trainer ID is attached to their outreach location, so every exchange they note down is tied to a geographic location and outreach centre. This means you can analyse and cross-reference the nature of the exchanges that took place: in different geographic areas, among different work statuses, and for people of different ages. You can, in one click, go from an overview of a geographic region to reading the specific exchanges that made up that headline data.
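To make that concrete, here’s a minimal sketch of how records like these can be grouped and drilled into. The field names, centres, and example notes are entirely hypothetical illustrations, not the outreach monitor’s actual schema:

```python
from collections import defaultdict

# Hypothetical exchange records: one short note per exchange, plus the
# rough age band and work status the monitor asks for, with the centre
# and region derived from the trainer's ID.
exchanges = [
    {"trainer_id": "T1", "centre": "Chip shop", "region": "Yorkshire",
     "age_band": "50-64", "work_status": "unemployed",
     "note": "Learnt how to attach a CV to an email"},
    {"trainer_id": "T2", "centre": "Pub", "region": "East of England",
     "age_band": "25-34", "work_status": "part-time",
     "note": "Set up an email address"},
    {"trainer_id": "T1", "centre": "Chip shop", "region": "Yorkshire",
     "age_band": "18-24", "work_status": "unemployed",
     "note": "Searched for jobs online"},
]

# Headline view: count exchanges per region.
by_region = defaultdict(list)
for e in exchanges:
    by_region[e["region"]].append(e)

for region, items in sorted(by_region.items()):
    print(f"{region}: {len(items)} exchanges")

# The "one click" drill-down: from a region's headline number
# to the specific notes behind it.
for e in by_region["Yorkshire"]:
    print("-", e["note"])
```

The same grouping works for any field (work status, age band, centre), which is all the cross-referencing described above really requires.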
It’s all quite overwhelming. This is the first time we’d tried this at the Trust, and it’s very much a pilot project. There are some adjustments we’d make to the outreach monitor from a technical perspective, and while a good number of Outreach Centres used the monitor, it was by no means universal. We’ve learned lessons from this, which I’ll be blogging about in the coming weeks. However, the benefits of it as a monitoring and evaluation system are huge. We started working on it last September, and our research in the area led us into contact with the illustrious Marc Maxson and his work with the Cognitive Edge approach. I blogged about that here.
So, it’s one answer to a complex problem. Moving forwards, I’d like to hear yours. What is your solution to working in similar environments? And what are your reflections on this one?