Monitoring & evaluation

This webpage is for trust fundraisers with three or more years’ experience. Beginners should start with the introductory page instead.


  • What are you monitoring & evaluating?
  • Look at whether you can collect data that correlates, rather than just relying on one source
  • There may be established monitoring tools that, maybe with a little adaptation, can lift the quality of your monitoring
  • Consider theory-based evaluations
  • Consider an M&E cycle, to stop your data sounding a bit divorced from the practicalities of your project

What are we monitoring and evaluating?

The big focus in trusts is on outcomes, but a full answer on M&E could cover:

  • Service users (including equalities measures)
  • Process evaluation
  • Quality measures
  • Impact measures
  • Where there are very significant issues around project implementation – for example, cost control on capital projects, timings for highly innovative work or numbers of volunteer groups successfully set up – it could be worth explicitly including these in the monitoring framework

As such, you may want to cover six types of data:

  1. User data: how many people are using a service and their demographic characteristics, such as age, gender, ethnicity, etc.
  2. Engagement data: how often people use a service, and for how long.
  3. Feedback data: what people think of the service and their experience of it (from users), and an internal assessment of the quality of the service provided (from staff and volunteers).
  4. Outcome data: the capabilities, strengths, assets and knowledge people gain as a result of the intervention. In the case of environmental projects, measures might be physical, such as kilometres of river restored, reed beds installed or the area of wetland created.
  5. Impact data: what long-term change users experience beyond the lifetime of the intervention.
  6. Data to enable you to monitor implementation issues, where appropriate. (However, the rest of this webpage will be about the user experience.)
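To make those six types concrete, here’s a minimal sketch of what a per-user monitoring record might look like. The field names and categories are illustrative assumptions, not a standard schema – your own framework should follow your theory of change.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MonitoringRecord:
    # 1. User data (including equalities measures)
    user_id: str
    age_band: str                 # e.g. "25-34" -- illustrative banding
    gender: str
    ethnicity: str
    # 2. Engagement data
    sessions_attended: int
    first_contact: date
    last_contact: date
    # 3. Feedback data
    user_satisfaction: int        # e.g. 1-5, from the service user
    staff_quality_rating: int     # e.g. 1-5, internal assessment
    # 4. Outcome data
    outcome_scores: dict[str, int] = field(default_factory=dict)   # e.g. {"confidence": 7}
    # 5. Impact data (collected after the intervention ends)
    followup_scores: dict[str, int] = field(default_factory=dict)
    # 6. Implementation monitoring, where appropriate
    implementation_notes: str = ""
```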

Charities with very sophisticated approaches to monitoring may actually have a data/insights team that they can involve – to consult, to pick up some of the work, or to get the best model. Often, though, you’ll be in the same situation as in other areas: trying to work out what’s the best you can get in the bid, in the time available, that’s deliverable and fits the charity and the staff in practice.

The role of targets in the proposal

There are actually two ways you can have targets:

  • As a “promise” to the funder: give us the money and this is the difference we’ll make. I’d personally say that, in order to build trust, targets should be conservative, rather than inflated to make the work look great value for money.
  • To provide a direction of travel for the project. Occasionally, with very new projects, some targets aren’t really there as a promise, but as a management tool, providing focus. For example, I secured the funding for one of the UK’s first projects working with older people sleeping rough or in homeless hostels. No one really knew how many people had reached the point where they’d be prepared to move on in their lives, but targets gave a sense of the project’s shape: how many people we wanted to try to work with, whether it was a high-intensity or low-intensity initiative, and whether it was more about their health, getting them off the streets, or something else. If this is what’s going on, you’ll need to be very explicit with the funder. It’s an unusual way of working for trusts, but one that a sophisticated funder of innovative projects will be able to understand.

Outcomes mapping

NPC think you need to:

  • Map your theory of change. This shows what you want to achieve, but also the causal links that enable you to help service users get there.
  • Prioritise what you measure. The chances are, you will help different people in different ways. If you find the most important outcomes from your theory of change, you can see how far you achieve them. People also value what they measure, so these choices will give your work focus.
  • Choose the level of evidence needed. This should be both practicable and what you need, for the work and for the funders.
  • Choose your sources and tools.

Different sources that correlate

A very sophisticated monitoring strategy will rely not just on one source of information, but on multiple sources, coming from different angles, that correlate. This will help you “reality check” what your data really means.
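As a hedged illustration (the numbers and the two sources are invented), here’s how you might check that two sources broadly agree – say, users’ self-reported wellbeing scores against keyworkers’ independent assessments:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data: the same ten service users, measured two ways.
self_reported = [3, 5, 4, 6, 7, 5, 8, 6, 7, 9]   # user questionnaires
keyworker_view = [4, 5, 4, 7, 6, 5, 8, 7, 7, 8]  # staff assessments

r = correlation(self_reported, keyworker_view)
print(f"Pearson r = {r:.2f}")  # near 1 suggests the sources tell the same story
```

If the two sources diverge badly, that’s not a failure – it’s a prompt to ask which one is closer to reality.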

More sophisticated tools

1.  Going professional

If you want a swanky-looking monitoring strategy, you can use an established, professional, monitoring system, rather than just making something up yourselves around the key outcomes.

There are specialist tools out there for many, many fields. You just need to Google. To list a dozen or so examples:

  • Mental health & wellbeing: WEMWBS & many other mental health and wellbeing tools designed for use by the layperson. Impactasaurus has a great starting range of easy-to-use questionnaires
  • Older people’s wellbeing: Leaf-7
  • Social care-related quality of life (SCRQoL): ASCOT 
  • Mentoring & befriending: NCVO Knowhow lists lots of tools
  • If doing a very long evaluation report, consider Inspiring Impact’s Data Visualisation guide
  • Domestic abuse: Insights, from SafeLives
  • Employability in young people: JET (Journey to Employment)
  • NHS charities: NPC has a 2016 guide to useful tools, called NHS Charities: Evaluation Guidance
  • Organisational health: Charity Excellence Framework
  • Health: General Health Questionnaire
  • Housing & homelessness: Value Insight; Homeless Link’s Practical Guide to Outcomes Tools
  • CVSs: 27 indicators of Backbone Effectiveness
  • Soft skills: SOUL Record

You can very often modify these tools, so that they more precisely fit your project and its aims.

2.  Outcomes stars

A nice approach is to use some form of Outcomes Star (sometimes referred to as a “spider” rather than a star). These visually display several different factors related to the core issue. You get the service user to score themselves before the intervention, at regular intervals in many cases, and then after, and the Outcomes Star shows where things stand, visually.

If you use a professionally produced star – for example, the ones produced by Triangle Consulting – they normally have carefully formulated, clear meanings for the different scores, which make things more precise. That means that, when someone moves three points along the scale, it actually means something, rather than being very subjective.

As well as being more holistic, the stars/spiders encourage quite user-led work, because they’re easy to grasp. The scoring is normally done with the individual, and it opens a discussion about what they really want out of the intervention and whether they got it.
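If you want to see how the visual works, here’s a minimal sketch of a before/after star drawn as a radar chart. The five domains and all the scores are invented for illustration; real stars (such as Triangle’s) come with their own defined domains and scale descriptions:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical star: five domains, scored 1-10 at entry and at review.
domains = ["Housing", "Health", "Money", "Social life", "Confidence"]
before = [2, 3, 2, 4, 3]
after = [5, 6, 4, 6, 7]

# Spread the domains evenly around the circle; repeat the first
# point so each polygon closes.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for scores, label in [(before, "At entry"), (after, "At review")]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains)
ax.set_ylim(0, 10)
ax.legend(loc="lower right")
plt.show()
```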

3.  Miscellaneous other ideas

Peer-led work

NCVO Knowhow has a great page called Participatory methods. This is full of ideas as to how to involve service users more deeply in feeding back on the project. 

For Under-18s, Participation Works has some great ideas.

Randomised Controlled Trials(!)

Be aware that the standards of measurement have gone up – in overseas development, at least, where RCTs have been applied to everything from teacher attendance and textbook provision to nurse attendance and the provision of microcredit. An RCT – where people are randomly assigned to the project or not and compared – might seem a bit extreme, but it’s sometimes worth considering.

The traditional objection to RCTs is an ethical one, but I don’t see how the situation is any different from the medical one, where RCTs are the norm – at least in 1:1 settings, such as where people are getting individual casework. If your work is happening in many locations, you can randomly choose some to get the new approach, for example, and that at least introduces some elements of RCT thinking. (In comparison, most projects will set things up so that the pilots work in the very best way that they possibly could, with the best staff, the most suitable environments, the most enthusiastic managers, etc. – which makes a lot of sense in many ways, but purely from a measurement perspective, there’s less rigour.)
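If you do go down the location-level route, the mechanics are simple. A sketch, with invented location names and a fixed seed so the draw can be audited:

```python
import random

# Hypothetical: twelve locations; half get the new approach,
# half carry on with the existing service as a comparison.
locations = [f"Centre {c}" for c in "ABCDEFGHIJKL"]

rng = random.Random(42)   # fixed seed: the assignment is reproducible
shuffled = locations[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2

print("New approach:    ", sorted(shuffled[:half]))
print("Existing service:", sorted(shuffled[half:]))
```

The point of the random draw is exactly the one above: it stops you (consciously or not) giving the new approach the best staff and the most enthusiastic managers.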

Common monitoring systems

A very smart route that some charities have gone down is having a core set of impacts that are monitored across different services. This fits the idea of an organisational theory of change and enables you to see the contribution of each service to the whole and the impact of doing things in different orders and combinations. (In principle it might make writing individual project impacts a bit harder to do. However, I’m at a charity with a framework of overlapping monitoring systems and it’s just a bit of a challenge rather than a big issue.)
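A hedged sketch of the idea – the service names, outcome names and counts are all invented:

```python
from collections import defaultdict

# Hypothetical: three services report against the same core outcomes,
# so results can be rolled up to the whole-organisation level.
CORE_OUTCOMES = ["housing stability", "wellbeing", "employment"]

service_results = {
    "advice line":   {"housing stability": 12, "wellbeing": 30},
    "day centre":    {"wellbeing": 18, "employment": 5},
    "outreach team": {"housing stability": 9, "employment": 3},
}

totals = defaultdict(int)
for outcomes in service_results.values():
    for outcome, count in outcomes.items():
        totals[outcome] += count

for outcome in CORE_OUTCOMES:
    print(f"{outcome}: {totals[outcome]} people")
```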

Risk

Technically, risk is something to be monitored. Normally that sits in the risk answer, not the Monitoring section. I’m distinguishing this from very significant issues around project implementation – for example, cost control on capital projects or numbers of volunteer groups successfully set up, which I suggested you include under Monitoring and Evaluation.

Theory-based evaluation

Another area of M&E that’s more sophisticated in overseas development, but that you might consider if you have the enthusiasm and unlimited writing space…

People generally monitor for results. Theory-based evaluation looks at whether the positive results actually accord with the theory of change. So, you might be asking people why they got those results, for example.

That’s a more powerful idea than it might sound to a beginner. A criticism of a lot of interventions is that people might be better off at the end, but people are strong, resourceful and could have found their own way to a positive result without your work. Your intervention was just kind of there – a good thing, but what difference did it make? Theory-based evaluation addresses that concern.

Understanding why something happened also helps you to be a learning organisation.

Evaluation cycle

A great answer on evaluation also situates M&E in the context of the actions on a project, so that it’s meaningful, rather than just data sat in a report somewhere. Writing in that way, I’d say something about the processes, so that evaluation clearly becomes part of what we do.

So for example, you might say that changes identified will be implemented in the months after and will be a particular focus of monitoring in the ensuing period.

A control

If you can compare what you’re doing with what happens without the intervention, you’ll be getting better measures (and everyone will know that). Two ways to do that are:

  • Measuring the experiences of people going through the existing service that you’re adding to, before you introduce the change.
  • If you operate in many locations, measuring the experiences of people at a very similar location where you operate but where there’s no similar service.

If you do this – don’t over-commit to improvements. People are clever and strong and may find their way without your service.
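When you come to report, the comparison itself is just a difference in averages. A sketch with invented follow-up scores:

```python
from statistics import mean

# Hypothetical wellbeing scores (1-10) at follow-up.
intervention = [6, 7, 5, 8, 6, 7, 7, 6]
control = [5, 5, 4, 6, 5, 6, 5, 4]

print(f"Intervention mean: {mean(intervention):.1f}")
print(f"Control mean:      {mean(control):.1f}")
print(f"Estimated effect:  {mean(intervention) - mean(control):+.1f} points")
# The control group will usually improve too, which is the point:
# report the difference between groups, not the raw change.
```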

GDPR

A complete answer should say something about GDPR, as data raises privacy concerns. (However, it says nothing about project quality/impacts, so I’d personally keep to a half-line comment, unless you have unlimited space.)

Writing an evaluation report

The NCVO Knowhow webpage How to write an evaluation report is a good starting point if you’re writing an evaluation. It will need careful modification to fit the needs of your funder.

Resources

Data with destiny: How to turn your charity’s data into meaningful action, by Inspiring Impact, is a good practical guide if you’re actually setting up a monitoring system.

If you are interested in RCTs, J-PAL’s site is huge but has a “how to” section.

NCVO Knowhow’s Evaluating the impact of your campaign is another useful resource.