Choosing the right evidence level for evaluation

Charities trying to measure their impact must choose the right evidence level for their organisation, says Anne Kazimirski

Many organisations come to my colleagues and me with the question: "How do I get to Level 3?"

Level 3 is the new pot of gold at the end of the rainbow. It's the third of five levels used to define the robustness of evidence for the impact of social interventions.

Variations of this level are used by organisations including Project Oracle, Dartington Social Research Unit and Nesta.

Level 3 involves demonstrating causality using a control or comparison group. In other words, using some sort of comparison with a similar group of people who haven't experienced the service or help you provide. This can show that you've achieved a positive change for your beneficiaries.

I think many charities and social enterprises can aim for this.

Good evidence within reach

We worked with a tiny Nottingham-based charity called Imara, which supports victims of child sexual abuse and their families, on the evaluation of its service.

This involved looking at relevant police data, which showed cases where Imara was involved were more likely to get to court. This suggests that the support Imara provides helps children through the trauma of giving evidence, thus strengthening the case. 

Police staff also reported that by translating the legal process for the family, Imara saved them time, so they could concentrate on getting the case to court.

The right level of evidence for your organisation

Some of the organisations asking us about Level 3 are getting ahead of themselves.

A common issue is that charities attempt it too soon in their organisational journey; it’s unlikely that such an evaluation can be carried out when a charity is barely two months old.

This means the “magic” evidence level is sometimes being applied out of context (something charities, funders and commissioners are all guilty of), because it doesn’t apply to every stage of organisational development.

And the important initial question – "will achieving Level 3 be useful to me and my organisation?" – can also be missed.

As a result, more achievable and useful evaluation options end up being overlooked.

Your research questions

One of the first tasks on your evaluation journey is working out what your research question should be. Whatever data is collected needs to match this question. 

This question changes as an organisation grows and develops - and as the needs of your beneficiaries, stakeholders and funders change over time. 

"Are my services reaching the right people?"

With a brand new organisation, the research question might be "are my services reaching the right people?" In this case, qualitative feedback from staff and service users is likely to be most useful.

Collecting quantitative data on outcomes for service users on a brand new intervention is unlikely to be worth doing until you've dealt with any teething issues (although baseline data may be useful if you won't have any "new" service users to conduct research with further down the line).

Potential for learning

I always encourage organisations to focus on what they plan to do with the data that’s collected and how they will learn from it.

Even if Level 3 data shows evidence of success, it needs to show something about why it was successful to help inform future work.

Perhaps we need learning standards to complement evidence levels, assessing the extent to which an organisation is making use of the data it collects. In any case, charities must ensure they build learning opportunities into the process of collecting and analysing evidence of their impact.

So, before getting carried away with Level 3 plans, organisations should make wise use of precious resources, maximise the potential to learn, and use an evidence standard appropriate to the organisation and its stage of development.

Take it one step at a time – and make sure you can walk before you try to run.
