Evaluation in social policy

Leon Feinstein from the Early Intervention Foundation discusses randomised controlled trials, social policy and the future of evaluation

The Early Intervention Foundation (EIF) is an independent charity and one of the government's What Works Centres.

It reviews evidence on early intervention to provide advice and support for agencies, councils, government, communities and others seeking to tackle the root causes of social problems for children and young people. 

In this exclusive post, EIF director of evidence, Leon Feinstein, discusses the business of rating interventions and how to work with organisations and sectors that find evaluation challenging.


Rating interventions based on the level of impact evidence

The Early Intervention Foundation (EIF) strongly resists the idea that the only valid way to evaluate impact or conduct research is through a randomised controlled trial (RCT).

We do want to see more RCTs, and more of what are called quasi-experimental designs (QEDs), which take advantage of naturally occurring variation to demonstrate impact. I'm not sure people know that the government's Magenta Book on evaluation puts QEDs alongside RCTs as being capable of demonstrating an effective programme or practice.

But evidence on what works is always retrospective. It tells us what has happened in a particular time and place. The evidence can inform decision-making, but it doesn’t provide a pro forma answer to every question.

Our framework for assessing different standards of evidence is a scale that reflects stages of a programme’s development. This includes the transition from having no evidence at all, to specifying what a programme will do and how, and careful initial measurement of outcomes. 

The most effective developers allow for early testing and initial qualitative analysis of small samples before they go down the road of a large-sample RCT.

Policy changes to encourage different types of evidence

One problem with the policy landscape is that everyone demands better evaluation while providing insufficient resources to do it.

Our guidebook describes standards of evidence, but we don't have the resources to support developers and other voluntary and community sector (VCS) providers in improving their evidence. We hope that will change. There needs to be support for evaluation, otherwise people won't invest enough in evidence to learn about what works.

There is demand from providers and developers for help and advice, but many organisations and staff who want help with evaluation don't receive it.

Of course, politics is sometimes a barrier; so is capability.

Charities are a big part of the practice and shaping of social policy: they have a big role in sharing and promoting best practice, and what works more broadly.

There is no shortage of demand for good evidence, but too often commissioners and policy makers want the evaluation they draw on to be done elsewhere.

If there were more measurement designed to help organisations and staff learn about impact and effectiveness, and less high-stakes accountability, there would be better evaluation and less burdensome data collection. I hope we can prove that over the next few years.

Working with organisations that think evaluation is too challenging

More organisations need to be supported through the process of thinking about impact. They shouldn't be made to feel that, if there is no RCT of their activity or of them as an organisation or individual, there is no value in what they do, or no opportunity for improving effectiveness or impact.

We'll never be in a world where every practitioner does an RCT on their own practice - it wouldn't be possible or sensible. There will always be a question of fitness for purpose and scale. I wish this were better recognised.

Within the government's What Works Network there is a commitment to RCTs. That commitment doesn't mean every social worker should run an RCT of their practice, but that learning from high-quality impact evaluation should inform thinking about impact.

I tend to think that if you care about creating social value, that means thinking about and measuring impact in some way, regardless of the wider politics. Value for money should always be important.

Next steps for the Early Intervention Foundation

As an organisation, we intend to focus more clearly on targeted activity where there are already emerging signals of risk, and on multi-agency responses to improve outcomes and deliver savings.

Our library currently holds only 50 programmes, a partial slice of the available activity. Over the next year it will grow and change considerably, and this will include better advice about how to improve evaluation. We have been reviewing the evidence of impact of over 200 programmes over the last two years, and in the next few months we will start reporting on what we have learnt.

But a commissioner should never just go into the guidebook and pick the highest rated programme without regard to local circumstances and concerns. We intend to relaunch an upgraded guidebook in 2016 to make much clearer what that means and how we - and others - can help.
