Improving services using negative findings

Dr Alice Haynes discusses the implications and importance of identifying a service that isn't working

The social care sector's move to evaluating the effectiveness of services for children and families is an exciting step forward in tackling the problems many families in Britain face.

Evidence moves practice from what we think works to what we know works.

We put our hearts and souls into developing services for children and families: we believe they will be effective. However, as we test services, there's a risk that we'll find they don't achieve the outcomes we wanted them to.

No one wants to deliver a service that doesn't help families. But admitting that a service doesn't work carries risks: the service may lose its commissioning or funding, other initiatives run by the implementing organisation may be affected, and the organisation's reputation may suffer.

Negative evaluation findings are part and parcel of the process. Inevitably we'll find some services we design, or import from other countries, don't work – that's the nature of innovation.

I want to highlight that while negative evaluation findings may be disappointing, they are essential in moving forward.

Identifying services that aren't working

We need to know what services work, with which families and why, and whether change can be sustained. We're not in the business of blindly "prescribing treatment" that does nothing to help families, or may cause them harm.

We may find an intervention is effective in producing some expected outcomes, but not others, or it might not make any difference to the outcomes we're measuring.

No single evaluation provides a definitive answer about whether a service should be commissioned. The context in which a service is delivered, the evaluation design and the outcomes focused on all shape the results of the evaluation.

Weaknesses in evaluation design can mean something crucial is missed: something clear to practitioners, but not to evaluators.

What happens when an evaluation shows a service isn't working?

Services can be decommissioned and defunded if they are shown not to work.

As a social care community, our evaluation journey is in its infancy. Also, many rigorous evaluations, such as randomised controlled trials (RCTs), take several years to complete, so there's a wait for findings.

Negative evaluation findings can be particularly difficult for the practitioners who deliver the service and believe it is making a difference.

Practitioners are the people on the frontline and they can feel a service is having the desired outcomes, even when it isn't. They may see improvements in people's lives, but those improvements might not be sustained or don't translate into concrete outcomes.

Family Nurse Partnership case study


Evaluation findings were under the spotlight after the publication of findings from the English evaluation of the Family Nurse Partnership (FNP).

The FNP is a voluntary home visiting programme for first-time mums aged 19 and under. It aims to enable young mums to have a healthy pregnancy, improve their child's health and development, and plan their future.

The programme was brought over from the US, where more than 35 years of extensive research, including three large-scale RCTs, had given it one of the strongest evidence bases of any early childhood preventive programme.

The FNP was rolled out in England, but the results were less positive than in the US.

The trial showed some positive effects on early child development; it showed that FNP may prevent children slipping through the net by identifying safeguarding risks early; and young mothers engaged well with FNP and valued the close and trusting relationship with family nurses.

However, its effect on the main short-term outcome measures was disappointing. FNP didn't help mothers stop smoking in pregnancy, or lower the rates of subsequent pregnancy within two years. Evaluators concluded there was no justification for continuing to commission the programme at present (but this could be reconsidered in the light of any new evidence).

These results were a blow for the FNP and wider community. However, the FNP has risen to the challenge, showing great leadership through a reflective, open and collaborative response to the findings, looking for ways to improve the programme and share learning.

Our view on evaluation findings

The NSPCC aims to be transparent with evaluation findings. We're in the privileged position of testing interventions, so our partners in local commissioning and government can put the best possible services for children and families into place.

Like other service developers and providers, we can - and should - learn from each other, so we don't repeat the same mistakes.

Evidence must be at the heart of our work with children and families, but the purpose of evaluating a service is to enlighten and improve interventions, not to deliver a single verdict on their value.

There are risks to conducting evaluations, but we mustn't shy away from them. Commissioners and funders must enable us to be bold and innovative. They have a responsibility to respond constructively to evaluation findings - good or bad - encouraging us to innovate and test.
