Evidence Based Decisions: evidence, impact and evaluation

At a glance

We’ve evaluated Evidence Based Decisions to see how effective it is in managing complex neglect situations, and whether it helps prevent neglect happening again. This is the first time the approach has been fully tested in the UK.

How neglect affects children

Neglect is the most common form of child abuse and the most common reason for taking child protection action in the UK. It can have long-lasting effects including mental health problems; poor physical, emotional and social development; behavioural problems; and low self-esteem.

It can be challenging for professionals to identify, measure and monitor neglect (Gardner, 2008). This can make it hard for decisions to be made about a child's future care.

How Evidence Based Decisions is helping protect children

Children in families with complex neglect issues have better outcomes when professionals make the right decisions quickly about how to support them. However, research shows that most cases are not managed in a consistent way. This can lead to children suffering repeated neglect, despite ongoing child protection work (Farmer and Lutman, 2010). Cases can drift while social workers consider the best course of action, and neglected children often remain at home without proper support.

The North Carolina Family Assessment Scale (NCFAS) tool was developed in the USA by Dr Ray Kirk at the University of North Carolina-Chapel-Hill. It is provided by the National Family Preservation Network. It has been successfully used in the USA, Canada and Australia.

How we're evaluating this service

We've evaluated a form of the North Carolina Family Assessment Scale that deals with 8 areas particularly relevant to neglect. We hope to understand how effective the tool is in managing neglect cases and preventing neglect from recurring.

There are 3 components to the evaluation of Evidence Based Decisions:

Analysis of survey data

We've analysed survey data from practitioners and social workers who completed Evidence Based Decisions reviews with family members. The survey asked about the utility and influence of each completed review.

Analysis of scores

We've analysed shifts in the scores attributed to families, comparing the score a family gets for the first review with the score they get for their second review 3 months later.


Interviews

We have conducted a set of interviews with practitioners and social work staff to understand:

  • the variation in how professionals implemented the review
  • the conditions in which the review played a part in helping improve evidence, understanding and decision-making.

The evaluation didn't aim to quantify the ways in which the review's scale tool was used. Nor did it aim to establish the impact of the review, or the validity or reliability of the scale tool used in the review.

We faced a number of challenges, including creating the administrative systems and technology needed to collate families' scores and to enable practitioners to complete surveys of each review.

Reaching, talking to and establishing interviews with social workers in local authorities has also been difficult at times. We didn't always have the contact details of local authority staff or know who the relevant colleague would be. When we called, they were often away from their desk or unavailable.

To overcome this, we looked at ways of using technology to make data collection and collation as efficient as possible. One solution has been to use SNAP surveys, which allow practitioners and social workers to access and complete the survey online, so they can do it wherever they are based in the country. Furthermore, routing technology means respondents are only asked the questions relevant to them, which gives SNAP surveys an edge over paper-based surveys.

We've also needed to energise and motivate practitioners to do the extra administrative work that is required in collating data, anonymising data, filling in online surveys and encouraging their social work equivalents to do the same.

Initially some practitioners felt reluctant to participate. This appeared to be in part because they weren't used to having their practice analysed by someone other than their team manager. However it also seemed that some, who had been asked to work on the service when they had previously worked on something quite different, used the interview process to communicate a general sense of unhappiness with their role on the new commission.

We used 3 key techniques to increase the motivation and commitment of staff towards completing evaluative tasks:

  • Demonstrating persistence, determination and a willingness to invest time in the evaluation
    For example, responding to practitioners and managers by phone as quickly as possible whenever a problem or query is raised.
  • Maximising the respect that managers and practitioners feel they are being paid during the evaluation
    This involves committing to talk to each member of staff involved in the commission, on a one-to-one basis, wherever possible, usually by phone. Communication with staff involves clear instruction on what is required, a commitment to encourage and listen to any complaints or suggestions for improvements, and a commitment to responding to any such complaints or suggestions.
  • Communicating the extent to which others have signed up and committed to the evaluation
    We've found that motivation for participation in an evaluation can become contagious if people see that their colleagues are already participating. Therefore regular group emails and summaries of progress in key meetings helps. It's also important to thank people on a regular basis, individually and as groups, for their effort, and to remind them that the achievements of the evaluation belong to them. A central part of this strategy of engendering a sense of group commitment is to get individual members of the CDG on board as soon as possible, so that practitioners see Senior Service Managers and staff championing the evaluation independently of the evaluator.

This evaluation was carried out internally by the NSPCC evaluation department. It used the following tools:

  • interviews
  • online survey
  • North Carolina Family Assessment Scale – General.

Find out more about the tools used to measure outcomes

Contact Mike Williams for more information.

What we've learned

Social workers felt the Evidence Based Decisions review helped them make the right decisions for families.

Some social workers said the North Carolina Family Assessment Scale provided more concrete evidence than assessment tools they commonly used, such as the Common Assessment Framework (CAF) triangle.

Read the evaluation report.

What we're doing next

We'll be using the North Carolina Family Assessment Scale, and the learning from our evaluation, in the new NSPCC service Thriving Families.




References

  1. Farmer, E. and Lutman, E. (2010) Case management and outcomes for neglected children returned to their parents: a five year follow-up study. London: Department for Children, Schools and Families (DCSF).

  2. Gardner, R. (2008) Developing an effective response to neglect and emotional harm to children (PDF). London: University of East Anglia and NSPCC.