Evidence Based Decisions: evidence, impact and evaluation

We’ve evaluated Evidence Based Decisions to see how effective it is in managing complex neglect situations, and whether it helps prevent neglect from happening again. This is the first time it has been fully tested in the UK.

How neglect affects children

Neglect is the most common form of maltreatment suffered by children in the UK (NSPCC, 2014). Neglect can put a child at immediate risk of illness, injuries, disability and even death. It can also have long-lasting effects including mental health problems; poor physical, emotional and social development; behavioural problems; and low self-esteem.

Yet social workers are often unsure about how best to intervene. There is often a lack of timely intervention to protect children at risk. Cases can be left to drift when no-one steps back to look at the overall picture of the child's life.

Read more about child neglect.

How Evidence Based Decisions is helping protect children

Children in families with complex neglect issues have better outcomes when professionals make the right decisions quickly about how to support them. However, research shows that most cases are not being managed in a consistent way. This can lead to children suffering repeated neglect, despite ongoing child protection work (Farmer and Lutman, 2010). Cases can drift while social workers consider the best course of action, and neglected children often remain at home without proper support.

The North Carolina Family Assessment Scale (NCFAS) tool was developed in the USA by Dr Ray Kirk at the University of North Carolina-Chapel-Hill. It is provided by the National Family Preservation Network. It has been successfully used in the USA, Canada and Australia.

How we're evaluating this service

We've evaluated a form of the North Carolina Family Assessment Scale that deals with 8 areas particularly relevant to neglect. We hope to understand how effective the tool is in managing neglect cases and preventing neglect from recurring.

There are 3 components to the evaluation of Evidence Based Decisions:

Surveys

We've analysed survey data from practitioners and social workers who completed Evidence Based Decisions reviews with family members. The survey asked about the utility and influence of each completed review.

Analysis of scores

We've analysed shifts in the scores attributed to families, comparing the score a family gets for the first review with the score they get for their second review 3 months later.

Interviews

We have conducted a set of interviews with practitioners and social work staff to understand:

  • the variation in how professionals implemented the review
  • the conditions in which the review played a part in helping improve evidence, understanding and decision-making.

The evaluation didn't aim to quantify the ways in which the review's scale tool was used. Nor did it aim to establish the impact of the review, or the validity or reliability of the scale tool used in the review.

We faced a number of challenges, including creating the administrative systems and technology needed to collate families' scores and to enable practitioners to complete surveys of each review.

Reaching, talking to and arranging interviews with social workers in local authorities has also been difficult at times. We didn't always have the contact details of local authority staff or know who the relevant colleague would be. When we called, they were often away from their desk or unavailable.

To overcome this, we looked at ways of using technology to make data collection and collation as efficient as possible. One solution has been to use SNAP surveys, which allow practitioners and social workers to access and complete the survey online, so they can do it wherever they are based in the country. Furthermore, routing technology means that they are only asked the questions which are relevant to them, which gives SNAP surveys an edge over paper-based surveys.

We've also needed to energise and motivate practitioners to do the extra administrative work that is required in collating data, anonymising data, filling in online surveys and encouraging their social work equivalents to do the same.

Initially some practitioners felt reluctant to participate. This appeared to be in part because they weren't used to having their practice analysed by someone other than their team manager. However it also seemed that some, who had been asked to work on the service when they had previously worked on something quite different, used the interview process to communicate a general sense of unhappiness with their role on the new commission.

We used 3 key techniques to increase the motivation and commitment of staff towards completing evaluative tasks:

  • Demonstrating persistence, determination and a willingness to invest time in the evaluation
    For example, responding to practitioners and managers by phone as quickly as possible whenever a problem or query is raised.
  • Maximising the respect that managers and practitioners feel they are being paid during the evaluation
    This involves committing to talk to each member of staff involved in the commission, on a one-to-one basis, wherever possible, usually by phone. Communication with staff involves clear instruction on what is required, a commitment to encourage and listen to any complaints or suggestions for improvements, and a commitment to responding to any such complaints or suggestions.
  • Communicating the extent to which others have signed up and committed to the evaluation
    We've found that motivation for participation in an evaluation can become contagious if people see that their colleagues are already participating. Therefore regular group emails and summaries of progress in key meetings helps. It's also important to thank people on a regular basis, individually and as groups, for their effort, and to remind them that the achievements of the evaluation belong to them. A central part of this strategy of engendering a sense of group commitment is to get individual members of the CDG on board as soon as possible, so that practitioners see Senior Service Managers and staff championing the evaluation independently of the evaluator.

This evaluation was carried out internally by the NSPCC evaluation department. It used the following tools:

  • interviews
  • online survey
  • North Carolina Family Assessment Scale – General.

Find out more about the tools used to measure outcomes

Contact Mike Williams for more information.

What we've learned

Social workers felt the Evidence Based Decisions review helped them make the right decisions for families.

Some social workers said the North Carolina Family Assessment Scale provided more concrete evidence than assessment tools they commonly used, such as the Common Assessment Framework (CAF) triangle. 

Read the evaluation report.

What we're doing next

We'll be using the North Carolina Family Assessment Scale and learning from our evaluation in the new NSPCC service Thriving Families.


References

  1. Farmer, E. and Lutman, E. (2010) Case management and outcomes for neglected children returned to their parents: a five year follow-up study. [London]: Department for Children, Schools and Families (DCSF).