What are the best tools to measure outcomes for children and families?

Rachel Margolis, Emma Smith and Mike Williams offer advice on measuring the impact of services that support children and families

We evaluate our services so we can find out what works – and what doesn’t – in supporting children and families. A key component of this is identifying appropriate tools for measuring change – we want to know what impact our service has had on the people we’re trying to help.

There can often be differences between the way a tool is used in theory and how it works in practice. As evaluators, we also need to consider how these tools can be used appropriately: we must protect the wellbeing of the people taking part in the research while getting the greatest level of engagement possible from our study.

In this blog, we’ll be sharing our tips on how to evaluate services in practice, along with some of the tools we have found most useful to measure outcomes for children and families.

1. Choose your tools wisely

Rachel Margolis: "Be clear what you want to measure – and make sure you’re using the right tool to gather the information you need. It’s useful to have discussions with colleagues who are delivering services to pin down what we’re trying to achieve, the changes we’re expecting to see and exactly what we want to measure so we can say whether the approach has been effective."

2. Some tools make for easier analysis

Rachel Margolis: "Tools that are widely used and validated mean you can be clearer about analysing the data you collect. For example, scores from the Strengths and Difficulties Questionnaire (SDQ), which measures emotional and behavioural problems, are categorised in 4 bands. This means that we can see not only if a child’s scores improve between the beginning and end of a programme but also if children move from a higher level of need to a lower level. This is useful for describing changes that are not just statistically significant but also clinically significant."
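The band-based analysis Rachel describes can be sketched in a few lines of code. The band labels and cut-off values below are illustrative assumptions, not the official SDQ thresholds – banding for a real evaluation must come from the published SDQ scoring guidance.

```python
# Illustrative sketch of band-based scoring, as described for the SDQ.
# The labels and cut-offs below are assumptions for illustration only.
BANDS = [
    (0, 13, "close to average"),
    (14, 16, "slightly raised"),
    (17, 19, "high"),
    (20, 40, "very high"),
]

def band(score):
    """Return the band label for a total score."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} outside expected 0-40 range")

def moved_to_lower_need(before, after):
    """True if a child moved from a higher-need band to a lower-need one."""
    order = [label for _, _, label in BANDS]
    return order.index(band(after)) < order.index(band(before))

# A drop from 18 ("high") to 12 ("close to average") is a change of band,
# not just a change of score - the kind of clinically meaningful shift
# that banded tools make easy to describe.
print(band(18), "->", band(12), moved_to_lower_need(18, 12))
```

Comparing bands rather than raw scores is what lets an evaluation report say a child moved from a higher level of need to a lower one, rather than only that the average score fell.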

Emma Smith: "We evaluated our Domestic Abuse, Recovering Together (DART) service, which supports mothers and children who have experienced domestic abuse. We were able to compare SDQ data from DART participants to SDQ data from a therapeutic service for children at a refuge. This meant we could measure how effective DART was, compared with another service."

3. Help parents and children understand

Emma Smith: "We always explain why we’re asking service users to participate in evaluations. Seeking informed consent is at the heart of our ethical guidelines. We make it very clear what we do with people’s data and how long it is kept for; we also make sure participants know that what they tell us is kept confidential unless a child protection concern emerges from something they say. It’s important to us that service users are in control. They can decide whether or not to take part in the research and if they do participate they can decide to withdraw their data up until the time the study is published. This hasn’t happened so far for us but the option is always there."

4. The way you invite service users to participate makes a difference

Rachel Margolis: "When we introduced the Child Abuse Potential Inventory (CAPI) as a tool to measure the impact of 2 of our services (FEDUP, which worked with families where 1 or more parents misused drugs or alcohol, and Family SMILES, which supported families where parents had a mental health problem) our practitioners were understandably anxious. CAPI is a tool that helps measure protective parenting. It has 160 questions and was designed in the 1970s - so the length of it can be challenging, as can some of the language it uses. Our practitioners have worked really hard to figure out how to offer it at the right time, and to explain to parents how and why we’re using it. We’ve shared the CAPI scores with some parents and they have been a useful springboard for conversations about what has emerged. In this way the tool can support the work we’re doing with families."

5. Be prepared

Emma Smith: "Questionnaires can ask participants to reflect on difficult aspects of family life, and this can raise difficult feelings. We’ve had instances where the process of administering the evaluation tool with a child or parent has uncovered information which hadn’t yet emerged during the therapeutic work. It’s really important to be prepared to respond to a disclosure, or to signpost help where appropriate."

6. Questionnaires can raise sensitive issues – but this can open doors

Emma Smith: “If we’re trying to measure the impact a service is having on reducing abusive behaviour, for example, we have to ask some sensitive questions to get to the bottom of what has and hasn’t changed for the families taking part. When we used the parental acceptance and rejection questionnaire (PARQ) to measure how mothers who had taken part in our DART service behaved towards their children, one mother found it quite upsetting - but this was because the questionnaire had helped her recognise the negative ways she behaved towards her child. In some ways this in itself could be considered a positive outcome. It shows how tools like this can help shape the approach to supporting a family.”

7. Incorporate evaluation into the work

Mike Williams: "Sometimes we need to ask practitioners to administer a tool without looking at the results - because knowing what the tool has shown could influence how they respond to the child. However, this can be frustrating for practitioners. So it’s sometimes worth finding tools that they can integrate into their work with children. Doing this is likely to increase the volume of data you get back.

I’ve found the best way to integrate a measure into practice is to look for one that was designed for practice purposes. The Outcome Rating Scale (ORS) is one such tool. We’re using it to evaluate our Protect and Respect service for young people who have been, or are at risk of being, sexually exploited. It uses a simple scale to empower young people to talk about the challenges they face in different areas of their lives, but also helps them discuss solutions. In some cases, using this tool has prompted a young person to disclose abuse for the first time."

More information about our work

Read about the tools we use to measure outcomes for children and families, along with our ethical guidelines and guidance about how to deal with a disclosure when conducting research.



  1. Duncan, B.L. et al. (2011) ORS, SRS, CORS, CSRS, YCORS/SRS, and GSRS/CGSRS (DOC). [Accessed 15/09/2017]

  2. Milner, J.S. (1986) The child abuse potential inventory manual. 2nd edn. Dekalb, Illinois: Psytec.

  3. Rohner, R.P. (1990) Handbook for the study of parental acceptance and rejection. 3rd edn. Storrs, CT: Rohner Research Publications.