Using standardised measures to evaluate behaviour change in young people

Emma Belton discusses the use of standardised measures to evaluate the impact of a programme for young people with harmful sexual behaviour

Traditionally, adult sex offender programmes use a wide array of standardised measures, pre and post programme, to assess change. But it can still be difficult to get enough data to analyse the effectiveness of the programme.

I’m sharing my experience of using standardised measures in a programme for young people with harmful sexual behaviour (HSB), discussing the challenges we faced when evaluating the programme and possible solutions we’ve identified for the future.


Turn the Page

I evaluated Turn the Page, our manualised treatment programme, looking at how it works with young males aged 12-18 who do not have a learning difficulty and have engaged in harmful sexual behaviour.

The evaluation had qualitative and quantitative elements and included standardised measures at the start and end of the programme.

We used some core measures with all participants, but tailored, abuse-focused measures were also used if a young man had displayed HSB towards a younger child or a peer.

This part of the evaluation proved challenging to implement and evaluation data was lost at each stage of the programme.

It took over 4 years to get enough cases for analysis, and 4 major factors help to explain why.

1. No consent

Over a quarter (28%) of young people who started the programme didn’t consent to taking part in the evaluation. We know nothing about what changed for them.

Practitioners told us that the start of the programme could be difficult for young people and their parents/carers. Some felt there was too much going on and didn’t want to get involved in additional things like evaluation. They were also concerned about the nature of the questions and who would see their questionnaire responses.

In the teams with the highest participation rates, practitioners talked through any concerns with families and gave them time to think about taking part.

These practitioners also talked enthusiastically about the evaluation with the young people and their families, explaining it wasn’t difficult and would help to improve the service.

2. Programme attrition

Just over a quarter (27%) of young people dropped out of the programme before completion. We couldn’t collect any end of programme data from these cases.

An evaluator can’t control how many people complete the programme. But collecting data about when and why people drop out helps those developing and delivering the service to work out ways to reduce attrition rates.
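As a minimal sketch of what that record-keeping might look like (the fields, case identifiers and reasons below are illustrative assumptions, not details from the evaluation), each exit from the programme could be logged with how far the young person got and why they left:

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class Dropout:
    case_id: str             # anonymised identifier (hypothetical)
    sessions_completed: int  # how far through the programme the young person got
    reason: str              # reason recorded by the practitioner

# Illustrative entries only
attrition_log = [
    Dropout("A01", 3, "family moved out of the area"),
    Dropout("A02", 7, "disengaged after a change of placement"),
]

with open("attrition_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["case_id", "sessions_completed", "reason"])
    writer.writeheader()
    writer.writerows(asdict(d) for d in attrition_log)
```

Even a simple log like this lets the service team see whether drop-out clusters at particular points in the programme.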

3. Matching pre and post programme measures

Even when young people completed the programme, the end of programme measures were not always available for analysis.

Sometimes the young person didn’t repeat the measures because they didn’t want to fill them out again, or because practitioners forgot to re-administer them.

In other cases, measures were completed but couldn’t all be used, because a different abuse-focused set of measures had been used at the beginning and at the end of the programme, so the two couldn’t be matched. Having tailored measures proved complex to implement: practitioners often forgot which measures they had used at the start of the programme, sometimes 12 months earlier.

To solve this problem, it might be better to use the same set of measures with all cases, or for the tailored measures to be administered by researchers.

Ideally, the use of standardised measures would be embedded in the service: this would enable practitioners to use them to inform their practice, and it would also hopefully lead to a better match of pre- and post-programme data.
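One practical way to reduce the mismatch would be to record, for each case, exactly which battery was administered at baseline and to check that record before the end-of-programme session. The sketch below is an assumption about how that might be done, with placeholder case IDs and measure names rather than the actual instruments used in Turn the Page:

```python
# Record of which measures each case completed at baseline.
# Case IDs and measure names are placeholders, not real data.
BASELINE_MEASURES = {
    "A01": ["core battery", "abuse-focused (younger child victim)"],
    "A02": ["core battery", "abuse-focused (peer victim)"],
}

def measures_to_readminister(case_id: str) -> list[str]:
    """Return the exact set given at the start, so pre and post data can be matched."""
    if case_id not in BASELINE_MEASURES:
        raise LookupError(f"No baseline record for {case_id}; check before the final session")
    return BASELINE_MEASURES[case_id]

print(measures_to_readminister("A01"))
```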

4. Reliability of responses

We used 2 measures to check how reliably young people responded to the questionnaires.

One checked whether they were responding in a socially desirable way. The other checked how open they were being about their sexual drives and interests.

Around 20-30% of young people were not responding reliably to the questionnaires, and this had not improved much by the end of the programme. These cases were therefore excluded from the analysis, as they may have skewed the results.

Given the nature of this programme, and the topics young people are asked to talk about, it’s not surprising some found being open and honest difficult: similar results have been found in other programmes.

It’s important to reassure young people that they won’t be judged on their responses, and that their answers won’t be shared with anyone outside the evaluation team.
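As a rough illustration of how these exclusions might be applied at analysis time (the scale names, scores and cutoffs here are made up for the example; they are not the actual validity measures or thresholds used in the evaluation), cases flagged by either check can be dropped before the pre/post comparison:

```python
import pandas as pd

# Hypothetical validity-scale scores; names and cutoffs are assumptions for this sketch
df = pd.DataFrame({
    "case_id": ["A01", "A02", "A03", "A04"],
    "social_desirability": [4, 11, 6, 13],  # higher = more socially desirable responding
    "openness": [8, 9, 2, 7],               # lower = less open about sexual drives and interests
})

SOCIAL_DESIRABILITY_CUTOFF = 10  # hypothetical threshold
OPENNESS_CUTOFF = 4              # hypothetical threshold

unreliable = (df["social_desirability"] > SOCIAL_DESIRABILITY_CUTOFF) | (df["openness"] < OPENNESS_CUTOFF)
analysable = df.loc[~unreliable]

print(f"Excluded {int(unreliable.sum())} of {len(df)} cases as potentially unreliable responders")
```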

Counting the cost of lost data

These 4 factors accounted for the loss of data on almost 100 cases: 160 young men started the programme, but we had only 64 sets of end-of-programme measures available for analysis.

We had enough data to look at changes in the boys’ behaviour between the start and end of the programme, but not a big enough sample size to look at which young people made the most change.
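For the matched cases, the core analysis is essentially a paired pre/post comparison. A minimal sketch is below, using invented scores and a standard paired-samples t-test; the article does not specify which measures or statistical tests were actually used:

```python
import numpy as np
from scipy import stats

# Invented pre- and post-programme scores for the same young people (paired data)
pre = np.array([22, 18, 25, 30, 16, 27, 21, 19])
post = np.array([17, 15, 20, 26, 15, 22, 18, 16])

t_stat, p_value = stats.ttest_rel(pre, post)  # paired-samples t-test
print(f"Mean change: {np.mean(post - pre):.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Looking at which young people made the most change would mean splitting this sample into subgroups, which is why a larger sample would have been needed.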

The programme ran across 12 sites, but it took us longer than anticipated to get enough cases to make the evaluation viable.

These factors must be built into planning for future quantitative evaluations. It would have been useful to run an implementation evaluation first to establish the feasibility of using standardised measures.

Alternative evaluation options

For programmes running on a smaller scale, or without the timescale to generate enough quantitative data, an alternative option would be to use a qualitative case study design. This could capture a range of different perspectives on the progress made by each child and the reasons for this.

Our evaluation of the manualised Turn the Page programme used quantitative and qualitative approaches to give a more rounded view of its impact. Look at the evaluation of Turn the Page for an example of this approach.

Have you used standardised measures?

Let us know your experiences of using standardised tools to measure outcomes in children or families or ask our advice on evaluating the impact of your programme.

Get in touch

More from impact and evidence

Turn the page: final evaluation

Final evaluation of a service working with teenage boys who display harmful sexual behaviour. Part of the NSPCC’s Impact and evidence series.

Tools for measuring outcomes for children and families

Our experiences of using standardised measures in our evaluations

Turn the page: first evaluation

Our first evaluation report of Turn the page, our service for young people with harmful sexual behaviour. Part of the NSPCC's Impact and evidence series.