8 ideas for managing biased baseline data

In his second post on biased baseline data, Mike Williams shares his ideas for reducing bias

As discussed in my first post last week, collecting reliable baseline data from service users can be problematic. Here I explore 8 ways to minimise bias in the data that practitioners gather from service users using standardised measures or inventories.


1. Honest communication

If a service user’s scores appear unrealistic, consider asking them directly if they would like to reconsider their rating.

In Protect and Respect, our service for young people who’ve been sexually exploited, some practitioners asked service users to reconsider their scores when large discrepancies emerged between what children had reported and the scores they provided.

Directly asking people to reconsider their score is not recommended in the guidance for some outcome measures, as it can introduce a new bias: the practitioner influencing the service user’s rating.

If you think you may need to readjust scores, consider using a tool like the Outcome Rating Scale (ORS), which allows for readjustment during the session.

2. Multiple perspectives

Getting multiple perspectives can help to identify possible biases and inaccuracies in data sets.

In our evaluation of SafeCare, a programme focused on children at risk of neglect, we combined feedback from practitioners, who had sight of service users’ scores, with scores from other measures to demonstrate that parenting ratings were artificially high.

3. Rooting out inconsistency

The Parent Child Relationship Inventory has an “inconsistency subscale” that compares a number of paired questions to identify discrepancies. We used this in our evaluation of a service for neglect; it allows forms with inconsistent responses to be excluded from the analysis and evaluation of a service.
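
To make this concrete, here is a minimal sketch in Python of how paired-item screening might be automated. The item pairs, column names, file name and exclusion threshold are all invented for illustration; the PCRI has its own published item pairs and scoring rules.

```python
import pandas as pd

# Hypothetical pairs of items that ask about the same thing in different
# words. The real PCRI defines its own pairs and scoring rules.
ITEM_PAIRS = [("q3", "q17"), ("q8", "q24"), ("q11", "q30")]
MAX_FLAGGED_PAIRS = 1  # exclude a form if two or more pairs disagree

def inconsistency_score(form: pd.Series) -> int:
    """Count item pairs whose ratings differ by more than 1 point on a 1-5 scale."""
    return sum(abs(form[a] - form[b]) > 1 for a, b in ITEM_PAIRS)

responses = pd.read_csv("baseline_forms.csv")  # one row per completed form
responses["inconsistency"] = responses.apply(inconsistency_score, axis=1)

# Keep forms that pass the consistency check; report how many were excluded.
consistent = responses[responses["inconsistency"] <= MAX_FLAGGED_PAIRS]
print(f"Excluded {len(responses) - len(consistent)} of {len(responses)} forms.")
```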

4. Identifying dishonesty

In our evaluation of Turn the Page, a service for young people displaying harmful sexual behaviour, we used the Personal Reaction Inventory to identify service users who were unlikely to answer study measures accurately. Up to one third of all children tested were identified as answering questions with the aim of achieving a particular score rather than responding accurately.

However, introducing a deception scale is easier said than done.

We had originally planned to use the Paulhus Deception Scale in one evaluation, until our practitioners raised concerns that using a tool to explicitly test the authenticity of service users’ responses might undermine attempts to engage sceptical service users.
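
For illustration, a flagging step of this kind might look like the following sketch. The cut-off value, column names and file name are invented for the example; validated scales such as the Personal Reaction Inventory publish their own norms and thresholds.

```python
import pandas as pd

# Hypothetical cut-off: scores above this on a social-desirability scale
# suggest a respondent may be answering to create an impression rather
# than accurately. Real scales publish their own validated thresholds.
DESIRABILITY_CUTOFF = 20

data = pd.read_csv("study_measures.csv")  # one row per respondent
data["flagged"] = data["desirability_score"] > DESIRABILITY_CUTOFF

print(f"{data['flagged'].mean():.0%} of respondents flagged.")

# Analyse flagged and unflagged groups separately rather than silently
# dropping cases from the evaluation.
unflagged = data[~data["flagged"]]
```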

5. Cognitive testing

Commonly used in survey design, cognitive testing involves going through the measure with the service user, with them telling you what they’re thinking as they respond to questions.

This helps to identify patterns in how service users interpret questions that can lead to biased responses.

6. New processes

In the evaluation of SafeCare, we suspected that parents returning forms to the practitioner might alter their answers, knowing the practitioner would see them.

We adapted the process so completed forms were handed back in a sealed envelope.

7. Reflection

Service users who’ve been engaged in the service for a long period of time could be asked to reflect on whether they felt they had filled in their baseline forms honestly when they started.

Even if this isn’t done for all service users, it’s a useful method to deploy for service users whose scores worsen, to test whether the worsening indicates a deterioration in outcomes or increased openness about their problems.

8. Engagement

A more radical approach is to take a baseline measure only at the point that genuine engagement has been achieved.

Our evaluation of Turn the Page suggested the situation for some children worsened after the intervention started. However, what appeared to be worsening conditions in the data were actually indicative of a moment of engagement, when young people started to give more open and honest responses to the measure.

In such a case the work done with the child prior to them opening up could be considered “engagement”, with the “intervention” starting from the point they open up. 
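
In analysis terms, this amounts to re-baselining: measuring change from the point of engagement rather than from intake. Here is a minimal sketch, assuming a hypothetical long-format dataset with a flag marking the session at which a young person was judged to be responding openly.

```python
import pandas as pd

# Hypothetical long-format data: one row per child per session, with an
# 'engaged' flag set from the session at which practitioners judged the
# child was giving open and honest responses.
scores = pd.read_csv("repeated_measures.csv")  # child_id, session, score, engaged
scores = scores.sort_values("session")

# Baseline = score at the first 'engaged' session, not at intake.
baseline = scores[scores["engaged"]].groupby("child_id")["score"].first()
final = scores.groupby("child_id")["score"].last()

change = final - baseline  # direction of improvement depends on the measure's scale
```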

Conclusions

Whatever method you use, if biased or inaccurate data is identified, you will face the dilemma of whether to exclude it from your analysis.

Colleagues working on the evaluation of Turn the Page agonised over their decision to remove biased data because removing the data meant they couldn’t test for statistical significance without extending the programme for another 6 months.

However, in our SafeCare report we decided to include overly positive data from an adapted version of the Mother Child Neglect Scale. We couldn’t prove that the data was inaccurate and other researchers could benefit from our experience of using the tool.

When publishing your findings, it is important always to include an explicit discussion of any problems encountered in the research.

Reports on SafeCare and Letting the Future In are good examples of the NSPCC being explicit about the limitations of data.

While researchers often mention the limitations of the data in the methodology section, it’s less common to draw out the significance of those weaknesses when findings are presented and referenced; data is often quoted without the reservations flagged up in the methodology.

When measure data is used in research reports it would also be useful for authors to point out that the measures are based on users’ perceptions of outcomes and that perceptions are not always accurate.

Exploring the limitations of our work isn’t easy. People expect clear-cut answers, and qualifying findings with caveats can be taken as a sign of incompetence.

It may be a difficult process, but the accuracy of our studies depends upon honesty.

This post was written with input from NSPCC experts:
Gill Churchill, Emma Belton, Nicola McConnell, Vicki Jackson, Paul Whalley and Richard Cotmore.
