Qualitative research - what it means for the quality of evaluations

According to Alice Yeo, standard evaluations should include high quality qualitative research

Qualitative researchers work hard to defend their methodology.

I know I'm not alone in presenting qualitative findings only to be asked, "How many people said that?" or, "Was that what most people thought?"

Such comments display a fundamental misunderstanding of what qualitative research is and what it can (and can't) do.

In this post, I'm exploring qualitative research in relation to evaluations, and why including qualitative methods leads to better outcomes.

Questions answered by qualitative research

The journey of evidence-based practice and policy has reached a point where standards of evidence are the currency of the day.

Randomised controlled trials (RCTs) or quasi-experimental designs (QEDs) are widely seen as the highest standard in a hierarchy of evidence. The intentions behind these approaches are sound: establishing robust evidence of what works, not doing harm and spending money wisely.

However, it's worth reflecting on some of the implications of these methods; in particular, the questions that RCTs and QEDs are not able to answer but qualitative methods can.

Qualitative research versus binary output

The theory is that an RCT can show whether a particular programme or policy works by randomly assigning some of the target population to receive an intervention, whilst others do not. 

QEDs also involve 2 groups that experience different interventions, the impact of which can be evaluated, but participants are not randomly assigned. 

By comparing the outcomes of the group that received the intervention with those of the group that did not, RCTs and QEDs should be able (with careful consideration of selection bias) to indicate whether an intervention has had an impact and, arguably, to demonstrate causality.

I remain unconvinced that evaluations producing a binary output alone are helpful enough.

Of course, we need to know what works, but we also need to know how a programme or intervention works, under what circumstances, for whom and in what way. This knowledge facilitates effective implementation and delivery in an ever-changing society. These are questions that qualitative research can answer, but it has to be done well.

Purposive sampling

With qualitative research, weaknesses typically arise in relation to sampling, data collection and analysis. 

In my applied social research, sampling is done purposively. The sample matrix seeks to ensure range and diversity of key characteristics amongst participants. This involves decision-making and clarity about the scope of the research, which informs the choice of who in your target population needs to be included.

For example, if you want to explore the impact of a parenting intervention, you’ll seek to include all types of parents or carers involved, including men and women from a range of backgrounds and living in different circumstances.  

If you achieve this range, you can be confident that your findings capture the full range of explanations for when and how that parenting intervention does (and doesn't) work.

This purposive sampling approach means findings from qualitative research don't have statistical significance. Participants aren't selected to mirror the general population; they're selected for known characteristics of relevance to your research question, which makes it irrelevant and misleading to quantify findings from a purposive sample.
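To make the idea of a sample matrix concrete, here is a minimal sketch in Python. The characteristics and quotas are hypothetical illustrations of the approach, not taken from any particular study: the matrix simply crosses the key characteristics so that every combination is represented.

```python
from itertools import product

# Hypothetical key characteristics for a parenting intervention study.
# In practice these come from decisions about the scope of the research.
characteristics = {
    "carer role": ["mother", "father", "other carer"],
    "household": ["lone parent", "couple"],
    "setting": ["urban", "rural"],
}

# Cross the characteristics to form the cells of the sample matrix,
# then attach a small quota to each cell to ensure range and diversity.
quota_per_cell = 2
sample_matrix = {cell: quota_per_cell for cell in product(*characteristics.values())}

for cell, quota in sample_matrix.items():
    print(", ".join(cell), "->", quota, "participants")
```

The quotas here are about securing diversity of perspective, not statistical representativeness, which is why it would be misleading to quantify findings drawn from such a sample.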

Qualitative research explains how and why things happen

Collecting data, whether in interviews, focus groups or observations, requires skill and expertise. 

Although an interview may look like a conversation, a good qualitative researcher works hard to ensure breadth of coverage across the research topics and depth of data, while staying alert to new issues and areas of particular interest to the participant. They are also attentive to how the participant is finding the interview, making sure they're ok.

When analysing qualitative data, the method needs to be clear and transparent. Those reading the research should be able to understand how the findings were reached from the data. This is achieved through a systematic and comprehensive approach, seeking patterns, associations and explanations across your data set. 

Qualitative findings set out themes and factors that can explain the numbers gathered elsewhere. 

So, in the case of an intervention, an RCT would show whether or not there was evidence of impact.

A qualitative element would provide in-depth explanations for how and why this came about, highlighting any challenges, barriers, enablers or lessons learned.  

High quality research

Considering the underlying principles, and asking questions of a piece of qualitative research, can help anyone who isn't familiar with the approach to make an informed judgement about quality. The Quality in Qualitative Evaluation framework, included as supplementary guidance to the Treasury's Magenta Book (Guidance for Evaluation), provides a clear, concise and robust approach.

Qualitative research can tell a story but, if done well, it can do a lot more. 

Evaluations using both qualitative and quantitative methods can show whether an intervention is working, as well as how, why and in what ways, creating the potential for it to work effectively, for longer and at scale.

That’s why I think the gold standard of evaluation needs to better recognise the crucial role to be played by high quality qualitative research in a multi-method approach.
