4 things to consider when sharing quantitative interim findings

Key questions should be asked when sharing quantitative interim evaluation findings, says Vicki Jackson

Evaluating a service or programme is a long process involving a number of people.

Often, there's a desire for findings and learning to be shared before the evaluation is finished. These findings are known as interim evaluation findings.

Carrying out interim analysis can aid an evaluation and help to develop a service. From a moral point of view, it can also identify any areas of concern with the programme at an early stage.

However, interim analysis can also have undesirable consequences. It can affect the continuity of the evaluation, what people expect from it and how well regarded the final results are.

The interim evaluation findings debate

There's a debate about the implications of sharing interim evaluation findings and the moral obligations to do so.

There are no hard and fast rules about when and whether interim evaluation findings should be shared. Each evaluation is different.

The NSPCC’s experience of doing evaluations - internally and in partnership with external organisations - has helped us understand the complexities involved and the factors to be considered.

We’ve shaped this into 4 key questions to help evaluators plan for interim evaluation analysis and reporting. 

1. Why are you carrying out interim analysis?

Any interim analysis should be planned in advance. Consider its rationale, how the findings will be used and any implications for the ongoing programme and evaluation.

Sharing interim findings might, for example, encourage or improve engagement with the evaluation from service delivery providers and practitioners, increasing its chances of success. Sometimes a funder or programme developer wants findings before the end of a lengthy and costly evaluation, to plan for future funding or to shape the service.

Interim findings tend to be based on small amounts of data collected in a programme’s early stages. This early data may be unrepresentative of the target population and may not reveal significant results. Interim findings may also be susceptible to change and differ from final evaluation results.

As such, evaluators should consider the usefulness and implications of sharing these findings before the evaluation is finished.
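The point about small early samples can be made concrete with a rough illustration. The sketch below (a minimal example for this blog, not part of the NSPCC's method; the outcome proportion is an assumption) shows how much wider a 95% confidence interval around a simple outcome proportion is at small sample sizes, using the standard normal approximation:

```python
import math

# 95% confidence interval half-width for an outcome proportion,
# using the normal approximation. p = 0.5 gives the widest interval,
# so this is a worst-case illustration.
z = 1.96  # critical value for a 95% confidence level
p = 0.5   # assumed outcome proportion (illustrative only)

for n in (20, 100, 500):
    half_width = z * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:3d}: estimate is p \u00b1 {half_width:.3f}")
```

With only 20 cases the estimate carries a margin of roughly ±0.22 (22 percentage points), which is why findings based on early data may shift substantially by the end of the evaluation.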

In other cases, there may be a more practice-based rationale for carrying out interim analysis and reporting findings. It can help to identify difficulties, limitations or gaps in service delivery or evaluation methodology at an early stage. This allows time to make appropriate changes to a service or adapt the evaluation, including the measures being used.

Careful consideration needs to be given to making service changes part way through an evaluation: if the evaluation tests a particular way of working, what implications would a service delivery change have? This issue is most pertinent to robust evaluation designs such as randomised controlled trials (RCTs).

2. When will interim analysis be carried out?

The timing of quantitative interim analysis is important and relates largely to why it is being done.

Carrying out analysis too early in the programme creates problems associated with reviewing small amounts of data (as discussed in question 1).

If analysis would take place late in the programme, consider whether it is worth waiting a little longer and analysing and reporting on the full data set instead.

3. What interim analysis and reporting will be done?

The type and amount of quantitative interim analysis depends on the reason for the analysis and the stage of the evaluation.

If there is insufficient data to conduct statistical analysis, a summary report may be more appropriate.
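One rough way to judge whether there is yet enough data for a meaningful statistical comparison is a standard sample-size calculation. The sketch below is an illustrative example only: the outcome rates are placeholders, not NSPCC figures, and it uses the textbook normal-approximation formula for comparing two proportions at 5% significance with 80% power:

```python
import math

# Required sample size per group for a two-proportion comparison,
# using the standard normal-approximation formula.
z_alpha = 1.96  # two-sided 5% significance level
z_beta = 0.84   # 80% power
p1, p2 = 0.50, 0.60  # assumed outcome rates (placeholders)

n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p1 - p2) ** 2)
print(f"Roughly {math.ceil(n_per_group)} participants per group")
```

If the interim data fall well short of a figure like this, a descriptive summary report is likely to be the more honest choice than a significance test.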

Any interim findings that are reported should include the necessary caveats, noting that they are not final results and may change.

Avoid over-interpreting your results or drawing extensive conclusions, as they are based only on interim findings.

4. Who will interim findings be shared with?

You may carry out interim analysis for purely methodological reasons (to assess the progress of the evaluation or check for gaps in information, for example) and choose not to share these findings with service delivery teams.

If the evaluation team is independent from the service delivery team, there may be pressure to share interim findings to maintain the service provider’s interest and engagement in the evaluation. In this case, evaluators should consider the impact on the independence and robustness of the evaluation.

Another consideration is whether you would want to share interim findings with external professionals, or keep them internal until the final report is produced. If you do publish externally, consider the potential impact if the final results are different. 

A complex process

Deciding if and when to carry out interim analysis and share findings is a complicated issue that differs for each evaluation. 

Thorough consideration of the rationale, implications and type of interim findings you intend to share will help you make an informed decision. 
