Jack Martin describes six common problems in evaluating services for children, and offers advice on how to overcome them
High-quality evidence on ‘what works’ plays an essential part in improving the way social care services are designed and delivered, making sure they result in the best possible outcomes for children and families – from preventing abuse and neglect or changing challenging behaviour to supporting mental health and improving educational attainment. I’m a research officer at the Early Intervention Foundation (EIF), where we’ve conducted over 100 in-depth assessments of the evidence used in programme evaluations. We rate everything against our standards of evidence, and have published an online Guidebook covering the early intervention programmes that have been shown to improve outcomes for children and young people.
As part of this process, we’ve examined thousands of pages of technical evaluation reports in great detail. The quality of these studies varies. So our assessments consider not only the findings of each evaluation – whether it suggests a programme is effective or not – but also the quality of that evidence. If a study hasn’t been well planned or properly carried out, we can’t always be confident that its findings are robust.
I’m going to share some of the issues we come across frequently, which undermine the confidence we can have in published evaluation results. In many cases, these six common pitfalls could be avoided or mitigated when an evaluation is being planned or carried out – and doing so would strengthen the evidence base for children’s services.