What is the Impact of Our Interventions?

In my last blog, I argued that we should all be more systematic in evaluating the impact of small group interventions (4-5 children, clear focus for pupil learning identified from AfL and data):

“In preparing these intervention groups, do we plan for baseline assessments and then monitor and assess the impact of the work we do with these pupils? Not nearly enough, I would argue.”

Following this blog and discussion with my colleagues, we developed new approaches to our interventions across school:

  1. We planned a baseline assessment and a final assessment for every intervention, so that we could assess what impact, if any, it had on pupil progress.
  2. We limited intervention group sizes to a maximum of five children.
  3. We created an intervention evaluation sheet to use in all pupil progress meetings.

Figure 1: 1:1 readers in Year 2: TA-led intervention.

Figure 1 shows the potential impact of reading interventions with a group of children with low prior attainment. The pupils’ test scores have increased significantly. However, we cannot attribute this progress to the intervention alone, nor does this evaluation compare these pupils with similarly attaining pupils who did not receive the intervention.

Figure 2: Small group intervention for higher ability mathematicians, teacher-led, with a comparative control group.

Figure 2’s analysis compares two groups of children of similar attainment: some children received the intervention, whilst others remained in class as a control group. This approach allowed us to assess the impact of the intervention against the progress of those who remained in class.
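The Figure 2 approach can be sketched as a simple calculation: work out each pupil’s gain from baseline to final assessment, then compare the mean gain of the intervention group with that of the control group. The scores below are purely illustrative, not our school’s actual data.

```python
# A minimal sketch of the Figure 2 style evaluation: compare mean progress
# (final score minus baseline score) for an intervention group against a
# matched control group. All pupil scores here are invented for illustration.

def mean_progress(baseline, final):
    """Average gain from baseline to final assessment for one group."""
    gains = [f - b for b, f in zip(baseline, final)]
    return sum(gains) / len(gains)

# Hypothetical test scores (out of 40) for two matched groups of five pupils.
intervention_baseline = [22, 25, 24, 26, 23]
intervention_final    = [30, 31, 29, 33, 28]
control_baseline      = [23, 24, 25, 26, 22]
control_final         = [26, 27, 27, 29, 24]

intervention_gain = mean_progress(intervention_baseline, intervention_final)
control_gain = mean_progress(control_baseline, control_final)

print(f"Intervention group mean gain: {intervention_gain:.1f}")
print(f"Control group mean gain: {control_gain:.1f}")
print(f"Difference (intervention - control): {intervention_gain - control_gain:.1f}")
```

With such small groups the difference is indicative rather than conclusive, but it gives pupil progress meetings a concrete figure to discuss.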

How Reliable Are These Results?

Let’s be clear: these results are not statistically robust. The sample size of pupils is small, the control group smaller still, and there is no pupil voice (how do these children feel about sometimes missing out on the wider curriculum? How will this fit into the new OFSTED assessment framework?). Nor can we say how much of the impact to attribute to this particular intervention.

What these evaluations do give us is a starting point for professional dialogue at pupil progress meetings across school about why we do the things we do, and what impact they have for our pupils. Both examples allow us to test the impact of interventions on pupil progress, instead of simply ‘trying out’ untested interventions or, worse still, championing interventions that actually do more harm than good.
