EEF Blog: Our Teaching and Learning Toolkit - what are the risks and how do we address them?

Since its launch in 2011, the Sutton Trust – EEF Teaching and Learning Toolkit has helped to bridge the chasm that has long existed between educational research and classroom practice. It does so by providing teachers and school leaders with an accessible summary of thousands of individual pieces of academic research. Up to two-thirds of schools now use it to inform their teaching practice and spending decisions, according to recent surveys.

But there are inevitably risks involved with simplifying so much complex information. We want schools and teachers to be confident in the Teaching and Learning Toolkit as a robust resource they can turn to when looking for evidence to improve outcomes for their students. Here we look at the most commonly cited risks and explain how we address them.

Risk 1: ‘The Toolkit’s league table format combines and aggregates the results of sometimes very different studies.’

The Toolkit is a meta-analysis, which means it combines findings from studies which are not identical in order to provide more robust estimates. However, we include steps in our process to ensure that we do not combine results in a way that is misleading or unhelpful, and we are stricter about what we include than some other major meta-analyses of education research.

The Toolkit combines existing meta-analyses, rather than individual studies, so generally speaking we are working with decisions that other academics have already made about whether individual studies are similar enough to be combined. While this is no guarantee that the decisions are always the right ones, it means that we don’t and can’t selectively aggregate studies in order to achieve particular answers.

Risk 2: ‘Many of the studies in the Toolkit analyses contradict the headline figure.’

The nature of a meta-analysis is that it aggregates a number of different effect sizes from different studies into a single effect size. It is, therefore, almost inevitable that the overall effect size from a meta-analysis will differ from many of the individual ‘source’ effect sizes.
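To see why the overall figure rarely matches any single study, here is a minimal sketch of how a meta-analysis pools effect sizes, using fixed-effect inverse-variance weighting. All of the numbers are invented for illustration; they are not drawn from the Toolkit, and the Toolkit's own methodology is more involved than this.

```python
# A minimal, illustrative meta-analysis pooling step.
# The effect sizes and variances below are hypothetical.

effect_sizes = [0.35, 0.10, 0.52, 0.21]   # hypothetical study effects (e.g. Cohen's d)
variances    = [0.01, 0.02, 0.015, 0.012] # hypothetical sampling variances

# More precise studies (smaller variance) receive more weight.
weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)

print(f"Pooled effect size: {pooled:.2f}")
```

Note that the pooled estimate (about 0.31 here) differs from every one of the four individual effect sizes that went into it, which is exactly the point made above.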

To address this, we look at the extent of variation among the individual effect sizes to determine how reliable the overall effect size is. This is one of the factors we use when assessing the ‘security’ rating for each strand, a five-point padlock scale which is presented next to the effect size (months of additional progress) for each Toolkit strand.
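The "extent of variation" among effect sizes can be quantified. One common approach is Cochran's Q statistic and the derived I² statistic, sketched below with hypothetical numbers. This is offered only as an illustration of the general idea; the Toolkit's padlock rating weighs several factors, of which consistency of results is just one.

```python
# Illustrative sketch: measuring heterogeneity among study effect
# sizes with Cochran's Q and I^2. All numbers are hypothetical.

effect_sizes = [0.35, 0.10, 0.52, 0.21]
variances    = [0.01, 0.02, 0.015, 0.012]

weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effect_sizes))
df = len(effect_sizes) - 1

# I^2: rough share of the variation attributable to genuine
# differences between studies rather than to chance.
i_squared = max(0.0, (q - df) / q) * 100

print(f"Q = {q:.2f}, I^2 = {i_squared:.0f}%")
```

A high I² signals that the studies disagree substantially, so a single headline figure should be treated with more caution; a low I² signals that the studies broadly point the same way.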

Risk 3: ‘Many of the studies analysed in the Toolkit are context-dependent.’

This is why we are funding and testing specific interventions for the most promising approaches featured in the Toolkit. To date, we have funded 133 projects and published the independent evaluation reports of 66 of these. We publish all the findings, including details of where they were tested and how they were implemented. Related projects are available on each Toolkit strand page.

We continually emphasise that the Toolkit can only ever be a support to professional judgement, never a substitute for it: teachers’ knowledge of their own schools is crucial. We have developed an online DIY Evaluation Guide which helps schools find out whether new approaches are working well in their setting.

Risk 4: ‘The individual studies that go into the headline impact estimate are invisible.’

The Toolkit provides references to all the studies included in each meta-analysis, and so users have access to all the publicly available information on the precise nature of the interventions.

However, there is a general challenge regarding the amount of detail that academic studies provide regarding the specific intervention that has been studied. This makes it difficult for schools to know exactly how to implement a particular intervention, and makes it difficult for researchers to replicate studies. We have recently published a guide to implementation and process evaluation with the aim of encouraging better detailing of these aspects of studies in publications.

Risk 5: ‘The focus of the Toolkit is on attainment, without considering other aims of education.’

The ‘months of additional progress’ figure is an estimate of impact on academic attainment, and the Toolkit is transparent about this. However, we know that academic attainment is not the only aim of a good education. This is reflected in our new programme of work on character and life skills, which includes a number of projects designed to develop in children and young people a set of attitudes, skills and behaviours (such as self-control, social skills, motivation, and resilience), as well as improve their academic outcomes.

Risk 6: ‘The Toolkit is not solely focused on research relating to disadvantaged pupils.’

The Toolkit considers the impact of interventions on all children, not only those experiencing poverty-related disadvantage. Where the research evidence supports particular findings related to disadvantaged children, these are highlighted on the Toolkit strand pages.

The Teaching and Learning Toolkit is a live resource, updated in partnership with a team of academics at Durham University. This means that schools and teachers can be confident that when consulting the Toolkit they are always working with the most up-to-date and high-quality evidence available, including the findings from EEF-funded projects. We welcome discussions about how to develop this process further so that we can continue to bridge the divide between research and practice in an accessible and practical way.