Randomised Controlled Trials (RCTs) are a widely used research design whose results can indicate whether an intervention has had an impact. So, can this method be used in a school setting?
In this Q&A, we speak with Dr Drew Miller from the University of Newcastle about what RCTs involve and how they can benefit both the school and educational research communities.
Can you outline your current role and your background in education?
I'm a senior lecturer in the School of Education at The University of Newcastle, and I sit within the Teachers and Teaching Priority Research Centre (TT-PRC), which is headed up by Laureate Professor Jenny Gore. I'm a PE teacher by training, and moved into research in human physiology and physical activity, running randomised controlled trials (RCTs) in these settings before moving back into education. I started researching how children learn physically through involvement in game play, and ran several randomised trials in primary school settings to test the efficacy of delivering a game-based PE intervention through a professional development program.
My experience designing and running trials in schools has led to the role I currently play in the research centre, which is designing the quantitative side of the mixed-methods studies that we run. We are currently setting up a five-year program of research involving a range of randomised mixed-methods trials and evaluation of broader scaling of the Quality Teaching Rounds professional development approach in schools. This research has been funded by the Australian Research Council and the Paul Ramsay Foundation.
You're presenting on the topic of Programmatic Research including RCTs: Why It Matters in Schools at the Forum on the use of RCTs in Education in November. For those who may not know, what is an RCT?
A Randomised Controlled Trial is a study in which some form of intervention (e.g. a professional development program) is evaluated against some other form of practice defined as a control condition (e.g. another professional development practice that is already widely used). The idea, with these examples, is to ascertain whether the new practice produces better results than the common practice currently in place on some key measure (e.g. student performance, or a form of teaching practice we consider to be important).
Schools are complex places, with lots of variation within them (e.g. student achievement levels) and between them (e.g. socio-demographic features). Because of this variation, and because in many cases the unit receiving an intervention is a whole class of students rather than an individual student, study designs have to account for the differences within and between schools that may affect the results. This makes it a bit of a scary area to get into, because the whole idea is to reduce the bias arising from these differences, which could otherwise produce results that are not entirely trustworthy. That is the reason for the RCT Forum: to share what we have learnt over time in running these trials in schools.
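The point about whole classes, rather than individual students, receiving an intervention can be sketched with a small simulation. This is purely illustrative and not from the interview; the class sizes, effect size, and variation figures are assumed for the example. It shows that when randomisation happens at the class level, any single trial's estimate can wander well away from the true effect, because the effective number of randomised units is the number of classes, not the number of students.

```python
import random
import statistics

random.seed(42)

# Illustrative cluster-randomised trial: 20 classes of 25 students.
# Each class has its own baseline level (between-class variation),
# and students vary around that level (within-class variation).
# All numbers here are assumptions for the sketch.
def simulate_trial(true_effect=0.3):
    classes = []
    for _ in range(20):
        class_mean = random.gauss(0, 1)  # between-class variation
        students = [class_mean + random.gauss(0, 1) for _ in range(25)]
        classes.append(students)

    # Randomise at the class level: whole classes get the intervention,
    # mirroring how school-based trials are typically delivered.
    random.shuffle(classes)
    treated, control = classes[:10], classes[10:]

    # Apply a fixed intervention effect to every treated student.
    treated_scores = [s + true_effect for cls in treated for s in cls]
    control_scores = [s for cls in control for s in cls]
    return statistics.mean(treated_scores) - statistics.mean(control_scores)

# Averaging over many simulated trials recovers the true effect,
# but individual trials scatter widely, because only 20 classes
# (not 500 students) were actually randomised.
diffs = [simulate_trial() for _ in range(200)]
print(round(statistics.mean(diffs), 2))   # close to the true effect of 0.3
print(round(statistics.stdev(diffs), 2))  # spread of single-trial estimates
```

The spread of single-trial estimates is what a well-designed school trial must plan for: enough classes (and schools) have to be recruited so that class-level differences do not swamp the intervention effect being measured.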
What are the benefits of RCTs in schools for the research community?
For the research community, well-designed RCTs that are run as part of a broader scheme of research offer two things: persuasive evidence, and opportunities for program refinement. Evidence from an RCT is persuasive because the program being tested doesn't just have to demonstrate a change in outcomes, but a change relative to the outcomes currently being achieved. This means that, for some key measure, program designers can say the change is worthwhile, and those looking to adopt the program can judge whether it is worth changing by weighing the outcomes against other considerations (e.g. retraining needs, costs, time). The RCT process, especially mixed-methods trials in real-world settings (effectiveness trials), offers great insight into how programs can be delivered more effectively within schools. This means that programs, and/or the way they are implemented in schools, can be modified to ensure the best possible results are obtained if a program is going to be scaled up across a group of schools, or across a state system, for example.
Programmatic Research is the process of determining whether an intervention is effective when delivered in ideal conditions (efficacy testing), and whether that effectiveness is retained when it is delivered in real-world conditions (effectiveness testing), before the intervention is scaled more broadly to schools. My presentation outlines how this is done, and gives some examples of what can occur if this process isn't followed.
What are the benefits of RCTs for schools?
For schools, it becomes about being able to make decisions based on high-quality evidence. RCTs are not the be-all and end-all of evidence, as the broader consequences of interventions or practices within complex social environments cannot be distilled into the effect size of a single primary outcome, and schools still need to consider a broad range of evidence. However, much decision making in education has been based on cross-sectional evidence (e.g. higher levels of some teaching practice are associated with higher academic achievement). Associations of this nature often don't hold up when interventions that try to increase these teaching behaviours are tested using experimental research involving control conditions.
Access to the quantitative and qualitative evidence gathered through well-designed mixed-methods RCTs enables schools to make clearer decisions about how to spend their money and time, creating environments in which children can better achieve the range of outcomes valued by the school community.
As a school leader, how do you make decisions about how best to spend money and staff time when it comes to improving student outcomes? How does evidence inform your decision making?
Dr Drew Miller is presenting at the Forum on the use of RCTs in Education at University of Newcastle on 28-29 November 2018.