Cluster Randomised Trials of School-Based Health Interventions • What are the barriers to greater use of RCTs in educational research? • Possibilities for progress, or Mission impossible? • Demonstrate by example that RCTs of complex educational interventions are feasible
Projects • Trial of fruit tuck shops in primary schools (FSA) • Trial of emergency contraception lessons (NHS R&D) • ASSIST Trial of peer-led intervention to reduce adolescent smoking (MRC) • Free breakfast initiative in primary schools in Wales (Welsh Assembly Government)
What are the barriers to greater use of RCTs in educational research?
Challenges in applying RCTs to the evaluation of educational interventions • Ethical concerns • Randomisation • Recruitment and retention • Scale and cost • Variability in delivery • Context dependence • Generalisability
Ethical concerns • Often thought unethical to deprive one group of people of an innovative intervention that is believed or assumed to be beneficial • Contrast with medicine, where exposure to untested new treatments is often considered unethical • In medicine, the target audience is sick and the moral imperative to do something (‘must be better than nothing’) is great, e.g. AIDS, cancer • Untested interventions are nevertheless very common in education, despite frequent examples of new interventions proving ineffective or even harmful
Ethical concerns (2) • Is randomisation less fair / ethical than postcode lottery or local policy / bid success? • Only if we are certain that the intervention can do no harm should we • Implement without strong evidence of effect • Begin to think that randomisation might be unethical • How do we define ‘harm’? • Cost / opportunity cost • Raised expectations
Ethical concerns (3) • Not wise / moral / prudent / ethical to conduct a trial unless one has good reason to believe that the intervention may be effective • Theory • Formative evaluation • Principle of equipoise remains
Randomisation • Often impossible / impractical to randomly assign individuals to intervention / control groups • Within one cluster, control subjects liable to be ‘contaminated’ by exposure to some/all intervention activities • Many interventions act explicitly at the cluster level (e.g. class, school) • Randomisation to intervention / control may be undertaken at group level (cluster randomisation) • Usually stratified randomisation or minimisation to ensure reasonable baseline balance
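To make the balancing step concrete, here is a minimal sketch of how minimisation can allocate clusters one at a time to whichever arm best balances the chosen stratification factors, with a biased coin retaining an element of chance. It is an illustration only: the factor names, the `minimise` helper and the 0.8 allocation probability are assumptions, not details of the trials described in these slides.

```python
# Minimal sketch of minimisation for cluster allocation (illustrative only:
# factor names and the biased-coin probability are assumptions, not taken
# from the trials described here).
import random

def minimise(schools, factors, p_follow=0.8, seed=1):
    """Allocate each cluster (school) to 'intervention' or 'control',
    preferring the arm that best balances the stratification factors."""
    rng = random.Random(seed)
    allocated = []  # list of (school, arm)

    def imbalance(arm, school):
        # Count, across factors, of already-allocated schools in `arm`
        # sharing this school's factor level: lower means better balance.
        return sum(
            sum(1 for s, a in allocated if a == arm and s[f] == school[f])
            for f in factors
        )

    for school in schools:
        scores = {arm: imbalance(arm, school) for arm in ("intervention", "control")}
        if scores["intervention"] == scores["control"]:
            arm = rng.choice(["intervention", "control"])
        else:
            preferred = min(scores, key=scores.get)
            other = "control" if preferred == "intervention" else "intervention"
            # Biased coin: usually follow the balancing arm, occasionally not.
            arm = preferred if rng.random() < p_follow else other
        allocated.append((school, arm))
    return allocated

# Example: balance on school size band and free-school-meal band (illustrative).
schools = [
    {"name": "A", "size": "large", "fsm": "high"},
    {"name": "B", "size": "small", "fsm": "high"},
    {"name": "C", "size": "large", "fsm": "low"},
    {"name": "D", "size": "small", "fsm": "low"},
]
for s, arm in minimise(schools, factors=("size", "fsm")):
    print(s["name"], arm)
```

In practice the deterministic rule is softened with the biased coin so that allocation remains unpredictable while baseline balance on the chosen factors is still protected.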
Cluster randomised trials • ASSIST Peer-led smoking intervention • 59 schools randomised • Fruit tuck shops • 43 schools randomised • Free Breakfast Initiative • 57 schools randomised • Emergency contraception • 25 schools randomised
Recruitment and retention • Those recruited to trial should be representative of target population • Participants need to consent to having their treatment determined by randomisation • Thought to be particularly difficult (unethical) in cluster randomised trials • In some cluster trials, those randomised to control may then not maintain their commitment to study • Major threat of differential drop-out
Recruitment and retention • Recruit all schools on the basis of equal probability of being in the intervention or control group • Clear, honest, detailed description of research activities • School research ‘contract’ • Offer equal reward to both groups: • e.g. control schools given cash & buy-out time • Control schools offered the intervention at the end of the measurement period • Maintain motivation: • briefings, personal contact • newsletters, prize draws
Experience with recruitment and retention of schools • Recruitment • School recruitment easier than anticipated • Refusal to participate more often due to a strong preference regarding the intervention than to objections to randomisation or data collection requirements • Retention • 5 cluster randomised trials in schools • 196 schools • 1–3 years’ fieldwork duration • No school drop-outs • 2 school closures
Scale and Cost: Co-ordination and timeliness • Major challenge in large-scale trials • Requirement for: • Communication between researchers and policy/practice • Research networks • Natural experiments • Innovations in policy / practice introduced in an experimental manner, ideally through randomised roll-out
Scale and Cost (2) • Trials, particularly cluster randomised trials, can be large and expensive • Intervention costs • Outcome data collection costs • Natural experiment – no extra intervention costs • e.g. Free Breakfast Initiative • Use of routinely collected outcome data • Education has an unexploited resource here • Frequently, trials can be very low-cost
Variability in delivery • RCTs traditionally require that interventions are standardised and uniformly delivered (efficacy trial) • Educational interventions are highly dependent on quality of delivery • Value of efficacy trials therefore limited • e.g. school smoking education • Results of efficacy trials involving enthusiastic teachers not replicated in roll-out
Efficacy and effectiveness • Efficacy trial • To test whether the treatment does more good than harm when delivered under optimal conditions • Effectiveness trial • To test whether the treatment does more good than harm when delivered via a real-world programme in realistic conditions • Pragmatic, allowing variability in delivery as would be experienced in the real world
Context dependence • Educational interventions are often highly dependent on the context within which they are delivered • It is therefore argued that RCTs are not suited to their evaluation • However, the RCT design has the advantage that the randomisation process ensures that systematic differences in external influences between groups do not occur • Will achieve an unbiased estimate of the average effect
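The claim that randomisation yields an unbiased estimate of the average effect, even when the effect varies with context, can be checked with a small simulation. The sketch below uses invented numbers (a true average effect of 0.5, context-dependent effect sizes and baselines) purely to illustrate the point; it is not a model of any of the trials described here.

```python
# Minimal simulation (illustrative numbers only) showing that randomising
# clusters gives an unbiased estimate of the *average* intervention effect,
# even when the effect itself varies with school context.
import random
import statistics

def simulate_trial(n_schools=60, rng=None):
    rng = rng or random.Random()
    # Each school has its own context and a context-dependent effect.
    contexts = [rng.gauss(0, 1) for _ in range(n_schools)]
    effects = [0.5 + 0.3 * c for c in contexts]  # true average effect = 0.5
    order = list(range(n_schools))
    rng.shuffle(order)
    intervention = set(order[: n_schools // 2])
    treat, ctrl = [], []
    for i in range(n_schools):
        baseline = 10 + 2 * contexts[i] + rng.gauss(0, 1)  # context also shifts baseline
        if i in intervention:
            treat.append(baseline + effects[i])
        else:
            ctrl.append(baseline)
    return statistics.mean(treat) - statistics.mean(ctrl)

rng = random.Random(42)
estimates = [simulate_trial(rng=rng) for _ in range(2000)]
print("mean of 2000 trial estimates:", round(statistics.mean(estimates), 2))  # close to 0.5
```

Any single trial's estimate is noisy, but across repeated randomisations the estimates centre on the true average effect, which is the sense in which the design is unbiased despite context dependence.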
Generalisability • Efficacy trials may demonstrate that intervention has ‘active ingredients’ that work • Effect unlikely to be reproduced in real world • Attenuated by context and implementation • Generalisability of small trials with one educator in one school will be limited
Public Health Improvement: Evidence base conundrum • Good quality trials successfully conducted, evaluating weak interventions. Small or zero effect sizes. • Good quality complex interventions evaluated using weak research designs. Biased effect estimates.
When do we do RCTs? • In medicine, there are distinct phases in the development & evaluation of new interventions (e.g. drugs): • Basic research (e.g. molecular, genetic) • Applied research & development (e.g. pharmacological) • Trials to determine efficacy • Trials to determine effectiveness • Post-marketing surveillance
MRC ASSIST Trial: Peer-led smoking intervention • Theory based (Diffusion of Innovations) • Developed from a similar approach used in sex education • Extensively piloted • Feasibility trial conducted in 6 schools • Funding for main trial (59 schools) sought and obtained from MRC
Effectiveness trials with embedded process evaluation • Effectiveness trials, implementing interventions in a manner reproducible in real world • Crucial to conduct a comprehensive process evaluation (largely qualitative) within such a trial • Monitor variability in context and delivery • Identify barriers / facilitators • Relate variability in these factors to variability in intervention impact
Fruit tuck shop trial • Minimisation used to ensure balance in terms of • School size • School policy on snacks • Schools given minimal support in setting up tuck shops, with wide variability in detailed operation • Detailed process evaluation • Environment of school and locality • Operation of fruit tuck shops • Detailed case studies of 8 selected schools • Observation, interview, focus groups
ASSIST Trial • Intervention led by specialists, as would be the case if rolled out in the real world • Not to be implemented by untrained, unmotivated teachers • Process evaluation in all 30 intervention schools, with parallel measures in the 29 control schools • In-depth process evaluation in sub-sample • Observations, field notes, diaries, records, interviews with pupils, teachers, staff
Free Breakfast Initiative Trial • 111 schools across 9 LEAs • Variable models of staffing and delivery • Trial powered to identify overall mean effect on dietary and behavioural outcomes • Process evaluation to monitor variation in delivery and identify strengths and weaknesses
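Powering a cluster randomised trial typically means inflating the sample size an individually randomised trial would need by the design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intracluster correlation. The sketch below works through that arithmetic with illustrative values; the effect size, cluster size and ICC are assumptions, not figures from the Free Breakfast Initiative trial.

```python
# Minimal sketch of the design-effect adjustment used when powering a cluster
# randomised trial (assumption: equal cluster sizes and an illustrative ICC;
# none of the numbers below are taken from the Free Breakfast Initiative).
from math import ceil

def individually_randomised_n(alpha_z=1.96, power_z=0.84, effect_size=0.2):
    """Approximate total n for a two-arm comparison of means (standard
    normal-approximation formula), before accounting for clustering."""
    n_per_arm = 2 * ((alpha_z + power_z) / effect_size) ** 2
    return 2 * n_per_arm

def clustered_n(n_individual, cluster_size, icc):
    """Inflate the individually randomised sample size by the design effect
    DEFF = 1 + (m - 1) * ICC, then convert to a number of clusters."""
    deff = 1 + (cluster_size - 1) * icc
    n_total = n_individual * deff
    return ceil(n_total), ceil(n_total / cluster_size)

n_ind = individually_randomised_n(effect_size=0.2)  # 784 pupils unadjusted
n_total, n_clusters = clustered_n(n_ind, cluster_size=50, icc=0.02)
print(f"unadjusted n = {n_ind:.0f}, adjusted n = {n_total}, clusters = {n_clusters}")
```

Even a modest ICC roughly doubles the required sample size at these cluster sizes, which is why school-based trials need so many schools rather than simply many pupils.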
A role for RCTs in evaluating health education interventions? • RCTs not always possible! • Difficult to do well, and can be expensive • Take opportunity of natural experiments • Theory-driven development, formative evaluation and feasibility studies essential prerequisites prior to trial • Get the intervention right
Research design • Cluster randomised design • Pragmatic, effectiveness trials • Unbiased estimate of overall intervention effect • Additional qualitative and quantitative data collection to measure variation in context, process, delivery and outcome • Identifies issues for further development of intervention / further testing of its (variable) effect • Hypothesis generation, not testing
The end. Stanley (1957): “Expert opinions, pooled judgements, brilliant intuitions and shrewd hunches are frequently misleading” MacIntyre & Petticrew (2000): “Good intentions and received wisdom are not enough” Laurence Moore Cardiff Institute of Society, Health and Ethics Email: MooreL1@cf.ac.uk Tel: 02920 875387