Randomised Trials Masoud Solaymani-Dodaran Iran University of Medical Sciences
How do we know a treatment works? "All who drink of this treatment recover in a short time, except those whom it does not help, who all die. It is obvious, therefore, that it fails only in incurable cases." Galen (129-c. 199), cited from "Epidemiology" by Gordis
The randomized trial is considered the ideal design for evaluating both the effectiveness and the side effects of new forms of intervention.
An unplanned trial: Ambroise Pare, the surgeon (1510-1590) The boiling oil ran out, so he used a mixture of egg yolk, oil of rose, and turpentine instead The next day the results were amazing He decided never to cauterize again
A planned trial: James Lind, 1747 Scurvy killed thousands of seamen each year Lind learned of a sailor recovering from scurvy on a diet of grasses
47 years wasted His explanation of a dietary cause for scurvy was not accepted It took 47 years for the British Admiralty to let him repeat the experiment on an entire fleet of ships Dramatic results 1795: lemon juice became a standard part of the British seaman's diet (limeys)
General design of a randomised trial What do we expect if the new treatment works? Or if there is no treatment at all?
Selection of subjects Written and clear criteria The test is that they give the same result no matter who applies them No room for subjective variability Easier said than done
Question? • Why shouldn’t we just give the new treatment to people and see if it works?
Subject allocation: Studies without a control group The story of the great Boston surgeon Vascular reconstruction on a large number of patients "Do you mean I should not have operated on half of my patients?" That would have doomed half of them to their death
Coincidence The story of the man in the bathtub The question is: if we administer a drug and the patient improves, is one the cause of the other? "Results can always be improved by omitting controls" Professor Hugo Muensch of Harvard University
Historical controls We go back to the records of patients with the same disease who were treated before the new therapy became available
Problems with historical controls Data were gathered with different intentions, so the quality of data collection differs Many things other than the therapy change over calendar time (living conditions, nutrition, lifestyle, etc.)
Simultaneous nonrandomised controls Story of sea captain with anti-nausea pills
Predictability of the assignment system, role of the investigator Trial of anticoagulant therapy after World War II Even and odd days determined who received and who did not receive the intervention
BCG vaccination for tuberculosis, role of subjects Subjects decided whether they wanted to be vaccinated Subjects were allocated in an alternating fashion
Randomization Randomisation in effect means tossing a coin to decide the assignment of a patient
Practical You have been asked to determine patient allocation for a study testing two new drugs for the treatment of psoriasis. Using a random number table, randomise 30 patients to two intervention groups and one control group (a small code sketch follows below).
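For illustration only, here is a minimal sketch of simple randomisation in Python, using the random module in place of a printed random number table; the seed and group labels are assumptions, not part of the original exercise.

```python
import random

random.seed(2024)  # fixed seed so the allocation can be reproduced

patients = list(range(1, 31))  # 30 patients, numbered 1..30
groups = ["Intervention A", "Intervention B", "Control"]

# Simple randomisation: each patient is independently assigned to one of the
# three groups, like tossing a coin. Group sizes may end up unequal.
allocation = {p: random.choice(groups) for p in patients}

for patient, group in sorted(allocation.items()):
    print(f"Patient {patient:2d} -> {group}")
```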
What do we achieve by randomization? An equal chance for any subject to enter either the treatment or the control group Comparable groups Balanced distribution of confounders, even confounders we don't know about
Practical: describe what you see Randomised Not Randomised
Stratified randomisation Why? Because with small numbers the groups might still not be comparable (a sketch follows below)
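A minimal sketch, under assumed data: the stratifying factor ("severity") and the patient records below are invented purely to show the idea of randomising separately within each stratum so that the factor stays balanced across arms.

```python
import random

random.seed(7)

# Hypothetical patients with one prognostic factor we want balanced.
patients = [{"id": i, "severity": "mild" if i % 2 == 0 else "severe"}
            for i in range(1, 21)]

def randomise_stratum(stratum, arms=("Treatment", "Control")):
    """Shuffle the patients within one stratum, then alternate the arms,
    so each stratum ends up (nearly) balanced between treatment and control."""
    shuffled = stratum[:]
    random.shuffle(shuffled)
    return {p["id"]: arms[i % len(arms)] for i, p in enumerate(shuffled)}

allocation = {}
for level in ("mild", "severe"):
    allocation.update(randomise_stratum([p for p in patients
                                         if p["severity"] == level]))

for pid in sorted(allocation):
    print(f"Patient {pid:2d} -> {allocation[pid]}")
```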
Data collection: Outcomes Primary and secondary Desired effects, side effects Robust and standardised methods of measurement
Data collection: Prognostic profile at entry Baseline information To check comparability of the groups
Masking (Blinding) • Why should we mask? • Enthusiasm, certain psychological factors • Trial of vitamin C in the common cold • Comparing those who thought they had received placebo with those who thought they had received the treatment
Side effects in those receiving placebo • Differences between the two groups are what matter, not just the sheer amount
Factorial design • Testing two drugs at once • Modes of action are independent
Factorial design, example of the Aspirin and Beta-carotene study • The aspirin part of the study was terminated early because the result was already obvious: a 44% reduction in myocardial infarction • The beta-carotene part continued for 12 years and showed no effect in reducing cancer or heart disease
Non-compliance (dropouts) • Overt: people stop participating • Covert: stopping without admitting it • Tests can be done, e.g. a urine test for metabolites
Drop-ins • Aspirin and Beta-carotene trial • Controls bought aspirin over the counter • Controls were provided with a list of drugs they should avoid • A urine test for salicylates was done
What can be done to avoid non-compliance • Trial of treatment of hypertension • A pilot study was done to screen out non-compliers • The problem may be a lack of generalisability
The net effect of non-compliance • Reducing observed differences • Underestimation of the treatment effect • Example of clofibrate versus placebo to reduce cholesterol
Sample size • How many subjects do we have to study?
Comparing two populations Two jars of beads, each containing 100 beads Does the distribution of the beads by colour differ between jars A and B?
From sample to the whole population • When we study, we only compare samples • But we generalise our conclusions to the whole population • Therefore there is always the possibility of error (a small simulation follows)
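A small simulation of the two-jar example, with invented jar compositions and sample size: even when the two jars are identical, samples drawn from them will sometimes look different by chance alone, which is why errors are always possible when generalising from samples.

```python
import random

random.seed(1)

def count_red(prop_red, n=20):
    """Draw n beads from a jar in which a fraction prop_red are red."""
    return sum(random.random() < prop_red for _ in range(n))

# Both jars truly contain 50% red beads, yet samples of 20 beads will
# sometimes look quite different by chance alone (the alpha error);
# conversely, truly different jars can yield similar-looking samples (the beta error).
for trial in range(5):
    a = count_red(0.5)
    b = count_red(0.5)
    print(f"Sample {trial + 1}: jar A {a}/20 red, jar B {b}/20 red")
```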
Summary of terms α (type I error) and the P-value β (type II error) and power (1 − β)
Factors you need to calculate sample size • The difference in response rate to be detected • An estimate of the response rate in one of the groups • Level of significance (alpha error) • Power (1 − beta error) • Whether the test should be one-sided or two-sided
What are one-sided and two-sided tests? • Example • Our present cure rate is 40% • The new treatment is expected to increase it to 60% • The difference is 20% • Are you sure that is what you are going to find? If yes, you can use a one-sided test • If not, you had better test in both directions (a two-sided test); a sample-size sketch using these numbers follows below
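A minimal sketch of the sample-size calculation for comparing two proportions, using the 40% vs 60% cure rates from the example above. The particular formula (normal approximation, unpooled variance, no continuity correction) and the choice of 80% power are assumptions for illustration, not the source's prescription.

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80, two_sided=True):
    """Approximate sample size per group for comparing two proportions
    (normal approximation, unpooled variance, no continuity correction)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1  # round up to the next whole patient

# Cure rate 40% now, 60% expected with the new treatment (example above).
print(n_per_group(0.40, 0.60, two_sided=True))   # two-sided test
print(n_per_group(0.40, 0.60, two_sided=False))  # a one-sided test needs fewer patients
```

The last line shows why the one-sided versus two-sided decision belongs in the sample-size calculation: the one-sided test spends all of alpha in one direction, so it requires fewer subjects, but it cannot detect a result in the unexpected direction.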