This guide covers the importance of evaluation in cardiovascular disease control programs, methods, and key considerations for assessing program effectiveness and impact. It delves into the scope of evaluation, normative and evaluative research approaches, methodological issues, and measures used in evaluating CVD programs.
Design and conduct of evaluations of CVD control programs (Part I)
Gilles Paradis, MD, MSc, FRCPC
Jennifer O'Loughlin, PhD
McGill University Health Center
Department of Epidemiology and Biostatistics, McGill University
Outline (Part I)
• Why evaluate?
• What is evaluation?
• Evaluate what?
• Scope of evaluation
• Methodological issues
Why evaluate?
1. Accountability: report on the attainment of objectives and the use of limited resources
2. Improvement: treatment, program performance
3. Advocacy: enhance programs, build consensus, support coalitions
Why evaluate?
• Social responsibility beyond "Primum non nocere"
• Many (well-established) interventions have subsequently been shown to be useless or harmful
  - M.I.: prolonged bed rest, magnesium, Class I antiarrhythmics, Ca++ channel blockers
  - Prevention: carotene, HRT (?)
What is evaluation?
• A process of systematic data collection or information gathering to shed light on some aspects of an action or intervention
• Respond to specific questions regarding a program: "Who is being reached by…?"
• Support decision making: "Which of two alternative strategies is more effective?"
What is evaluation?
• Improve the understanding of mechanisms of action: "How can I reach low-SES populations with this program?"
• Enhance community participation: "What are key community concerns?"
• Support community mobilization: "What do key stakeholders expect from a coalition?"
Evaluate what?
• Primary prevention programs
  - Reduce exposure to risk factors
  - Decrease incidence
• Secondary prevention
  - Prevent progression among affected asymptomatic individuals (HBP, …)
  - Screening, case-finding
• Tertiary prevention
  - Decrease morbidity and mortality among symptomatic individuals
  - Improve QOL and functioning
• Individual practice
  - Diagnostic, preventive, therapeutic
• Organizational or community changes
  - Structural (inputs, resources mobilized)
  - Process (quality of services)
  - Outcomes (attainment of objectives)
Scope of Evaluation
Broad approaches:
1 - Normative
2 - Evaluative research
Scope of Evaluation
1 - Normative
1.1 - Quality of preventive care
• GOAL: compare practices to standards of excellence or explicit criteria
• EXAMPLES:
  - Rules for the use of resources: Who gets fasting lipoprotein profiles? Who gets 24-hour BP monitoring? Streptokinase or tPA?
  - Criteria of quality preventive care: management of HBP and type II diabetes; management of patients with IHD
• METHODS: chart audits, surveys (a toy example follows below)
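To make the chart-audit method concrete, here is a minimal sketch; the data, criterion, and variable names are hypothetical, not from the presentation. It computes the proportion of eligible charts meeting a quality criterion, with a simple normal-approximation confidence interval.

```python
import math

# Hypothetical chart-audit records: one dict per audited chart.
# 'eligible' = patient meets the criterion's denominator definition,
# 'met' = the care documented in the chart satisfies the criterion
# (e.g., fasting lipoprotein profile ordered when indicated).
charts = [
    {"eligible": True,  "met": True},
    {"eligible": True,  "met": False},
    {"eligible": False, "met": False},
    {"eligible": True,  "met": True},
]

eligible = [c for c in charts if c["eligible"]]
n = len(eligible)
k = sum(c["met"] for c in eligible)
p = k / n

# Normal-approximation 95% confidence interval for the adherence proportion.
se = math.sqrt(p * (1 - p) / n)
lo, hi = max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

print(f"Adherence: {k}/{n} = {p:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```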
Scope of Evaluation
1 - Normative
1.2 - Quality of programs
• GOAL:
  - Structural: appropriate use of resources?
  - Process: target population attained? Program implemented as intended?
  - Impact: were objectives achieved?
• EXAMPLE: HBP screening in worksites
• METHODS: review of reports and existing databases, key informant interviews, surveys
1.3 - Evaluation of (public health) organizations
• Structure, functioning, planning, etc.
Scope of Evaluation
2 - Evaluative research
• Efficacy
• Effectiveness
• Efficiency (cost-benefit, cost-effectiveness)
• Quality of preventive care (decision analysis)
Methodological issues
1 - Specification of the theoretical model
2 - Design
3 - Measures (what and how)
4 - Biases
5 - Analysis
Methodological issues
1 - Theoretical model
• Avoid the "black box" phenomenon: observe the connecting processes between inputs and outputs
• Key to understanding and improving interventions
• Describes how the program produces its effect
• Blueprint for selecting variables, guiding the analysis, and interpreting results
Methodological issues
2 - Design
• General model (schematic): the initial state is measured at t0, the intervention is delivered, and the subsequent state is measured at t1. A comparison group followed over the same t0 to t1 interval without the intervention helps distinguish the effect of the intervention from the effects of time or other factors (formalized in the sketch below).
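One way to write down the comparison implied by this schematic is a difference-in-differences contrast. The notation here is my own (Y-bar is the mean outcome, with subscripts I and C for the intervention and comparison groups at t0 and t1); it is a sketch of the logic, not a formula from the presentation.

```latex
% Difference-in-differences: change in the intervention group minus
% change in the comparison group over the same interval.
\[
\widehat{\Delta}
  = \left(\bar{Y}_{I,t_1} - \bar{Y}_{I,t_0}\right)
  - \left(\bar{Y}_{C,t_1} - \bar{Y}_{C,t_0}\right)
\]
```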
Methodological issues
2 - Design
• Repeat cross-sectional surveys
• Cohort
• RCT
• (Case-control)
Methodological issues
2 - Design: cohort vs. repeat cross-sectional surveys
• Cohort
  - Measures individual behavior change
  - Non-anonymous participation
  - Attrition related to the behavior evaluated
  - Repeat testing, co-intervention
  - Maturation, aging
  - Over-represents long-term residents
• Repeat cross-sectional surveys
  - Measures community-wide prevalence
  - Anonymous participation
  - Attrition less of a problem
  - Cross-contamination
Methodological issues
2 - Design: RCT
• Unbiased allocation
• Similar distribution of risk factors (known or unknown) across groups
• Comparability of groups
• Validity of statistical tests
• Feasibility, costs
• Other options to minimize biases (matching, stratification, …)
(A small simulation of why randomization balances risk factors follows below.)
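A minimal simulation (all values invented for illustration) of the point about allocation: randomizing a reasonably large sample tends to balance both measured and unmeasured risk factors across arms, which is what makes the groups comparable and the usual statistical tests valid.

```python
import random
import statistics

random.seed(1)

# Hypothetical participants, each with a measured risk factor (e.g., SBP)
# and an unmeasured one; the values are simulated for illustration only.
people = [
    {"sbp": random.gauss(130, 15), "unmeasured": random.gauss(0, 1)}
    for _ in range(500)
]

# Simple randomization: shuffle, then split into two equal arms.
random.shuffle(people)
arm_a, arm_b = people[: len(people) // 2], people[len(people) // 2 :]

# Both the measured and the unmeasured factor end up with similar means.
for name, arm in [("A", arm_a), ("B", arm_b)]:
    print(
        f"Arm {name}: mean SBP = {statistics.mean(p['sbp'] for p in arm):.1f}, "
        f"mean unmeasured factor = {statistics.mean(p['unmeasured'] for p in arm):+.2f}"
    )
```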
Methodological issues
3 - Measures
3.1 - What?
• Mortality, morbidity
• QOL
• Risk factors
• Behaviors
• Physical and social environments
Proximal impact is easier to measure than distal impact.
Methodological issues
3 - Measures
3.2 - How? Reliability and validity
• Self-reported behaviors, social desirability
• Pre-testing instruments
• Objective measures / gold standard
• Environmental measures (shelf space, no-smoking signs, …)
• Surrogate reports from next of kin
• Bogus measurements
(A small example of checking a self-report against a gold standard follows below.)
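As an illustration of validating a self-report against an objective gold standard, here is a hedged sketch with invented data (e.g., self-reported smoking versus a biochemical marker), summarizing agreement as sensitivity, specificity, and Cohen's kappa.

```python
# Hypothetical paired measurements: self-reported smoking vs. an
# objective marker (e.g., salivary cotinine) treated as the gold standard.
self_report = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
gold        = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

tp = sum(s == 1 and g == 1 for s, g in zip(self_report, gold))
tn = sum(s == 0 and g == 0 for s, g in zip(self_report, gold))
fp = sum(s == 1 and g == 0 for s, g in zip(self_report, gold))
fn = sum(s == 0 and g == 1 for s, g in zip(self_report, gold))
n = len(gold)

sensitivity = tp / (tp + fn)   # behavior reported when it was truly present
specificity = tn / (tn + fp)   # behavior denied when it was truly absent

# Cohen's kappa: observed agreement corrected for chance agreement.
p_obs = (tp + tn) / n
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_obs - p_chance) / (1 - p_chance)

print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, kappa {kappa:.2f}")
```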
Methodological issues
4 - Biases
"Distortion in the estimate of effect of an exposure" due to:
• Selection of subjects
• How information is collected
• Confounding
Methodological issues
4 - Biases
Community programs are particularly prone to biases:
• Random allocation is rare
• Limited number of clusters
• Important differences between groups (absolute differences and secular trends)
• Multiple co-interventions
• Blinding is impossible
Methodological issues
Solutions:
• Matching
  - Number of pairs
  - Number of measurements
(A matched-pairs analysis sketch follows below.)
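To show how matching plays out at analysis time, here is a minimal sketch with invented numbers: the matched community pair becomes the unit of analysis, and the intervention effect is tested on the within-pair differences, so precision is driven by the number of pairs and of measurements rather than by the number of individuals.

```python
import math
import statistics

# Hypothetical change in smoking prevalence (percentage points, t0 -> t1)
# for matched intervention/comparison community pairs.
pairs = [
    {"intervention": -4.0, "comparison": -1.5},
    {"intervention": -3.2, "comparison": -2.0},
    {"intervention": -5.1, "comparison": -1.0},
    {"intervention": -2.4, "comparison": -2.6},
    {"intervention": -4.4, "comparison": -0.8},
]

# Within-pair difference in change: intervention minus comparison community.
diffs = [p["intervention"] - p["comparison"] for p in pairs]
mean_d = statistics.mean(diffs)
se_d = statistics.stdev(diffs) / math.sqrt(len(diffs))
t = mean_d / se_d   # paired t statistic with len(diffs) - 1 degrees of freedom

print(f"Mean within-pair difference = {mean_d:.2f} points, t = {t:.2f} (df = {len(diffs) - 1})")
```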
Methodological issues
5 - Analysis
Effects are measured at the individual level, but allocation and intervention occur at the community level:
• High intra-class correlations
• Standard errors estimated at the individual level are biased (false-positive results)
• Standard errors must be computed at the community level, which requires:
  - an adequate number of communities (N)
  - adjustment for the sampling procedures
  - enough data-collection waves
(A design-effect sketch follows below.)
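A minimal sketch of the variance-inflation point; the ICC, cluster size, and number of communities below are assumed values, not figures from the presentation. The design effect shows how many "effective" independent observations a clustered sample really contains, and by how much naive individual-level standard errors understate the uncertainty.

```python
import math

# Assumed values for illustration only.
icc = 0.02          # intra-class correlation of the outcome within communities
m = 500             # average number of individuals measured per community
k = 8               # number of communities (clusters)
n_individuals = m * k

# Design effect: variance inflation caused by cluster-level allocation/sampling.
deff = 1 + (m - 1) * icc
n_effective = n_individuals / deff

# Naive individual-level standard errors are too small by a factor of sqrt(DEFF).
se_inflation = math.sqrt(deff)

print(f"Design effect = {deff:.1f}")
print(f"{n_individuals} individuals behave like ~{n_effective:.0f} independent observations")
print(f"Naive standard errors are too small by a factor of ~{se_inflation:.1f}")
```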