Population Health
Engaging Consumers, Providers, and Community in Population Health Programs
Lecture c
This material (Comp 21 Unit 8) was developed by Johns Hopkins University, funded by the Department of Health and Human Services, Office of the National Coordinator for Health Information Technology under Award Number 90WT0005. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.
Engaging Consumers, Providers, and Community in Population Health Programs
Learning Objectives — Lecture c
• Evaluate the designs of individual-level behavior change interventions.
• Evaluate the designs of organizational-level behavior change interventions.
• Evaluate the designs of community-level behavior change interventions.
What Is Program Evaluation? • Program evaluation is a field of study designed to answer whether an intervention had the desired impact or whether a program is on the right track and what might be done to improve it. • Program evaluation takes four forms, each with a separate purpose: formative, process, summative, and cost-effectiveness evaluations. • Practitioners of public health are coming to expect and demand evidence-based programming; evaluation yields evidence with which to design interventions and evaluate their effectiveness.
Key Questions Involved in Designing an Evaluation for a Behavior Change Program • How is the intervention expected to achieve the desired outcome? • Who is the target population for the intervention? • Does the evaluation focus on those enrolled in a particular program, or on all persons who fall within the definition of the target population? • What study design will be used to evaluate impact? • What are the measures of program success? • What are the available data for answering these questions?
On Whom Should an Evaluation Focus? • There are many ways to focus an evaluation: • The smallest frame is the individual. • Other commonly used grouping frames include: 1) organizational members; 2) target populations defined by specific factors; 3) communities defined by geographic or other features; or a variety of other modes. • Whether an evaluation should focus on those enrolled in a particular program (program-based evaluation) or on all persons who fall within the definition of the target population (population-based evaluation) depends on the objective of the program.
Theories of Change • Strong programs tend to draw on one or more theories of change — either implicitly or explicitly — such as the Health Belief Model, the Social Learning Theory (modeling), the Theory of Reasoned Action, the Diffusion of Innovations Theory, and the Extended Parallel Process Model (fear management).
Understanding the Sequence of Pathways • Fundamental to any evaluation is understanding the sequence of pathways (conceptual framework, logic model, program theory, and program impact pathway [PIP]) that link the program’s intervention(s) to the ultimate health outcome.
The Social-Ecological Model • The most effective behavioral interventions often work at multiple levels – community, organization, family and individual. • The social-ecological model shows that individuals are far more likely to work toward changing their behavior if the social/physical environments not only encourage it, but make it easier.
Structural Interventions • Structural interventions implement or change laws, policies, physical structures, social or organizational structures, or standard operating procedures to bring about environmental or societal change. • These interventions operate independently of individual volition; the individual need not take any action for them to work.
Environmental Interventions • Environmental interventions aim to change behavior by facilitating or inhibiting it through changes in the surroundings. • For example, installing fountains that make it easy to refill a water bottle promotes better drinking habits (e.g., water instead of soda).
Organizational Interventions • Organizational interventions involve policies that facilitate the adoption of health behaviors. • For example, a company installs a gym or salad bar on the premises, as Google has done.
Interpersonal and Intrapersonal Interventions • Interpersonal interventions attempt to reach clusters of people who can then reinforce specific behaviors in one another. • The classic example is Alcoholics Anonymous. • Intrapersonal interventions generally involve health education and counseling provided to one individual at a time.
Experimental Design • The gold standard for measuring impact is the experimental design, used widely in clinical research to evaluate the effectiveness of a given drug or treatment regimen. • For example, in some studies one group of subjects receives the experimental drug while the others receive a placebo.
Randomized Controlled Trials • The randomized controlled trial design offers the strongest possible means of controlling for potential confounders (such as selection bias, testing effect bias, maturation bias, placebo effects, and history bias). It is often considered the "gold standard."
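• To make the randomization logic concrete, here is a minimal simulation sketch (all names and numbers are hypothetical, not drawn from any real trial): random assignment balances even an unmeasured confounder across arms, so a simple difference in means recovers the treatment effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1000  # hypothetical number of participants

# An unmeasured confounder (e.g., baseline health) that affects the outcome.
baseline = rng.normal(0.0, 1.0, n)

# Randomization: a 50/50 coin flip per participant, so `baseline`
# is balanced across the two arms in expectation.
treated = rng.integers(0, 2, n).astype(bool)

true_effect = 0.5  # hypothetical effect size
outcome = baseline + true_effect * treated + rng.normal(0.0, 1.0, n)

# Because assignment is random, a simple difference in means is unbiased.
diff = outcome[treated].mean() - outcome[~treated].mean()
t_stat, p_value = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"Estimated effect: {diff:.3f} (true: {true_effect}), p = {p_value:.4f}")
```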
Nonexperimental Designs • Nonexperimental designs control for only some of the potential sources of bias but are widely used (e.g., a pre-test–post-test design with no control group) under the philosophy that some evaluation is better than none. They are also easier to implement in many situations, particularly when information is drawn from sources where no experiment was planned.
Quasi-experimental Designs • Quasi-experimental designs have greater generalizability and control for some, but not all, potential sources of bias; they are used when it is not possible to randomize subjects into treatment and control groups. Many evaluations fall into this category, particularly large or national studies in which surveying the entire population is not feasible.
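• One standard quasi-experimental technique (offered here as an illustration; the slide above does not prescribe it) is difference-in-differences: compare the before-to-after change in the intervention group against the change in a nonrandomized comparison group, which removes any fixed gap between the groups and any trend they share. A minimal sketch on simulated data, with all numbers hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500  # hypothetical subjects per group

# Groups are NOT randomized: the treated group starts at a higher level.
pre_treat = rng.normal(10.0, 2.0, n)  # intervention group, before
pre_ctrl = rng.normal(8.0, 2.0, n)    # comparison group, before

trend, effect = 1.0, 2.0  # shared time trend; hypothetical true effect
post_treat = pre_treat + trend + effect + rng.normal(0.0, 1.0, n)
post_ctrl = pre_ctrl + trend + rng.normal(0.0, 1.0, n)

# Difference-in-differences nets out the fixed group gap and the shared trend.
did = (post_treat.mean() - pre_treat.mean()) - (post_ctrl.mean() - pre_ctrl.mean())
print(f"DiD estimate: {did:.2f} (true effect: {effect})")
```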
Observational Studies • Observational studies (post-test only among the experimental population) can apply sophisticated analytic techniques (e.g., instrumental variables, propensity scoring, and structural equations) to model causal inferences.
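• As an illustration of one of these techniques, the sketch below applies a logistic-regression propensity score with inverse-probability weighting to simulated observational data (every variable and value is hypothetical): people more likely to opt into treatment also tend to have better outcomes, so the naive comparison is biased, while reweighting by the propensity score recovers the true effect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A measured confounder drives both treatment uptake and the outcome.
x = rng.normal(0.0, 1.0, n)
p_treat = 1.0 / (1.0 + np.exp(-x))  # healthier people opt in more often
treated = rng.random(n) < p_treat
true_effect = 1.0
y = 2.0 * x + true_effect * treated + rng.normal(0.0, 1.0, n)

# Naive post-test-only comparison is confounded by x.
naive = y[treated].mean() - y[~treated].mean()

# Propensity score: model P(treated | x), then weight each subject
# by the inverse probability of the treatment actually received.
ps = LogisticRegression().fit(x.reshape(-1, 1), treated).predict_proba(
    x.reshape(-1, 1))[:, 1]
w = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
ipw = (np.average(y[treated], weights=w[treated])
       - np.average(y[~treated], weights=w[~treated]))
print(f"Naive: {naive:.2f}  IPW-adjusted: {ipw:.2f}  (true: {true_effect})")
```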
Program Evaluation Often Focuses on Behavioral Outcome • Although causal links between behavioral changes and health outcomes may be well known, program evaluation often focuses on the behavioral change (measured by self-report) rather than on the long-term health outcome, which is biological in nature (measured by a biomarker or physical measure such as body mass index). • Observation reduces the bias inherent in self-report, but it may introduce other biases (such as the Hawthorne effect, whereby participants perform better than under normal conditions precisely because they realize that they are being observed).
Formative Evaluation • Formative evaluation is used to obtain qualitative information that will be useful in designing the intervention for maximum effect. • Target population information gathered could include: • Epidemiology of the disease or health condition. • Persons most affected. • Drivers of unhealthful behaviors. • Barriers to change. • Most credible sources of information on the topic. • Information channels. • Formative research can include both qualitative research (which is particularly useful in understanding the mindset of the target population, including their values, attitudes, beliefs, aspirations, and fears that strongly affect behavior) and quantitative research, especially where quantifying baseline levels is important.
Process Evaluation • Process evaluation is used to assess how well the intervention is being implemented (fidelity to design), and includes the following: • Dose delivered • Reach • Level of exposure • Recruitment • Context
Summative Evaluation • Summative evaluation measures whether change occurred as a result of the intervention. • Ideally an intervention would be evaluated by its long-term effect on health status (i.e., mortality or morbidity). • Summative evaluation generally attempts either to establish causality or (with weaker designs) to tease out causal inferences.
Outcome Evaluation and Impact Evaluation • Outcome evaluation refers to assessing changes in a given outcome without necessarily attributing them to an intervention. • Impact evaluation refers to a rigorous study design capable of demonstrating cause and effect, not just plausible attribution.
Cost-Effectiveness Evaluation • Cost-effectiveness evaluation is a specialized form of impact assessment that extends beyond measuring the extent to which change occurred to quantifying the cost per unit of change. • It requires both careful tracking of the costs of the intervention and rigorous measurement of the change it produced. • Despite its complexities, cost-effectiveness evaluation answers the question that decision makers most often want answered: "What is the return on investment?"
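• A toy calculation makes the arithmetic concrete (all figures below are invented for illustration): divide the program's cost by the additional units of change it produced.

```python
# Hypothetical cost-effectiveness calculation (all figures illustrative).
intervention_cost = 50_000.0   # total program cost in dollars
control_quit_rate = 0.05       # e.g., smoking-cessation rate without the program
program_quit_rate = 0.15       # rate among those enrolled
enrolled = 1_000

extra_quits = (program_quit_rate - control_quit_rate) * enrolled  # 100 people
cost_per_quit = intervention_cost / extra_quits
print(f"Cost per additional quit: ${cost_per_quit:,.0f}")  # $500
```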
Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 1 • Hand hygiene is the single most effective method of preventing the spread of infections associated with health care. Despite the established benefits, health care workers tend to have suboptimal hand hygiene practices. • Recent data suggest that a multifaceted intervention, including the use of feedback, education, the introduction of alcohol-based hand wash, and visual reminders, may increase adherence to hand hygiene recommendations. • An iterative "assess, then revise" cycle served as a process evaluation, used to improve the ultimate effectiveness of the intervention. • The gold standard for testing an intervention is the experimental design, which would be the most rigorous type of summative evaluation. • The dearth of published information on formative evaluation of hand hygiene interventions underscores the need for those designing such interventions to develop a clearer understanding of why hand hygiene behavior is not more prevalent in medical care delivery settings.
Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 2 • Results were published in the American Journal of Public Health. • The study also aimed to establish and refine methodology for future research efforts in this area.
Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 3 • Study purpose: to assess the impact of a visual cue on hand washing compliance in public facilities. • Visual cue: the presentation of a towel by an automatic dispenser. • Hand washing compliance indicated by towel and soap usage.
Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 4 • Methodology • Eight bathrooms (four male and four female) with 16 enMotion™ towel dispensers and eight soap dispensers in the Bryan School of Business and Economics building at the University of North Carolina at Greensboro were used in the study. • Towel dispensers were set to either the “Towel Presented” or “Towel NOT Presented” condition on alternating weeks for 10 weeks. • Wireless infrared sensors were used to record traffic volume in the bathrooms. • Towel and soap usage was recorded each week to indicate rates of hand washing compliance (HWC).
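• A sketch of how data from such a design might be analyzed (the weekly counts below are invented, not the study's actual data): express HWC as towels dispensed per recorded visitor in each week, then compare the two dispenser conditions.

```python
import numpy as np
from scipy import stats

# Hypothetical weekly counts from a 10-week alternating design
# (5 weeks per condition); invented for illustration only.
towels_presented = np.array([410, 395, 430, 402, 418])
traffic_presented = np.array([500, 480, 520, 490, 505])
towels_hidden = np.array([300, 290, 310, 295, 305])
traffic_hidden = np.array([495, 510, 485, 500, 492])

# HWC proxy: towels dispensed per recorded visitor.
hwc_presented = towels_presented / traffic_presented
hwc_hidden = towels_hidden / traffic_hidden

t_stat, p_value = stats.ttest_ind(hwc_presented, hwc_hidden)
print(f"HWC presented: {hwc_presented.mean():.2f}, "
      f"hidden: {hwc_hidden.mean():.2f}, p = {p_value:.4f}")
```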
Results — 1
8.09 Graph: Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.
Results — 2
Note: Asterisk above “Towel” bar indicates a statistically significant difference at P = .05.
8.10 Graph: Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.
Results — 3
Note: Asterisk above “Towel” bar indicates a statistically significant difference at P = .003.
8.11 Graph: Eric W. Ford, PhD, Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University (2016).
Engaging Consumers, Providers, and Community in Population Health Programs
Summary — Lecture c
• Evaluate the designs of individual-level behavior change interventions.
• Evaluate the designs of organizational-level behavior change interventions.
• Evaluate the designs of community-level behavior change interventions.
Engaging Consumers, Providers, and Community in Population Health Programs
References — Lecture c
References
Campbell, D. T., & Stanley, J. C. (1973). Experimental and quasi-experimental designs for research (10th ed.). Chicago: Rand McNally College Publishing Company.
Cronbach, L. J., et al. (1980). Toward reform of program evaluation. San Francisco: Jossey-Bass.
Valente, T. W. (2002). Evaluating health promotion programs. Oxford: Oxford University Press.
Charts, Tables, Figures
8.09 Graph: Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.
8.10 Graph: Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.
8.11 Graph: Eric W. Ford, PhD, Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University (2016).
Population Health
Engaging Consumers, Providers, and Community in Population Health Programs
Lecture c
This material (Comp 21 Unit 8) was developed by Johns Hopkins University, funded by the Department of Health and Human Services, Office of the National Coordinator for Health Information Technology under Award Number 90WT0005.