Evidence-Based Evaluation for Victim Service Providers Anne P. DePrince, Ph.D. Department of Psychology Center for Community Engagement and Service Learning
Game Plan: Guiding Questions • 1:45-2:45 • Goals for gathering evidence • What to measure? • 2:55-3:15 • Selecting measures (Part 1) • August • Selecting measures (Part 2)? • Who to measure? • When to measure? • Costs (to respondents) of measuring?
Perspectives I bring • Researcher, TSS Group • Director, Center for Community Engagement and Service Learning (CCESL)
Big Picture • Evidence-based evaluation never happens in a vacuum • Evidence goals should be directly tied to strategic planning/program goals • Don’t let evidence-based evaluation end up in a black hole • Think about uses of data on the front end
Goals for gathering evidence • Program evaluation • Why do we do what we do, and does it work? • Do your programs/services/interventions work? • Funders care about this too…
Goals for gathering evidence • Building evidence-based evaluation capacity helps your agencies as research consumers • Understanding measurement issues helps you evaluate evidence for various practices
Goals for gathering evidence • Descriptive • What new trends need to be addressed? • Have we accurately characterized the problem?
Goals for gathering evidence • Generative and Collaborative • Building programs based on theory and data
What to measure • Monitoring/Process: What (how much) did clients receive? How satisfied were clients? • Measuring/Outcome: What changes occurred because of your program? (knowledge, attitude, skill, behavior, expectation, emotion, life circumstance)
What to measure • Monitoring/Process: # clients served, # calls on crisis line, # prevention programs delivered, # clients satisfied with services • Measuring/Outcome: increase in knowledge about safety planning, increase in positive perceptions of the criminal justice process, increase in engagement with the criminal justice process
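As a rough illustration of the process/outcome distinction, the short Python sketch below (all data, names, and scores are hypothetical) tallies process indicators as simple counts and computes an outcome indicator as pre/post change on a knowledge measure.

```python
# Hypothetical illustration: process vs. outcome indicators.
# Data, field names, and scores are invented for the example.

process_log = [
    {"client": 1, "crisis_calls": 2, "sessions": 4},
    {"client": 2, "crisis_calls": 0, "sessions": 6},
    {"client": 3, "crisis_calls": 1, "sessions": 3},
]

# Process/monitoring: how much service was delivered?
total_calls = sum(row["crisis_calls"] for row in process_log)
total_sessions = sum(row["sessions"] for row in process_log)
print(f"Crisis calls handled: {total_calls}, sessions delivered: {total_sessions}")

# Outcome/measuring: what changed? Pre/post scores on a safety-planning knowledge test.
pre_scores = [4, 6, 5]   # before the program
post_scores = [8, 9, 7]  # after the program
mean_change = sum(post - pre for post, pre in zip(post_scores, pre_scores)) / len(pre_scores)
print(f"Average gain in safety-planning knowledge: {mean_change:.1f} points")
```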
General Issues in Measuring • Building blocks to get to your specific evidence-based evaluation question
You start with a theory of change • Based on Theory A, we believe that increases in Victim Safety will lead to lower psychological distress and greater engagement with the c.j. system
From theory, you identify constructs important to your agency/program • Construct: a variable, not directly observable, that has been identified to explain behavior on the basis of some theory • Example construct: Victim Safety • Based on Theory A, we believe that increases in Victim Safety will lead to lower psychological distress and greater engagement with the c.j. system
Change at Your Agency • Pick one program within your agency • What is the theory of change that underlies that program? • What are you trying to change? • What factors lead to the change you want? • Identify 3-4 of the MOST relevant constructs tied to this program’s theory of change.
Back to that construct • What on earth is Victim Safety?
Measuring the weight of smoke: Victim Safety
Defining terms in evaluation • Variable • Any characteristic or quantity that can take on one of several values
Different kinds of variables • Predictor (independent) variable: what comes first • Examples: program (Program A versus B; Program A versus no program), # sessions, restraining order (yes/no) • Outcome (dependent) variable: what comes next • Examples: aggressive behavior (low/high), parenting skills (ineffective to effective)
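A minimal sketch of how these variables line up in an analysis (Python; group labels and scores are invented): which program a client received is the predictor, and the behavior rating afterward is the outcome.

```python
# Hypothetical data: predictor = which program a family received,
# outcome = aggressive-behavior rating afterwards (lower is better).
outcomes = {
    "Program A":  [3, 2, 4, 2, 3],
    "No program": [5, 6, 4, 5, 6],
}

for program, scores in outcomes.items():
    print(f"{program}: mean aggressive-behavior rating = {sum(scores)/len(scores):.1f}")
# Comparing the two means asks whether the predictor (program vs. no program)
# is associated with the outcome (aggressive behavior).
```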
Different kinds of variables • Confound: any uncontrolled extraneous (or third) variable that changes with your predictor and could provide an alternative explanation of the results • Example: an earlier SANE exam appears linked to better mental health, but earlier victim advocacy (which tends to accompany an earlier exam) could be the real explanation
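One way to see why a confound matters is a small simulation, sketched below with entirely invented numbers: earlier victim advocacy drives both earlier SANE exams and better mental health, so exam timing and mental health end up associated even though the exam has no effect in this simulated world.

```python
# Hypothetical simulation of a confound (all numbers invented for illustration).
# Earlier advocacy -> earlier SANE exam AND better mental health.
# The exam has no direct effect here, yet it still "predicts" mental health.
import random

random.seed(0)
records = []
for _ in range(1000):
    early_advocacy = random.random() < 0.5                         # the confound
    early_exam = random.random() < (0.8 if early_advocacy else 0.2)
    # Mental health score depends only on advocacy, not on the exam.
    mental_health = 60 + (10 if early_advocacy else 0) + random.gauss(0, 5)
    records.append((early_exam, mental_health))

with_exam = [mh for exam, mh in records if exam]
without_exam = [mh for exam, mh in records if not exam]
print("Mean mental health, early exam:   ", round(sum(with_exam) / len(with_exam), 1))
print("Mean mental health, no early exam:", round(sum(without_exam) / len(without_exam), 1))
# The groups differ even though the exam has no causal effect in this simulation:
# the uncontrolled advocacy variable provides the alternative explanation.
```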
Operationalize variables • Operational definitions: make your variable specific and limited • Get to something we can observe • Example: big difference between saying I want to “treat trauma” versus “decrease children’s hyper-vigilance and avoidance”
Different kinds of measurements…give you different information • Nominal • Values differ by category; there is no ordering • Can’t calculate an average • a.k.a. qualitative, dichotomous, discrete, categorical • Least information • Examples: sex, race, ethnicity; anything answered as yes/no or present/absent
Scales of Measurement • Ordinal • Values have different names and are ranked according to quantity. • e.g., Olympic medals • Example • Divide people into low, moderate, and high service needs • ** You don’t know the exact distance between two values on an ordinal scale; you just know high is higher than medium, etc.
Scales of Measurement • Interval and Ratio • Spacing between values is known (so you know that not only is one unit larger or smaller, but by how much it is larger or smaller) • Examples • Scores on • a measure of PTSD symptoms • a test of knowledge of safety planning • Number of • calls for service • revictimizations
How to decide on a measurement scale • Choice of scale affects the amount and kind of information you get • And generally the less information you get, the less powerful the statistics you can use • Interval and ratio scales provide the most information • But you can’t always use them – e.g., sex • GUIDING RULE: When you can, always go for more info (interval/ratio)
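A small sketch of why the scale matters for what you can compute (Python, with hypothetical data): nominal values only support counts and a mode, ordinal values support a median, and interval/ratio values support a mean and a measure of spread.

```python
# Hypothetical example: what summaries each measurement scale supports.
from collections import Counter
from statistics import median, mean, stdev

# Nominal: categories only -> counts and mode, no average.
referral_source = ["police", "hotline", "hospital", "police", "self"]
print("Most common referral source (nominal):", Counter(referral_source).most_common(1)[0])

# Ordinal: ranked categories -> median is meaningful, distances between ranks are not.
service_need = [1, 3, 2, 3, 1]           # 1 = low, 2 = moderate, 3 = high
print("Median service need (ordinal):", median(service_need))

# Interval/ratio: known spacing -> mean and spread are meaningful.
ptsd_scores = [12, 20, 15, 30, 18]        # hypothetical symptom-scale totals
print("Mean PTSD score:", mean(ptsd_scores), "SD:", round(stdev(ptsd_scores), 1))
```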
At your agency • How are you currently measuring the 3-4 constructs you identified? • What kind of measurement scale?
Relevant Measure Websites • Measuring Violence-Related Attitudes, Behaviors, and Influences Among Youths: A Compendium of Assessment Tools – Second Edition http://www.cdc.gov/ncipc/pub-res/measure.htm • Measuring Intimate Partner Violence Victimization and Perpetration: A Compendium of Assessment Tools http://www.cdc.gov/ncipc/dvp/Compendium/Measuring_IPV_Victimization_and_Perpetration.htm • http://mailer.fsu.edu/~cfigley/Tests/Tests.html • http://vinst.umdnj.edu/VAID/browse.asp
What to look for in a good measure? • Validity and Reliability
What is Validity? • “truth” (Bryant, 2000) • Degree to which our inference or conclusion is accurate, reasonable and correct.
Examples of Types of Measurement Validity: Face Validity • The measure seems valid “on its face” • A judgment call • Probably the weakest form of measurement validity • Example: a measure of anxiety includes items that are clearly about anxiety
Examples of Types of Measurement Validity: Construct Validity • Extent to which an instrument measures the targeted construct • Haynes, Richard & Kubany, 1995.
Construct Validity • Like Law… • The truth, the whole truth and nothing but the truth • The construct, the whole construct and nothing but the construct Trochim, 2001
Construct Validity Goal • Measure all of the construct and nothing else • [Diagram: the target construct at the center, surrounded by other constructs A, B, C, and D] (Trochim, 2001)
Construct Validity Goal • Measure all of the construct and nothing else • [Diagram paralleling the previous slide, with real constructs: generalized anxiety, depression, social anxiety, self-esteem, guilt] • Though, in reality, the construct is related to all 4 other things
Reliability • You are measuring the construct with little error • Versus accuracy • Versus validity • Something can be reliable but not valid • But, if something is not reliable, it cannot be valid b/c then your measure is only measuring random variability
How can we improve reliability? • By reducing error • Standardization • When measurement conditions are standardized, sources of variance become constants and therefore do not influence the variability of scores (Strube, 2000) • Aggregation • With more items, error might cancel itself out • Error on the first item might be + and on the second –
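The aggregation point can be demonstrated with a short simulation, sketched below with an invented "true" score and error size: averaging more items lets item-level random error cancel out, so the observed score lands closer to the true score.

```python
# Hypothetical simulation: more items -> random item error tends to cancel out.
import random

random.seed(1)
TRUE_SCORE = 50          # the respondent's "real" standing on the construct
ITEM_ERROR_SD = 10       # random error attached to each individual item

def observed_score(n_items):
    """Average of n items, each = true score + random error."""
    items = [TRUE_SCORE + random.gauss(0, ITEM_ERROR_SD) for _ in range(n_items)]
    return sum(items) / n_items

for n in (1, 5, 20):
    # Repeat the "assessment" many times and see how far it typically lands from truth.
    errors = [abs(observed_score(n) - TRUE_SCORE) for _ in range(2000)]
    print(f"{n:>2} items: typical distance from true score = {sum(errors)/len(errors):.1f}")
```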
Random Error • [Figure: frequency distributions of X with and without random error; random error spreads scores out but does not shift the average] (Trochim, 2001)
Systematic Error • Any factor that systematically affects measurement of the variable across the sample • Systematic error = bias • e.g., asking questions that start “do you agree with right-wing fascists that...” will tend to yield a systematically lower agreement rate • Systematic error does affect average performance for the group (Trochim, 2001)
Systematic Error • Notice that systematic error affects the average; this is called a bias • [Figure: the frequency distribution of X with systematic error is shifted relative to the distribution with no systematic error] (Trochim, 2001)
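The two error figures can be reproduced numerically with a hedged sketch (all numbers invented): random error widens the distribution of X but leaves the average roughly where it was, while systematic error shifts the average, which is what "bias" means here.

```python
# Hypothetical illustration of random vs. systematic measurement error.
import random
from statistics import mean, stdev

random.seed(2)
true_values = [random.gauss(100, 5) for _ in range(5000)]

random_error = [x + random.gauss(0, 8) for x in true_values]   # noise, no bias
systematic_error = [x + 6 for x in true_values]                # constant bias of +6

for label, data in [("no added error", true_values),
                    ("random error", random_error),
                    ("systematic error", systematic_error)]:
    print(f"{label:>16}: mean = {mean(data):6.1f}, SD = {stdev(data):5.1f}")
# Random error inflates the SD but barely moves the mean;
# systematic error moves the mean (bias) without changing the spread.
```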
This validity and reliability business is why evaluators make you crazy with… • Worries that you’ve made up your own evaluation instrument • Requests to standardize how evaluations are implemented (a new intern administers an interview vs. a seasoned staff member)
When you can… • Use existing measures that have some evidence of reliability and validity (and then brag that you are doing so) • Standardize assessment procedures • Be thoughtful about number of respondents
Self-report surveys/questionnaires • Importance of how you ask what you ask… • Examples: • Exit Polls 2004 • Open-ended versus structured questions • 2013 Obamacare v. Affordable Care Act • Wording matters…a lot!
Writing Surveys • Types of questions • Decisions about question content • Decisions about question wording • Decisions about response form • Placement and sequence of questions
Keeping in mind: Bias • Can social desirability be avoided? • Can interviewer distortion and subversion be controlled? • Can false respondents be avoided?
Types of Questions • Unstructured • Structured
Structured Questions • Dichotomous • Example response options: Male / Female