Building Support through Effective Communication Strategies Robert Kight Chief, Division of WIA Adult Services and the Workforce System, ETA Steven Baker Vice President of Marketing and Communications, Jobs for the Future Jacob Klerman Principal Associate/Scientist, Abt Associates
Here’s What to Expect • Cover the basics of good storytelling • Frame your project’s story • Develop a working draft • Leverage evaluation results
How to use your story • Internal communications • Talking points/speeches • Presentations • Funding proposals/reports • Articles/publications • Web/social media • Press releases/media coverage • Community engagement
Fundamentals of a good story • It is clear and understandable, no matter how technical the work may be • It is relatable; the story connects the work to people • It is compelling; the outcomes/benefits are easily understood • It is motivating; your audience wants you to succeed and they want to help you
Framing your story First decide: • Who are your primary audiences? • Program participants, businesses, funders, community leaders, policy makers • What do they care about? • Populations, efficiencies, outcomes • What will you want them to do? • Give direct support • Advocate on your behalf
Drafting your story • What problem(s) are you trying to solve? • Frame in terms of both systems and people • What is your solution? How will it help? • Avoid jargon; keep it simple • How is your solution unique? What’s innovative? • Don’t forget to take credit; you’re the hero! • What are the intended outcomes? What will be different? Better? • Who will benefit? Why should people care?
Using Evaluation Results Jacob Alex Klerman WIF Grantees Conference, Washington, DC, March 2014
Outline • WIF as the “Vanguard” • Impact Results • Earlier Results • An Exercise
WIF Grantees are the “Vanguard” • Until recently, social policy in general, and workforce strategies in particular, have been set and funds allocated based on “plausibility” • And rigorous impact evaluation suggests that impacts have often been mediocre • Increasingly (the Obama Administration’s Evidence Agenda, ED’s i3, CNCS’s SIF), we see a move towards a new and better strategy: “Evidence Based Policy” • Pilot, rigorously (impact) evaluate, replicate • Broad-scale rollout only after success at earlier steps • Leading to better programs and larger impacts • WIF is a key component of the strategy
Perhaps “Bleeding Edge” … • Being part of the “vanguard” is not easy • We are moving towards “evidence based policy” exactly because, when evaluated, many programs will not be found to be effective • So participating in WIF—and being evaluated—is the “right thing” • But, the results are not always pleasant • The rest of this talk considers how to deal with that reality
Turn Burden into Selling Point • In your materials, note that you are “doing the right thing” • Stepping through the tiers of evidence • And, along the way, showing that: • Your program can actually be implemented (i.e., a successful pilot; see third part of talk) • You are incrementally tweaking your program (again, see third part of talk) • You can work constructively with an evaluator • You have good internal data systems that can support an evaluation • And, we hope, building increasingly high-quality evidence of impact (see the next section of the talk) … all of which are valued by “Evidence Based Policy” funders
Outline • WIF as the “Vanguard” • Impact Results • Earlier Results • An Exercise
Talking about Results; Especially if They Are “Good” • Evaluator should produce a detailed report • Background, methods, data, results • Someone (evaluator or grantee) should produce a brief “Executive Summary” • Standard format is one page, two sides • See handout with example • Executive Summary standard outline: • Overview: one paragraph on the program and findings • Background • Program design • Methods • Implementation findings • Impact findings, with 1-2 figures • Discussion • References • You really need to do this, whether results are “good” or “bad”
Strength of Methodology • “Evidence Based Policy” is associated with “tiered evidence” • Programs move up the “tiers of evidence” • Pilot -> pre/post -> QED -> random assignment -> replication • And only then to broad program roll-out • Claim credit for working through the tiers, but don’t overstate the strength of the evidence • Random assignment provides the strongest evidence of impact • Pre/post and QED tend to overstate impact (see the sketch after this slide) • They should, therefore, be taken (primarily) as evidence that moving to the next tier of evidence is appropriate • Exercise 1: Grantees and evaluators should discuss the evaluation’s methodology and how they will describe it and its strength
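To make the “pre/post tends to overstate impact” point concrete, here is a minimal simulation. All numbers are invented, not from any WIF evaluation; the assumption illustrated is that participants’ earnings would have grown even without the program (e.g., recovery from the dip that prompted enrollment), so a pre/post estimate absorbs that natural growth, while a randomized treatment-control contrast cancels it out:

```python
# Illustrative only: simulated data showing why a pre/post estimate can
# overstate impact relative to a randomized treatment-control contrast.
# All numbers are hypothetical, not from any WIF evaluation.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

pre = rng.normal(20_000, 4_000, n)   # earnings before the program
natural_growth = 3_000               # growth that would happen anyway
true_impact = 1_500                  # what the program actually adds

treated = rng.random(n) < 0.5        # random assignment
post = pre + natural_growth + true_impact * treated + rng.normal(0, 4_000, n)

# Pre/post uses only the treatment group: natural growth and impact get mixed.
pre_post = (post[treated] - pre[treated]).mean()

# Random assignment compares treatment vs. control at the same point in time,
# so the shared natural growth cancels out.
experimental = post[treated].mean() - post[~treated].mean()

print(f"pre/post estimate:     {pre_post:8.0f}")      # ~4,500: overstated
print(f"experimental estimate: {experimental:8.0f}")  # ~1,500: near truth
```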
Consider Precision Carefully • A nearly ideal example: the Sectoral Employment Impact Study (SEIS; Maguire et al. 2010) • Random assignment evaluation found clear evidence of large impacts • Impact on total earnings over the 24 months since randomization • Why do we (“the policy community”) say that?
Several Very Different Cases • [Figure: stylized impact estimates with confidence intervals, contrasting “No Impact” and “Just Can’t Tell” cases] • Very different cases; “Just Can’t Tell” is more common
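One way to see how the cases differ is in terms of the confidence interval around the impact estimate. The sketch below is a minimal illustration, not any study’s actual decision rule; the `meaningful` threshold is a hypothetical value that a real study would set in advance:

```python
# A sketch of the three cases, using a 95% confidence interval around an
# impact estimate. The "meaningful impact" threshold is hypothetical.
def classify(impact, std_err, meaningful=1_000):
    lo = impact - 1.96 * std_err
    hi = impact + 1.96 * std_err
    if lo > 0:
        return f"impact: CI [{lo:.0f}, {hi:.0f}] excludes zero"
    if hi < meaningful:
        return f"no (meaningful) impact: CI [{lo:.0f}, {hi:.0f}]"
    return f"just can't tell: CI [{lo:.0f}, {hi:.0f}] includes zero AND meaningful effects"

print(classify(2_000, 400))  # precise and clearly positive -> impact
print(classify(100, 300))    # precise and near zero -> no meaningful impact
print(classify(800, 900))    # imprecise -> just can't tell (the common case)
```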
Outline • WIF as the “Vanguard” • Impact Results • Earlier Results • An Exercise
So Far, Discussed Impact • “Impact”: Treatment vs. Control for “Long-Term Outcomes” • Not revealed until “later” • Not useful for short-term adjustments
Use Logic Model to Refine Program • Many “early steps” (inputs, activities/outputs, and short-term outcomes) can be observed early on, in your treatment group, at low cost
Learning from Early Results • A program’s Logic Model describes necessary—but perhaps not sufficient—early steps to achieve meaningful impact • You are currently implementing the program; you can check those early steps • Epstein and Klerman (2012) note that you will often find that the early steps are not achieved • If your program is not achieving early steps, you can adjust the program design and implementation • Now, during this implementation (before impact results are known) • For the next implementation (in response to impact results)
Some Early Steps to Check • Secured partnerships? Recruited and retained the right staff? • Attracted target number of participants? • Do enrolled participants complete (enough of) the program? • Is the program implemented with fidelity? • Do participants show progress on pre/post measures of the program’s short-term outcomes? • Do participants pass external exams? • Do participants find employment in the targeted industry? • Program details matter; examples have caveats (a data check along these lines is sketched below)
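As a concrete illustration, the sketch below computes a few of these early-step measures from participant-level records. The field names and targets are hypothetical, not drawn from any actual grantee data system:

```python
# Hypothetical check of early logic-model steps against program records.
# Field names and targets are illustrative examples only.
participants = [
    {"sessions_attended": 18, "sessions_offered": 20,
     "passed_exam": True,  "employed_in_sector": True},
    {"sessions_attended": 6,  "sessions_offered": 20,
     "passed_exam": False, "employed_in_sector": False},
    # ... one record per participant, from the grantee's internal data system
]

n = len(participants)
attendance = sum(p["sessions_attended"] / p["sessions_offered"]
                 for p in participants) / n
pass_rate = sum(p["passed_exam"] for p in participants) / n
employ_rate = sum(p["employed_in_sector"] for p in participants) / n

print(f"enrolled: {n} (hypothetical target: 200)")
print(f"avg. share of sessions attended: {attendance:.0%} (target: 80%)")
print(f"external exam pass rate: {pass_rate:.0%}")
print(f"employed in targeted industry: {employ_rate:.0%}")
```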
An Exercise • WIF as the “Vanguard” • Impact Results • Earlier Results • An Exercise
Exercise 2: Motivation • Epstein and Klerman (2012) argue that: • Even without an impact evaluation, we can establish that some (many?) programs are unlikely to show impact • Because they don’t “succeed” in the earlier steps of their own logic model • This exercise helps you prepare to use that insight constructively: • In this grant cycle, and in future grant cycles
Exercise 2: Part 1 • Who: Grantee and evaluator, together • Materials: A copy of your program’s “Logic Model” • Ideally, a figure/graphic and the narrative discussing it • Part 1: Walk through your logic model, identifying each of the verifiable early steps • Inputs acquired • Activities and outputs produced • Outcomes achieved—in the treatment group, during or at the end of treatment
Exercise 2: Parts 2-5 • Part 2: For each step, define “success” • Often defining success will require a quantitative standard (e.g., number of trainees enrolled, percent of classes actually attended) • Part 3: Establish how, and how early, you can (and will) easily measure “success” for each earlier step • Part 4: If not “successful” at an early step of your logic model, discuss how you will “adjust” your program • During this grant period or in the next grant period • Part 5: If we identify common issues, we can try to arrange technical assistance • (A checklist along these lines is sketched below)
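Parts 2-4 can be captured in a simple checklist that pairs each early step with a quantitative success standard, the observed value, when it becomes measurable, and a flag when the step falls short. The steps, targets, and timing below are hypothetical examples, not requirements:

```python
# A sketch of Parts 2-4 as a checklist. Every step, target, observed value,
# and measurement point is a hypothetical example.
early_steps = [
    # (step, target, observed, when measurable)
    ("trainees enrolled",                 200,  168,  "month 3"),
    ("share of classes attended",        0.80, 0.62,  "month 6"),
    ("pre/post gain on short-term test", 0.25, 0.31,  "end of training"),
]

for step, target, observed, when in early_steps:
    status = "on track" if observed >= target else "ADJUST: below target"
    print(f"{step:<36} ({when}): {observed} vs. target {target} -> {status}")
```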
For More on these Ideas • Wholey, Joseph S. 1994. “Assessing the Feasibility and Likely Usefulness of Evaluation.” In Handbook of Practical Program Evaluation, edited by J. S. Wholey, H. P. Hatry, and K. E. Newcomer. San Francisco, CA: Jossey-Bass. • Epstein, Diana, and Jacob A. Klerman. 2012. “When Is a Program Ready for Rigorous Impact Evaluation?” Evaluation Review 36(5): 373-399. • Abt Associates Policy Brief: “When Is a Social Program Ready for Rigorous Impact Evaluation?”
Outline • WIF as the “Vanguard” • Impact Results • Earlier Results • An Exercise