A typology of evaluation methods: results from the pre-workshop survey
Rebecca Taylor1, Alison Short1,2, Paul Dugdale1, Peter Nugus1,2, David Greenfield2
1 Centre for Health Stewardship, ANU
2 Centre for Clinical Governance, AIHI, UNSW
Outline of presentation
• Background
• Aim
• Method: how we developed and distributed the survey
• Findings
• So what does this mean?
• Future directions
Background • At last year’s workshop, participants reported a lack of clarity about how to evaluate CDSM tools, including which factors should be considered in evaluations • To foster the use of world-class evaluations, we first need to know: • what evaluations are currently being completed? • are there gaps in the types of evaluations currently completed? • how can evaluation projects be strengthened?
Aim To investigate the types of evaluations conducted around Australia, and how these evaluations are reported to other clinicians and stakeholders
Method • Development of a survey to investigate the types of evaluations conducted around Australia, and how these evaluations are reported to other clinicians and stakeholders • Pilot testing • Distribution of survey to attendees of the ‘Evaluating Chronic Disease Self-Management Tools’ workshop • Descriptive and thematic analysis of quantitative and qualitative survey data
Findings: Demographics
• Respondent’s profession (n=20)
• Setting in which respondent works (n=20) – participants were able to select more than one response
*n = number of respondents
Findings: Tools evaluated • Home Telemonitoring • Flinders Program • Health Coaching • CENTREd Model for Self-Management Support • cdmNET • My Health My Life Program • Stanford Program • Living Improvement for Everyone (LIFE) – adapted Stanford Program for ATSI people • Intel equipment for care innovations • COACH Program • AQOL • QLD ONI • The Continuous Care Pilot
Findings: Reasons for evaluation (chart; n = number of respondents)
Findings: Collaborators (chart; n = number of respondents)
Findings: Data used (chart; n = number of respondents)
Findings: Outcome of evaluation (chart; n = number of respondents)
Findings: Dissemination of findings
• Outputs per person: average = 3.5, range = 0–31
*n = number of respondents
Findings: Perspectives of evaluation “Evaluation is an essential component of any tool introduced to a service. However, not only from the patient perspective, but how it influences (or not), health professional practice”. (Participant 5)
Findings: Perspectives of evaluation
• Seeing evaluation in context, as part of a process
• Who participates in the evaluation?
• Range of results
• Interfacing with clinicians
• Need for the right evaluation method for the purpose
Findings: Perspectives of evaluation “It is time to develop evaluation tools that measure the changes in people and relationships that we are seeing every day when we work with these tools. It might not change someone's HbA1c overnight but it might mean they connect with family again or talk to their health professionals more or ask for help before crisis hits”. (Participant 16)
So what does this mean? • When planning service delivery, also plan its evaluation (plan from the beginning) • Use a wide range of evaluation methods and ensure they are applied in the appropriate context • Engage all stakeholders • Share methods and findings with others • Collaborate with and learn from others working in the field to avoid reinventing the wheel
Thank you Rebecca Taylor, Postdoctoral Research Fellow Rebecca.Taylor@anu.edu.au