To crawl before we run: optimising therapies with aggregated data Chris Evans, Michael Barkham, John Mellor-Clark, Frank Margison, Janice Connell
Aims • Panel aim is to help bridge the gap between researchers and practitioners • Specifically, to promote new forms of “practice-based evidence” (PBE) which work in and across that gap and which complement evidence-based practice (EBP) • This paper aims to present low-sophistication, service-oriented methods to complement the HLM and other sophisticated methods that Wolfgang, Zoran and many others have developed
Specific aims for this presentation • Show realities of routine data collection • Show the magnitude of service-level variation • Argue that simple service-level analyses can help us learn from treatment failures • Computer processing is needed by most services/practitioners but is alien to many; two methods of computer processing are available for CORE • For now, confidence intervals and graphical data presentations may be the “zone of proximal development”
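A minimal sketch of the kind of simple, service-level analysis the last bullet has in mind: a 95% confidence interval around one service's data-completeness proportion. The counts and the choice of a Wilson interval (via statsmodels' proportion_confint) are illustrative assumptions, not figures or methods from the dataset reported here.

```python
# Minimal sketch: 95% confidence interval for one service-level proportion,
# e.g. the proportion of closed cases with both pre- and post-therapy CORE-OM.
# The counts are hypothetical, not taken from the dataset in this talk.
from statsmodels.stats.proportion import proportion_confint

completed = 58   # hypothetical: closed cases with pre AND post CORE-OM in one service
closed = 112     # hypothetical: all closed cases in that service

rate = completed / closed
low, high = proportion_confint(completed, closed, alpha=0.05, method="wilson")
print(f"Completion rate {rate:.1%}, 95% CI {low:.1%} to {high:.1%}")
```

Plotted per service, intervals like this give the kind of graphical presentation the bullet refers to, without requiring HLM.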
The dataset • 6610 records (from >12k): • 33 primary care NHS services • 40 to 932 records per service • Anonymised, voluntary • Four components to the data: • Therapist completed CORE-A • Therapy Assessment Form (TAF) • End of Therapy Form (EOT) • Client completed CORE-OM • At assessment and end of therapy or follow-up
CORE-PC version of CORE-OM “It’s really simple and easy to use. I’m not very computer literate, but I’d got to grips with it in less than an hour”
Getting data: summary • For each of these basic indices, the differences across services: • were significant (p&lt;.0005) • were very large in magnitude • The number of services “significantly” different from the overall proportion ranged from 15 to 22 of the 33 (sketched below) • Even at the “best” end, datasets are fairly incomplete … • … at the “worst” end the completion rate is cripplingly low
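One plausible reading of “significantly different from the overall proportion” is a per-service binomial test against the pooled rate; the sketch below implements that reading with invented counts. The actual test behind the figures above is not specified here, so treat this as an assumption.

```python
# Sketch: flag services whose data-completeness proportion differs
# "significantly" from the pooled proportion, using a per-service binomial test.
# All counts are invented for illustration.
from scipy.stats import binomtest

services = {            # service id -> (completed, closed), hypothetical numbers
    "A": (58, 112),
    "B": (301, 932),
    "C": (12, 40),
}
overall = sum(c for c, _ in services.values()) / sum(n for _, n in services.values())

flagged = [
    sid for sid, (c, n) in services.items()
    if binomtest(c, n, p=overall).pvalue < 0.05
]
print(f"Pooled rate {overall:.1%}; services differing at p < .05: {flagged}")
```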
Demographics: summary • All differences p<.0005 • Quite large in magnitude • Number “significantly” different from overall proportion/median ranged from 3 to 16 of the 33 • Particularly big differences on ethnicity • Some of these demographic variables will have relationships to outcome and failure both within and between services
Starting points: summary • All statistically significant p<.0005 • Large differences • Number “significantly” different from overall proportion/median ranged from 6 to 10 of the 33 • Again, starting conditions can have relationships with outcome and failures at both individual and service level
Logistics: summary • All p&lt;.0005 • All large differences, particularly for waiting time from referral to assessment (13 days cf. 137 days; sketched below) • Number of services “significantly” different from overall proportion/median ranged from 6 to 19 of the 33 • There are big differences in number of sessions offered (medians from 3 to 10) • … but many services offer a fixed number of sessions; the mode is six • Looks very likely that there will be some differences between services in the ways they operate that will hugely affect outcome and failures
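As an illustration of how a service could examine waiting-time differences without sophisticated modelling, the sketch below runs a Kruskal-Wallis test across services and a per-service Mann-Whitney comparison against the rest. Both the waiting times and the choice of tests are assumptions for illustration; they are not the analyses behind the figures above.

```python
# Sketch: do waiting times (referral to assessment) differ across services,
# and which services stand out from the rest? Kruskal-Wallis across all
# services, then a per-service Mann-Whitney comparison against the others.
# Waiting times are invented for illustration.
from statistics import median
from scipy.stats import kruskal, mannwhitneyu

waits = {   # service id -> waiting times in days (hypothetical)
    "A": [10, 14, 13, 21, 9, 16],
    "B": [120, 137, 98, 150, 110, 160],
    "C": [35, 40, 28, 55, 44, 31],
}

h, p = kruskal(*waits.values())
print(f"Kruskal-Wallis H = {h:.1f}, p = {p:.4f}")

for sid, xs in waits.items():
    rest = [w for other, ys in waits.items() if other != sid for w in ys]
    p_sid = mannwhitneyu(xs, rest, alternative="two-sided").pvalue
    print(f"Service {sid}: median {median(xs):.0f} days, p vs rest = {p_sid:.4f}")
```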
Outcomes: summary • All statistically significant p&lt;.0005 • Large differences • Number “significantly” different from overall proportion/median ranged from 4 to 9 • Despite large differences on reliable change (RC) and clinically significant change (CSC), the number of services differing “significantly” from the overall is not so high (4 and 6 respectively)
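RC and CSC in the CORE context follow the Jacobson &amp; Truax criteria: change is “reliable” when it exceeds what measurement error alone would produce, and “clinically significant” when the client crosses the clinical/non-clinical cut-off. The sketch below shows that classification for a single client; the reliability, cut-off and scores are illustrative values, not the parameters used to produce the figures above.

```python
# Sketch of the Jacobson & Truax reliable change (RC) and clinically
# significant change (CSC) criteria applied to pre/post CORE-OM scores.
# The reliability, clinical cut-off and scores below are illustrative only.
import math

def reliable_change_index(pre: float, post: float, sd_pre: float, reliability: float) -> float:
    """RCI = (pre - post) / standard error of the difference."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (pre - post) / se_difference

def classify(pre: float, post: float, sd_pre: float, reliability: float, cutoff: float) -> str:
    rci = reliable_change_index(pre, post, sd_pre, reliability)
    reliable = abs(rci) >= 1.96                    # change beyond measurement error
    clinical = pre >= cutoff and post < cutoff     # crossed the clinical cut-off
    if reliable and clinical:
        return "reliable and clinically significant improvement"
    if reliable and post < pre:
        return "reliable improvement"
    if reliable and post > pre:
        return "reliable deterioration"
    return "no reliable change"

# Hypothetical client: CORE-OM mean item scores before and after therapy.
print(classify(pre=2.1, post=0.9, sd_pre=0.75, reliability=0.94, cutoff=1.19))
```

Counting clients in each category per service, with confidence intervals as in the earlier sketch, gives the kind of service-level RC and CSC proportions being compared here.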
Can automation of data processing help bridge the gap? • Neither researchers nor practitioners know much about the generalisability of “strong causal inference” to routine practice • Need practice to come out of the confidentiality closet without harming true confidentiality • Very, very few services currently collect routine outcome data • Few services link with other services to compare practices and data • Few services have strong links to researchers to help understand data • Need to bridge these gaps: if we make data easier to handle it might help!
Automation (1): batch route • Facilitates some distancing from the data • Data analyses done by researchers and experts in analysis and data handling • Reports (30+ pages) well received • Can explore site specific issues
Automation (2): CORE-PC “The clinical and reliable change graph is invaluable. As a service manager it gives me instant access to where we can look to improve our service provision”
Automation (2): CORE-PC “I never realised that writing a report could be so simple, all I need to do is copy the tables I need from CORE-PC, paste them in Word, and write my interpretations.”
Automation (2): CORE-PC • Allows services to get much “nearer” to their data • Should prevent some data entry errors • Should increase data completeness • May mean that service clinicians and managers feel uncertain about how to analyse and interpret their data… • … so they will need training and support
http://www.psyctc.org/stats/Weimar (not until Monday 30.vi.03!)