HL7 Case Study: Lessons Learned Implementing Integrated Pathology and Radiology Requests & Results Reporting • HL7 Road Shows, March 2009 • Carl Adler, Integration Architect, WCI Healthcare • carl.adler@wcihealthcare.com
Agenda • Background • Solution • HL7 V2.x instead of V3 • Lessons Learned • Roll Your Own? • Q&A
Background • Large London teaching hospitals trust • Approximately 2 million requests/year (Path/Rad), peaking at around 700 requests/hour • Live on R0 CRS • Pre-CRS solution supported R&RR interfaces with WinPath and Rad Centre • Needed CRS PAS – R&RR integration
Solution • There’s nothing “sexy” about integrated, message-based R&RR • But… without it, all the romance of providing high-quality patient care fizzles out • More error prone • Less efficient • Also, there’s nothing “sexy” about HL7 V2.x – but friendship is usually the best basis for a relationship
HL7 V2.3 versus V3? • There is a significant amount of activity around the world designing HL7 messages (e.g. CFH and HL7 itself) • HL7 V3 message designers and implementers can draw on a world of XML tools • However, Path and Rad systems don’t yet support V3 Orders and Results messages • ORM/ORU messages are largely mature and well understood (a minimal V2.3 example follows below) • The really important issues associated with integration are protocol independent
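To make the ORM/ORU point concrete, here is a minimal sketch in Python of a V2.3 ORM^O01 (new order) message of the kind a PAS might send to a pathology system. The application names, identifiers and test code are illustrative assumptions, not the trust’s actual configuration.

```python
# Minimal sketch of an HL7 V2.3 ORM^O01 (new order) message as it might flow
# from a PAS to a pathology or radiology system. All application names,
# identifiers and test codes are illustrative, not real configuration.
from datetime import datetime

def build_orm_o01(patient_id, surname, forename, placer_order_id, test_code, test_name):
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    msg_control_id = f"MSG{ts}"   # unique per message in a real interface
    segments = [
        # MSH: sending/receiving applications, timestamp, message type, HL7 version
        f"MSH|^~\\&|CRS_PAS|EXAMPLE_TRUST|WINPATH|PATH_LAB|{ts}||ORM^O01|{msg_control_id}|P|2.3",
        # PID: patient identification (hospital number and name)
        f"PID|1||{patient_id}^^^EXAMPLE_TRUST||{surname}^{forename}",
        # ORC: common order segment; 'NW' = new order
        f"ORC|NW|{placer_order_id}",
        # OBR: the requested investigation (local code ^ description ^ coding system)
        f"OBR|1|{placer_order_id}||{test_code}^{test_name}^LOCAL",
    ]
    return "\r".join(segments)   # HL7 V2 segments are separated by carriage returns

print(build_orm_o01("H1234567", "SMITH", "JOHN", "ORD000123", "FBC", "Full Blood Count"))
```

The same pipe-and-caret structure carries the ORU^R01 result back; the hard part is agreeing the content of the fields, not the syntax.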
Lessons Learned – The Really Big Issues • Workflow – supporting the needs of differing departments and the business • Defines the integration need and how interfaces are driven • Numbers (episode, accession, order, hospital, etc.) • Reference data • Varies everywhere – a fact of life • Service management – getting it going and keeping it going
Workflow Analysis • This needs to be the first step in all integration projects: a request isn’t a request isn’t a request • Integration isn’t merely interfacing • Business and clinical processes determine integration requirements • Use cases are a manifestation of those processes • Workflows are the expression of the use cases • Trigger events are generated during the execution of workflows
Workflow Analysis • HL7 helps because it is a mature standard and much of the information and many of the rules are already covered • But the devil is in the local detail • e.g. microbiology needs a different messaging solution to histology • e.g. cancel • What does it mean to “cancel” an order? • When does/can/should it occur? • What impact does it have on the integration layer? • e.g. cancel versus discontinue (same questions – see the sketch below)
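As one way of picturing where those questions land, the sketch below shows an integration layer branching on the HL7 ORC-1 order control code (“NW” new order, “CA” cancel request, “DC” discontinue request). The status checks and resulting actions are illustrative assumptions; the real rules have to come out of the workflow analysis for each discipline.

```python
# Minimal sketch: branching on the ORC-1 order control code to apply locally
# agreed "cancel" versus "discontinue" rules. The order-status values and the
# actions returned here are assumptions for illustration only.

def handle_order_control(message_fields: dict) -> str:
    control = message_fields.get("ORC-1")         # e.g. "NW", "CA", "DC"
    status = message_fields.get("order_status")   # local status, e.g. "placed", "collected"

    if control == "NW":
        return "create order in downstream system"
    if control == "CA":
        # Locally, "cancel" may only be allowed before the sample is collected
        # or the examination is performed.
        if status == "placed":
            return "remove order; confirm cancellation to placer"
        return "reject cancel; order already in progress"
    if control == "DC":
        # "Discontinue" stops further work but keeps what has already been done.
        return "stop outstanding work; retain completed results"
    return "raise alert: unsupported order control code"

print(handle_order_control({"ORC-1": "CA", "order_status": "collected"}))
```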
Workflow Analysis – Some Questions • Must a trust change its business to support its own integration requirements? • Are the business drivers for integration aligned with agreed best clinical practice? • For all disciplines? • Do compromises introduced in the design of the integration impact patient safety? • This is really about error handling (or the lack of it)
Numbers – Dynamic Identifiers • Every number generated by the PAS to identify some healthcare “thing” has a range, a format and sometimes even a meaning • Systems communicating these numbers must agree on range, format and meaning (a format-check sketch follows below) • Examples: • Hospital Patient ID • Episode/Encounter number • Order ID • Accession Number
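Here is a minimal sketch of the kind of format check an integration layer might apply before posting identifiers downstream. The regular expressions are purely illustrative assumptions; each pair of systems has to agree its own ranges and formats.

```python
# Minimal sketch: validating identifier formats carried in order/result messages
# before posting them downstream. The patterns are illustrative assumptions.
import re

ID_PATTERNS = {
    "hospital_patient_id": re.compile(r"^[A-Z]\d{7}$"),   # assumed format, e.g. H1234567
    "episode_number":      re.compile(r"^\d{10}$"),        # assumed 10-digit episode number
    "order_id":            re.compile(r"^ORD\d{6,}$"),     # assumed placer order number
    "accession_number":    re.compile(r"^\d{2}-\d{6}$"),   # assumed lab accession format
}

def invalid_identifiers(ids: dict) -> list:
    """Return the names of identifiers that do not match the agreed format."""
    return [name for name, value in ids.items()
            if name in ID_PATTERNS and not ID_PATTERNS[name].match(value)]

print(invalid_identifiers({"hospital_patient_id": "H1234567",
                           "accession_number": "2009/123456"}))
# -> ['accession_number']
```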
Reference Data – Static Identifiers • Data about data (e.g. test names, “yes”/“no”, etc.), typically a code and a textual description • Regulatory Scope • National, Cluster and Local • Business process changes (i.e. changing the local name) • Mapping in the integration layer between local and national names • Reference data is tied to business/clinical processes • e.g. a bone scan requires a contrast media injection, but the injection isn’t a “Nuke Med” orderable on the PAS • The RIS catalogue includes the injection; messages triggered in the context of the injection will fail to post on the PAS
Reference Data Management • Initial synchronisation of PAS and downstream system (a comparison sketch follows below) • Test systems, test data and go-live transitions • Reference data updates • All systems must implement at the same time; or • The organisation must be prepared to deal with mismatches until all systems are updated • Reference data must be managed from the outset
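A minimal sketch of an initial synchronisation check between a PAS order catalogue and a downstream catalogue (here a RIS). The catalogue contents, including the contrast-injection item from the previous slide, are illustrative assumptions.

```python
# Minimal sketch: comparing a PAS order catalogue with a downstream catalogue
# to surface items that will fail to post. Codes and descriptions are examples.

pas_catalogue = {"FBC": "Full Blood Count", "BONE": "Bone Scan"}
ris_catalogue = {"BONE": "Bone Scan", "BONEINJ": "Bone Scan Contrast Injection"}

def compare_catalogues(pas: dict, downstream: dict) -> dict:
    """Report codes known on only one side, and descriptions that differ."""
    return {
        "missing_on_pas": sorted(set(downstream) - set(pas)),
        "missing_downstream": sorted(set(pas) - set(downstream)),
        "description_mismatch": sorted(
            code for code in set(pas) & set(downstream) if pas[code] != downstream[code]
        ),
    }

print(compare_catalogues(pas_catalogue, ris_catalogue))
```

Items reported as missing on the PAS (such as the injection here) will keep failing to post until the catalogues are re-synchronised or a mapping is agreed.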
Service Management – Designing for Reliability • SLAs • Interfaces fail – how quickly do we know? How quickly can we fix them? • Queue lengths at peak times – affect delivery of messages (see the queue-check sketch below) • Clinical criticality • How do ordering physicians know the order has been placed? • Solution management • Who monitors? • Who fixes? • How?
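As an illustration of the monitoring side, here is a minimal sketch of a queue-depth check against an agreed threshold. The threshold and queue names are assumptions for the example, not figures from any real SLA.

```python
# Minimal sketch: flag interface queues whose depth exceeds an agreed threshold.
# The threshold and queue names are illustrative assumptions.

PEAK_QUEUE_THRESHOLD = 500   # assumed: alert if more than 500 messages are waiting

def check_queues(queue_depths: dict) -> list:
    """Return an alert string for each queue exceeding the threshold."""
    return [f"ALERT: {name} has {depth} messages queued (threshold {PEAK_QUEUE_THRESHOLD})"
            for name, depth in queue_depths.items() if depth > PEAK_QUEUE_THRESHOLD]

for alert in check_queues({"pas_to_winpath_orders": 120,
                           "radcentre_to_pas_results": 730}):
    print(alert)
```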
Service Management – Designing for Reliability • Alerting • Which component “knows” something is wrong? • What types of problem generate alerts? • Controlling alert overload • Where do alerts go? • Remedial actions (see the sketch below) • Simple – try resending • Reference data mismatch – fix and resend • Message corruption – usually source data related – cancel and retry after fixing • Message failure dependencies • What suffers when a message fails?
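A minimal sketch of routing a failed message to one of the remedial actions above. The error categories and retry limit are illustrative assumptions; a real interface engine exposes its own error codes and queues.

```python
# Minimal sketch: choose a remedial action for a failed message based on an
# assumed error classification. Categories and retry limit are illustrative.

def remedial_action(error_type: str, attempts: int, max_retries: int = 3) -> str:
    if error_type == "transient" and attempts < max_retries:
        return "resend"                                  # simple case: just try again
    if error_type == "reference_data_mismatch":
        return "hold for data fix, then resend"          # fix mapping/reference data first
    if error_type == "message_corruption":
        return "cancel and re-trigger at source"         # usually source data related
    return "escalate to support"                         # anything else needs a human

print(remedial_action("transient", attempts=1))
print(remedial_action("reference_data_mismatch", attempts=1))
```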
Service Management – Designing for Reliability • Fail-safe designs • No such thing – don’t let the perfect get in the way of the good • e.g. map to a safe default if the data supplied is not in the mapping table (sketched below) • End-to-end views • As far as possible, each component is instrumented
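A minimal sketch of the safe-default idea for a priority code mapping, with the fallback recorded so the end-to-end view shows that a default was applied. The mapping table and the chosen default are illustrative assumptions.

```python
# Minimal sketch: map a local priority code to a downstream value, falling back
# to a safe default (and logging the fallback) when the code is unmapped.
# The table contents and the chosen default are illustrative assumptions.

PRIORITY_MAP = {"R": "ROUTINE", "U": "URGENT", "S": "STAT"}
SAFE_DEFAULT = "ROUTINE"   # assumed safe default agreed with the business

def map_priority(local_code: str, audit_log: list) -> str:
    if local_code in PRIORITY_MAP:
        return PRIORITY_MAP[local_code]
    # Instrument the fallback so the end-to-end view shows a default was applied.
    audit_log.append(f"priority '{local_code}' not in mapping table; defaulted to {SAFE_DEFAULT}")
    return SAFE_DEFAULT

log = []
print(map_priority("Z", log))   # -> ROUTINE
print(log)
```

The design choice is to keep the message flowing rather than fail it outright, while leaving an auditable trace of every default applied.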
Roll Your Own? • Can you implement your own HL7-based integration? • Yes, of course. But… • Know your integration requirements early • These sorts of projects can take on a life of their own • Sometimes it is better to outsource in order to ring-fence and share risk • If business practices have to change you can always blame “those damn consultants” • There are relatively few HL7 V2.x tools
Conclusions • The administrative and clinical business of the hospital defines the requirements for integration • HL7 interfaces are one of the mechanisms for implementing integration • HL7 V2.x will be around for the foreseeable future • In order to integrate, end systems must share a common “understanding” of dynamic and static data • What can go wrong will go wrong – solution design must account for errors and error recovery • There are people out there who can make it a little easier
Q&A • “Traditional scientific method has always been, at the very best, 20-20 hindsight. It’s good for seeing where you’ve been. It’s good for testing the truth of what you think you know, but it can’t tell you where you ought to go.” – Robert Pirsig, Zen and the Art of Motorcycle Maintenance