Use of routinely collected service delivery and M&E indicator data for timely feedback
Denis Nash, PhD, MPH
Associate Professor of Epidemiology
Director, ICAP M&E Unit
Mailman School of Public Health, Columbia University, NYC, USA
dn2145@columbia.edu
Common M&E Challenges in scale-up (1)
• Large number of sites, with relevant info residing with multiple individuals
  • e.g., sites, districts, partner country teams, partner HQ, etc.
• Increasingly complex array of services to report on/evaluate
• Collection, management, and use of indicator data within country
• Traditionally siloed areas of reporting for program activities that are integrated at the site level
  • e.g., care and treatment, PMTCT, TB/HIV, testing & counseling
  • Separate M&E reports for each program area
  • Comprehensive program evaluation? Triangulation?
  • MOH vs. donor reporting requirements
• Many important aspects of implementation and program quality not captured in conventional, routinely collected M&E indicators
  • Generally, M&E systems do not take context into account
Common M&E Challenges in scale-up (2)
• Providing timely data processing and feedback of information to implementation staff for program improvement
  • National level (i.e., technical and management staff, IPs)
  • District level
  • Site level (and below)
  • Program improvement ultimately happens, and most often starts, at the site level
• Integrated data management
  • An adequate database to house M&E indicator data is essential
  • Capture/store/process/utilize reported data in a streamlined and efficient way
  • Dynamic and flexible to accommodate changes in indicators (one such design is sketched after this slide)
• Data quality
  • Missing or incomplete data
  • Incorrect data
• Demand for indicators that reflect quality of care/program
  • M&E indicators do not typically measure quality of care/program
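As an illustration of the "dynamic and flexible" point above, the sketch below shows one common way to store indicator data in a "long" format (one row per site, period, and indicator), so that adding a new indicator does not require changing the database schema. Table, column, and indicator names here are hypothetical examples, not ICAP's actual URS design.

```python
import sqlite3

# Minimal sketch of a "long" indicator store: one row per site-period-indicator.
# New indicators become new rows, not new columns, so the schema is stable.
# All names and values below are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE indicator_values (
        site_id     TEXT NOT NULL,   -- facility identifier
        period      TEXT NOT NULL,   -- reporting quarter, e.g. '2010Q1'
        indicator   TEXT NOT NULL,   -- indicator code, e.g. 'ART_CURRENT'
        value       REAL,            -- reported count or percentage
        reported_on TEXT,            -- date the report was received
        PRIMARY KEY (site_id, period, indicator)
    )
""")

# A newly introduced indicator is simply inserted as additional rows.
conn.executemany(
    "INSERT INTO indicator_values VALUES (?, ?, ?, ?, ?)",
    [
        ("SITE-001", "2010Q1", "ART_CURRENT", 512, "2010-04-10"),
        ("SITE-001", "2010Q1", "TB_SCREENED_PCT", 87.5, "2010-04-10"),
    ],
)
conn.commit()
```

By contrast, a "wide" table with one column per indicator would require a schema change, and corresponding changes to data-entry and reporting tools, every time the indicator set is revised.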
Feeding data back to programs in the form of information
• Scale-up, the sheer number of sites, and geographic spread make regular and timely feedback challenging, especially at the site level
• Need for information at multiple levels
  • For implementation teams at national and district levels
    • On which sites should scarce mentoring and implementation support resources be focused? (a prioritization sketch follows this slide)
    • Are efforts to maximize quality of care having an impact?
  • For site staff
    • How is our site doing? Where can we improve?
    • Are our efforts to improve things working?
• Difficult to do without some form of automation (e.g., reports) and decentralization of information (e.g., web-based)
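To illustrate the "which sites to focus on" question, the following sketch ranks sites on a routinely reported indicator and flags those above a threshold. The data, indicator names, and threshold are hypothetical and for illustration only.

```python
import pandas as pd

# Illustrative sketch: flag sites where mentoring and implementation support
# might be prioritized, based on a routinely reported indicator.
# All site IDs, values, and the threshold are made up for the example.
reported = pd.DataFrame({
    "site_id":  ["SITE-001", "SITE-002", "SITE-003", "SITE-004"],
    "district": ["North",    "North",    "South",    "South"],
    "ltf_pct":  [8.0,        23.5,       31.0,       12.0],  # % lost to follow-up
})

LTF_THRESHOLD = 20.0  # hypothetical action threshold

# Sites above the threshold, worst first, grouped by district for the team.
flagged = (reported[reported["ltf_pct"] > LTF_THRESHOLD]
           .sort_values("ltf_pct", ascending=False))
print(flagged[["district", "site_id", "ltf_pct"]].to_string(index=False))
```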
Number of sites by country, as of March 31, 2010
• Total number of sites supported by ICAP: 1,219
• [Bar chart: number of sites by country]
• Source: ICAP Site Census, March 2010
Examples of feedback tools used by ICAP
• Mainly aimed at providing feedback from ICAP-NY to ICAP country teams on reported data
• But some tools can also be used to feed data back to districts and sites
• Examples:
  • ICAP URS dashboards and reports
  • Maps (static and interactive)
  • PFaCTS reports
  • Quarterly eUpdate
  • Patient-level data reports
ICAP patient-level data warehouse elements
• Enrollment Table: basic demographic information (age, sex), enrollment date, prior ARV use, point of entry, transfer
• Visit Table: visit date, WHO stage, height, weight, Hb, ALT, next scheduled visit date
• CD4 Table: CD4 test date, CD4 count, CD4 percent
• ART Table: ART regimen, regimen start & end dates, reason(s) for switching ART regimen
• Medication Table: TB screening date and result, TB medication reason (treatment or prophylaxis) and dates, CTX & fluconazole
• Pregnancy Table: visit date, weeks gestation at visit, due date, actual pregnancy end date
• Status Table: patient disposition status (dead, transferred, withdrew, LTF, stopped ART, etc.) and status date
• Baseline data: 1 row per patient; follow-up data: 1 row per measure per patient (a simplified schema sketch follows this slide)
• Measures at key points of interest (e.g., enrollment, ART initiation) are calculated based on visit dates
• Databases are anonymized using an automated tool; data use is governed by MOH-approved protocols
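The sketch below expresses a few of the elements listed above as a simplified relational schema: one baseline row per patient plus longitudinal tables with one row per measure per patient. Column names are illustrative approximations of the listed elements, not the actual ICAP data warehouse definitions; patient_id stands in for an anonymized identifier.

```python
import sqlite3

# Simplified sketch of the patient-level structure described above.
# Names are illustrative, not ICAP's actual warehouse definitions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE enrollment (          -- baseline: 1 row per patient
        patient_id      TEXT PRIMARY KEY,
        age             INTEGER,
        sex             TEXT,
        enrollment_date TEXT,
        prior_arv_use   TEXT,
        point_of_entry  TEXT
    );
    CREATE TABLE visit (               -- follow-up: 1 row per visit per patient
        patient_id      TEXT REFERENCES enrollment(patient_id),
        visit_date      TEXT,
        who_stage       INTEGER,
        weight_kg       REAL,
        next_visit_date TEXT
    );
    CREATE TABLE cd4 (                 -- follow-up: 1 row per CD4 test per patient
        patient_id      TEXT REFERENCES enrollment(patient_id),
        cd4_test_date   TEXT,
        cd4_count       INTEGER,
        cd4_percent     REAL
    );
""")
```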
Patient-level data feedback reports
• Multi-site feedback reports
  • Combine and compare data across multiple sites
  • One for adult patients and one for pediatric patients
• Site-specific feedback reports
  • General feedback report: summary of information on currently enrolled patients
  • Standards of care (SOC) report: quality of care indicators
• Reports are:
  • 100% automated and in PDF format (a generation sketch follows this slide)
  • Generated and shared with sites within two weeks of submission of the database
  • Currently generated in NYC at ICAP HQ
• Report generation tools can be deployed, owned, and maintained by MOHs where capacity exists or where it can be developed
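The following is a minimal sketch of the automated site-specific report idea: query a submitted patient-level database and write a short per-site summary. It assumes a hypothetical enrollment table with site_id and art_start_date columns, and writes plain text to keep the example self-contained; the actual ICAP reports are PDFs produced by a different toolchain.

```python
import sqlite3
from datetime import date

# Illustrative sketch of an automated site-specific feedback report.
# Table and column names (enrollment, site_id, art_start_date) are assumptions
# for this example, not ICAP's actual database structure.
def write_site_report(db_path: str, site_id: str, out_path: str) -> None:
    conn = sqlite3.connect(db_path)
    enrolled, on_art = conn.execute(
        """SELECT COUNT(*),
                  SUM(CASE WHEN art_start_date IS NOT NULL THEN 1 ELSE 0 END)
           FROM enrollment WHERE site_id = ?""",
        (site_id,),
    ).fetchone()
    conn.close()

    with open(out_path, "w") as f:
        f.write(f"Site feedback report: {site_id} ({date.today()})\n")
        f.write(f"Enrolled patients: {enrolled}\n")
        f.write(f"Patients ever started on ART: {on_art or 0}\n")

# Example call (hypothetical file and site names):
# write_site_report("site_db.sqlite", "SITE-001", "SITE-001_feedback.txt")
```

Because the summary is generated entirely from the submitted database, the same script can be run for every site each quarter, which is what makes feedback within two weeks of submission feasible as the number of sites grows.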
Conclusions
• Timely feedback and dissemination of routinely collected service data and M&E data is an increasing challenge, especially as the number of sites increases (i.e., scale-up)
  • National, district, site, IPs
• Database tools, automation, and decentralization of information are critical
  • They improve data quality and the utility of information
• Capacity building on interpreting and applying disseminated data to program improvement is needed