
Impact Assessment Monitoring & Evaluation for e-Government Projects

Subhash Bhatnagar. As part of the Capacity Building Workshop under the Joint Economic Research Program (JERP).


Presentation Transcript


  1. Impact Assessment Monitoring & Evaluation for e-Government Projects. Subhash Bhatnagar. As part of the Capacity Building Workshop under the Joint Economic Research Program (JERP).

  2. This session will focus on the need for assessing the impact of e-government projects and will describe a methodology for how such an assessment can be carried out. Results of an impact assessment study sponsored by the World Bank will be discussed in detail, illustrating the methodology and the value that can be derived from such assessments. Some of the pitfalls that should be avoided in making assessments will be described.

  3. Presentation Structure • Why assess impact? • Learning from past work on assessment • Proposed Methodology • Results from a Bank Study of 8 projects • Study objectives • Projects covered in the study • Analysis of Results • Are investments in eGovernment worthwhile? • Lessons for assessment work

  4. Why Impact Assessment? • To ensure that funds deployed in eGovernment provide commensurate value • To create a benchmark for future projects to target • To identify successful projects for replication and scaling up • To sharpen goals and targeted benefits for each project under implementation • To make mid-course corrections for projects under implementation • To learn the key determinants of economic, organizational, and social impact from successful and failed projects

  5. Evaluation of Impact: Key Issues • Macro versus micro approach: the unit of analysis • Assessment from whose perspective? • Dimensions on which impact can be assessed for different stakeholders • Can all costs and benefits be monetized? • How to isolate the effect of ICT use from other interventions? • Degree of quantification versus qualitative assessment • Measurement issues: sampling, questionnaire design, analysis of internal data, triangulation

  6. Learning from Past Assessments • A variety of approaches has been used: client satisfaction surveys, expert opinion, ethnographic studies • Client satisfaction survey results can vary over time as the benchmark changes, hence the need for counterfactuals • Studies have often been done by agencies that may be seen as interested in showing positive outcomes • Lack of credibility of results: different studies of the same project show very different outcomes • Lack of rigor in sampling: results cannot be easily generalized • Lack of rigor in controlling for external influences: the need for counterfactuals is ignored • Lack of a standard methodology, making it difficult to compare projects • Hardly any projects do a benchmark survey

  7. Critique of Existing Frameworks • Biased towards quantification of short-term direct cost savings; quality of service, governance, and wider impacts on society are not studied • Conceptual in nature: hardly any frameworks have been applied to assess the impact of real projects • Variety in delivery models has not been recognized, although impact is a function of the delivery model and the nature of the clients being served • Practical issues of paucity of data have not been taken into account, particularly in a developing-country context where baseline surveys are not done and M&E systems are weak

  8. Measurement Framework

  9. Proposed Framework • Focuses on retrospective assessment of e-delivery systems (B2C and B2B) • Takes a balanced approach between case study and quantitative analysis • Recognizes that some part of the value to different stakeholders cannot be monetized • Seeks to understand how inputs lead to outputs and outcomes in different project contexts • Offers a practical methodology that can be used for designing benchmark surveys, M&E systems, and prospective evaluations of projects in countries with varied delivery models and a paucity of data

  10. Methodology for Assessment • Select mature projects of e-delivery of services with wide scope and scale • Collect data through a structured survey of clients, employees, and supervisors using counterfactuals (for the old non-computerized delivery system and the new e-delivery system) • Customize the survey instrument to each project and adapt it to the local language • Data can be collected through Internet surveys, face-to-face interviews, and focus groups • Use professional market research agencies with trained investigators for face-to-face interviews • Determine the sample frame and size so that results can be extrapolated to the entire population (often 300 clients may be sufficient); select respondents randomly from locations stratified by activity levels and remoteness (see the sketch after this slide) • Collect data on investments, operating costs, activity levels, revenues, and employee strength from agencies • Develop a case study: organizational context, process reform, change management
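A minimal sketch, not from the deck, of how such a sample could be sized and drawn. Cochran's formula for estimating a proportion is one standard way to justify a sample in the region of the 300 clients the slide mentions; the location names and strata below are invented for illustration.

```python
import math
import random
from collections import defaultdict

def sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's sample-size formula for a proportion in a large population."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size(e=0.05))  # 385 respondents at 95% confidence, +/-5% margin
print(sample_size(e=0.06))  # 267 -- in the region of the ~300 the slide cites

# Hypothetical service locations tagged by activity level and remoteness.
locations = [
    ("Centre-01", "high", "urban"), ("Centre-02", "high", "remote"),
    ("Centre-03", "low", "urban"), ("Centre-04", "low", "remote"),
    ("Centre-05", "high", "urban"), ("Centre-06", "low", "remote"),
]

# Group locations into strata, then pick survey sites randomly within each,
# so that busy/quiet and urban/remote centres are all represented.
strata = defaultdict(list)
for name, activity, remoteness in locations:
    strata[(activity, remoteness)].append(name)

random.seed(7)
sites = {stratum: random.choice(names) for stratum, names in strata.items()}
print(sites)
```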

  11. A Study Sponsored by the World Bank, Done by the Indian Institute of Management Ahmedabad and the London School of Economics: Preliminary Results from Projects in India

  12. Study Team • Study Coordinator: Subhash Bhatnagar • Indian Institute of Management, Ahmedabad (IIMA): Subhash Bhatnagar, Rama Rao, Nupur Singh, Ranjan Vaidya, Mousumi Mandal • London School of Economics: Shirin Madon, Matthew Smith • ISG e-Gov Practice Group: Deepak Bhatia, Jiro Tominaga • Sponsors: World Bank, IIMA, Department of IT

  13. Projects of e-delivery of Services • Issue of land titles in Karnataka (Bhoomi): 180 kiosks, launched February 2001 • Property registration in Karnataka (Kaveri): 230 offices (launched March 2003) • Computerized Treasury (Khajane): 240 locations (November 2002) • Property registration in Andhra Pradesh: 400 offices (November 1998) • eSeva centers in Andhra Pradesh: 250 locations in 190 towns, used monthly by 3.5 million citizens (August 2001) • e-Procurement in Andhra Pradesh (January 2003) • Ahmedabad Municipal Corporation (AMC): 16 Civic Service Centers (September 2002) • Inter-State Check Posts in Gujarat: 10 locations (March 2000) • e-Procurement in Chile (Comprasnet) • Income tax on-line in Chile

  14. Dimensions to be Studied to Evaluate Impact • Project context: basic information on the project and its setting • Inputs: technology, human capital, financial resources • Process outcomes: reengineered processes, shortened cycle time, improved access to data and analysis, flexibility in reports • Customer results: service coverage, timeliness and responsiveness, service quality, and convenience of access • Agency outcomes: transparency and accountability, less corruption, administrative efficiency, revenue growth, and cost reduction • Strategic outcomes: economic growth, poverty reduction, and achievement of the MDGs • Organizational processes: institutional arrangements, organizational structure, and other government reform initiatives that might have influenced the outcome of the ICT project

  15. Profile of Respondents

  16. Improvement Over Manual System

  17. Savings in Cost to Customers: Estimates for the Entire Client Population
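One plausible way to extrapolate sampled per-transaction savings to the entire client population, as the slide title suggests, is to scale the sample mean by the agency's annual transaction volume; a minimal sketch in which every figure is invented for illustration:

```python
import statistics

# Hypothetical per-transaction savings (in rupees) reported by sampled
# clients: reduced travel, waiting time valued at wage rates, bribes avoided.
sample_savings = [35, 50, 20, 45, 40, 30, 55, 25]
annual_transactions = 1_200_000   # hypothetical volume from agency records

mean = statistics.mean(sample_savings)
sem = statistics.stdev(sample_savings) / len(sample_savings) ** 0.5
low, high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"Estimated annual client savings: Rs {mean * annual_transactions:,.0f}")
print(f"95% CI: Rs {low * annual_transactions:,.0f} "
      f"to Rs {high * annual_transactions:,.0f}")
```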

  18. Projects: Descending Order of Improvement in Composite Scores on a 5-point Scale

  19. Descending Order of Post-Computerization Composite Scores on a 5-point Scale
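A minimal sketch of what the two rankings above might involve, assuming the composite score is a simple average of client ratings across service attributes and that improvement is measured against the manual counterfactual; the attribute names and ratings are invented:

```python
# Hypothetical client ratings (1-5) for the manual and computerized systems.
manual = {"waiting_time": 2.1, "error_rate": 2.5, "staff_courtesy": 2.8, "cost": 2.3}
computerized = {"waiting_time": 4.2, "error_rate": 4.0, "staff_courtesy": 3.9, "cost": 4.1}

def composite(ratings: dict) -> float:
    """Average the attribute ratings into a single score on a 5-point scale."""
    return sum(ratings.values()) / len(ratings)

post = composite(computerized)
print(f"Post-computerization composite: {post:.2f} / 5")                   # 4.05
print(f"Improvement over manual system: {post - composite(manual):+.2f}")  # +1.62

# Projects would then be ranked in descending order of these scores.
```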

  20. Client Perception (Rating on a 5-point Scale in AMC)

  21. Top Four Attributes Desired in the Application

  22. Impact on Agency

  23. Agency: Growth of Tax and Transaction Fee

  24. Economic Viability of Projects: Agency Perspective

  25. Attitude to e-Government

  26. Preliminary Observations • Overall impact • Significant positive impact on the cost of accessing services • Variability across different service centers of a project • Strong endorsement of e-Government, but an indirect preference for private participation • Reduced corruption: the outcome is mixed and can be fragile • Any type of system breakdown leads to corruption • Agents play a key role in promoting corruption • Private operators also exhibit rent-seeking behavior given an opportunity • Systematizing queues by appointments helps prevent breakdowns • Small improvements in efficiency can trigger a major positive change in perception of the quality of governance • Challenges • No established reporting standards for public agencies; in the case of treasuries, the AG office has more information on outcomes • What is the benchmark for evaluation: improvement over the manual system, rating of the computerized system (a moving target), or potential? • Measuring what we purport to measure: design of questions, training, pre-tests, field checks, triangulation • Public agencies are wary of evaluation, making it difficult to gather data

  27. Questionnaire Design and Survey • Design the analytical reports prior to the survey; key variables can be missed if the nature of the analysis is not thought through before the study • Pre-code as many items in the questionnaire as possible • Use consistent coding for scales representing high versus low or positive versus negative perceptions (see the sketch after this slide) • Use differently worded questions to measure key items/perceptions • Word questions appropriately for the skill level of the interviewer and the educational level of the respondent • Translate locally, using colloquial terms • Feedback from pre-testing of the questionnaire should be discussed between the study team and investigators; it may cover the length of the questionnaire, the interpretation of each question, and the degree of difficulty in collecting sensitive data • Quality of supervision by the market research agency is often much worse than specified in the proposal; assessing the quality of investigators is a good idea • Involve the study team in the training of investigators • Physical supervision of the survey process by the study team is a good idea, even if done selectively
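To illustrate the consistent-coding point, a minimal sketch that reverse-scores negatively worded 5-point items so that 5 always represents the positive end of every scale; the item names are hypothetical:

```python
# Negatively worded items (hypothetical) where a high raw rating is bad.
REVERSED = {"q7_bribe_demanded", "q12_trips_required"}

def normalize(item: str, raw: int) -> int:
    """Return the rating recoded so that 5 always means 'good'."""
    assert 1 <= raw <= 5, f"out-of-range code for {item}: {raw}"
    return 6 - raw if item in REVERSED else raw

response = {"q1_overall_quality": 4, "q7_bribe_demanded": 2}
coded = {item: normalize(item, raw) for item, raw in response.items()}
print(coded)  # {'q1_overall_quality': 4, 'q7_bribe_demanded': 4}
```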

  28. Establishing Data Validity • Check extreme values in data files for each item, and unacceptable values for coded items • Cross-check the data recorded for extreme values against the questionnaire • Check for abnormally high values of the standard deviation • Even when a code is provided for missing values, missing values can be confused with a legitimate value of zero • Look for logical connections between variables, such as travel mode and travel time, or bribes paid and corruption • Poor data quality can often be traced to specific investigators or locations • Randomly check for data-entry problems by comparing data from questionnaires with a printout of the data files • Complete data-validity checks before embarking on the analysis (a sketch of such checks follows)
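A minimal sketch of how several of these checks could be scripted, assuming the coded responses sit in a flat file; the file name, column names, and the missing-value code 99 are all hypothetical:

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")   # hypothetical file of coded responses
MISSING = 99                               # hypothetical data-entry code for 'missing'

# 1. Unacceptable values for coded 5-point items.
rating_cols = [c for c in df.columns if c.startswith("q")]
bad = df[rating_cols].apply(lambda s: ~s.isin([1, 2, 3, 4, 5, MISSING]))
print("Out-of-range codes per item:\n", bad.sum())

# 2. Keep the missing-value code distinct from a legitimate zero.
df[rating_cols] = df[rating_cols].replace(MISSING, float("nan"))

# 3. Abnormally high standard deviations flag suspect items.
print(df[rating_cols].std().sort_values(ascending=False).head())

# 4. Logical consistency: a bribe paid should not coexist with a
#    'no corruption at all' rating.
inconsistent = df[(df["bribe_paid"] > 0) & (df["corruption_rating"] == 1)]
print(f"{len(inconsistent)} logically inconsistent records")

# 5. Trace poor quality back to specific investigators or locations.
print(bad.sum(axis=1).groupby(df["investigator_id"]).sum()
         .sort_values(ascending=False).head())
```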
