Service Quality Assessment in Research Libraries: The Guelph/Waterloo ARL LibQUAL Projects Mark Haslett Ron MacKinnon Susan Routliffe February 1, 2002
Structure of today’s session • Part 1: History & context of LibQUAL • Part 2: Local administration • Part 3: Local results • Questions
Part 1 History & context of LibQUAL
ARL New Measures Initiative • At the October 1999 Membership Meeting, the ARL New Measures Initiative was established in response to the following two needs: • Increasing demand for libraries to demonstrate outcomes/impacts in areas important to the institution. • Increasing pressure to maximize use of resources by benchmarking best practices to save or reallocate resources.
ARL New Measures Initiative • Assessing outcomes important to students and faculty • Maximizing access to information resources • Benchmarking best practices • Improving services • Reallocating resources
ARL New Measures Initiative • Higher education outcomes research review • LibQUAL (measures for library service quality) • Investigation of cost drivers (e.g. technical services cost study) • Development of a self-assessment guide for measuring performance of ILL/DD • E-Metrics (measures for electronic resources)
Some more context • Traditional focus on inputs • “Research libraries have always placed value in describing their… resources & services.” • Strong history of statistical data collection
Research libraries searching for improved measures • Past practice equated use with value and quantity with quality • Resulted in focus on tonnage • But what about the outcome value for faculty & students?
Research libraries searching for improved measures New strategic objective for ARL: “the need for alternatives to expenditure metrics as measures of library performance…”
We need to listen to our users • In order to help “describe and measure the performance of research libraries …” • Such listening should provide opportunities to: • Develop & revise our services • Use our information resources effectively • Provide for continuous assessment & accountability
Multiple methods of listening • Active listening • Complaints and suggestions • Focus groups • Students & faculty on committees • Web usability studies • Surveys
ARL’s LibQUAL proposal • A web survey instrument • Identify user expectations -- and user perceptions of how they're met • Not a forecasting or predictive tool • Not a ranking tool
LibQUAL goals • Establish a library service quality assessment program at ARL • Develop web-based tools for assessing library service quality • Develop mechanisms and protocols for evaluating libraries • Identify best practices in providing library service
LibQUAL • A research and development project • Aim is to have a mature web survey instrument within 4 to 5 years • Focus on client expectations and perceptions
LibQUAL’s origins • Based on SERVQUAL survey instrument • Developed in the 1980s for use in the for-profit sector • Utilizes gap theory to measure service quality • “Only the perceptions of the customers matter.”
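In gap-theory terms, service quality for an item is scored as the difference between what the customer perceives and what the customer expects. A minimal formulation (the notation here is ours, not lifted from the SERVQUAL literature):

```latex
% Gap score for a single service item (notation ours)
G = P - E, \quad P = \text{perceived service level}, \quad E = \text{expected service level}
```

A negative G flags service falling short of expectations; as the examples in Part 3 show, LibQUAL refines E by splitting expectations into minimum and desired levels.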
What does SERVQUAL measure? FIVE “Dimensions” of service as perceived by customers: • Tangibles • Reliability • Responsiveness • Assurance • Empathy
LibQUAL (2000/2001): Nine dimensions of service • Tangibles • Reliability • Responsiveness • Assurance • Empathy • Access to Collections • Library as Place • Self-Reliance • Instruction
LibQUAL start-up • Spring 1999 ARL meeting: decision to undertake the “LibQUAL” pilot project with Texas A&M • October 2000: ARL Symposium on “Measuring Service Quality”
LibQUAL phases • Phase 1: 1999/2000 • 12 participating libraries • One Canadian – York • Phase 2: 2000/2001 • 43 participating libraries • Three Canadian - Guelph, McGill, Waterloo • Phase 3: 2001/2002 • 171 participating libraries • Four Canadian – Alberta, Calgary, McGill, York
Purpose of LibQUAL phase 2 • Test what was learned in Phase 1 • Increase sample size and diversity • More Canadian universities • Additional questions on, for example, user self-reliance
Benefits of participation in phase 2 • ARL’s collective work • Information about user expectations • Opportunity to identify service areas for further review • Sharing best practices • Experience with this type of survey • Experience analyzing the data
Part 2 Local administration of the survey
Research ethics approval • UW • UG
Survey population • How Many • How Selected
Email addresses • How obtained • Substitutions • Accuracy
Incentive to participate • Project wide incentive • Palm pilot • Local incentives • Gift certificates
Demographic Detail • Population total • Students by discipline and year • Faculty by status
Start and finish dates • March 15 – March 30 • PRAGMATIC FACTORS: • March Break • End of classes • Beginning of Exams
Testing the questionnaire • Why • What we found
Messages to Survey Sample • Four messages • Invitation • URL • Reminders
Responding to questions/comments/complaints • Who • How much time
Nature of questions/comments/complaints • Technical problems • Can’t/won’t respond • Respond later • Already responded • Spam • Survey….
Comments about the survey • Too long • Redundant questions • Rating scale is too broad and not meaningful • Questions are confusing
Comments about the Survey • More questions about collections • No opportunity to provide comments • Endless questions… • Minimum/desired/perceived format is not desirable; prefer strongly disagree/agree format
Comments about the Survey • Poor visual layout, small dots on beige page • Uninviting layout, too dense • Questions didn’t all fit on a screen, needed to constantly scroll back and forth
Comments about the Survey • No way to save a partially completed survey • Total number of questions should have been indicated at the beginning
Comments about the Survey • Should have let respondents indicate which library they were commenting on • Age and sex are never relevant on a survey • Irrelevant questions
Survey administration wrap-up • Summary report to Texas A&M / ARL project team
Part 3 Local results: What we learned
Three areas • Demographic data • Satisfaction data: • Expectations & perceptions • Data models
Area 1: Demographic Data • Good match with known discipline populations • More in common than we thought • May foster collaboration rather than competition
Respondents by Discipline

Discipline               UW    UG
Agr/Envl Studies         72   137
Architecture              5     3
Business                 47    38
Education                 1     2
Engineering             222    43
General Studies           1     5
Health Sciences          43    84
Humanities               78    50
Other                    69    49
Performing & Fine Arts   11    10
Science                 289   228
Social Sciences         124   145
Undecided                 5     0
Total                   967   794
Area 2: Satisfaction Data • Caution • Do not over-interpret the data • Mature methodology, but not yet a mature instrument (R&D) • Even when “mature”, gaps only indicate probable concerns that warrant further investigation
Example: When it comes to complete runs of journal titles… My desired service level is 1..2..3..4..5..6..7..8..9 My minimum service level is 1..2..3..4..5..6..7..8..9 My perception of the library’s service performance is 1..2..3..4..5..6..7..8..9
Zone of tolerance

Desired level ---------------------------------  (upper bound)
    [ Zone of Tolerance ]
Minimum level ---------------------------------  (lower bound)
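To make the scoring concrete, below is a minimal Python sketch of how one item’s gap scores and zone-of-tolerance verdict could be computed from the three 1..9 ratings in the journal-runs example above. The function and field names are ours for illustration and are not part of the LibQUAL tooling.

```python
# Minimal sketch of LibQUAL-style gap scoring for a single survey item.
# Function and field names are illustrative, not from the LibQUAL tools.

def score_item(minimum: int, desired: int, perceived: int) -> dict:
    """Return the two gap scores and a zone-of-tolerance verdict."""
    adequacy_gap = perceived - minimum      # negative => below minimum level
    superiority_gap = perceived - desired   # positive => exceeds desired level
    if perceived < minimum:
        zone = "below zone of tolerance"
    elif perceived > desired:
        zone = "above zone of tolerance"
    else:
        zone = "within zone of tolerance"
    return {"adequacy_gap": adequacy_gap,
            "superiority_gap": superiority_gap,
            "zone": zone}

# Example: minimum 5, desired 8, perceived 6 for "complete runs of journal titles"
print(score_item(minimum=5, desired=8, perceived=6))
# -> {'adequacy_gap': 1, 'superiority_gap': -2, 'zone': 'within zone of tolerance'}
```

The two differences correspond to the “service adequacy” (perceived minus minimum) and “service superiority” (perceived minus desired) gaps; a perception that lands between the minimum and desired levels falls inside the zone of tolerance, which is how the summary on the next slide should be read.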
Above and within the zone of tolerance • We do not exceed the zone of tolerance in any dimension • We are within the zone for most areas: • Assurance • Empathy • Responsiveness • Tangibles • Self-Reliance • Instruction