LibQUAL+ v LibQUAL Lite at the University of Glasgow Jacqui Dowd Introduction to LibQUAL+ University of Westminster 5th February 2010
History of LibQUAL+ at the University of Glasgow
• Member of the first SCONUL Consortium in 2003, and again in 2004, 2005 and 2006
• And again in Spring 2008
• Winter 2008 LibQUAL+ Lite pilot
• And Spring 2009
• And 2010
Why Participate in LibQUAL+ Lite Beta Testing? Hopefully to
• Increase response rates – although we have always had a representative sample, our response rates have been consistently below 10%
by
• Reducing the burden on respondents – the full LibQUAL+ survey asks each respondent for 97 responses, an unreasonable expectation! LibQUAL+ Lite asks for only (?) 51
Increase in Response Rates? • LibQUAL+ Lite sample = 6,808 Undergraduates
Increase Valid Survey Yield?
• “Typically about half of the people who view the survey tend to submit a complete version of the survey.”
Martha Kyrillidou, Item Sampling in Service Quality Assessment Surveys To Improve Response Rates and Reduce Respondent Burden: The “LibQUAL+ Lite” Randomized Control Trial (RCT), dissertation, University of Illinois, 2009
• Not at Glasgow!
• Glasgow Beta yield: Full 40%, Lite 55%
Reduction in the Respondent Burden?
• LibQUAL+ Lite is a survey methodology in which (a) all users answer a few, selected survey questions, but (b) the remaining survey questions are answered ONLY by a randomly-selected subsample of the users. Thus, (a) data are collected on ALL questions, but (b) each user answers FEWER QUESTIONS, thus shortening the required response time.
Martha Kyrillidou, ibidem
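To make the item-sampling idea concrete, here is a minimal, hypothetical sketch in Python (not the actual LibQUAL+ implementation): a handful of core items is shown to everyone, while each remaining item is answered only by the random subset of respondents whose generated form happens to include it. The item codes, pool sizes and the five-items-per-respondent figure are invented purely for illustration.

```python
import random

# Hypothetical item pools -- invented for illustration, not the real LibQUAL+ item set.
CORE_ITEMS = ["AS-1", "IC-1", "LP-1"]             # shown to every respondent
SAMPLED_ITEMS = [f"Q{i}" for i in range(1, 20)]   # each shown only to a random subsample
ITEMS_PER_RESPONDENT = 5                          # sampled items per respondent (illustrative)


def build_lite_form(rng: random.Random) -> list:
    """Build one respondent's question list under a Lite-style item-sampling protocol."""
    return CORE_ITEMS + rng.sample(SAMPLED_ITEMS, ITEMS_PER_RESPONDENT)


if __name__ == "__main__":
    rng = random.Random(2010)
    # Simulate a respondent pool and confirm that every item still receives responses,
    # even though each individual answers far fewer questions than the full survey asks.
    counts = {item: 0 for item in CORE_ITEMS + SAMPLED_ITEMS}
    for _ in range(500):
        for item in build_lite_form(rng):
            counts[item] += 1
    print("Items with no responses:", [q for q, n in counts.items() if n == 0])
    print("Questions per respondent:", len(CORE_ITEMS) + ITEMS_PER_RESPONDENT,
          "of", len(CORE_ITEMS) + len(SAMPLED_ITEMS), "total items")
```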
Reduction in the Respondent Burden?
Data from the Beta test indicated:
• The average completion time of the Lite survey was only 6 minutes 58 seconds, as opposed to 10 minutes 59 seconds for the long form
• The median completion time of the Lite survey was 5 minutes 2 seconds, as opposed to 8 minutes 27 seconds for the long form
Saving respondents:
• 4 minutes 1 second on the average completion time
• 2 minutes 45 seconds on the median completion time
Reduction in the Respondent Burden?
Glasgow Beta Test:
• Yes – insofar as fewer responses were required
• No – not significantly in terms of the average completion time
• Yes – significantly in terms of the median completion time
• The average completion time across all Full iterations is 12 minutes 40 seconds, compared to 11 minutes 54 seconds for the Beta test (a saving of 46 seconds)
• The median completion time across all Full iterations is 9 minutes 16 seconds, compared to 5 minutes 38 seconds for the Beta test (a saving of 3 minutes 38 seconds)
N.B. On average, 5% fewer respondents added comments in the Beta test.
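As a quick check of the arithmetic behind the Glasgow figures above (all times taken straight from the slide):

```python
def fmt(seconds: int) -> str:
    """Format a number of seconds as 'Xm Ys'."""
    return f"{seconds // 60}m {seconds % 60}s"

# Completion times reported above, converted to seconds.
full_mean, lite_mean = 12 * 60 + 40, 11 * 60 + 54      # 12m 40s vs 11m 54s
full_median, lite_median = 9 * 60 + 16, 5 * 60 + 38    # 9m 16s vs 5m 38s

print("Mean saving:  ", fmt(full_mean - lite_mean))      # -> 0m 46s
print("Median saving:", fmt(full_median - lite_median))  # -> 3m 38s
```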
Beta Survey Completion Times • Glasgow Beta Test – 70% Lite
Full v Lite: 22 Core Scores
• Lite Perceived Service Level scores < 2008 & 2009
• Lite Desired & Minimum Service Level scores > 2008 & 2009
Affect of Service Scores • All Lite scores > 2008 & 2009
Affect of Service scores: Item Perceived scores
Information Control scores
• Lite Perceived & Minimum scores < 2008 & 2009
• Lite Desired scores > 2008 & 2009
Information Control scores: Item Perceived scores
Library As Place Scores • All Lite scores < 2008 & 2009
Library As Place Scores: Item Perceived scores
General Satisfaction • Lite score < 2008 & 2009
Information Literacy Outcomes • Lite score > 2008 & < 2009
Effect on Benchmarking
Lite scores have a negative effect when benchmarking with other Russell Group libraries!
The Future: Lite or Full?
We want:
• To increase response rates by reducing the burden on respondents
• To continue to benchmark longitudinally and with peers
However, this depends on what protocol we and our peers use in future, and on what LibQUAL+ offers.
The Future: Lite or Full?
In the Beta testing, Glasgow was identified as one of four large research libraries participating.
“Though score conversion is not needed, there are some circumstances under which score conversion may be more useful for large research libraries that rely heavily on the LibQUAL+ protocol through annual or biennial implementations.”
Martha Kyrillidou, ibidem
Lite or Full?
• Will SCONUL participants agree which protocol to use?
• If yes, and they agree on Lite:
• will they also agree on the degree of Lite?
• will LibQUAL+ provide score conversion for the previous year to enable continuous benchmarking?