
Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis



  1. Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis Dr. Dave Harmeyer, Associate Professor, Chair, Marshburn Memorial Library, Azusa Pacific University

  2. Outline • Purpose of the Study • Research Questions • Methodology • Variables (I.V., D.V.) • Significant Findings • Conclusions • Questions & Answers

  3. Purpose of the Study • Virtual chat reference augments the face-to-face reference interview • The library reference literature lacks research-based findings to back up recommended practices • This study fills the void with a theoretical conceptual model based on an empirical study of chat reference transactions

  4. Research Questions 1. What measurable indicators are found for virtual chat reference transactions, looking exclusively at data created from the chat reference transcripts? 2. Do published reference interview guidelines from RUSA, a set of other strategies, and the nature of the query contribute to an accurate answer? 3. What conceptual model of best practices can be suggested by an analysis of the data?

  5. Methodology • Two-and-a-half years of archived academic library chat transcripts, analyzed using Krippendorff’s (2004) content analysis • 333 randomly selected transcripts from a pool of 2,500 • 16 independent variables analyzed for their relationship with one dependent variable: an accurate reference answer • Pearson correlations and ANOVA tests • 120 virtual librarians at 43 American institutions • 320 remote patrons accessing the service through one Southern California undergraduate/masters university
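The two statistical tests named above can be sketched in a few lines of Python. The helper names (`pearson_r`, `anova_f`) and the sample numbers below are illustrative stand-ins, not the study's actual code or data:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def anova_f(groups):
    """One-way ANOVA F statistic across two or more groups of scores."""
    values = [v for g in groups for v in g]
    grand, k, n = mean(values), len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - mean(g)) ** 2 for v in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical example: service time (minutes) vs. accuracy score (1-8).
service_time = [5, 8, 12, 20, 25]
accuracy = [7.5, 7.0, 6.5, 6.0, 5.5]
r = pearson_r(service_time, accuracy)  # negative: longer chats, lower accuracy
```

In practice a library such as SciPy would supply these tests (with p-values); the point here is only the shape of the analysis: each independent variable is correlated with, or grouped against, the accuracy score.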

  6. Methodology (cont.)

  7. Variables • Research Question 1 answered: • 16 independent variables • 1 dependent variable - question accuracy • Influence on question accuracy • Observed in content analysis of chat transcripts • Derived from RUSA guidelines • Derived from literature review

  8. Quantitative IVs 1. Librarian’s initial contact time (hold time, in seconds) 2. Total time of transaction (service time, in seconds) 3. Longest time gap by librarian (in seconds) 4. Number of URLs co-browsed with the patron 5. Keystrokes by librarian 6. Keystrokes by patron 7. Keystrokes by both

  9. Qualitative IVs 1. The question’s difficulty (seven-point scale) 2. Response to a patron’s “are you there” statements (scored as present, not present, not applicable, or ambiguous when coders disagreed) 3. Librarian’s friendliness 4. Lack of jargon 5. Use of open-ended questions 6. Use of closed and/or clarifying questions 7. Librarian maintains objectivity 8. Asking if the question was answered completely 9. The type of question (seven categories: ready reference, research question, library technology, request for materials, bibliographic verification, other, and ambiguous for disagreements among coders)

  10. Dependent Variable (Coders’ Qualitative Judgments / Service Quality)
  8: Librarian gave (or referred) patron to a single source with an accurate answer (Excellent)
  7: Librarian gave (or referred) patron to more than one source, one of which provided an accurate answer (Very good)
  6: Librarian gave (or referred) patron to a single source which does not lead directly to an accurate answer but did serve as a preliminary source (Good)
  5: Librarian gave (or referred) patron to more than one source, none of which leads directly to an accurate answer but one of which served as a preliminary source (Satisfactory)

  11. Dependent Variable (cont.)
  4: No direct accurate answer given; referred to another person or institution (Fair / poor)
  3: No accurate answer (or referral) given (e.g., “I don’t know”) (Failure)
  2: Librarian gave (or referred) patron to a single source which did not answer the question (Unsatisfactory)
  1: Librarian gave (or referred) patron to more than one source, none of which answered the question (Most unsatisfactory)
  (Richardson and Reyes, 1995)
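The eight-point rubric above can be encoded directly. This is a minimal sketch; the names `RUBRIC` and `consensus_score` are illustrative, not from the study, though averaging two coders' scores is one plausible reading of how the half-point values (7.5, 6.5, …) in the findings arise:

```python
# Encoding of the Richardson and Reyes (1995) 8-point service-quality rubric.
RUBRIC = {
    8: "Excellent",
    7: "Very good",
    6: "Good",
    5: "Satisfactory",
    4: "Fair / poor",
    3: "Failure",
    2: "Unsatisfactory",
    1: "Most unsatisfactory",
}

def consensus_score(coder_a, coder_b):
    """Average two coders' 1-8 scores; disagreement yields half-points
    (e.g., scores of 8 and 7 average to 7.5)."""
    return (coder_a + coder_b) / 2
```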

  12. Significant Findings Summary • Research Question 2 answered: yes • 30 significant relationships (p < .05) • From 9 of the 16 variables • 5 found in the RUSA guidelines • 4 found in other strategies or the nature of online chat

  13. Significant Findings: Answer Accuracy
  Answer Accuracy as Judged by Coders (N = 331)
  Point | Criteria / Quality | Frequency | % | Cum. %
  8.0 | Accurate answer (single source): Excellent | 88 | 26.6 | 26.6 (1/4)
  7.5 | Accurate answer (mult. sources) | 18 | 5.4 | 32.0
  7.0 | Very good | 64 | 19.3 | 51.3 (1/2)
  6.5 | Preliminary source (single source) | 9 | 2.7 | 54.0
  6.0 | Good | 33 | 10.0 | 64.0 (2/3)
  5.5 | Preliminary source (mult. sources) | 7 | 2.1 | 66.1
  5.0 | Satisfactory | 35 | 10.6 | 76.7 (3/4)
  4.5 | No accurate answer, referred | 11 | 3.3 | 80.0
  4.0 | Fair / poor | 57 | 17.2 | 97.2
  3.5 | “I don’t know,” no referral | 2 | 0.6 | 97.8
  3.0 | Failure | 3 | 0.9 | 98.7
  2.5 | Not accurate (single source) | 2 | 0.6 | 99.3
  2.0 | Unsatisfactory | 2 | 0.6 | 99.9
  1.5 | Not accurate (multiple sources) | 0 | 0.0 | 99.9
  1.0 | Most unsatisfactory | 0 | 0.0 | 99.9

  14. Significant Findings: Best Practices
  Research Question 3 answered: yes
  A Conceptual Model for Reference Chat Accuracy: minor plus est (less is more)
  1. Keep time gaps between sending responses to patrons to no more than one-and-a-half minutes
  2. Maintain a total chat transaction time of eight minutes or less
  3. Keep total keystrokes per transaction within six-and-a-half lines of text (or 480 characters)
  4. Expect to type twice as many characters as the patron

  15. Significant Findings: Best Practices (cont.)
  5. Be careful about beginning the question-negotiation segment of the reference interview with an open question unless the nature of the patron’s question explicitly calls for one
  6. Ask closed or clarifying questions when appropriate
  7. At the end of the reference transaction, ask “Does this completely answer your question?”
  8. Expect even moderately difficult questions, not just medium- to high-difficulty ones, to decrease answer accuracy
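The model's quantitative guidelines (90-second gaps, 8-minute transactions, 480 librarian keystrokes) lend themselves to a simple threshold check, e.g. for post-hoc transcript review. The function and its message strings below are an illustrative sketch, not part of the study:

```python
# Hypothetical checker for the model's quantitative thresholds.
GAP_LIMIT_S = 90          # 1.5-minute cap on gaps between responses
SERVICE_LIMIT_S = 8 * 60  # 8-minute cap on total transaction time
KEYSTROKE_LIMIT = 480     # ~6.5 lines of librarian text

def check_transaction(longest_gap_s, service_time_s, librarian_keystrokes):
    """Return the list of best-practice thresholds a transaction exceeds."""
    warnings = []
    if longest_gap_s > GAP_LIMIT_S:
        warnings.append("response gap over 1.5 minutes")
    if service_time_s > SERVICE_LIMIT_S:
        warnings.append("transaction longer than 8 minutes")
    if librarian_keystrokes > KEYSTROKE_LIMIT:
        warnings.append("librarian typed more than 480 characters")
    return warnings
```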

  16. Significant Findings: 1. Gaps
  • Keep time gaps between sending responses to patrons to not much more than one-and-a-half minutes
  • Reinforces RUSA’s interest guideline (2.6): keep time away from the patron short and maintain “word contact” (RUSA, June 2004)
  • Gaps nearing two minutes or longer are likely to decrease answer accuracy

  17. Significant Findings: 1. Gaps
  Longest Librarian Gap
  Quartile | Range (min.) | Accuracy Mean | Sig. of Diff. (p)
  1st | 0 – 1.85 | 6.68 | --
  2nd | 1.87 – 2.83 | 5.97 | .016 (1st & 2nd); Diff = .71
  3rd | 2.85 – 4.45 | 6.31 | .403 (1st & 3rd, not sig.)
  4th | 4.47 – | 6.03 | .036 (1st & 4th); Diff = .65

  18. Significant Findings: 2. Service Time • Maintain a total chat transaction time of eight minutes or less • Average = 16.0 minutes (n = 331) • 7 minutes more than Richardson’s (2002) 8.9 minutes (n = 20,000) • However, similar to six face-to-face studies with mean service times ranging from 10 to 20 minutes

  19. Significant Findings: 2. Service Time
  Service Time of Transactions
  Quartile | Range (min.) | Accuracy Mean | Sig. of Diff. (p)
  1st | 0 – 8.3 | 6.82 | --
  2nd | 8.32 – 13.08 | 6.02 | .005 (1st & 2nd); Diff = .80
  3rd | 13.1 – 20.75 | 6.04 | .007 (1st & 3rd); Diff = .78
  4th | 20.77 – | 6.12 | .020 (1st & 4th); Diff = .70

  20. Significant Findings: 3. Keystrokes • Keep total keystrokes per transaction within six-and-a-half lines of text (or 480 characters) • Application to virtual-reference software vendors (add a timer) • Anything over 15 lines of text is likely to decrease accuracy

  21. Significant Findings: 3. Keystrokes
  Keystroke Quartiles | Accuracy Mean | Sig. of Diff. (p)
  Librarian:
  1st | 0 – 480 (6.5 lines)* | 6.58 | --
  4th | 1128 (15 lines) – | 5.93 | .041 (1st & 4th)
  Patron:
  1st | 0 – 188 (2.5 lines) | 6.65 | --
  4th | 545 (7.5 lines) – | 5.96 | .023 (1st & 4th)
  Both librarian & patron:
  1st | 0 – 690 (9 lines) | 6.63 | --
  4th | 1668 (22.5 lines) – | 5.99 | .041 (1st & 4th)
  *measured at 74 keystrokes per line of text

  22. Significant Findings: 4. Twice the Typing • Expect to type twice as many characters as the patron • This two-to-one ratio appeared across all four quartile segments between librarian and patron

  23. Significant Findings: 5. Open-ended Questions
  • Be careful about beginning the question-negotiation segment of the reference interview with an open question unless the nature of the patron’s question explicitly calls for one
  Frequency of Open-ended Questions
  Category | Frequency | Percent
  Present | 112 | 33
  Absent (but should) | 75 | 22.5
  Not Applicable | 76 | 22.8
  Ambiguous | 67 | 20.1

  24. Significant Findings: 5. Open-ended Questions (cont.)
  Open-ended Questions and Accuracy
  Category | Accuracy Mean | Sig. of Diff. (p)
  3. Not Applicable | 6.72 | --
  1. Present | 5.97 | .008 (3 & 1)

  25. Significant Findings: 6. Closed-ended Questions
  • Ask closed or clarifying questions when appropriate
  Frequency of Closed-ended and/or Clarifying Questions
  Category | Frequency | Percent
  Present | 183 | 55.0
  Absent | 48 | 14.4
  Not Applicable | 56 | 16.8
  Ambiguous | 44 | 13.2

  26. Significant Findings: 6. Closed-ended Questions (cont.)
  Closed and/or Clarifying Questions and Accuracy
  Category | Accuracy Mean | Sig. of Diff. (p)
  3. Not Applicable | 6.69 | --
  2. Absent | 5.91 | .065 (3 & 2, not sig.)
  With ambiguous cases filtered out:
  3. Not Applicable | 6.69 | --
  2. Absent | 5.91 | .040 (3 & 2)

  27. Significant Findings: 7. Follow-up Question
  • At the end of the reference transaction, ask “Does this completely answer your question?”
  Frequency of the Librarian Asking If the Question Had Been Answered Completely
  Category | Frequency | Percent
  Present | 125 | 37.5
  Absent | 42 | 12.6
  Not Applicable | 108 | 32.4
  Ambiguous | 56 | 16.8

  28. Significant Findings: 8. Question Difficulty
  Question Difficulty
  Criteria | Point | Frequency | % | Cum. %
  Low | 1.0 | 59 | 17.7 | 17.8
  -- | 1.5 | 41 | 12.3 | 30.2
  -- | 2.0 | 66 | 19.8 | 50.2 (1/2)
  -- | 2.5 | 55 | 16.5 | 66.8
  -- | 3.0 | 27 | 8.1 | 74.9 (3/4)
  Medium | 3.5 | 26 | 7.8 | 82.8
  -- | 4.0 | 17 | 5.1 | 87.9
  -- | 4.5 | 18 | 5.4 | 93.4
  -- | 5.0 | 6 | 1.8 | 95.2
  -- | 5.5 | 4 | 1.2 | 96.4
  -- | 6.0 | 7 | 2.1 | 98.5
  -- | 6.5 | 3 | 0.9 | 99.4
  High | 7.0 | 2 | 0.6 | 100.0

  29. Significant Findings: 8. Question Difficulty (cont.)
  Question Difficulty and Accuracy (reporting only significant differences)
  Criteria | Point | Accuracy Mean | Sig. of Diff. (p)
  Low | 1.0 | 7.24 | --
  -- | 2.0 | 6.36 | .035 (1.0 & 2.0)
  -- | 2.5 | 5.94 | .000 (1.0 & 2.5)
  Medium | 3.5 | 5.42 | .000 (1.0 & 3.5)
  -- | 4.0 | 5.53 | .001 (1.0 & 4.0)
  -- | 4.5 | 5.22 | .000 (1.0 & 4.5)
  -- | 5.0 | 4.92 | .009 (1.0 & 5.0)
  -- | 5.5 | 4.0 | .001 (1.0 & 5.5)
  High | 6.0 | 4.83 | .006 (1.0 & 6.0)
  Low | 1.5 | 6.95 | --
  -- | 2.5 | 5.94 | .041 (1.5 & 2.5)
  Medium | 3.5 | 5.42 | .002 (1.5 & 3.5)
  -- | 4.0 | 5.53 | .032 (1.5 & 4.0)
  -- | 4.5 | 5.22 | .001 (1.5 & 4.5)
  -- | 5.5 | 4.0 | .005 (1.5 & 5.5)
  High | 6.0 | 4.83 | .038 (1.5 & 6.0)

  30. 5. Conclusions • Virtual reference has lacked a statistically sound conceptual model, grounded in empirical studies, to guide the library profession toward improving the reference interview and to inform best practices in professional training and assessment. • This study addresses that knowledge void through its discovery of several statistical relationships between nine behavioral factors and an accurate answer in the reference interview. • It is hoped that the suggested eight-point rubric and the other results of this project can be a catalyst for practical application, improving the practice of the global community of professionals and stakeholders in the field of library and information studies.

  31. 6. Questions & Answers
