Effective Use of Knowledge Management Systems: A Process Model of Content Ratings and Credibility Indicators
By: Robin S. Poston, University of Memphis – U.S.A.; Cheri Speier, Michigan State University – U.S.A.

Presentation Transcript


  1. Effective Use of Knowledge Management Systems: A Process Model of Content Ratings and Credibility Indicators By: Robin S. Poston, University of Memphis – U.S.A.; Cheri Speier, Michigan State University – U.S.A.; Presenter: Maged Younan

  2. Knowledge Management Systems • KMSs should facilitate the efficient and effective use of a firm's intellectual resources • KMSs store huge amounts of information • Corporate investment in KMSs was expected to reach $13 billion by 2007 • KMS users should therefore be able to locate relevant, high-quality content easily and quickly

  3. What is the problem with KMSs? • Have you ever tried to search for information on the internet (or an intranet)? • What were the results? How many links/documents did you get? • How many of these were relevant and satisfied your need? • How many included low-quality or even incorrect information?

  4. Content Ratings • To address this problem, content ratings were introduced • Content ratings are simply the feedback left by previous visitors to the same document, link, etc. • Content ratings, if valid, should help future knowledge workers (searchers) evaluate and select appropriate content quickly and accurately
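To make the idea concrete, here is a minimal sketch (not from the paper; the 1-5 scale and function name are assumptions) of how a KMS might aggregate previous visitors' feedback into a content rating:

    # Minimal sketch (assumed, not from the paper): aggregate visitor feedback
    # into a single content rating on a 1-5 scale.
    from statistics import mean

    def content_rating(feedback_scores):
        """Average the 1-5 scores left by previous visitors; None if unrated."""
        return round(mean(feedback_scores), 1) if feedback_scores else None

    # Example: a document rated by three previous visitors.
    print(content_rating([5, 4, 4]))  # -> 4.3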

  5. Are content ratings always valid? • Do content ratings usually reflect the actual content quality? How accurate are they? • Content ratings may not always be valid, for the following reasons: • Lack of user experience (inappropriate context) • Delegation of search tasks to junior staff • Subjectivity of ratings (bias) • Intentional manipulation of ratings

  6. Credibility Indicators • Credibility indicators are used to assess the validity of the content and/or its rating • Credibility indicators may be based on: • No. of raters • Rater expertise • Collaborative filtering
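As a rough illustration only, the scheme below (an assumption, not the authors' operationalization) combines two of those indicators, the number of raters and their average expertise, into a single credibility score:

    # Illustrative sketch (assumed scheme): weight a content rating's credibility
    # by how many people rated it and how expert those raters were.
    def credibility(num_raters, avg_expertise, max_raters=50, max_expertise=5):
        """Return a 0-1 credibility score; both inputs are normalized and averaged."""
        rater_signal = min(num_raters, max_raters) / max_raters
        expertise_signal = min(avg_expertise, max_expertise) / max_expertise
        return round((rater_signal + expertise_signal) / 2, 2)

    print(credibility(num_raters=3, avg_expertise=2.0))   # few, junior raters -> low score
    print(credibility(num_raters=40, avg_expertise=4.5))  # many, senior raters -> high score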

  7. What do we want to study? • Experiment 1: Examined the relationship between rating validity and the KMS search and evaluation process • Experiments 2, 3, and 4: Examined the moderating effect of credibility indicators

  8. Experiment 1 “Relationship between rating validity and KMS search and evaluation process”

  9. Experiment 1 - Background • On a complex task, people often anchor on inappropriate content • Knowledge workers usually begin with the assumption that available ratings are valid • If a rating is not valid, searchers will mislabel highly rated content as being of high quality and vice versa

  10. Experiment 1 - Assumptions • Searchers will follow one of the following search and evaluation processes, depending on rating validity: • Anchoring on the content and making no adjustment when the content rating is valid • Anchoring on the content and making no adjustment even though the content rating is low in validity • Anchoring on the content and adjusting away because the content rating is low in validity

  11. Experiment 1 - Hypotheses • The following hypotheses were generated: • H1: Knowledge workers will implement different search and evaluation processes depending on the validity of content ratings • H2a: Anchoring on low-quality content but adjusting away from that anchor results in higher decision quality than not adjusting away from the anchor • H2b: Anchoring on high-quality content (and not adjusting away) results in higher decision quality than anchoring on low-quality content and adjusting • H2c: Anchoring on low-quality content and adjusting results in longer decision time than anchoring on high- or low-quality content and not adjusting away

  12. Setting The Experiment • 14 different work plans were created and added to the KMS • 3 quality measures were introduced: • Clarity of project steps • Assignment of consultant levels to each project step • Assignment of a senior consultant to special tasks

  13. Setting The Experiment • The 14 work plans varied in quality such that: • 1 plan met all 3 quality criteria • 6 plans met 2 quality criteria • 6 plans met 1 quality criterion • 1 plan did not meet any of the 3 quality criteria • Subjects of the experiment had prior but limited experience with the task domain

  14. Setting The Experiment • A pilot test was conducted before the experiment to ensure that subjects could differentiate between low- and high-quality work plans • Work plans were given content ratings as follows:

    No. of quality criteria met    Valid rating    Invalid rating
    3                              5               1
    2                              4               2
    1                              2               4
    0                              1               5
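The rating scheme in the table can be restated directly in code; the lookup below is simply a transcription of the table above (the dictionary name and condition labels are illustrative): valid ratings track quality, invalid ratings invert it.

    # The rating scheme from the table above (1-5 scale).
    RATING_BY_CRITERIA_MET = {
        3: {"valid": 5, "invalid": 1},
        2: {"valid": 4, "invalid": 2},
        1: {"valid": 2, "invalid": 4},
        0: {"valid": 1, "invalid": 5},
    }

    def assigned_rating(criteria_met, condition):
        """Look up the rating shown to subjects in the given condition."""
        return RATING_BY_CRITERIA_MET[criteria_met][condition]

    print(assigned_rating(3, "invalid"))  # best plan shown with the worst rating -> 1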

  15. Experiment 1 – Dependent variables • The dependent variables of the experiment were: • Decision quality (the number of lines matching the lines of the highest-quality work plan) • Decision time (measured in minutes) • Chi-square tests were conducted to ensure that age, gender, experience, and years in school had no significant effect (i.e., the subject pool is homogeneous)
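For illustration, a homogeneity check of that kind can be run as a chi-square test of independence with scipy; the counts below are invented, not the study's data:

    # Illustrative chi-square test (made-up counts): does the gender distribution
    # differ across the valid-rating and invalid-rating conditions?
    from scipy.stats import chi2_contingency

    observed = [
        # female, male
        [14, 16],  # valid-rating condition
        [15, 15],  # invalid-rating condition
    ]
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.3f}, p={p:.3f}")  # a large p suggests a homogeneous pool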

  16. Experiment 1 – Interpreting the results • After running the experiment, subjects were divided into three main clusters: • Anchoring on high-quality content and making no adjustment • Anchoring on low-quality content and making no adjustment • Anchoring on low-quality content and adjusting away from the anchor

  17. Experiment 1 – Results • A strong relationship was found between rating validity and whether the subject adjusted away from the initial anchor (this supports Hypothesis 1) • A significant correlation was found between time spent and decision quality • Hypotheses 2a and 2b were strongly supported • Hypothesis 2c was not supported

  18. Experiment 1 - Results • H1: Knowledge workers will implement different search and evaluation processes depending on the validity of content ratings – Supported • H2a: Anchoring on low-quality content but adjusting away from that anchor results in higher decision quality than not adjusting away from the anchor – Supported • H2b: Anchoring on high-quality content (and not adjusting away) results in higher decision quality than anchoring on low-quality content and adjusting – Supported • H2c: Anchoring on low-quality content and adjusting results in longer decision time than anchoring on high- or low-quality content and not adjusting away – Not supported

  19. Other Experiments • Three more experiments were conducted to assess the moderating effect of adding credibility indicators: • No. of raters • Rater expertise • Collaborative filtering (recommending similar content or identifying content that has been used by others with the same context)
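As a sketch only (this is not the system used in the experiments), user-based collaborative filtering typically predicts how useful a document will be to a searcher from the ratings of users with similar rating histories:

    # Minimal user-based collaborative filtering sketch (assumed, not the
    # authors' implementation): predict a user's rating for an unseen document
    # from the ratings of users with similar rating histories.
    import numpy as np

    # Rows = users, columns = documents; 0 means "not yet rated".
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 3, 0],
        [1, 0, 5, 4],
    ], dtype=float)

    def predict(user, doc):
        """Similarity-weighted average of other users' ratings for doc."""
        target = ratings[user]
        scores, weights = 0.0, 0.0
        for other in range(ratings.shape[0]):
            if other == user or ratings[other, doc] == 0:
                continue
            # Cosine similarity between the two users' rating vectors.
            sim = ratings[other] @ target / (
                np.linalg.norm(ratings[other]) * np.linalg.norm(target))
            scores += sim * ratings[other, doc]
            weights += sim
        return scores / weights if weights else None

    print(round(predict(user=0, doc=2), 2))  # predicted rating of doc 2 for user 0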

  20. Experiments 2, 3 and 4 - Hypotheses • The following hypotheses were generated and assessed: • H3a: Given low-validity ratings, knowledge workers will adjust away from an anchor on low-quality content more when the number of raters is low than when it is high – Not supported • H3b: Given low-validity ratings, knowledge workers will adjust away from an anchor on low-quality content more when rater expertise is low than when it is high – Not supported • H3c: Given low-validity ratings, knowledge workers will adjust away from an anchor on low-quality content more when collaborative-filtering sophistication is low than when it is high – Supported

  21. Conclusions • Results suggest that ratings influence the quality of decisions made by knowledge workers (KMS users) • The paper also provides other useful findings for KMS designers and knowledge workers; for example, the finding that collaborative filtering has a more powerful moderating effect than the number of raters or rater expertise is an important new point • Future studies should examine how to assist individuals in overcoming invalid ratings

  22. Thank You
