
Stability of Job Analysis Findings and Test Plans over Time
Calvin C. Hoffman, PhD


Presentation Transcript


  1. Stability of Job Analysis Findings and Test Plans over Time Calvin C. Hoffman, PhD Los Angeles County Sheriff’s Department Presented to PTCSC on April 14th, 2010 Coauthors: Carlos Valle, Gabrielle Orozco-Atienza, & Chy Tashima.

  2. INTRODUCTION • Job analysis (JA) provides the foundation for many human resources activities [recruitment, placement, training, compensation, classification, and selection (Gatewood & Feild, 2001)]. • In content validation research, JA is used to minimize the “inferential leaps” involved in selecting or developing selection instruments.

  3. INTRODUCTION • There is little guidance on how often to re-validate or revisit the validity of selection systems (Bobko, Roth, & Buster, 2005). • Uniform Guidelines (1978) - “There are no absolutes in the area of determining the currency of a validity study.” • SIOP Principles – “…organizations should develop policies requiring periodic review of validity of selection materials and methods.”

  4. INTRODUCTION • Our position is that if “revalidation” is needed, researchers must pay attention to job analysis. • For example: • Changes in duties? • Changes in technology? • HR systems changes? • Changes in required KSAs?

  5. SETTING • Extensive litigation regarding sergeant promotional process. • Consent decree governed all aspects of JA, selection system design, selection system operation, and actual promotions for over 25 years. • Organizational policy requires updating job analysis every five years. • Policy does not consider important factors such as costs and legal context.

  6. SETTING • 2006 Sergeant Exam - Conducted extensive multi-method JA. • 2009 Sergeant Exam – Unsure about need for additional JA given recency of JA data. • SIOP Principles - “The level of detail required of an analysis of work is directly related to its intended use and the availability of information about the work. A less detailed analysis may be sufficient when there is already information descriptive of the work” (p.11).

  7. CURRENT STUDY • Few changes in the sergeant job were expected during the three-year span. • We could have concluded that a new JA was not needed and reused the existing 2006 test plan. • Given the history of litigation surrounding this exam, the Principles would support conducting an additional JA. • Choosing to err on the side of caution, we performed a slightly abbreviated JA to support the 2009 exam.

  8. CURRENT STUDY • The study examined the stability of the JA data over a two-year span, focusing on the similarity of: • task and KSA ratings made by two independent groups of incumbents • the test plans for the written job knowledge test.

  9. METHOD- 2006 JA • Structured JA interviews were conducted on-site with incumbents, along with job observation, facility tours, and “desk observation”. • From these data sources, a work-oriented job analysis questionnaire (JAQ) was drafted consisting of major tasks and KSAs. • JAQ survey (incumbents), SME linkage ratings.

  10. METHOD - 2009 JA • Did not conduct additional JA interviews. • Relied on the existing 2006 JAQ as a starting point for the 2009 JA effort. • Otherwise, followed same process.

  11. METHOD - Participants

  12. METHOD • SMEs (2006 N = 13, 2009 N = 10) in both studies performed linkage ratings to establish the relationship between task domains and KSA domains using a 4-point relevance scale. • The JAQ and linkage-rating data were further reviewed and fine-tuned by SMEs. • JAQ data helped determine the relative weight and content of the test plans (written test, appraisal of promotability, and structured interview).
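The presentation does not spell out the arithmetic behind the test-plan weights, so the following is only a minimal sketch, assuming that each SME rates every KSA domain against every task domain on the 4-point relevance scale and that a domain's share of written-test items is proportional to its mean rating. All names, sizes, and numbers below are hypothetical, not taken from the study.

```python
# Illustrative sketch only: the weighting formula is an assumption, not the
# authors' documented procedure. Ratings are simulated on the 4-point scale.
import numpy as np

rng = np.random.default_rng(0)
n_smes, n_task_domains, n_ksa_domains = 10, 6, 8  # hypothetical counts

# linkage[s, t, k] = SME s's relevance rating (1-4) of KSA domain k to task domain t
linkage = rng.integers(1, 5, size=(n_smes, n_task_domains, n_ksa_domains))

# Average over SMEs and task domains to get one relevance score per KSA domain
ksa_relevance = linkage.mean(axis=(0, 1))           # shape: (n_ksa_domains,)

# Convert relevance scores to relative weights that sum to 1.0
weights = ksa_relevance / ksa_relevance.sum()

# Allocate items on a 100-item written test in proportion to the weights
# (rounding is illustrative; totals may need manual adjustment in practice)
items_per_domain = np.round(weights * 100).astype(int)
for k, (w, n) in enumerate(zip(weights, items_per_domain)):
    print(f"KSA domain {k + 1}: weight = {w:.2f}, items = {n}")
```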

  13. RESULTS - TASKS

  14. RESULTS - KSAs

  15. RESULTS – TEST PLAN

  16. DISCUSSION • JA data were highly stable over time, despite the significant mean differences observed. • Mean task importance ratings (r = .83) • Roughly 1.0 standard deviation higher than the meta-analytic estimate of intrarater (rate-rerate) reliability of JA ratings reported by Dierdorff and Wilson (2003), r = .68 (n = 7,392; k = 49) over an average interval of 6 months. • Mean KSA importance ratings (r = .96) • No comparison could be made because no estimate of KSA stability over time could be located in the literature. • Test plans (r = .85) • The number of items allocated to specific knowledge domains was highly similar.
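To make the stability comparison concrete, here is a minimal sketch of the kind of computation implied above: correlate the mean importance rating each task received in 2006 with the mean rating the same task received in 2009. The rating values are invented for illustration; only the method (a Pearson correlation across matched tasks, with a separate paired test for mean-level differences) is being demonstrated.

```python
# Sketch only: hypothetical mean importance ratings for the same 10 tasks
# in each study year. Not the study's actual data.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

mean_2006 = np.array([4.2, 3.8, 4.6, 2.9, 3.5, 4.1, 2.4, 3.9, 4.4, 3.1])
mean_2009 = np.array([3.9, 3.6, 4.4, 2.5, 3.2, 4.0, 2.0, 3.7, 4.3, 2.8])

# Stability coefficient: correlation of task means across the two administrations.
# r can be high even when the 2009 means are uniformly lower, which is how
# ratings can show significant mean differences yet remain highly stable.
r, p = pearsonr(mean_2006, mean_2009)
print(f"Task-rating stability: r = {r:.2f} (p = {p:.3f})")

# A paired t-test addresses the separate question of mean-level differences
t, p_diff = ttest_rel(mean_2006, mean_2009)
print(f"Mean difference: t = {t:.2f} (p = {p_diff:.3f})")
```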

  17. DISCUSSION • Differences between mean task ratings might be attributable to: • differences in the selection of JAQ respondents • decreased organizational sensitivity regarding the sergeant promotional exam (i.e., no new lawsuits!).

  18. DISCUSSION • The rating differences did not translate into major differences in the test plans resulting from the two JA efforts, even with: • different SMEs, • different survey respondent selection methods, • significant differences in mean task and KSA ratings.

  19. DISCUSSION • Although five new domains were included in the 2009 test, they were incorporated as Reference questions, in which candidates are provided with resource material to use in answering the questions.

  20. CONCLUSION • We considered costs and risks in determining whether to revalidate our selection system. • The greater the number of intervening years between validation studies, the higher the risk the organization assumes. • The shorter the intervals between revalidation efforts, the costlier and more burdensome they are for the organization. • We were conservative because of the legal context and might have followed a different path if guidelines regarding revalidation efforts were clearer.

  21. CONCLUSION • We encourage researchers and practitioners to conduct and share any research findings that might help in creating detailed practice guidelines on revalidation efforts. • More information is needed to resolve the disconnect between the requirement to maintain the currency of validity information and the lack of clear guidance regarding how often is “often enough.”

  22. Questions? • Copies of slides and the conference paper are available: • Email request to: choffma@lasd.org
