
Quality of CST Evaluations in Hawaii: The Good News About Examiner Training


Presentation Transcript


  1. Quality of CST Evaluations in Hawaii: The Good News About Examiner Training Richard Robinson, Psy.D.; Marvin W. Acklin, Ph.D., ABPP; Raymond Folen, Ph.D., ABPP; Robert Anderson Jr., Ph.D.

  2. Quality of Court Ordered CST Evaluations in Hawaii: The Good News About Examiner Training • With special assistance from • The Honorable Marsha J. Waldorf • Judge Waldorf’s Law Clerks • Teresa Morrison • Kirsha Durante • Joshua Dwire, M.S.

  3. Rationale for Study • Studies in other states have frequently found problems with the quality of forensic reports, so we examined the quality of forensic reports in Hawaii. • This study replicated aspects of other studies that evaluated the quality of reports submitted to the courts. • Florida - Heilbrun & Collins, 1995; Otto, Barnes, & Jacobson, 1996 • Nebraska and New Jersey - Robbins, Waters, & Herbert, 1997 • Oklahoma - Nicholson, LaFortune, Norwood, & Roach, 1995 • Virginia - Heilbrun, Warren, Rosenfeld, & Collins, 1994 • Utah - Skeem, Golding, Cohn & Berg, 1998 • Illinois - Sanschagrin, 2006

  4. Rationale for Study • National Findings • Competency to stand trial (CST) evaluations are the most common type of forensic evaluation • There is concern about the quality of the forensic evaluations being conducted by psychologists • Level of specialized training that psychologists conducting forensic assessments have received may be inadequate (Grisso, 1996) • Forensic psychologists agree certain elements should be included in forensic mental health assessments; however, research indicates that this is not the actual practice

  5. Rationale for Study • National Findings • Examiners’ reports are very influential in the legal process • Judges frequently rely solely on the examiners’ written reports • How the data and the examiner’s findings and reasoning are presented in the report, and the ease with which they can be extracted, become a major factor in the outcome of the legal process (Heilbrun, 2001). • The quality of the written report may be the most critical part of the evaluation process

  6. Rationale for Study • In 2005, there were 230 felony CST evaluations in Hawaii • There are a total of 56 Certified Forensic Examiners • 62% (N=35) of Certified Forensic Examiners in Hawaii are Psychologists • 11% (N=6) of Certified Forensic Examiners in Hawaii are from Courts & Corrections • 27% (N=20) of Certified Forensic Examiners in Hawaii are Psychiatrists (Kennedy, 2006)

  7. Current Study • Primary objective was to examine the quality of the written reports, not the evaluation process. • Reports were measured against • Nationally accepted standards • Hawaii standard - an exemplar report issued by the court • APA Ethics Code • Specialty Guidelines for Forensic Psychologists

  8. Methods • Sampling • 150 forensic evaluation reports (50 case files) • Each case file included three forensic evaluation reports and a judicial determination of CST • Evaluations of adults charged with felonies within the First Circuit Court of the State of Hawaii (Oahu) • All reports were drawn at random from the more than 543 case files stored at the First Circuit Court • Only reports filed since January 2000 were included

  9. Methods • Measures • Major texts in forensic psychology and scientific journal articles were reviewed to develop a survey instrument • The survey instrument developed by Sanschagrin (2006) was the model for the current survey instrument • Instrument modified to better assess the unique statutory requirements of the State of Hawaii • Sample report format offered at the conference as an exemplary report • The survey instrument in this study assessed a comprehensive list of items. Each item was assessed for its presence or absence and for its quality.

  10. Methods • Measures • There were a total of 38 questions on the survey instrument; of those, eight were demographic in nature and were not counted toward the total possible score. • Each item was scored 0 (Absent), 1 (Partial), or 2 (Complete)

  11. Methods • Measures • During data collection, “CST only” court orders were found. • Maximum scores possible depended on the findings of the FMHE • CST only (N=20): 50 • CST & CR (N=53): 54 • IST & Not CR (N=74): 60 • No Opinion (N=3): 46 • QC score calculated (sketched below)
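The slides do not spell out how the QC score was computed; given that later slides report scores on a 0-100 scale and compare them against "80% of the maximum possible score," a reasonable reading is that the summed item ratings were expressed as a percentage of the report-type maximum. A minimal sketch under that assumption (function and variable names are illustrative, not from the study):

```python
# Hypothetical QC-score calculation, assuming each applicable item is rated
# 0 (absent), 1 (partial), or 2 (complete) and the total is expressed as a
# percentage of the maximum possible score for the report type (slide 11).

MAX_SCORE_BY_FINDING = {
    "CST only": 50,
    "CST & CR": 54,
    "IST & Not CR": 60,
    "No Opinion": 46,
}

def qc_score(item_ratings, finding):
    """Return a 0-100 QC score for one report.

    item_ratings: list of 0/1/2 ratings for the applicable survey items
    finding: one of the keys in MAX_SCORE_BY_FINDING
    """
    return 100.0 * sum(item_ratings) / MAX_SCORE_BY_FINDING[finding]

# Example: a hypothetical "CST only" report with 20 complete and 5 partial items
print(qc_score([2] * 20 + [1] * 5, "CST only"))  # -> 90.0
```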

  12. Methods • Items clustered into Elements • Data • Sources of Information • Ethical • Historical • Clinical • Competency to Stand Trial • Criminal Responsibility • Dangerousness • Appendix F

  13. Methods • Procedures • Inter-rater reliability training • The author and the research assistant independently rated ten reports using the survey and the scoring criteria • Mean kappa = .85 • (“almost perfect,” Landis & Koch, 1977) • Reviewed scoring of items for further clarification
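A minimal sketch of how the inter-rater agreement reported above could be computed, assuming two raters assign 0/1/2 ratings to the same items on each training report and agreement is summarized as the mean of per-report Cohen's kappas; the data and variable names are illustrative:

```python
# Illustrative inter-rater reliability check using Cohen's kappa.
# Assumes two raters (e.g., author and research assistant) rated the same
# items on each training report; the ratings below are made up.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# ratings_a[r] and ratings_b[r] hold the two raters' item ratings for report r
ratings_a = [np.array([2, 1, 0, 2, 2]), np.array([1, 1, 2, 0, 2])]
ratings_b = [np.array([2, 1, 0, 2, 1]), np.array([1, 1, 2, 0, 2])]

kappas = [cohen_kappa_score(a, b, labels=[0, 1, 2])
          for a, b in zip(ratings_a, ratings_b)]
print(f"mean kappa = {np.mean(kappas):.2f}")
```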

  14. Methods • Procedures • After acceptable inter-rater agreement was achieved, rating of the 150 reports began • Every 10th report was scored independently by the author and the research assistant • Mean kappa = .93 • (“almost perfect,” Landis & Koch, 1977) • Ensured no rater drift • Ensured the author’s biases and perceptions were not reflected in the scoring

  15. Results • 50 reports by each FMHE group • Charges • 40% (N = 20) Violent Crime • 34% (N = 17) Property/Other Crime • 10% (N = 5) Drug Crime • 8% (N = 4) Violent and Property Crime • 4% (N = 2) Violent and Drug Crime • 4% (N = 2) Property and Drug Crime • 68% (N=34) of defendants determined CST • 32% (N=16) of defendants determined IST

  16. Hypothesis 1 • Mean overall quality scores for forensic assessment reports submitted to the court between the years 2000 and 2006 would score below 80% of the maximum possible score for forensic mental health assessments based upon nationally accepted standards and Hawaii Penal Code.

  17. Hypothesis 1: Results • Confirmed • Scores ranged from 22 to 100 • Mean QC score: 68.95 (SD = 15.21) • 25% of reports scored at or above 80% of the maximum possible score

  18. Hypothesis 1: Post Hoc • Historical Elements lowered QC scores • Compared reports with Historical Elements included and redacted • Same reports with Hx redacted • Scores ranged from 31.7 to 100 • Mean QC score 79.13 (SD = 13.60) • 49% of reports scored at or above 80% of the maximum possible score

  19. Hypothesis 1: Post Hoc • Sample: Hx included vs. Hx redacted • Two distributions had equal variances • F = 1.16, p = .280 • Reports w/o Hx were of higher quality • t = -6.11, p < .001, d = .70 • Reports w/o Hx: higher percentage of reports above 80% criterion • χ² = 17.51, p < .001
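The results slides repeat the same comparison pattern: a test for equal variances, an independent-samples t-test, Cohen's d, and a chi-square test on the proportion of reports at or above the 80% criterion. The slides do not name the exact variance test; the sketch below uses Levene's test as a plausible stand-in, switches to Welch's t-test when variances are unequal, and is illustrative rather than a reconstruction of the study's actual analysis code.

```python
# Hedged sketch of the group-comparison workflow implied by the results
# slides; all function and variable names are illustrative.
import numpy as np
from scipy import stats

def compare_groups(qc_a, qc_b, criterion=80.0):
    qc_a, qc_b = np.asarray(qc_a, float), np.asarray(qc_b, float)

    # 1. Equal-variance check (assumption: Levene's test)
    f_stat, f_p = stats.levene(qc_a, qc_b)
    equal_var = f_p >= .05

    # 2. Independent-samples t-test (Welch's version if variances differ)
    t_stat, t_p = stats.ttest_ind(qc_a, qc_b, equal_var=equal_var)

    # 3. Cohen's d using a pooled standard deviation
    n_a, n_b = len(qc_a), len(qc_b)
    pooled_sd = np.sqrt(((n_a - 1) * qc_a.var(ddof=1) +
                         (n_b - 1) * qc_b.var(ddof=1)) / (n_a + n_b - 2))
    d = (qc_a.mean() - qc_b.mean()) / pooled_sd

    # 4. Chi-square on counts at/above vs. below the 80% criterion
    table = [[(qc_a >= criterion).sum(), (qc_a < criterion).sum()],
             [(qc_b >= criterion).sum(), (qc_b < criterion).sum()]]
    chi2, chi_p, _, _ = stats.chi2_contingency(table)

    return {"F": f_stat, "F_p": f_p, "t": t_stat, "t_p": t_p, "d": d,
            "chi2": chi2, "chi2_p": chi_p}
```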

  20. Hypothesis 1: Discussion • Overall, report quality was poor • Comparable to other studies • Hx Elements low • Psych Hx & Substance Abuse Hx • Ethical Elements low • Pervasive mediocrity • Most reports did, however, address the legal questions being asked and provided a rationale for the opinion

  21. Hypothesis 2 • Mean overall quality scores for forensic assessment reports would not differ significantly whether they are written by community-based private psychologists or psychiatrists.

  22. Hypothesis 2: Results • Confirmed • Psychologists • Range 35.2 to 96.3 • Mean 65.82 (SD = 15.52) • 20% of reports above 80% criterion • Psychiatrists • Range 53.7 to 100 • Mean 70.46 (SD = 10.96) • 20% of reports above 80% criterion

  23. Hypothesis 2: Results • Psychologists vs. Psychiatrists • Two distributions had unequal variances • F = 7.28, p = .008 • No difference in report quality • t = -1.72, p = .088, d = .35 • No difference in percentage of reports above 80% criterion • χ² = .00, p = 1.0

  24. Hypothesis 2: Discussion • In Hawaii, psychologists’ and psychiatrists’ reports are of equivalent quality • Supports recent findings of equivalence • Counters the belief that psychologists’ work is second-rate • Item-level rates are interesting • Possible future research

  25. Hypothesis 3 • Mean overall quality scores of forensic assessment reports written by Courts and Corrections FMHE would be significantly higher than forensic assessment reports written by community-based FMHE.

  26. Hypothesis 3: Results • Not Confirmed • Community-Based FMHE • Range 35.2 to 100 • Mean 68.14 (SD = 13.57) • 20% of reports above 80% criterion • Courts & Corrections FMHE • Range 21.7 to 100 • Mean 70.55 (SD = 18.08) • 36% of reports above 80% criterion

  27. Hypothesis 3: Results • Community-based vs. Courts & Corrections FMHE • Two distributions had unequal variances • F = 7.65, p = .006 • No difference in report quality • t = -.833, p = .407, d = .15 • Courts & Corrections: higher percentage of reports above 80% criterion • χ² = 4.51, p = .034

  28. Hypothesis 3 Discussion • Only difference is percentage of reports above 80% criterion • Possible factors • Time/Fee • Receive training • Standardized report format • More proficient

  29. Hypothesis 4 • Mean overall quality scores for forensic assessment reports written by board-certified FMHE would be significantly higher than the mean overall quality scores for reports written by non-board certified FMHE.

  30. Hypothesis 4: Results • Inconclusive • Board Certified • Range 22 to 82 • Mean 54.66 (SD = 16.75) • 11% of reports above 80% criterion • Non Board Certified • Range 35 to 100 • Mean 69.85 (SD = 14.70) • 26% of reports above 80% criterion

  31. Hypothesis 4 Discussion • The sampling strategy yielded a low N • Because of the low sample size of board-certified FMHE, these results are of little value • All board-certified reports came from a single FMHE • Three of the nine reports rendered “no opinion”

  32. Hypothesis 5 • Mean overall quality scores for forensic assessment reports written after the three-day forensic examiner training in March 2005 by FMHE who attended the training would be significantly higher than those written by FMHE who did not attend.

  33. Hypothesis 5: Results • Confirmed* • All FMHE after March 2005 training • Attendees • Range 40 to 100 • Mean 74 (SD = 16.12) • 40% of reports above 80% criterion • Non Attendees • Range 42 to 88 • Mean 64 (SD = 11.84) • 12% of reports above 80% criterion

  34. Hypothesis 5: Results • All FMHE after training: Attendees vs. Non-Attendees • Two distributions had unequal variances • F = 6.65, p = .012 • Attendees’ reports were higher quality • t = 3.19, p = .002, d = .70 • Attendees’ reports: higher percentage of reports above 80% criterion • χ² = 7.51, p = .006

  35. Hypothesis 5: Results • All FMHE before March 2005 training • Attendees • Range 39 to 100 • Mean 69 (SD = 15.11) • 26% of reports above 80% criterion • Non-Attendees • Range 22 to 78 • Mean 59 (SD = 14.51) • None of reports above 80% criterion

  36. Hypothesis 5: Results • All FMHE before March 2005 training: Attendees vs. Non-Attendees • Two distributions had equal variances • F = .021, p = .885 • Attendees’ reports were higher quality • t = 2.20, p = .032, d = .94 • Attendees’ reports: higher percentage of reports above 80% criterion • χ² = 6.05, p = .014

  37. Hypothesis 5: Post Hoc Rationale • Two groups differed after training • Two groups differed before training • Decided to look at attendees • Before vs. After • Separated out by groups • All FMHE (entire sample) • Community-based • Courts & Corrections

  38. Hypothesis 5: Post Hoc Results • All FMHE attendees before vs. after March 2005 training • Before • Range 22 to 100 • Mean 67 (SD = 17.49) • 25% of reports above 80% criterion • After • Range 40 to 100 • Mean 74 (SD = 16.12) • 40% of reports above 80% criterion

  39. Hypothesis 5: Post Hoc Results • All FMHE attendees before vs. after March 2005 training • Two distributions had equal variances • F = .137, p = .713 • No difference in report quality • t = 1.93, p = .057, d = .44 • No difference in percentage of reports above 80% criterion • χ² = 2.10, p = .147

  40. Hypothesis 5: Post Hoc Results • Community Based FMHE attendees before vs. after March 2005 training • Before • Range 35 to 82 • Mean 61 (SD = 13.40) • 12% of reports above 80% criterion • After • Range 56 to 100 • Mean 79 (SD = 13.41) • 50% of reports above 80% criterion

  41. Hypothesis 5: Post Hoc Results • Community-Based FMHE attendees before vs. after March 2005 training • Two distributions had equal variances • F = .001, p = .971 • After-training reports were higher quality • t = 4.17, p < .001, d = 1.35 • After-training reports: higher percentage of reports above 80% criterion • χ² = 6.30, p = .012

  42. Hypothesis 5: Post Hoc Results • Courts & Corrections FMHE attendees before vs. after March 2005 training • Before • Range 22 to 100 • Mean 72 (SD = 19.99) • 38% of reports above 80% criterion • After • Range 40 to 97 • Mean 70 (SD = 16.89) • 34% of reports above 80% criterion

  43. Hypothesis 5: Post Hoc Results • Courts & Corrections FMHE attendees before vs. after March 2005 training • Two distributions had equal variances • F = .175, p = .677 • No difference in report quality • t = .376, p = .709, d = .10 • No difference in percentage of reports above 80% criterion • χ² = .069, p = .793

  44. Hypothesis 5: Post Hoc Rationale • All FMHE attendees before vs. after • No effect • Community-Based attendees before vs. after • After-training reports were better • Courts & Corrections attendees before vs. after • No effect • Training had a different effect on the three groups, so further analysis by group was needed to accurately assess the training effect

  45. Hypothesis 5: Post Hoc Results • Community-Based FMHE after March 2005 training: Attendees vs. Non Attendees • Attendees • Range 56 to 100 • Mean 79 (SD = 13.41) • 50% of reports above 80% criterion • Non Attendees • Range 42 to 85 • Mean 63 (SD = 11.15) • 9% of reports above 80% criterion

  46. Hypothesis 5: Post Hoc Results • Community-Based FMHE after March 2005 training: Attendees vs. Non-Attendees • Two distributions had equal variances • F = 2.19, p = .145 • Attendees’ reports were of higher quality • t = 4.88, p < .001, d = 1.33 • Attendees’ reports: higher percentage of reports above 80% criterion • χ² = 11.20, p = .001

  47. Hypothesis 5: Post Hoc Results • Courts & Corrections FMHE after March 2005 training: Attendees vs. Non-Attendees • All Courts & Corrections FMHE attended the March 2005 training, so no analysis could be conducted.

  48. Hypothesis 5: Post Hoc Results • All Non-Attendees: Before vs. after March 2005 training • Before • Range 45 to 78 • Mean 63 (SD = 9.87) • None of the reports above 80% criterion • After • Range 42 to 88 • Mean 64 (SD = 11.82) • 12% of reports above 80% criterion

  49. Hypothesis 5: Post Hoc Results • All Non-Attendees: Before vs. after March 2005 training • Two distributions had equal variances • F = .122, p = .728 • No difference in report quality • t = .227, p = .822, d = .07 • No difference in percentage of reports above 80% criterion • χ² = 2.24, p = .134

  50. Hypothesis 5: Post Hoc Results • Community-Based Non-Attendees: Before vs. after March 2005 training • Before • Range 45 to 78 • Mean 63 (SD = 9.87) • None of reports above 80% criterion • After • Range 42 to 85 • Mean 66 (SD = 11.15) • 9% of reports above 80% criterion
