Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports • Presented at the Annual Forensic Examiner Training, Honolulu, Hawaii, March 15, 2005.
Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports • Marvin W. Acklin, Ph.D., Department of Psychiatry, JABSOM • Reneau C. Kennedy, Ed.D., Adult Mental Health Division • Richard Robinson, Argosy University • Bria Dunkin, Argosy University • Joshua Dwire, M.S., Argosy University • Brian Lees, Argosy University
Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports • With special assistance from: • The Honorable Marsha J. Waldorf • Judge Waldorf’s Law Clerks • Teresa Morrison • Kirsha Durante • Crystal Mueller, AMHD, Forensic Services Research
Logic • “Studies have uniformly concluded that judges typically defer to the opinions of examiners, with rates of examiner-judge agreement often exceeding 90%. Judges typically rely solely on examiners’ written reports and, hence, the quality of the data and reasoning presented in such reports become a critical part of the CST adjudication process” (Skeem & Golding, 1998, p. 357).
Method • We followed the rationale of Skeem and Golding (Skeem, Golding, Cohn, & Berge, 1998; Skeem & Golding, 1998), who identified a number of factors linked to the quality of forensic reports: the methods used, the opinions offered, and the rationale given for those opinions. We examined reports for these factors.
Purpose • To examine a representative sample of three panel reports submitted to the 1st Circuit Court Judiciary. • To assess factors related to quality and reliability.
Method: Study 1 • Sample • We examined 50 felony cases adjudicated in the 1st Circuit Court. • Inclusion criteria were a full set of three panel reports and a finding by the court regarding fitness. • The 50 cases were drawn at random from the 416 files stored at the 1st Circuit Court (a sampling sketch follows).
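A minimal sketch of this kind of draw, assuming placeholder file IDs (the slides do not describe how the randomization was actually performed):

```python
import random

random.seed(2005)  # arbitrary seed, for a reproducible illustration

# Hypothetical IDs for the 416 files stored at the 1st Circuit Court.
all_case_files = [f"case-{i:03d}" for i in range(1, 417)]

# Draw 50 cases at random, without replacement, as described above.
sampled_cases = random.sample(all_case_files, 50)
print(len(sampled_cases))  # 50
```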
Method: Studies 2 & 3 • Utilizing the same selection procedure as the larger study, we examined two subsets of the 416 files. • Three panel examinations prior to a finding of Not Guilty by Reason of Insanity (NGRI: n = 10). • Three panel examinations ordered after a request for Conditional Release (CR: n = 10).
Method: Study 4 • Records from the Prosecuting Attorney, City & County of Honolulu, for 2000-2004.
Procedures: Inter-rater Agreement • A coding manual of pertinent items was created, and its terminology was refined. • Pre-coding training sessions, using three reports, were conducted to refine coders’ understanding of the manual’s items and to familiarize them with the formats of the forensic reports. • An inter-rater reliability trial (IRRT) was then conducted using five files (15 reports).
Procedure: Inter-rater Agreement • Results of the 1st IRRT: • Mean kappa .80 • Range .13 to 1.0
Examiners 1,2: kappa .87, range .18 to 1.0
Examiners 1,3: kappa .77, range .13 to 1.0
Examiners 1,4: kappa .86, range .18 to 1.0
Examiners 2,3: kappa .74, range .26 to 1.0
Examiners 2,4: kappa .84, range .18 to 1.0
Examiners 3,4: kappa .72, range .13 to 1.0
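For reference, the agreement coefficients reported throughout are Cohen's kappa values, which correct observed agreement for agreement expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of agreement between two raters and $p_e$ is the proportion of agreement expected by chance from each rater's marginal frequencies.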
Procedure: Inter-rater Agreement • A second inter-rater reliability trial (IRRT) was conducted using five files (15 reports) to refine the coding criteria. • Results of the 2nd IRRT: • Mean kappa .95 • Range .55 to 1.0
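As an illustrative sketch only, pairwise kappas of the kind tabulated above can be computed from raters' item-level codes. The coder IDs and Y/N codes below are invented, and the helper simply implements the standard formula:

```python
from collections import Counter
from itertools import combinations

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters' nominal codes on the same items."""
    n = len(codes_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Invented data: four coders' Y/N codes for one item across 15 reports.
coders = {
    1: "YYYNNYYYNYYNYYN",
    2: "YYYNNYYYNYNNYYN",
    3: "YYNNNYYYNYNNYYY",
    4: "YYYNNYYYNYYNYNN",
}

for (i, a), (j, b) in combinations(coders.items(), 2):
    print(f"Examiners {i},{j}: kappa = {cohen_kappa(a, b):.2f}")
```

If a dependency is acceptable, scikit-learn's `cohen_kappa_score` (in `sklearn.metrics`) computes the same statistic; the hand-rolled version above is used only to keep the sketch self-contained.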
Inter-rater Agreement Coefficients (Second IRRT): 5 Cases, 3 Raters
Number of Different Examiners • 32 Examiners
Credential of Examiner (N = 416 cases; n = 150 reports)
Classification of Criminal Offence (N = 416 cases; n = 150 reports)
Case Caption Visible? (N = 416 cases; n = 150 reports)
Charge Visible? (N = 416 cases; n = 149 reports)
Examiner Opinion on Competency to Stand Trial (CST) (N = 416 cases; n = 150 reports)
Examiner Rationale for CST Opinion (N = 416 cases; n = 150 reports)
Mention of Specific Impairment in Relation to CST Opinion (N = 416 cases; n = 150 reports)
Examiner Opinion Concerning Criminal Responsibility (N = 416 cases; n = 150 reports)
Examiner Rationale for Criminal Responsibility Opinion (N = 416 cases; n = 150 reports)
Examiner Opinion Regarding Dangerousness (N = 416 cases; n = 150 reports)
Examiner Rationale for Dangerousness Opinion (N = 416 cases; n = 150 reports)
Examiner Suggestions for Managing Dangerousness/Risk Reduction (N = 416 cases; n = 150 reports)
Evaluation Methods (N = 416 cases; n = 150 reports)
Judicial Determination of CST (N = 416 cases; n = 150 reports)
Ease of Data Extraction (N = 416 cases; n = 149 reports)
Majority/Unanimity Agreement • CST • 58% = 100% agreement • 32% = At least 2 examiners & judge agree • 4% = At least 1 examiner & judge agree • 6% = Examiners agree, Judicial determination differs
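A hypothetical sketch of how each case might fall into the categories above, given three examiner opinions and the judicial determination; the labels and the residual branch are inferred from the slide, not taken from the study's coding manual:

```python
def agreement_category(examiner_opinions, judicial_determination):
    """Bin one case by examiner-judge agreement (three examiner opinions)."""
    matches = sum(op == judicial_determination for op in examiner_opinions)
    if matches == 3:
        return "100% agreement"
    if matches == 2:
        return "at least 2 examiners & judge agree"
    if matches == 1:
        return "at least 1 examiner & judge agree"
    if len(set(examiner_opinions)) == 1:
        return "examiners agree, judicial determination differs"
    return "no agreement"  # residual branch, inferred

print(agreement_category(["fit", "fit", "unfit"], "fit"))
# -> at least 2 examiners & judge agree
```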
Inter-examiner Agreement • CST • Mean kappa .42 • Range .36 to .55 • Responsibility • Mean kappa .41 • Range .39 to .45 • Dangerousness • Mean kappa .25 • Range .14 to .36
Judge-Examiner Agreement • CST • Mean kappa .49 • Range .39 to .60 • Dangerousness • Cases in which an examiner opinion on dangerousness was not ordered were excluded from the calculation. • The judicial determination in all cases was “No Determination.” • Mean kappa .05 • Range .05 to .10
Number of Different Examiners • 18 Examiners
Credential of Examiner (N = 416 cases; n = 30 reports)
Classification of Criminal Offence (N = 416 cases; n = 30 reports)
Examiner Opinion Concerning Criminal Responsibility (N = 416 cases; n = 30 reports)
Examiner Rationale for Criminal Responsibility Opinion (N = 416 cases; n = 30 reports)
Examiner Opinion Regarding Dangerousness (N = 416 cases; n = 30 reports)
Examiner Rationale for Dangerousness Opinion (N = 416 cases; n = 30 reports)
Examiner Suggestions for Managing Dangerousness/Risk Reduction (N = 416 cases; n = 30 reports)
Evaluation Methods (N = 416 cases; n = 30 reports)
Majority/Unanimity Agreement • CST • 70% = 100% agreement • 30% = At least 2 examiners and judge agree • Dangerousness • 20% = 100% agreement • 30% = At least 2 examiners and judge agree • 40% = At least 1 examiner and judge agree • 10% = Two examiners agree; the judge and the 3rd examiner each differ
Inter-examiner Agreement • CST • Mean kappa .61 • Range .46 to .76 • Dangerousness • Cases in which no opinion on dangerousness was requested were excluded from the calculation. • Mean kappa .17 • Range -.11 to .40 (a negative kappa indicates agreement worse than chance)
Judge-Examiner Agreement • CST • Mean kappa .79 • Range .60 to 1.0 • Dangerousness • Mean kappa .24 • Range .15 to .24
Number of Different Examiners • 16 Examiners