Forensic Neuropsychology Scope and Limits of Neuropsychological Testimony June 2, 2014
Class Topics • Description of criminal and civil process • Professional and ethical issues for neuropsychologists and other mental health experts • Issues discussions: • Is neuropsychology based on sufficiently sound science to be useful in court? • What methods are appropriate to meet scientific evidence standards? • What can (and can’t) we testify to?
Criminal Proceedings • Detention for probable cause (40-50% certainty) • Booking at police station • Initial hearing 48-72 hrs • Defense motions and discovery • Prima Facie showing – “on first look”, or “on its face”; evidence presented to Grand Jury that leads to indictment • Arraignment – notified of charges, enters plea, bail set • Trial • Disposition • Appeal • Collateral Attack – challenging ruling by filing another case • Dispositional Review • Post-sentence Treatment Hearings
Civil Proceedings • Complaint • Pretrial Motions • Discovery • pretrial depositions • interrogatories • requests to produce • Settlement or Trial • civil jury returns majority verdict • federal court: unanimous unless stipulated beforehand
Common Law and Statutory Duties of Evaluator I: Confidentiality • If you’re an expert • Normal caveats regarding confidentiality do not normally apply; has implications for outcome of evaluation • If you’re a healthcare provider • When defendant raises mental issue, generally forfeits right to doctor-patient privilege, but this is nuanced • Privilege generally applies to the extent to which the session was “intended” to be confidential • What patients should be told (see next slide) • Who is the client?
Common Law and Statutory Duties II: Clarifying Relationships • Must make a clear statement of loyalties and obligations • Must specify: • who has asked you to perform evaluation • purpose of evaluation • who will see report • potential future activities (e.g., deposition, trial testimony) • what is expected of examinee • Should assess patient’s understanding of these issues, in their own words
Common Law and Statutory Duties III: Tarasoff Duty • Tarasoff case review • Duty to warn in the event of dangerousness • General Guidelines: • person in danger must be named or evident (doesn’t apply to overly general threats) • notification of police, not necessarily direct warning of person, is permissible • Three varieties: mandatory, permissible, no law • Question: Do you warn in a non-Tarasoff state? • Variations: warning re HIV/AIDS, genetically transmissible disease?
Common Law and Statutory Duties IV: Freedom of Choice to Participate • Although normally a part of clinical service delivery, informed consent is (technically) minimally relevant to forensic evaluations, though I attempt to get it • Consent to participate is patient’s choice, but (s)he should be made aware of consequences of their choice; there may be law-sanctioned consequences • If patient declines: • outline possible sanctions • talk to attorney • advise whether a report will be sent nonetheless • don’t use scare tactics (“you’ll fry for this!”) • Issue: Is patient “compelled” to participate?
Common Law and Statutory Duties V: Invasion of Privacy • Avoid undue invasion of privacy, whatever the context • Avoid two types of intrusions: • avoid eliciting damaging but irrelevant info (e.g., eliciting info about sexual behavior that is not relevant to the legal issue) • avoid addressing other forensic issues not raised in the referral question (e.g., in determining circumstances surrounding event, uncovering information about a past unsolved crime) • Consider whether input is “probative” vs. “prejudicial” (FRE 403)
Common Law and Statutory Duties VI: Records and the Public Domain • Exchange/release of raw data to adversary • “Test data” v. “Test materials” • Test data (9.04) is “raw and scaled scores, client/patient responses to test questions or stimuli, and psychologists’ notes and recordings concerning client/patient statements and behavior during an examination. Those portions of test materials that include client/patient responses are included in the definition of ‘test data.’ Pursuant to a client/patient release, psychologists provide test data to the client/patient or other persons identified in the release. Psychologists may refrain from releasing test data to protect a client/patient or others from substantial harm or misuse or misrepresentation of the data or the test, recognizing that in many instances release of confidential information under these circumstances is regulated by law.” • Test materials (9.11) “refers to manuals, instruments, protocols, and test questions or stimuli and does not include test data as defined in Standard 9.04. Psychologists make reasonable efforts to maintain the integrity and security of test materials and other assessment techniques consistent with law and contractual obligations, and in a manner that permits adherence to this Ethics Code.”
Specialty Guidelines for Forensic Psychologists (2008) • Responsibilities • Competence • Diligence • Relationships • Fees • Notification, Assent, Consent, and Informed Consent • Conflicts in Practice • Privacy, Confidentiality and Privilege • Methods and Procedures • Assessment • Professional and Other Public Communications
Specialty Guidelines (cont’d) • Public & Professional Communication • restriction of psychological data to colleagues • treat colleagues with respect • avoid making statements about legal proceedings • present findings fairly; avoid distortion • actively disclose all sources of information • observations, opinions, conclusions must be distinguished from legal facts, opinions, and conclusions
Notes on APA Ethics Code • Principles (aspirational goals) vs. standards (enforceable rules of conduct) • Validity and normative concerns • Documentation should be appropriate for the assessment task (higher level of specificity typically required in forensic settings?) • Avoid harm: this is complex, since any forensic opinion will involve “harm” to one party; therefore, avoid unnecessary harm
Criticisms of Forensic Psychology (Morse, 1982) • Mental health professionals have less to contribute than is commonly supposed • Most lay people are quite competent to make judgments regarding mental disorder • All mental health law cases involve primarily moral and social issues, and only secondarily scientific ones • Overreliance on experts promotes the idea that such moral and social issues are, in fact, scientific ones • Professionals should recognize this difference and should refrain from drawing social and moral conclusions about which they are not expert
Method Skeptic Position (Faust, Ziskin, Hiers, et al.) • Neuropsychological evidence is usually of negligible value in resolving legal issues • Neuropsychology doesn’t map onto the forensic context. Three examples: • brain damage doesn’t directly relate to legal issues such as competency or insanity • clouded objectivity in the adversarial context • scientific knowledge in neuropsychology relates minimally to questions of legal interest
Chris Sege: Method skeptic presentation
Admissibility challenges • Fixed v. Flexible Battery issue (Baxter v. Temple, 2005) • Causation testimony • Application of symptom validity science • Unqualified experts
Amanda Garcia: Fixed/flexible battery debate presentation
Fixed v. Flexible Batteries • General acceptance – what percentage of practitioners say they use flexible batteries? • Peer review • Publication • Known (or potentially known) rates of error • Existence of standards • Testing
Chapple v. Ganger (1994) • Fixed v. flexible battery pitted against one another • Court accepted fixed battery and only portions of flexible battery (both were admitted, but greater weight was given to fixed) • Court held that entire reasoning process, not just part of the process upon which the neuropsychologist derives a conclusion, must be scientific (Question: how does the reasoning process for fixed and flexible batteries differ?) • Nothing beats GOOD PRACTICE
Should the fixed vs. flexible battery issue worry you? • Likely not; judges are not journal editors • Key issues • Generally accepted? • Known, or potentially known, error rate? • Peer review? • Existence of standards? • More about error rate on the next slide • Everything I say has an exception! So be prepared! • Key issue: Know your interpretive strategy and its strengths and weaknesses and be able to defend it strongly
HRNB Indices • Average Impairment Rating (AIR) • General Neuropsychological Deficit Scale (GNDS) • Summarizes 42 variables • Establishment of cutoffs for “brain damage” • Considered by many the “gold standard” in fixed battery approach for diagnostic accuracy and efficiency
HRNB v. Flexible on error rate • Main HRNB data: GNDS Scale (Reitan & Wolfson) • Only works for subjects matching the brain-damage and demographic characteristics of Reitan & Wolfson’s samples • Available data do not demonstrate a reliable superiority of HRNB over flexible battery
Larrabee et al., 2008 • Compared HRB subtests (Category, TPT, FT, Seashore Rhythm, Speech Sounds) to other variables available in the Halstead-Russell Neuropsychological Evaluation System • SVT-screened (N=101 brain injured, 95 medical controls) • Two separate logistic regressions to predict group
AUC: 86% for AFB; 83% for HRB
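As a purely illustrative sketch of this kind of comparison (simulated data, not the Larrabee et al. dataset), the snippet below fits one logistic regression per battery to predict group membership and compares the resulting AUCs; the group sizes match the slide, but every score and effect size is invented.

```python
# Illustrative sketch only: compare two test batteries on how well each
# discriminates brain-injured patients from medical controls via logistic
# regression and AUC. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_injured, n_control = 101, 95                      # group sizes from the slide
y = np.r_[np.ones(n_injured), np.zeros(n_control)]  # 1 = brain injured, 0 = control

def simulate_battery(n_tests, shift):
    """Hypothetical T-scores; injured group shifted downward to mimic impairment."""
    scores = rng.normal(50, 10, size=(len(y), n_tests))
    scores[y == 1] -= shift
    return scores

hrb_scores = simulate_battery(5, shift=8)   # stand-in for 5 HRB subtests
afb_scores = simulate_battery(5, shift=9)   # stand-in for 5 flexible-battery measures

for label, X in [("HRB", hrb_scores), ("AFB", afb_scores)]:
    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"{label} AUC = {auc:.2f}")
```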
Rohling Interpretive Method (RIM) • Converting all scores to age-adjusted T-scores as a common metric • Calculating two test battery means as a measure of global functioning (all items, composite items) • Calculating the percentage of tests that fall in the impaired range • Calculating pre-morbid intelligence using regression methods (OPIE) • Conducting one-sample t-tests comparing each test battery mean to the estimate of pre-morbid general ability
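A minimal sketch of the core RIM calculations, assuming scores are already age-adjusted T-scores and that an estimated premorbid general ability (EPGA) has been obtained separately (e.g., via OPIE); the scores, cutoff, and EPGA below are hypothetical.

```python
# Minimal RIM-style summary: overall test battery mean (OTBM), percent of
# tests in the impaired range, and a one-sample t-test of the battery mean
# against the premorbid estimate. All values are hypothetical.
import numpy as np
from scipy import stats

t_scores = np.array([42, 38, 45, 51, 36, 40, 47, 33, 44, 39])  # hypothetical battery
epga = 52             # hypothetical estimated premorbid general ability (T-score)
impaired_cutoff = 40  # common convention: T < 40 (about 1 SD below the mean)

otbm = t_scores.mean()                                  # overall test battery mean
pct_impaired = (t_scores < impaired_cutoff).mean() * 100

# One-sample t-test of the battery mean against the premorbid estimate
t_stat, p_val = stats.ttest_1samp(t_scores, epga)

print(f"OTBM = {otbm:.1f}, % impaired = {pct_impaired:.0f}%")
print(f"t({len(t_scores) - 1}) = {t_stat:.2f}, p = {p_val:.3f}")
```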
Rohling et al (2003) • Applied RIM to HRNB data (114 patients, 73 ‘pseudoneurological’ controls) that had been used in prior cross-validation research on the GNDS • Found that OTBM, ITBM, %TI had similar sensitivity, specificity, PPV and NPV as traditional HRNB indices • Provides a benchmark against which the results of a study with a flexible battery could be measured Rohling, M.L., Williamson, D.J., Miller, L.S., & Adams, R.L. (2003). Using the Halstead-Reitan Battery to diagnose brain damage: A comparison of the predictive power of traditional techniques to Rohling’s Interpretive Method. The Clinical Neuropsychologist, 17, 531-543.
Criticisms of RIM • Comparing performance in specific cognitive areas to EPGA (estimated premorbid general ability) • Problems in comparing composite scores to a normative mean of 50 (SD of composite doesn’t often equal 10) • Problems in covariance in comparing values of cognitive ability areas within the RIM • Need large-scale co-normed batteries to address these issues Palmer, B.W., Appelbaum, M.I. & Heaton, R.K. (2004). Rohling’s Interpretive Method and inherent limitations on the flexibility of “flexible batteries”. Neuropsychology Review, 14, 171-176.
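To make the composite-SD criticism concrete, here is a small sketch, assuming equal pairwise correlations among the component tests, showing that the SD of a mean of k T-scores equals 10 only when the tests are perfectly correlated:

```python
# SD of a composite (mean) of k T-scores with equal pairwise correlation r:
# SD_composite = 10 * sqrt((1 + (k - 1) * r) / k)
import math

def composite_sd(k, r, sd=10.0):
    return sd * math.sqrt((1 + (k - 1) * r) / k)

for r in (0.3, 0.5, 0.8, 1.0):
    print(f"k=10 tests, r={r}: composite SD = {composite_sd(10, r):.1f}")
# Only r = 1.0 yields SD = 10; with r = 0.5 the composite SD is about 7.4,
# so comparing a composite to a normative SD of 10 misstates how unusual it is.
```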
Jacob Jones: Causation testimony presentation
Current State • Most courts conduct fact-based inquiry before accepting opinions regarding physical cause • Courts more sympathetic on causation, particularly for MVA cases • Trial judges generally given “gatekeeper” function
Limitations of Neuropsychological Testimony in FL – mostly of historic value • Executive Car & Truck Leasing v. DeSerio (1985): • Initial trial had led to $1.2M verdict in favor of DeSerio • Defendants appealed, contending that the trial judge erred in allowing a CHP to testify that DeSerio had suffered organic brain damage as a result of the accident (cause) • 4th DCA determined that you don’t have to be a medical doctor to testify to the existence of brain damage, and that medical testimony is not always necessary to show causation • Wording of the ruling suggests that determination must occur case-by-case and must depend on the nature and extent of the psychologist’s knowledge • Such determinations may affect the WEIGHT given to such testimony, not sheer admissibility
Cont’d • School Board of Broward County v. Cruz • Receded from rulings prohibiting psychologists from testifying as to cause
Daubert Ruling • District court threw out testimony based on animal studies because it was “generally accepted” that only human studies would do • Would not admit a re-evaluation of existing data conducted by a plaintiff witness because it had not been “peer reviewed” but prepared only for trial • 9th Circuit affirmed; then went to USSC
Daubert Issues • Seen as vehicle by which bogus scientific testimony (so-called “junk science”) could be addressed • 22 amicus briefs submitted (8 for petitioner; 14 for respondent) • Key questions: “Is this scientific?” and “What constitutes scientific evidence?”
Trilogy* + FRE = • Knowledge Test: expert has “scientific, technical or other specialized knowledge” (what do you know?) • Helpfulness Test: must assist trier of fact (is what you know useful?) • Qualifications Test: expert must have “specialized knowledge, skill, training, or education” (how did you come to know what you know?) *Daubert “Trilogy”: Daubert v. Merrell Dow Pharmaceuticals (1993); General Electric v. Joiner (1997); Kumho Tire Co. v. Carmichael (1999)
Frye v. Daubert compared • Frye: scientific principle or discovery must be “sufficiently established to have gained general acceptance in the particular field in which it belongs”; pretrial ruling; no real systematic test, and no direct test of whether “general acceptance” is correct • Daubert: reasoning or methodology underlying testimony must be “scientifically valid”; preliminary ruling based on FRE; 4 indicia (some say 6): “widespread acceptance,” peer review, publication, known/potential rates of error, testing, existence of standards
Limitations of Neuropsychological Testimony • Testifying to the ‘ultimate legal issue’ – you are not an attorney or a judge! • competency • mental state at offense • damages • Testifying to the physical aspects of brain damage – you are not clairvoyant! • extent of brain damage • prognosis/course – example line of questioning
Dealing with Limitations • Need to be cognizant of both sides of brain-behavior equation • Need to recognize lawyer’s attempts to get you to testify medically • Need to recognize lawyer’s attempts to get you to recognize (defer to) “authoritative” sources • Issue of “instant” causation; emphasize behavioral, rather than physiological, aspects of damage (but is this outdated re: 4th DCA?) • Be aware of physical findings in case
Operating Characteristics of Tests • Sensitivity: If a person truly has a disorder, how likely is the test to pick it up? • Specificity: If a person truly does not have the disorder, how likely is it that the test will be normal? • Positive Predictive Power: Knowing that there is a positive sign, how likely is the person to have the disorder? • Negative Predictive Power: Knowing that there is a normal test result, how likely is the person to be healthy?
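A minimal sketch of these four definitions in code, using the 2 x 2 cell counts that underlie the Benton TFR example on the following slides (a = true positives, b = false positives, c = false negatives, d = true negatives):

```python
# Operating characteristics from the four cells of a 2 x 2 classification table.
def operating_characteristics(a, b, c, d):
    return {
        "sensitivity": a / (a + c),   # P(test positive | disorder present)
        "specificity": d / (b + d),   # P(test negative | disorder absent)
        "ppv":         a / (a + b),   # P(disorder present | test positive)
        "npv":         d / (c + d),   # P(disorder absent  | test negative)
    }

# Benton TFR example counts from the next slides
print(operating_characteristics(a=19, b=10, c=17, d=276))
# sensitivity ~ .53, specificity ~ .97, PPV ~ .66, NPV ~ .94 (rounded)
```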
Operating Characteristics of Benton TFR at Prevalence = .11 • Prevalence = 36/322 = .11 • Sensitivity = 19/36 = .53 • Specificity = 276/286 = .965 • Positive Predictive Power = 19/29 = .66 • Negative Predictive Power = 276/293 = .94 • Overall Predictive Power = (19+276)/322 = .92
Operating Characteristics of Benton TFR at Prevalence = .02 • Prevalence = 6/322 = .02 • Sensitivity = 3/6 = .50 • Specificity = 305/316 = .965 • Positive Predictive Power = 3/14 = .21 • Negative Predictive Power = 305/308 = .99 • Overall Predictive Power = (3+305)/322 = .96
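The collapse in positive predictive power between the two prevalence levels follows directly from Bayes' rule. The sketch below holds sensitivity and specificity roughly fixed at the values above and recomputes PPV at the two base rates; small discrepancies from the slide figures reflect rounding of the cell counts.

```python
# PPV as a function of prevalence for fixed sensitivity and specificity.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.11, 0.02):
    print(f"prevalence = {prev:.2f}: PPV = {ppv(0.53, 0.965, prev):.2f}")
# prevalence = 0.11: PPV ~ 0.65; prevalence = 0.02: PPV ~ 0.24
```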
2 x 2 classification table for the Benton TFR example (disorder present/absent by test positive/negative): a = true positives = 19, b = false positives = 10, c = false negatives = 17, d = true negatives = 276 • Sensitivity = a/(a+c) = 19/36 = .53 • Specificity = d/(b+d) = 276/286 = .97 • Positive Predictive Power = a/(a+b) = 19/29 = .66 (also known as PTL+) • Negative Predictive Power = d/(c+d) = 276/293 = .94 • Prevalence = (a+c)/(a+b+c+d) = 36/322 = .11 (pretest probability of d/o) • Pre-test Odds = PTP/(1-PTP) = .11/.89 = .12 (.12:1) • Post-Test Odds of mTBI given Positive Result = PPP/(1-PPP) = 19/10 = 1.9 (>10 is strong) • Post-Test Odds of mTBI given Negative Result = (1-NPP)/NPP = 17/276 = .06 • Likelihood Ratio of Positive Test = [a/(a+c)]/[b/(b+d)] = Sens/(1-Spec) = .528/.035 = 15.1 (15.1:1) • Likelihood Ratio of Negative Test = [c/(a+c)]/[d/(b+d)] = (1-Sens)/Spec = .472/.965 = .49 (.49:1) • Diagnostic Odds Ratio = LR+/LR- = 15.1/.49 = 30.8
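The same quantities, computed directly from the four cell counts as a sketch; note that pre-test odds multiplied by the likelihood ratio gives the post-test odds.

```python
# Likelihood ratios, pre/post-test odds, and the diagnostic odds ratio
# for the Benton TFR 2 x 2 example (TP, FP, FN, TN).
a, b, c, d = 19, 10, 17, 276

sens, spec = a / (a + c), d / (b + d)
prevalence = (a + c) / (a + b + c + d)
pretest_odds = prevalence / (1 - prevalence)

lr_pos = sens / (1 - spec)                 # ~15.1
lr_neg = (1 - sens) / spec                 # ~0.49
dor = lr_pos / lr_neg                      # equals (a*d)/(b*c), ~30.8

# Post-test odds = pre-test odds x likelihood ratio
posttest_odds_pos = pretest_odds * lr_pos  # ~1.9, i.e., a/b
posttest_odds_neg = pretest_odds * lr_neg  # ~0.06, i.e., c/d

print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, DOR = {dor:.1f}")
print(f"post-test odds (positive) = {posttest_odds_pos:.2f}, "
      f"post-test odds (negative) = {posttest_odds_neg:.3f}")
```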
Calculating Premorbid Ability • Demographic Regression Equations • Barona • Combined WAIS subtest-Demographic Regression Equations • Oklahoma Premorbid IQ Estimate (OPIE) • Reading Tests • “Hold” Tests • Lezak’s “best performance” methods – don’t use
OPIE-3 • Combines current performance on WAIS-III subtests with demographic variables (age, education, gender, region, ethnicity) • Five variations • OPIE-3(4ST): V, I, MR, PC • OPIE-3(2ST): V, MR • OPIE-3V: V, for use with lateralized NV-deficit patients • OPIE-3P: MR, PC • OPIE-3MR: MR, for use with lateralized V-deficit patients
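Purely as a schematic of what a subtest-plus-demographics regression estimate looks like; the coefficients and predictors below are hypothetical placeholders, not the published OPIE-3 weights.

```python
# Schematic only: general form of a subtest-plus-demographics regression
# estimate of premorbid FSIQ. Coefficients are invented placeholders, NOT the
# published OPIE-3 weights; consult the OPIE-3 papers for the actual equations.
def estimate_premorbid_fsiq(vocab_raw, matrix_raw, age, education_years,
                            b0=60.0, b_vocab=0.45, b_matrix=0.70,
                            b_age=0.05, b_edu=1.2):
    """Toy two-subtest (Vocabulary + Matrix Reasoning) premorbid estimate."""
    return (b0 + b_vocab * vocab_raw + b_matrix * matrix_raw
            + b_age * age + b_edu * education_years)

# Hypothetical examinee: Vocabulary raw = 45, Matrix Reasoning raw = 20,
# age 35, 14 years of education.
print(round(estimate_premorbid_fsiq(45, 20, 35, 14), 1))
```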
Guarding Against False Positives: Consistency Analysis • Consistency of results between/within domains • Consistency with known syndromes • example: “hemi-anomia” • Consistency with injury severity • Consistency with other aspects of behavior • e.g. memory abilities during vs. apart from formal testing
Other Professional Practice Issues • Fees, retainers, and payment agreements • “contingency fees” vs. letters of protection • differing fee structure for forensic vs. nonforensic cases? • Allowing forensic work to change your “standard of practice” • Observers, video/audiotape, or court reporters • test selection • Use of technicians (Division 40 Task Force, new CPT codes) • Records release • Relevant principles from Ethics Code • approach to problem