1. The Meyers Neuropsychological Battery (MNB) and Neuronet Analysis John E. Meyers, Psy.D., ABN, ABPdN
Ron Miller, Ph.D., Brigham Young University-Hawaii
Chad Grills, Ph.D., Schofield Barracks, HI
2. Disclosures John Meyers has an interest in the MNB/MNS scoring software. He does not own any of the tests that make up the core battery for the MNB.
Ron Miller has no interest in the MNB/MNS, but he is a really nice guy.
Chad Grills also has no interest in the MNB/MNS, but is a really nice guy too.
3. Outline for this section Philosophy of MNB
Development of MNB
Norms Development
Sensitivity and Specificity
Internal Validity Checks
LOC Dose Response
CT/MRI Data
Profiles
4. Philosophy of MNB MNB began as a longer battery of tests.
Using a discriminant function: selected tests that were able to discriminate Normal vs. TBI.
The original study was done years ago.
5. Philosophy of MNB Goal was to find the best/shortest battery
Sensitive to Brain Injury
Commonly used tests that most NPs know
Originally a 6 hour battery cut to 2.5 - 3 hrs
Tests were selected not only for sensitivity but also ease of administration and scoring (i.e., Category vs. WCST)
6. Testing Order for MNB Short WAIS-III
Forced Choice (FC)
Rey Complex Figure (RCFT) - Copy
Animal Naming
3 minute recall of RCFT
COWA
Dichotic Listening
North American Adult Reading Test (NART)
Sentence Repetition
30 Minute Recall of RCFT
Recognition Trial of RCFT (Break offered)
AVLT
JOL
Boston Naming
Finger Tapping
Finger Localization
Trails A & B
Token Test
AVLT 30 minute Recall
AVLT Recognition Trial
Booklet Category Test
7. Domains used by the MNB Attention/Working Memory:
Digit Span
Forced Choice
Animal Naming
Sentence Rep
AVLT 1
Arithmetic
Processing Speed/Mental Flexibility
Digit Symbol
Dichotic Both
Trails A
Trails B
8. Domains used by the MNB Verbal Reasoning
Similarities
Information
COWA
Dichotic Left
Dichotic Right
Boston Naming
Token Test Visual Reasoning
Picture Completion
Block Design
JOL
Category
RCFT Copy
9. Domains used by the MNB Verbal Memory
AVLT Total
AVLT Immediate
AVLT Delayed
AVLT Recognition
Visual Memory
RCFT Immediate
RCFT Delayed
RCFT Recognition
10. Domains used by the MNB Motor and Sensory
Finger Tapping Dominant Hand
Finger Tapping Non-Dominant Hand
Finger Localization Dominant Hand
Finger Localization Non-Dominant Hand
11. WAIS-III or WAIS-IV Picture Completion
Digit Symbol
Similarities
Block Design
Arithmetic
Digit Span
Information
Ward 7-Subtest short form (Pilgrim, Meyers, Bayless, & Whetstone, 1999)
12. Domains Assessed Attention/Concentration/Working Memory
Processing Speed and Mental Flexibility
Verbal Reasoning (Executive ?)
Visual Reasoning (Executive ?)
Verbal Memory and New Learning
Visual Memory and New Learning
Dominant Motor and Sensory
Non-Dominant Motor and Sensory
13. Domain Consistency N = 936
Passed all validity checks
No missing data
Not involved in litigation
Calculated Domain M’s
Regression was used to predict each Domain M from all other Domain M's
14. Domain Means Correlations 1 2 3 4 5 6
1 – Premorbid .76 .71 .62 .56 .79
2 - OTBM .76 .98 .81 .82 .84
3 - DTBM .71 .98 .77 .79 .78
4 - Attent/Work Mem .64 .81 .77 .64 .69
5 – Pro Spd/Mental Flex .62 .82 .79 .64 .72
6 - Verbal Reason .79 .84 .78 .69 .72
7 - Visual Reason .68 .81 .81 .54 .64 .64
8 - Verbal Memory .53 .77 .78 .68 .50 .54
9 - Visual Memory .54 .77 .80 .53 .55 .55
10 - Dom Motor/Sensory .30 .54 .62 .37 .44 .36
11 - Nond Motor/Sensory .28 .53 .62 .31 .44 .30
All were significant, p < .001
15. Domain M’s Correlations (cont.) 7 8 9 10 11
1 - Premorbid .68 .53 .54 .30 .28
2 - OTBM .81 .77 .77 .54 .53
3 - DTBM .81 .78 .80 .62 .62
4 - Attent/Work Mem .54 .68 .53 .37 .31
5 - ProcSpd/Ment Flex .64 .50 .55 .44 .44
6 - Verbal Reasoning .64 .54 .55 .36 .30
7 - Visual Reasoning .51 .70 .41 .45
8 - Verbal Memory .51 .62 .34 .32
9 - Visual Memory .70 .62 .37 .40
10 - Dom Motor/Sen .41 .34 .37 .53
11 - Nond Motor/Sen .45 .32 .40 .53
All were significant, p < .001
16. Domains Regression Equations
Attention & Working Memory
(Verbal Reasoning) * .315
(Verbal Memory) * .273
(Processing Speed) * .193
Constant = 10.972
17. Domains Regression Equations Processing Speed/ Mental Flexibility
Verbal Reasoning * .401
Visual Reasoning * .284
Attention & Working Memory * .230
Constant = 2.434
18. Domains Regression Equations Verbal Reasoning
Processing Speed * .361
Attention & Working Memory * .354
Visual Reasoning * .243
Constant = 2.5
19. Domains Regression Equations Visual Reasoning
Visual Memory * .322
Processing Speed/Mental Flexibility * .213
Verbal Reasoning * .208
Constant = 11.813
20. Domains Regression Equations Verbal Memory
Attention & Working Memory * .738
Visual Memory * .388
Constant = -7.615
21. Domains Regression Equations Visual Memory
Visual Reasoning * .698
Verbal Memory * .311
Processing Speed * .0909
Constant = -5.517
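As a worked check of how these equations behave, here is a minimal Python sketch of the Attention & Working Memory equation from slide 16 (the 50T inputs are hypothetical; the coefficients are from the slides):

```python
# Worked example of the Attention & Working Memory equation on slide 16
# (hypothetical domain means; coefficients are taken from the slides).
def predict_attention_wm(verbal_reasoning, verbal_memory, processing_speed):
    return (0.315 * verbal_reasoning
            + 0.273 * verbal_memory
            + 0.193 * processing_speed
            + 10.972)

# A profile at exactly 50T on every domain predicts ~50T, as expected:
print(predict_attention_wm(50, 50, 50))  # 50.022
```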
22. Regression Fit by Domain: R, Adjusted R², SE of the Estimate
Attent/Working Memory .79 .63 4.88
Processing Speed .77 .60 5.31
Verbal Reasoning .80 .64 5.04
Visual Reasoning .78 .61 4.88
Verbal Memory .75 .56 7.96
Visual Memory .77 .59 7.11
23. Core Battery of Tests by Publisher AudiTech (St. Louis)
Dichotic Listening
Psy Corp
WAIS-III/IV, MMPI-RF
Psychological Assessment Resources (PAR)
RCFT, Boston Naming, JLO, Category
Public Domain
Forced Choice, COWA, Animal Naming, Sentence Repetition, Token Test, Trails A/B, Finger Localization, 1-min estimation, AVLT
Reitan Labs
Finger Tapping
24. AYOC/AYOP You can add 1000 of your own favorite tests to the cognitive section
AND
1000 tests to the Psychological section
The MNB is a core set of tests that has the database comparison, discriminant functions, and Neuronet
25. MNB Smoothed Normative Data In evaluating the norms, note that there were variations in test norms, apparently due to age and education.
For example, the AVLT norms of Spreen & Strauss (1998):
At Age = 30-39; M = 11.4 (sd = 2.4) for Trial 6
At Age = 40; M = 10.4 (sd = 2.7) for Trial 6.
Therefore, a pt. scoring 10 one day before the 40th birthday would earn 44T; the same score (i.e., 10) one day after the birthday would improve to 48T, using a linear raw-to-T conversion.
26. MNB Smoothed Normative Data Using the Heaton et al. (1991) classification system, the pt.'s score would improve from Below Average to Average just by becoming a day older.
A common problem with non-smoothed normative data.
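The discontinuity can be reproduced from the bin statistics quoted above; a minimal sketch of the linear raw-to-T conversion, T = 50 + 10(raw − M)/SD:

```python
# A minimal sketch of the birthday discontinuity described on slides 25-26,
# using the Spreen & Strauss (1998) AVLT Trial 6 values quoted above.
def t_score(raw, mean, sd):
    """Linear raw-to-T conversion: T = 50 + 10 * (raw - M) / SD."""
    return 50 + 10 * (raw - mean) / sd

raw = 10
before = t_score(raw, 11.4, 2.4)  # age 30-39 bin -> ~44.2, i.e. 44T
after = t_score(raw, 10.4, 2.7)   # age 40 bin    -> ~48.5, i.e. 48T
print(int(before), int(after))    # 44 48: same raw score, one day apart
```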
27. MNB Normative Data Therefore, decided to smooth the norms.
Done by selecting all pts. from dataset who:
had a validity score of 0 or 1 (i.e., no more than one validity-check failure)
were age 15 years or older
age 15 was used because it is the minimum age for the adult version of the Trail Making Test;
this was done to keep consistency.
The total sample size N = 1727
Age: M = 45.7 yrs (sd = 20.7)
Education: M = 12.3 yrs (sd = 2.7)
Gender: 779 females; 948 males
Handedness: 1543 were RH and 184 were LH.
Ethnicity: 32 mixed; 22 African American; 1617 white; 2 Asian; 27 Native American; and 27 Hispanic.
28. MNB Normative Data A regression equation was then calculated for each test, using the raw score and the variables age, education, gender, handedness, and race to predict the T score previously calculated from the standard normative data.
29. MNB Regression Norms Not only does this process smooth the data,
Also adds adjustments for age, education, gender, handedness, and ethnicity.
In normals these variables may not always be significant; in injured groups the variables take on additional impact on test performance.
30. MNB Normative Data Once the regression equations were calculated they were used
to calculate a Regression T score for each test
It was found that this procedure worked well for all test variables except Token Test (adult) due to excessive skew
For Token Test, percentile scores were calculated and converted to T score equivalents.
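A minimal sketch of this renorming step, assuming a pandas DataFrame with hypothetical column names (the MNB's actual model specification is not given in this deck):

```python
# Minimal sketch of the regression renorming described on slides 28-30.
# Assumptions: a pandas DataFrame `df` with these hypothetical column names;
# this illustrates the approach, not the MNB's published equations.
import pandas as pd
from sklearn.linear_model import LinearRegression

X = pd.get_dummies(
    df[["raw_score", "age", "education", "gender", "handedness", "ethnicity"]],
    columns=["gender", "handedness", "ethnicity"])
y = df["standard_t"]  # T score from the published (non-smoothed) norms

model = LinearRegression().fit(X, y)
df["regression_t"] = model.predict(X)  # smoothed, demographically adjusted T
```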
31. MNB Normative Data With the regression equations, a raw score of 10 on AVLT Immediate Recall
in the example above (a person tested one day before the 40th birthday, at age 39)
yields a T-score equivalent of 45T; one day after the birthday it is still 45T.
Using the regression-equation normative data, comparisons can therefore be made more reliably over time.
The individual subtests of the WAIS-III/WISC-III were not subjected to the regression-equation method, as only scale scores (not raw scores) were coded in the database; scores for those tests are based on the normative data from the test manuals.
32. MNB Normative Data: Adults Scale R R² Significance Paired-Samples t Test
Trails A .902 .814 .000 .000 (1363), p=1.00
Trails B .873 .762 .000 -.088 (1354), p=.930
Judgment .946 .894 .000 -.099 (1263), p=.921
Finger Tapping DH .961 .924 .000 -.079 (1599), p=.937
Finger Tapping NDH .952 .906 .000 .029 (1577),p=.977
Finger Localization DH .874 .764 .000 .027 (1201), p=.979
Finger Localization NDH .801 .641 .000 -.017 (1196), p=.987
*Token Test .486 .236 .000 .008 (1534),p=.993
Sentence Repetition .957 .915 .000 .040 (1253),p=.968
Controlled Oral Word Association .977 .955 .000 -.022 (1487),p=.982
Animal Naming .976 .953 .000 .099 (1366), p=.921
Boston Naming .902 .814 .000 72.900 (1312), p=.000
Dichotic Listening Left .887 .787 .000 -3.994 (1198), p=.000
Dichotic Listening Right .891 .794 .000 -2.460 (1198), p=.014
Dichotic Listening Both .921 .849 .000 -2.920 (1198), p=.004
33. MNB Normative DataAdults Forced Choice .992 .984 .000 -1.065 (1131), p=.287
AVLT 1 .939 .883 .000 .034 (1470), p=.973
AVLT 2 .949 .901 .000 .076 (1470), p=.940
AVLT 3 .959 .920 .000 -.178 (1470), p=.859
AVLT 4 .939 .882 .000 -.024 (1469), p=.981
AVLT 5 .934 .872 .000 -.008 (1469),p=.993
AVLT Total .941 .886 .000 .064 (1470), p=.949
AVLT Distractor .933 .871 .000 .057 (1467), p=.955
AVLT Immediate .957 .915 .000 .103 (1468), p=.918
AVLT Delayed .957 .915 .000 -.071 (1470), p=.943
AVLT Recognition .892 .796 .000 -.015 (1470), p=.988
CFT Time .927 .859 .000 -.075 (1657), p=.941
CFT Copy .879 .773 .000 -.053 (1660), p=.958
CFT Immediate .964 .930 .000 -.077 (1658), p=.938
CFT Delayed .966 .934 .000 -.068 (1659), p=.946
CFT False Positive .811 .657 .000 -.027 (1657), p=.979
CFT False Negative .993 .985 .000 -.056 (1657), p=.956
CFT Recognition .938 .879 .000 .005 (1658), p=.996
Booklet Category (Victoria Version) .883 .780 .000 .024 (1290), p=.981
* Because of the skewness of the data, percentile scores were computed and transformed to T scores for this test.
34. MNB Normative Data: Children Scale R R² Significance Paired-Samples t Test
Trails A .765 .585 .000 .001(99),p=1.000
Trails B .839 .704 .000 -.050(99),p=.960
Judgment of Line .944 .891 .000 .072(96),p=.942
Finger Tapping Dom .913 .833 .000 .017(106),p=.986
Finger Tapping NonDom .916 .840 .000 -.016(106),p=.987
Finger Localization Dom .953 .908 .000 .109 (95), p=.914
Finger Localization NonDom .895 .801 .000 .136 (95), p=.892
Token Test .847 .718 .000 .229 (106), p=.819
Sentence Repetition .956 .914 .000 .088 (105), p=.930
Controlled Oral Word Associat .930 .865 .000 -1.515 (109), p=.133
Animal Naming .953 .908 .000 .113 (100), p=.910
Boston Naming .855 .731 .000 -.986 (105), p=.326
Dichotic Listening Left .946 .894 .000 -.748 (99), p=.457
Dichotic Listening Right .862 .743 .000 -.538 (99), p=.592
Dichotic Listening Both .937 .878 .000 -.052 (99), p=.959
35. MNB Data Children Forced Choice .996 .992 .000 -3.089 (94), p=.003
AVLT 1 .996 .991 .000 .183 (111), p=.855
AVLT 2 .998 .996 .000 -.457 (111), p=.648
AVLT 3 .995 .990 .000 -.295 (111), p=.768
AVLT 4 .998 .995 .000 .483 (111), p=.630
AVLT 5 .919 .845 .000 -.022 (111), p=.983
AVLT Total .932 .869 .000 .047 (111), p=.963
AVLT Distractor .903 .816 .000 .066 (111),p=.948
AVLT Immediate .903 .815 .000 .097 (111),p=.923
AVLT Delayed .959 .919 .000 .143 (111),p=.887
AVLT Recognition .868 .755 .000 -.041 (111),p=.968
CFT Time .868 .754 .000 .067 (111),p=.947
CFT Copy .866 .750 .000 .000 (111),p=1.000
CFT Immediate .897 .805 .000 .015 (111),p=.988
CFT Delayed .970 .941 .000 -.028 (111),p=.977
CFT False Positive .886 .784 .000 -.766 (111),p=.446
CFT False Negatives .997 .995 .000 .303 (111),p=.762
CFT Recognition .955 .913 .000 .000 (111),p=1.000
Booklet Category .620 .384 .000 .000 (92),p=1.000
36. MNB Recap Step 1. Took a battery of well-known NP tests
Tests with which most clinicians would be familiar
Tests selected based on utility, ease of scoring, and to assess wide array of cognitive functions
This battery is the result of several preliminary batteries
37. MNB Recap Continued Step 2. Large database of pts. collected
Step 3. Examined results for the need to smooth
Step 4. Data smoothed across battery
ages ranged from 6 – 99 years old
Separate norms for 6 - 14 and 15 - 99
Adjusted for age, ed, gender, ethnicity & handedness
38. MNB Recap Continued Step 4. Recalculate database with new norms (Step 3)
Now on to Step 5
Is this battery of tests valid?
39. MNB Step 5: Is this battery valid? Need to examine the reliability/validity of the MSB
Meyers, J. E., & Rohling, M. L. (2004). Validation of the Meyers Short Battery on Mild TBI patients. Archives of Clinical Neuropsychology, 19, 637-651.
Study included 4 Groups
40. Validity of MNB 30 Medical Controls (Group 1)
in hospital for a non-CNS problem (e.g., ingrown toenails)
All community dwelling
No Hx of LD, DD, substance abuse, TBI, mental health problems, or anything else that would disqualify them as normal.
41. Validity of MNB: 30 Medical Controls (cont.) Mean Age: 38.6 yrs (sd = 18.9)
Mean Educ: 13.4 yrs (sd = 3.19)
Gender: 15 females; 15 males
Handedness: 29 were RH; 1 was LH
Ethnicity: 29 white; 1 Native American
42. Validity of MNB Depressed Group (Group 2) 41 patients
All on SSRI
Mean Age: 46.0 yrs (sd = 15.0)
Mean Education: 13.5 yrs (sd = 2.7)
Gender: 20 females; 21 males
Handedness: 38 were RH; 3 were LH
Ethnicity: 1 mixed race; 40 white
29 completed MMPI-2 with M scores as follows
L = 52.1 (sd=11.4), F = 60.5 (sd=11.7), K = 50.2 (sd=10.2)
1 = 63.8 (sd = 12.8)
2 = 70.8 (sd =14.5)
3 = 66.7 (sd = 16.0)
43. Validity of MNB Chronic Pain (Group 3) comprised 32 pts. who were being treated as outpts. for chronic pain.
None involved in litigation at time of assessment
None had previous litigation histories
Pts. had non-work-related injuries, were injured on their own farms, or had chosen not to pursue Workers' Compensation, and were being treated at an outpatient pain clinic.
44. Validity of the MNBChronic Pain Group Continued Mean Age: 40.7 yrs (sd = 14.2)
Mean Education: 13.4 yrs (sd = 2.1)
Gender: 20 females and 12 males
Handedness: 29 were RH; 3 were LH
Ethnicity: 31 white; 1 Native American
45. Validity of MNB Group 4: 59 pts. with a history of TBI
All pts. seen at local hospital and rehab unit
followed by the senior author (JEM)
All pts. had an identified LOC of 20 min. or less
other data (e.g., GCS or PTA) were not often recorded
however, LOC data available for all pts.
LOC defined as “Time to Follow Commands”
(e.g., Dikmen et al., 1995; Volbrecht et al., 2000)
46. Validity of MNB Mean Age: 36.9 yrs (sd = 15.1)
Mean Education: 12.6 yrs (sd = 2.1)
Time Post Injury: 7.6 mo. (sd = 10.0)
Gender: 14 females; 43 males
Handedness: 51 were RH; 6 were LH
Ethnicity: 2 mixed; 1 Hispanic; 54 white
47. Validity of MSB Test scores obtained for each of the study groups
Normal Controls Depressed Chronic Pain Mild TBI
NART FSIQ Mean 108.03 105.03 103.71 98.45
n 29 40 31 51
SD 8.34 8.57 8.03 6.02
Barona et al. FSIQ Mean 105.63 105.61 106.25 103.74
n 30 41 32 57
SD 7.07 7.12 6.57 6.21
WAIS VIQ Mean 104.97 103.15 102.28 92.45
n 30 41 32 56
SD 9.36 12.86 11.17 9.87
WAIS PIQ Mean 107.93 100.22 107.59 96.80
n 30 41 32 55
SD 11.55 13.24 13.04 10.50
WAIS FIQ Mean 106.53 101.56 105.19 94.18
n 30 41 32 55
SD 8.43 11.04 10.78 9.15
48. Validity of MNB Validity assessed using a discriminant function analysis comparing Non-TBI pts. with the TBI pts.
Result: 96% correct classification rate
99% specificity
90% sensitivity
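A hedged sketch of this kind of discriminant-function validation, assuming arrays of MNB scores and TBI/non-TBI labels (the published function's coefficients are not shown in this deck):

```python
# Illustrative sketch of a discriminant-function validation like the one above.
# Assumptions: `X_train`/`X_test` hold MNB test T scores and `y_train`/`y_test`
# label TBI (1) vs. non-TBI (0); not the published function itself.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
pred = lda.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, pred, labels=[0, 1]).ravel()
print("specificity:", tn / (tn + fp))  # slide reports 99%
print("sensitivity:", tp / (tp + fn))  # slide reports 90%
```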
49. Reliability of MNB Reflecting a general clinical sample,
63 persons with mixed diagnoses assessed more than once, with the first testing at least 6 mo. post injury
Some in litigation, but all passed validity checks
Group descriptives
Age: Mean = 38.4 yrs (sd = 22.8)
Education: Mean = 12.2 yrs (sd = 2.9)
50. Test Re-test Reliability 1st Test: Post Injury 21.6 mo. (sd = 22.8)
Re-test: Post Injury 40.7 mo. (sd = 33.2)
Time btwn Sessions: 19.1 mo. (sd = 16.6)
range 2 to 91 mo., median 13 mo.
Reliability r = .86
51. Volbrecht and Meyers (2000) were able to partially discriminate different levels of TBI severity from stroke patients, with a correct classification rate of 71.6%. The majority of the misclassifications were in discriminating severely injured TBI individuals from severe stroke patients.
52. Rohling, Meyers, and Millis (2003) found that the length of loss of consciousness (LOC) for traumatic brain injured (TBI) patients was related to the level of expected cognitive impairment as measured by the overall test battery mean (OTBM).
Results were nearly identical to those presented by Dikmen, Machamer, Winn, & Temkin (1995) using an expanded Halstead Reitan Battery.
The core battery of tests had the same sensitivity as the longer expanded HRB
53. Internal Validity Checks Meyers, J. E., & Volbrecht, M. E. (2003). A Validation of Multiple Malingering Detection Methods in a Large Clinical Sample, Archives of Clinical Neuropsychology, 18, 261-276.
Other publications
54. Internal Validity Check (0% FP Rate cutoffs) Test/Method Cutoff
RCFT: MEP <= 3 (1= Attent, 2=Encode, 3=Store, 4= Retrieve)
Reliable Digits <= 6
Forced Choice <= 10
JOL <= 12
Token Test <= 150
Dichotic Listening Both <= 9
Sentence Repetition <= 9
AVLT-Recognition <= 9
FT-Estimated FT <= -10
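A minimal sketch of applying these cutoffs (the key names are hypothetical):

```python
# Minimal sketch of applying the 0%-false-positive cutoffs listed above.
# Key names are hypothetical; a check fails if the score is at or below cutoff.
CUTOFFS = {
    "rcft_mep": 3, "reliable_digits": 6, "forced_choice": 10,
    "jol": 12, "token_test": 150, "dichotic_both": 9,
    "sentence_repetition": 9, "avlt_recognition": 9, "ft_minus_estimated": -10,
}

def failed_checks(scores):
    """Return the names of all validity checks the patient fails."""
    return [name for name, cut in CUTOFFS.items()
            if name in scores and scores[name] <= cut]
```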
55. Internal Validity Checks A total of 796 participants in the study, ages ranged from 16 yrs to 86 yrs, with education ranging from 5 yrs to 23 yrs.
56. Internal Validity Checks 15 Groups
Non-litigant groups
Litigant groups
57. Internal Validity Check This method showed 83% sensitivity and 100% specificity (i.e., a 0% false-positive rate).
58. Validity of Neuropsychological Tests 9 validity checks used (Combination of studies)
Meyers, J. E., & Volbrecht, M. E. (2003). A Validation of Multiple Malingering Detection Methods in a Large Clinical Sample. Archives of Clinical Neuropsychology, 18, 261-276.
Meyers, J. E., & Diep, A. (2000). Assessment of malingering in chronic pain patients using neuropsychological tests. Applied Neuropsychology, 7, 133-139.
Meyers, J. E., & Volbrecht, M. (1999). Detection of malingerers using the Rey Complex Figure and Recognition Trial. Applied Neuropsychology, 6, 201-207.
Meyers, J. E., Galinsky, A., & Volbrecht, M. (1999). Malingering and mild brain injury: How low is too low? Applied Neuropsychology, 6, 208-216.
Meyers, J. E., & Volbrecht, M. (1998). Validation of reliable digits for detection of malingering. Assessment, 5, 301-305.
Meyers, J. E., Morrison, A. L., & Miller, J. C. (2001). How low is too low revisited: Sentence repetition and AVLT Recognition in the detection of malingering. (Submitted to Applied Neuropsychology).
Meyers, J. E., & Volbrecht, M. E. (2001). A validation of multiple malingering detection methods in a large clinical sample. (under review at Archives of Clinical Neuropsychology)
59. Validity checks for Neuropsychological tests
60. Validity checks for Neuropsychological tests
61. TBI Dose Response
66. TBI Dose Response Dikmen et al. (1995) administered the HRB to a sample of TBI patients; similar patients were drawn from the MNB database.
First, determine whether there is a dose-response relationship between TBI severity and deficits
Second, are Dikmen et al. results generalizable to other TBI samples?
Analyses of the Meyers sample replicated Dikmen.
A dose-response relationship between LOC and impairment was found using effect sizes for the Dikmen sample, as well as regression-based normative T scores for the Meyers sample.
The two samples were highly correlated with one another.
Mean scores for the six LOC-severity groups in the two samples correlated r = .97, p < .0001.
67. CT/MRI Data Participant Demographic Information
Variable Sample Sizes (N = 124)
Gender
Male 82
Female 42
Ethnicity
Caucasian 119
Other 5
68. CT/MRI Diagnostic Groups Sample Size
MVA/TBI 47
Blow to Head 32
LCVA 24
RCVA 21
69. CT/MRI All were Right Handed.
All were followed by Dr. Meyers through hospitalization and rehabilitation.
None were involved in litigation.
All passed internal validity checks.
70. CT/MRI Location
Left Frontal 59
Left Parietal 37
Left Temporal 34
Left Occipital 6
Right Frontal 40
Right Parietal 42
Right Temporal 31
Right Occipital 3
71. CT/MRI All were given MNB
CT/MRI data coded for injury reported on MRI/CT at the time of injury
Present = 1
Absent = 0
Some had injury in more than one place
72. Meyers and Rohling (2009) found an 84% concordance with CT/MRI data.
78. CT/MRI NP tests generally behaved as expected
A more "systemic" or "domain-like" approach was better at explaining results
Construct of “Frontal-Executive Function” not supported.
79. Comparison Database Current database: 8000+ cases
Can be used to compare your patient’s performance to a reference group(s)
Discriminant functions
80. Profile Matching Look at the shape of the pattern, not necessarily the level of scores.
Similar conditions have similar patterns
This helps the clinician to "hypothesis test" for the DX.
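A minimal sketch of shape-based matching: Pearson r between profiles is insensitive to overall level, so it captures pattern shape (the group profiles here are hypothetical):

```python
# Minimal sketch of profile matching by shape: correlate the patient's domain
# T-score profile with each stored group's mean profile. Pearson r ignores
# overall level, so it captures shape (group data are hypothetical).
import numpy as np

def profile_match(patient, group_means):
    """Return r between the patient profile and each group's mean profile."""
    return {name: float(np.corrcoef(patient, mean)[0, 1])
            for name, mean in group_means.items()}
```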
81. Patterns Similar injuries have similar patterns
2 examples
TBI different levels of injury
Hypoxia – Carbon Monoxide
83. Commonality of Reduced O2
84. Discriminant Functions Individual discriminant functions can also help the clinician to hypothesis test
If it is TBI, it should generally match the TBI groups.
Normal VS TBI
Depressed VS TBI
PTSD VS TBI
Etc.
85. Neuronet Can help the clinician decide which pattern is most similar to the patient's data
Objectively matches
Must assume the patient's condition is one of the comparison groups
We will discuss more about Neuronet a little later.
86. Interpretation Hypothesis test
If it is a TBI what else should it match to?
Should match other TBI patterns
Does the pt.'s data make sense given the reported injury?
Consider Dose Article
Pattern tells you what the problem is.
87. RIM Matrix: Rohling Interpretive Method
88. Interpretation Pattern tells you what it is.
OTBM tells you how bad it is.
Domain scores tell you what functional difficulties patient will have.
Individual tests tell you what rehab tasks would be helpful.
89. Review Took a battery of well-known tests
Developed Smoothed Norms
Identified Validity, Reliability, Sensitivity and Specificity.
Internal Validity Checks and Internal Consistency
Used pattern matching to help make DX.
Then developed Neuronet
90. Summary We have discussed:
Make up of MNB
Norms/test selection, etc.
Validity and reliability of the MNB tests, etc.
Interpretative Method
Patterns
Discriminant functions
RIM Matrix
Neuronet analysis (new)
91. Neuronet Analysis Dr. Ronald Miller
Brigham Young University-Hawaii
Ronald.miller@byuh.edu
92. Neural Networks StatSoft, Inc. (2010). STATISTICA (data analysis software system), version 9.1. www.statsoft.com.
StatSoft, Inc. (2010). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB: http://www.statsoft.com/textbook/.
93. Neural Networks Neural networks have a remarkable ability to derive and extract meaning, rules, and trends from complicated, noisy, and imprecise data.
They can be used to extract patterns and detect trends that are governed by complicated mathematical functions that are too difficult, if not impossible, to model using analytic or parametric techniques.
One of the abilities of neural networks is to accurately predict data that were not part of the training data set, a process known as generalization.
Given these characteristics and their broad applicability, neural networks are suitable for applications of real world problems in research and science, business, and industry.
94. The Basic Mathematical Model
Schematic of a single-neuron system. The inputs x send signals to the neuron, at which point a weighted sum of the signals is obtained and further transformed using a mathematical function f.
Output = f(w1x1 + … + wnxn)
The outputs of the neuron are actually the predictions of the single-neuron model for a variable in the data set, which is referred to as the target t. It is believed that there is a relationship between the inputs x and the targets t, and it is the task of the neural network to model this relationship by relating the inputs to the targets via a suitable mathematical function that can be learned from examples of the data set.
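As a minimal sketch, the single-neuron model in code (tanh chosen as an illustrative activation; the slide leaves f unspecified):

```python
# The single-neuron model above: a weighted sum of the inputs x passed through
# an activation function f (tanh here is an illustrative choice).
import numpy as np

def neuron(x, w, f=np.tanh):
    """Output = f(w1*x1 + ... + wn*xn)."""
    return float(f(np.dot(w, x)))
```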
95.
We aim to model or approximate a single target variable t as a function of an input variable x.
In parametric models, the input-target relationship is described by a mathematical function of closed form. By contrast, in nonparametric models, the input-target relationship is governed by an approximator (like a neural network) that cannot be represented by a standard mathematical function.
In a parametric model, once the mathematical function is chosen, all we have to do is to adjust the parameters of the assumed model so they best approximate (predict) t given an instance of x.
By contrast, nonparametric models generally make no assumptions regarding the relationship of x and t. In other words, they assume that the true underlying function governing the relationship between x and t is not known a priori, hence, the term black box. Instead, they attempt to discover a mathematical function (which often does not have a closed form) that can approximate the representation of x and t sufficiently well. The most popular examples of nonparametric models are polynomial functions with adaptable parameters and neural networks.
Since no closed form for the relationship between x and t is assumed, a nonparametric method must be sufficiently flexible to be able to model a wide spectrum of functional relationships. The higher the order of a polynomial, for example, the more flexible the model. Similarly, the more neurons a neural network has, the stronger the model becomes.
96.
A typical feedforward network has neurons arranged in a distinct layered topology.
The input layer is not really neural at all: these units simply serve to introduce the values of the input variables. The hidden and output layer neurons are each connected to all of the units in the preceding layer.
When the network is executed (used), the input variable values are placed in the input units, and then the hidden and output layer units are progressively executed.
Each of them calculates its activation value by taking the weighted sum of the outputs of the units in the preceding layer, and subtracting the threshold. The activation value is passed through the activation function to produce the output of the neuron.
When the entire network has been executed, the outputs of the output layer act as the output of the entire network.
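A minimal sketch of this layer-by-layer execution (the weights and thresholds here are illustrative):

```python
# Sketch of the feedforward execution just described: each layer takes the
# weighted sum of the previous layer's outputs, subtracts its threshold, and
# applies the activation function (weights and thresholds are illustrative).
import numpy as np

def forward(x, layers, f=np.tanh):
    """layers: list of (W, threshold) pairs for hidden and output layers."""
    a = np.asarray(x, dtype=float)
    for W, theta in layers:
        a = f(W @ a - theta)  # weighted sum minus threshold, then activation
    return a                  # the output layer's outputs = network output
```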
98. Training Once a neural network architecture is selected, i.e., neural network type, activation functions, etc., the remaining adjustable parameters of the model are the weights connecting the inputs to the hidden neurons and the hidden neurons to the output neurons.
The process of adjusting these parameters so the network can approximate the underlying functional relationship between the inputs x and the targets t is known as training. It is in this process that the neural network learns to model the data by examples.
Although there are various methods to train neural networks, most of them involve iterative numeric algorithms that complete the task in a finite number of iterations. The need for these iterative algorithms is mainly due to the highly nonlinear nature of neural network models, for which a closed-form solution is unavailable most of the time.
An iterative training algorithm gradually adjusts the weights of the neural network so that for any given input data x the neural network can produce an output that is as close as possible to t.
99. Training SANN classification performance is the percent of correct classifications.
In general, this learning process is noisy to some extent (i.e., the network's answers may sometimes be more accurate in a previous training cycle than in the current one), but on average the errors shrink as the network learns. The adjustment of the weights is usually carried out by a training algorithm, which, like a teacher, teaches the neural network how to adapt its weights in order to make better predictions for each and every input-target pair in the data set.
The above steps are known as training. Algorithmically, one training cycle is carried out using the following sequence of steps:
1. Present the network with an input-target pair.
2. Compute the predictions of the network for the targets.
3. Use the error function to calculate the difference between the predictions (outputs) of the network and the target values.
4. Continue with steps 1-3 until all input-target pairs have been presented to the network.
5. Use the training algorithm to adjust the weights of the network so that it gives better predictions for each and every input-target pair.
Steps 1-5 form one training cycle or iteration. The number of cycles needed to train a neural network model is not known a priori but can be determined as part of the training process. Steps 1-5 are repeated until the network starts producing sufficiently accurate outputs (i.e., outputs that are close enough to the targets given their input values). A typical neural network training process consists of 100s of cycles.
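As a schematic of this cycle, a sketch using plain gradient descent on a single-layer tanh network with sum-of-squares error (illustrative only; this is not the SANN implementation):

```python
# Schematic of the five-step training cycle above, written as plain gradient
# descent on a single-layer tanh network (illustrative, not SANN's algorithm).
import numpy as np

def train(X, T, lr=0.01, cycles=500):
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(T.shape[1], X.shape[1]))
    for _ in range(cycles):              # each pass is one training cycle
        Y = np.tanh(X @ W.T)             # step 2: predictions for all pairs
        err = Y - T                      # step 3: prediction-target differences
        grad = (err * (1 - Y**2)).T @ X  # tanh'(z) = 1 - tanh(z)^2
        W -= lr * grad                   # step 5: adjust the weights
    return W
```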
100. All error functions used for training neural networks must provide some sort of distance measure between the targets and predictions at the location of the inputs. One common approach is to use the sum-of-squares error function; in this case, the network learns a discriminant function. The sum-of-squares error is simply the sum of squared differences between the targets and predictions over the entire training set:
E = Σ (yi − ti)², summed over i = 1 … N,
where N is the number of training cases, yi is the prediction (network output), and ti is the target value for the i-th case. Clearly, the bigger the difference between the network's predictions and the targets, the higher the error value, which means more weight adjustment is needed by the training algorithm. The sum-of-squares error function is primarily used for regression analysis, but it can also be used in classification tasks. Nonetheless, a true neural network classifier must have an error function other than sum-of-squares, namely the cross-entropy error function.
It is with the use of this error function together with a softmax output activation function that we can interpret the outputs of a neural network as class membership probabilities.
The cross-entropy error function is given by:
E = − Σi Σk tik ln(yik),
which assumes that the target variables are drawn from a multinomial distribution. This is in contrast to the sum-of-squares error, which models the distribution of the targets as a normal probability density function.
NOTE: The training error for regression is calculated from the sum-of-squares error defined over the training set; however, the calculation is performed using the pre-processed targets (scaled from 0 to 1). Similarly, the test and validation error measures are defined as the sum of squares of the individual errors over the test and validation samples, respectively. Note that SANN also calculates the correlation coefficients for the train, test, and validation samples; these quantities are calculated for the original (unscaled) targets. On the other hand, for classification tasks SANN uses the cross-entropy error (see above) to train the neural networks, but the selection criterion for evaluating the best network is actually based on the classification rate, which is more easily interpreted than the entropy-based error function.
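Minimal versions of the two error functions, for illustration (SANN's internal target scaling is not reproduced here):

```python
# Minimal implementations of the two error functions described above
# (illustrative only; SANN's internal target scaling is not reproduced).
import numpy as np

def sum_of_squares(y, t):
    return float(np.sum((y - t) ** 2))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y, t):
    """y: softmax outputs (class probabilities); t: one-hot targets."""
    return float(-np.sum(t * np.log(y + 1e-12)))
```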
101. Black Box In traditional modeling approaches (e.g., linear modeling) it is possible to algorithmically determine the model configuration that absolutely minimizes this error. The price paid for the greater (non-linear) modeling power of neural networks is that although we can adjust a network to lower its error, we can never be sure that the error could not be lower still.
A helpful concept here is the error surface. In a linear model with sum squared error function, this error surface is a parabola (a quadratic), which means that it is a smooth bowl-shape with a single minimum. It is therefore "easy" to locate the minimum.
Neural network error surfaces are much more complex, and are characterized by a number of unhelpful features, such as local minima (which are lower than the surrounding terrain, but above the global minimum), flat-spots and plateaus, saddle-points, and long narrow ravines.
It is not possible to analytically determine where the global minimum of the error surface is, so neural network training is essentially an exploration of the error surface. From an initially random configuration of weights and thresholds (i.e., a random point on the error surface), the training algorithms incrementally seek the global minimum. Typically, the gradient (slope) of the error surface is calculated at the current point and used to make a downhill move. Eventually, the algorithm stops at a low point, which may be a local minimum (but hopefully is the global minimum).
102. Training Neural networks are highly nonlinear tools that are usually trained using iterative techniques.
The most recommended techniques for training neural networks are the BFGS (Broyden-Fletcher-Goldfarb-Shanno) and Scaled Conjugate Gradient algorithms (see Bishop 1995).
These methods perform significantly better than the more traditional algorithms such as Gradient Descent but they are, generally speaking, more memory intensive and computationally demanding.
Nonetheless, these techniques may require a smaller number of iterations to train a neural network given their fast convergence rate and more intelligent search criterion.
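A sketch of BFGS training via scipy.optimize, as an illustration (the analyses in this deck used STATISTICA's SANN, not scipy; the arrays X and T are assumed to exist):

```python
# Sketch of training a single-layer tanh network with BFGS via scipy.optimize
# (an assumption for illustration; not the SANN implementation). Assumes input
# array X and target array T already exist.
import numpy as np
from scipy.optimize import minimize

def loss(w_flat, X, T, shape):
    W = w_flat.reshape(shape)
    Y = np.tanh(X @ W.T)                # single-layer network, tanh outputs
    return float(np.sum((Y - T) ** 2))  # sum-of-squares error

shape = (T.shape[1], X.shape[1])
w0 = np.random.default_rng(0).normal(scale=0.1, size=shape).ravel()
res = minimize(loss, w0, args=(X, T, shape), method="BFGS")
W = res.x.reshape(shape)                # trained weights
```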
103. Evaluation There are several techniques to combat the problem of overfitting and tackle the generalization issue. The most popular ones involve the use of test data. Test data is a holdout sample that is never used in training; instead, it is used to validate how well a network makes progress in modeling the input-target relationship as training continues. Most work on assessing performance in neural modeling concentrates on approaches to test data. A neural network is optimized using a training set; a separate test set is used to halt training to mitigate overfitting. The process of halting neural network training to prevent overfitting and improve generalization is known as early stopping. This technique slightly modifies the training algorithm as follows:
1. Present the network with an input-target pair from the training set.
2. Compute the predictions of the network for the targets.
3. Use the error function to calculate the difference between the predictions (output) of the network and the target values.
4. Continue with steps 1 and 2 until all input-target pairs from the training set are presented to the network.
5. Use the training algorithm to adjust the weights of the networks so that it gives better predictions for each and every input-target.
6. Pass the entire test set to the network, make predictions, and compute the value of network test error.
7. Compare the test error with the one from the previous iteration. If the error keeps decreasing, continue training; otherwise, stop training.
Note that the number of cycles needed to train a neural network model with test data and early stopping may vary. In theory, we would continue training the network for as many cycles as needed so long as the test error is on the decrease.
Validation Data
Sometimes the test data alone may not be sufficient proof of a good generalization ability of a trained neural network. For example, a good performance on the test sample may actually be just a coincidence. To make sure that this is not the case, another set of data known as the validation sample is often used. Just like the test sample, a validation sample is never used for training the neural network. Instead, it is used at the end of training as an extra check on the performance of the model. If the performance of the network was found to be consistently good on both the test and validation samples, then it is reasonable to assume that the network generalizes well on unseen data.
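A sketch of the early-stopping loop in steps 1-7 above; train_one_cycle and error are hypothetical stand-ins for one pass of the training algorithm and the chosen error function:

```python
# Sketch of early stopping per steps 1-7: halt when the test-set error stops
# decreasing. `train_one_cycle` and `error` are hypothetical stand-ins for one
# pass of the training algorithm and the chosen error function.
def train_with_early_stopping(net, train_set, test_set, max_cycles=1000):
    best_err = float("inf")
    for _ in range(max_cycles):
        train_one_cycle(net, train_set)  # steps 1-5 on the training set
        err = error(net, test_set)       # step 6: error on the holdout set
        if err >= best_err:              # step 7: error no longer decreasing
            break
        best_err = err
    return net
```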
104. What we did N=834
Collected from Midwest area
Using MNB interpretive methods
Groups Demographics next slide
106. Demographic Description
107. Combined Had to combine
Depression
Anxiety
PCS
Normal Controls
into a single group, as the Neuronet identified these groups as similar and not discriminable from one another.
Not an unusual finding
108. Neuronet Results “Neural networks are often the tools of choice when predictive accuracy is required.”
While in most cases, 20 algorithms for comparison are sufficient, we examined 200 for this analysis and selected the best one.
Further neural network analyses were performed, with one million algorithms compared, but none improved on the best of the batch of 200 previously mentioned.
109. Neuronet Results For the network, there were two parts to the analysis. First came network creation, based on a random selection of 80% of the data; following that, a validation phase tested the network on the remaining 20% of the data to determine the algorithm's accuracy.
For the training performance, the accuracy was 94.42%. The MLP 94-19-4 algorithm produced 87.76% accuracy in the validation phase.
110. Our Results
112. Neuronet Results The best neural network was created by utilizing a multilayer perceptron neural network, with 94 inputs, 19 hidden layer units, and 4 outputs (MLP 94-19-4).
While Scaled Conjugate Gradient and Gradient Descent algorithms were included in the comparison, the training algorithm for the most successful of the neural networks was Broyden-Fletcher-Goldfarb-Shanno (BFGS), converging at the 31st iteration. BFGS is one of the most recommended algorithms for neural network creation (Bishop, 1995), as it requires few iterations to complete network training.
The network uses the hyperbolic tangent function for the activation of the hidden and output units.
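A rough scikit-learn analogue of this setup, as an assumption for illustration (the actual network was trained in STATISTICA's SANN with BFGS; sklearn's 'lbfgs' solver is a limited-memory variant):

```python
# Rough scikit-learn analogue of the MLP 94-19-4 setup (an assumption for
# illustration; the actual network was built in SANN with BFGS training).
# Assumes X (94 features) and y (4 diagnostic classes) already exist.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.20, random_state=0)      # the 80/20 split on slide 109

net = MLPClassifier(hidden_layer_sizes=(19,),  # 19 hidden units
                    activation="tanh", solver="lbfgs", max_iter=500)
net.fit(X_train, y_train)
print("validation accuracy:", net.score(X_val, y_val))  # slide: 87.76%
```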
113. Summary of what we did
Data set
Neuronet analysis
80% of cases, randomly selected, produced the algorithm
20% of cases formed the first validation; 94% correct classification overall
114. Conclusion The neural network produced accuracies of over 90% in this trial.
Two other trials on the neuronet with new subjects produced accuracies of 90% or better
Overall, this test appears to have been a success
115. Use of a Neuronet Analysis at Schofield Barracks Chad Grills, Ph.D.
Schofield Barracks, Hawaii
chad.grills@us.army.mil
116. Disclaimer The views expressed in this presentation are those of the author and do not reflect the official policy or position of the Department of the Army, Department of Defense or the U.S. Government.
117. Focus of Project Identify the accuracy of specific diagnosis determination, as verified by the Neuronet analysis
118. Process of Project Reviewed 100 Consecutive referrals for TBI assessment
Active Duty Military
All previously deployed
All were evaluated neuropsychologically due to cognitive complaints.
Used regular MNB interpretive methods.
Diagnosis was determined without benefit of neuronet analysis.
Reviewed only those Charts with DXs of either TBI, PTSD, Malingering/Response Bias or Other (Dep/Anxiety/Normal/PCS)
If the DX was LD, ADD, sleep apnea, etc., the case was not included, since those DXs are not in the Neuronet
119. Demographics of Soldiers Mean
Age 29.3 (±6.87) years
Ed 12.36 (±1.4) years
92 Right Handed, 8 Left handed
5 Female, 95 Male
Ethnicity: AA = 13, Ca = 63, Asian = 5, Hisp = 14, Pacific Islander = 5
120. Diagnoses Classified
121. Classification of Diagnoses Correct Classification Rate overall: 90%
Results similar to:
Meyers and Rohling (2004) showed a 96.1% correct classification rate for a discriminant function of mild TBI vs. a group of normal controls, depressed, and chronic pain patients.
122. Conclusions Using standard MNB interpretation system we are consistent 90% of the time
Or
The Neuronet is 90% accurate compared to the clinician, in a military population
123. Questions
?
124. Before we move on to Neuronet: Any Questions?
Feel free to call John Meyers 712-251-7545
Email: jmeyersneuro@yahoo.com
www.meyersneuropsychological.com