Medical Diagnosis Decision-Support System: Optimizing Pattern Recognition of Medical Data W. Art Chaovalitwongse Industrial & Systems Engineering Rutgers University Center for Discrete Mathematics & Theoretical Computer Science (DIMACS) Center for Advanced Infrastructure & Transportation (CAIT) Center for Supply Chain Management, Rutgers Business School This work is supported in part by research grants from NSF CAREER CCF-0546574, and Rutgers Computing Coordination Council (CCC).
Outline • Introduction • Classification: Model-Based versus Pattern-Based • Medical Diagnosis • Pattern-Based Classification Framework • Application in Epilepsy • Seizure (Event) Prediction • Identify epilepsy and non-epilepsy patients • Application in Other Diagnosis Data • Conclusion and Envisioned Outcome
Pattern Recognition: Classification • Supervised learning: a class (category) label for each pattern in the training set is provided. [Figure: positive-class and negative-class training samples with an unlabeled test point]
Model-Based Classification • Linear Discriminant Function • Support Vector Machines • Neural Networks
Support Vector Machine • A and B are data matrices of normal and pre-seizure samples, respectively • e is the vector of ones • w (a vector of real numbers) is the normal of the separating plane • γ is a scalar locating the plane • u, v are the misclassification errors Mangasarian, Operations Research (1965); Bradley et al., INFORMS J. of Computing (1999)
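The formulation itself did not survive extraction; below is a hedged sketch of the robust linear-programming separation model used in this line of work (Bennett and Mangasarian's RLP, as cited by Bradley et al.). The per-class weights 1/m and 1/k and the exact constraint form are assumptions about what the original slide showed.

```latex
\begin{aligned}
\min_{w,\,\gamma,\,u,\,v}\quad & \tfrac{1}{m}\,e^{\top}u \;+\; \tfrac{1}{k}\,e^{\top}v \\
\text{s.t.}\quad & A w \;\ge\; e\gamma + e - u, \\
                 & B w \;\le\; e\gamma - e + v, \\
                 & u \ge 0,\; v \ge 0,
\end{aligned}
```

where A is the m-by-n matrix of normal samples and B the k-by-n matrix of pre-seizure samples; u and v measure how far misclassified points fall on the wrong side of the plane {x : w^T x = γ}.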
Pattern-Based Classification: Nearest Neighbor Classifiers • Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck [Figure: compute the distance from the test record to the training records and choose k of the "nearest" records]
Traditional Nearest Neighbor • The k-nearest neighbors of a record x are the data points that have the k smallest distances to x
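A minimal sketch of the traditional rule, assuming Euclidean distance over fixed-length feature vectors; the function and variable names are illustrative.

```python
import numpy as np
from collections import Counter

def knn_predict(x, train_X, train_y, k=3):
    """Classify x by majority vote among its k nearest training records."""
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to every training record
    nearest = np.argsort(dists)[:k]               # indices of the k smallest distances
    labels = np.asarray(train_y)[nearest]
    return Counter(labels).most_common(1)[0][0]   # majority label among the k neighbors
```

Here train_X is an (n_samples, n_features) array and train_y the corresponding class labels.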
Drawbacks • Feature selection • Sensitive to noisy features • Optimizing feature selection: with n features there are 2^n combinations - a combinatorial optimization problem • Unbalanced data • Biased toward the class (category) with more samples • Distance-weighted nearest neighbors • Pick the k nearest neighbors from each class (category) to the sample being classified and compare the average distances (see the sketch below)
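A hedged sketch of the per-class variant described above: take the k nearest neighbors from each class separately and assign the class with the smaller average distance, which removes the bias toward the larger class. Names are illustrative.

```python
import numpy as np

def per_class_knn(x, train_X, train_y, k=3):
    """Compare the average distance to the k nearest records of each class."""
    train_y = np.asarray(train_y)
    best_label, best_avg = None, np.inf
    for label in np.unique(train_y):
        class_X = train_X[train_y == label]
        dists = np.sort(np.linalg.norm(class_X - x, axis=1))[:k]  # k closest in this class
        if dists.mean() < best_avg:
            best_label, best_avg = label, dists.mean()
    return best_label
```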
Multidimensional Time Series Classification in Medical Data • Positive versus Negative • Responsive versus Unresponsive • Multidimensional time series classification • Multisensor medical signals (e.g., EEG, ECG, EMG) • A fully multivariate model is ideal but computationally intractable • Physicians commonly use baseline data as a reference for diagnosis • The use of baseline data naturally lends itself to nearest neighbor classification [Figure: normal and abnormal baseline recordings with an unlabeled test sample]
Ensemble Classification for Multidimensional Time Series Data • Use each electrode as a base classifier • Each base classifier makes its own decision • Multiple decision makers - how to combine them? • Voting on the final decision • Averaging the prediction score • Suppose there are 25 base classifiers • Each classifier has error rate ε = 0.35 • Assume the classifiers are independent • Probability that the ensemble classifier makes a wrong prediction (voting), i.e., that 13 or more of the 25 base classifiers are wrong: Σ_{i=13}^{25} C(25, i) ε^i (1 − ε)^{25−i} ≈ 0.06 (verified in the sketch below)
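A quick check of the ensemble error rate quoted above, under the stated independence assumption.

```python
from math import comb

# 25 independent base classifiers, each wrong with probability 0.35;
# the majority vote is wrong only if 13 or more of the 25 are wrong.
n, eps = 25, 0.35
p_wrong = sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(13, n + 1))
print(f"ensemble error rate ~ {p_wrong:.3f}")   # about 0.06, well below 0.35
```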
Modified K-Nearest Neighbor for MDTS [Figure: a test sample compared against normal and abnormal baseline time series using distance D(X,Y), with K = 3] • Time series distances: (1) Euclidean, (2) T-statistical, (3) Dynamic Time Warping
Dynamic Time Warping (DTW) • The minimum-distance warp path is the optimal alignment of two time series, where the distance of a warp path W = (w_1, ..., w_K) is Dist(W) = Σ_{k=1}^{K} Dist(w_k) • Dist(w_k) is the distance between the two data points (one from each time series) matched in the kth element of the warp path • Dynamic programming: the optimal warping distance satisfies D(i, j) = Dist(i, j) + min{ D(i − 1, j), D(i, j − 1), D(i − 1, j − 1) } • Figure (B) is from Keogh and Pazzani, SDM (2001)
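A minimal sketch of the dynamic-programming recurrence above for two one-dimensional series, using absolute difference as the point-wise distance; names are illustrative.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic-programming DTW between two 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # point-wise distance Dist(i, j)
            D[i, j] = cost + min(D[i - 1, j],        # step from (i-1, j)
                                 D[i, j - 1],        # step from (i, j-1)
                                 D[i - 1, j - 1])    # step from (i-1, j-1)
    return D[n, m]
```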
Support Feature Machine • Given an unlabeled sample A, we calculate the average statistical distances of A ↔ Normal and A ↔ Abnormal samples in the baseline (training) dataset per electrode (channel). • Statistical distances: Euclidean, T-statistics, Dynamic Time Warping • Combining all electrodes, A is classified to the group (normal or abnormal) that yields • the minimum average statistical distance; or • the maximum number of votes • Can we select/optimize the selection of a subset of electrodes that maximizes the number of correctly classified samples? (a classification sketch follows below)
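A minimal sketch of the classification step just described; the function names, the choice of distance, and the tie handling are illustrative assumptions, and the electrode subset is assumed to come from the optimization models on the later slides.

```python
import numpy as np

def sfm_classify(sample, normal, abnormal, dist, selected, rule="averaging"):
    """Classify a multichannel sample by comparing, per selected electrode,
    its average distance to the normal vs. abnormal baseline samples.

    sample:   array of shape (n_electrodes, n_points)
    normal:   list of baseline samples from the normal class (same shape)
    abnormal: list of baseline samples from the abnormal class (same shape)
    dist:     a time-series distance, e.g. dtw_distance or a Euclidean distance
    selected: iterable of electrode indices (all electrodes if not optimized)
    """
    votes, d_norm, d_abn = 0, [], []
    for e in selected:
        dn = np.mean([dist(sample[e], s[e]) for s in normal])
        da = np.mean([dist(sample[e], s[e]) for s in abnormal])
        d_norm.append(dn)
        d_abn.append(da)
        votes += 1 if dn < da else -1            # one vote per electrode
    if rule == "voting":
        return "normal" if votes > 0 else "abnormal"
    return "normal" if np.mean(d_norm) < np.mean(d_abn) else "abnormal"
```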
SFM: Averaging and Voting • Two distances for each sample at each electrode are calculated: • Intra-class: average distance from each sample to all other samples in the same class at Electrode j • Inter-class: average distance from each sample to all other samples in the different class at Electrode j • Averaging: if, for Sample i, the average intra-class distance over the selected electrodes < the average inter-class distance over the selected electrodes, we claim that Sample i is correctly classified. • Voting: for Sample i at Electrode j, a vote is good if the intra-class distance < the inter-class distance. Based on the selected electrodes, if # of good votes > # of bad votes, then Sample i is correctly classified. Chaovalitwongse et al., KDD (2007) and Chaovalitwongse et al., Operations Research (forthcoming)
Distance Averaging: Training • For Sample i, the intra-class and inter-class distances are computed at Feature 1, Feature 2, ..., Feature m. • Select a subset of features such that the average intra-class distance is smaller than the average inter-class distance for as many samples as possible.
Majority Voting: Training [Figure: positive sample i and negative sample i′ compared at Feature j] • The vote at Feature j is correct if the intra-class distance is smaller than the inter-class distance, and incorrect otherwise.
SFM Optimization Model [Figure: illustration of intra-class and inter-class distances] Chaovalitwongse et al., KDD (2007) and Chaovalitwongse et al., Operations Research (forthcoming)
Averaging SFM • Objective: maximize the number of correctly classified samples • Logical constraints on the intra-class and inter-class distances determine whether a sample counts as correctly classified (a hedged sketch of the model follows below) • Must select at least one electrode Chaovalitwongse et al., KDD (2007) and Chaovalitwongse et al., Operations Research (forthcoming)
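The model itself did not survive extraction; the following is a hedged sketch of an averaging-SFM mixed-integer program consistent with the description above, not necessarily the exact published formulation. Here d^intra_ij and d^inter_ij are the intra- and inter-class distances of Sample i at Electrode j, x_j = 1 selects Electrode j, y_i = 1 marks Sample i as correctly classified, and M is a suitably large constant; all of these symbols are assumptions introduced for illustration.

```latex
\begin{aligned}
\max_{x,\,y}\quad & \sum_{i=1}^{n} y_i \\
\text{s.t.}\quad  & \sum_{j=1}^{m}\bigl(d^{\mathrm{intra}}_{ij}-d^{\mathrm{inter}}_{ij}\bigr)\,x_j \;\le\; M\,(1-y_i) \qquad i=1,\dots,n,\\
                  & \sum_{j=1}^{m} x_j \;\ge\; 1,\\
                  & x_j \in \{0,1\},\; y_i \in \{0,1\}.
\end{aligned}
```

When y_i = 1 the first constraint forces the total (and hence average) intra-class distance over the selected electrodes to be no larger than the inter-class distance, matching the averaging rule.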
Voting SFM • Objective: maximize the number of correctly classified samples • Logical constraints: a sample counted as correctly classified must win the voting over the selected electrodes (a hedged sketch follows below) • Must select at least one electrode • Precision matrix A contains elements a_ij indicating whether Sample i receives a good vote at Electrode j Chaovalitwongse et al., KDD (2007) and Chaovalitwongse et al., Operations Research (forthcoming)
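Similarly hedged, a sketch of a voting-SFM integer program consistent with the slide, with a_ij = 1 if Sample i gets a good vote at Electrode j and a_ij = 0 otherwise; the published formulation may differ in detail.

```latex
\begin{aligned}
\max_{x,\,y}\quad & \sum_{i=1}^{n} y_i \\
\text{s.t.}\quad  & \sum_{j=1}^{m}\bigl(2a_{ij}-1\bigr)\,x_j \;\ge\; 1 - M\,(1-y_i) \qquad i=1,\dots,n,\\
                  & \sum_{j=1}^{m} x_j \;\ge\; 1,\\
                  & x_j \in \{0,1\},\; y_i \in \{0,1\}.
\end{aligned}
```

The term (2a_ij − 1) contributes +1 for a good vote and −1 for a bad one, so whenever y_i = 1 the constraint requires strictly more good votes than bad votes over the selected electrodes.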
Support Vector Machine [Figure: Normal and Pre-Seizure samples plotted in a 3-D feature space (Feature 1, Feature 2, Feature 3)]
Facts about Epilepsy • About 3 million Americans and another 60 million people worldwide (about 1% of the population) suffer from epilepsy. • Epilepsy is the second most common brain disorder (after stroke); it causes recurrent seizures (not vice versa). • Seizures usually occur spontaneously, in the absence of external triggers. • Epileptic seizures occur when a massive group of neurons in the cerebral cortex suddenly begins to discharge in a highly organized rhythmic pattern. • Seizures cause temporary disturbances of brain functions such as motor control, responsiveness, and recall, which typically last from seconds to a few minutes. • Based on 1995 estimates, epilepsy imposes an annual economic burden of $12.5 billion* in the U.S. in associated health care costs and losses in employment, wages, and productivity. • Cost per patient ranged from $4,272 for persons** in remission after initial diagnosis and treatment to $138,602 for persons** with intractable and frequent seizures. *Begley et al., Epilepsia (2000); **Begley et al., Epilepsia (1994).
Simplified EEG System and Intracranial Electrode Montage • Electroencephalography (EEG) is a traditional tool for evaluating the physiological state of the brain by measuring the voltage potentials produced by brain cells as they communicate.
Scalp EEG Acquisition 18 Bipolar Channels
Goals: How can we help? • Seizure Prediction • Recognizing (data-mining) abnormality patterns in EEG signals preceding seizures • Normal versus Pre-Seizure • Alert when pre-seizure samples are detected (online classification) • e.g., statistical process control in production systems, attack alerts from sensor data, stock market analysis • EEG Classification: Routine EEG Check • Quickly identify whether the patients have epilepsy • Epilepsy versus Non-Epilepsy • Many causes of seizures: convulsive or other seizure-like activity can be non-epileptic in origin and is observed in many other medical conditions. These non-epileptic seizures can be hard to differentiate and may lead to misdiagnosis. • e.g., medical check-up, normal and abnormal samples
10-second EEGs: Seizure Evolution [Figure: EEG traces from the Normal, Pre-Seizure, Seizure Onset, and Post-Seizure states] Chaovalitwongse et al., Annals of Operations Research (2006)
EEG Sampling Procedure [Figure: recording timeline showing 8-hour normal intervals and 30-minute pre-seizure windows before each seizure] • Randomly and uniformly sample 3 EEG epochs per seizure from each of the normal and pre-seizure states. For example, Patient 1 has 7 seizures, so 21 normal and 21 pre-seizure EEG epochs are sampled. • Use leave-one(seizure)-out cross validation to perform training and testing.
Information/Feature Extraction from EEG Signals • Measure the brain dynamics from EEG signals • Apply dynamical measures (based on chaos theory) to non-overlapping EEG epochs of 10.24 seconds = 2048 points • Maximum Short-Term Lyapunov Exponent (STLmax) • measures the stability/chaoticity of EEG signals • measures the average uncertainty along the local eigenvectors and phase differences of an attractor in the phase space Pardalos, Chaovalitwongse, et al., Math Programming (2004)
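A minimal sketch of the epoching step, assuming a 200 Hz sampling rate (so 2048 points ≈ 10.24 s) and an arbitrary per-channel feature function standing in for the STLmax computation, which is considerably more involved; names are illustrative.

```python
import numpy as np

EPOCH_LEN = 2048  # 10.24 s at an assumed 200 Hz sampling rate

def extract_features(eeg, feature_fn):
    """Split a multichannel recording into non-overlapping 2048-point epochs
    and apply a per-channel feature function (a stand-in for STLmax).

    eeg: array of shape (n_channels, n_samples)
    returns: array of shape (n_channels, n_epochs)
    """
    n_channels, n_samples = eeg.shape
    n_epochs = n_samples // EPOCH_LEN
    feats = np.empty((n_channels, n_epochs))
    for e in range(n_epochs):
        epoch = eeg[:, e * EPOCH_LEN:(e + 1) * EPOCH_LEN]
        for c in range(n_channels):
            feats[c, e] = feature_fn(epoch[c])
    return feats
```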
Evaluation • Sensitivity measures the fraction of positive cases that are classified as positive: Sensitivity = TP/(TP+FN) • Specificity measures the fraction of negative cases that are classified as negative: Specificity = TN/(TN+FP) • Type I error = 1 − Specificity • Type II error = 1 − Sensitivity Chaovalitwongse et al., Epilepsy Research (2005)
Leave-One-Seizure-Out Cross Validation [Figure: normal epochs N1–N5 and pre-seizure epochs P1–P5 split into training and testing sets, with the SFM-selected electrodes marked among channels 1–26] • N – EEGs from the normal state • P – EEGs from the pre-seizure state • Assume there are 5 seizures in the recordings
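A hedged sketch of the cross-validation loop using scikit-learn's LeaveOneGroupOut, where each EEG epoch is tagged with the seizure it was sampled around; fit_predict is a placeholder for any of the classifiers discussed (SFM, KNN, SVM).

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_seizure_out(X, y, seizure_id, fit_predict):
    """X: (n_epochs, ...) features; y: labels (normal / pre-seizure);
    seizure_id: for each epoch, the seizure it was sampled around.
    fit_predict(train_X, train_y, test_X) stands in for any classifier."""
    logo = LeaveOneGroupOut()
    accuracies = []
    for train_idx, test_idx in logo.split(X, y, groups=seizure_id):
        preds = fit_predict(X[train_idx], y[train_idx], X[test_idx])
        accuracies.append(np.mean(preds == y[test_idx]))
    return np.mean(accuracies)
```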
EEG Classification • Support Vector Machine [Chaovalitwongse et al., Annals of OR (2006)] • Project time series data in a high dimensional (feature) space • Generate a hyperplane that separates two groups of data – minimizing the errors • Ensemble K-Nearest Neighbor [Chaovalitwongse et al., IEEE SMC: Part A (2007)] • Use each electrode as a base classifier • Apply the NN rule using statistical time series distances and optimize the value of “k” in the training • Voting and Averaging • Support Feature Machine [Chaovalitwongse et al., SIGKDD (2007); Chaovalitwongse et al., Operations Research (forthcoming)] • Use each electrode as a base classifier • Apply the NN rule to the entire baseline data • Optimize by selecting the best group of classifiers (electrodes/features) • Voting: Optimizes the ensemble classification • Averaging: Uses the concept of inter-class and intra-class distances (or prediction scores)
Performance Characteristics: Upper Bound SVM -> Chaovalitwongse et al., Annals of Operations Research (2006) SFM -> Chaovalitwongse et al., SIGKDD (2007); Chaovalitwongse et al., Operations Research (forthcoming) KNN -> Chaovalitwongse et al., IEEE Trans. Systems, Man, and Cybernetics: Part A (2007)
Separation of Normal and Pre-Seizure EEGs [Figure: feature values from 3 electrodes not selected by SFM versus 3 electrodes selected by SFM]
Performance Characteristics: Validation SVM -> Chaovalitwongse et al., Annals of Operations Research (2006) SFM -> Chaovalitwongse et al., SIGKDD (2007); Chaovalitwongse et al., Operations Research (forthcoming) KNN -> Chaovalitwongse et al., IEEE Trans. Systems, Man, and Cybernetics: Part A (2007)
Epilepsy versus Non-Epilepsy Data Set • Routine EEG check: roughly 25–30 minutes of recording with scalp electrodes • Each sample is a 5-minute EEG epoch (30 points of STLmax values) • Each sample is in the form of 18 electrodes × 30 points
Leave-One-Patient-Out Cross Validation [Figure: non-epilepsy samples N1–N5 and epilepsy samples E1–E5 split into training and testing sets, with the SFM-selected electrodes marked among channels 1–26] • N – Non-Epilepsy • E – Epilepsy
[Table: 18-channel bipolar scalp montage, e.g., 1: Fp1 – C3, 16: T6 – Oz, 17: Fz – Oz]
Other Medical Datasets • Breast Cancer • Features of Cell Nuclei (Radius, perimeter, smoothness, etc.) • Malignant or Benign Tumors • Diabetes • Patient Records (Age, body mass index, blood pressure, etc.) • Diabetic or Not • Heart Disease • General Patient Info, Symptoms (e.g., chest pain), Blood Tests • Identify Presence of Heart Disease • Liver Disorders • Features of Blood Tests • Detect the Presence of Liver Disorders from Excessive Alcohol Consumption
Performance [Table: training and testing performance on the other medical datasets]
Medical Data Signal Processing Apparatus (MeDSPA) • Quantitative analyses of medical data • Neurophysiological data (e.g., EEG, fMRI) acquired during brain diagnosis • Envisioned as an automated decision-support system that accepts input medical signal data (associated with a spatial position or feature) and provides measurement data to help physicians reach a more confident diagnosis • Aims to improve current medical diagnosis and prognosis by assisting physicians in • recognizing (data-mining) abnormality patterns in medical data • recommending the diagnosis outcome (e.g., normal or abnormal) • identifying a graphical indication (or feature) of abnormality (localization)