Presentation Transcript


  1. Neural Network Ensemble based on Feature Selection for Non-Invasive Recognition of Liver Fibrosis Stage. Bartosz KRAWCZYK, Michał WOŹNIAK, Tomasz ORCZYK, Piotr PORWIK, Joanna MUSIALIK, Barbara BŁOŃSKA-FAJFROWSKA

  2. Presentation agenda • Overview. • Current diagnostic methods. • Proposed method. • Analyzed data. • Data analysis methods. • Result comparison. • Conclusions.

  3. Overview • Liver fibrosis: • Accumulation of tough, fibrous scar tissue in the liver. • ~1.75% of Poland’s population is infected with HCV. • Untreated, it may cause liver cirrhosis and death. • Risk factors: • Chronic infection with hepatitis C or hepatitis B virus (HCV, HBV). • Immune system compromise (HIV or immunosuppressive drugs). • Heavy alcohol consumption. • Gradation indexes: • Knodell Histological Activity Index (HAI Score). • Ishak system. • METAVIR system.

  4. Current diagnostic methods • Invasive • Liver biopsy • Risk of health complications or even death. • Up to 45% uncertainty, depending on biopsy sample quality and size. • Still regarded as the "gold standard". • Non-invasive • ELF Test • FibroTest & FibroScan • Expensive • Not very accurate

  5. Proposed method • Non-invasive • Blood test based • Inexpensive • Only regular blood tests • Comparable with other non-invasive methods • Similar error level to FibroTest

  6. Proposed method: Analyzed data and problems • Data characteristics: • 127 patients, mostly with HCV (70%) and liver fibrosis. • All patients otherwise healthy and not under therapy. • 34 parameters measured. • Problems: • Small number of data samples. • Unequal distribution of diagnosed fibrosis levels. • Incomplete records. • Many poor-quality biopsies.

  7. Proposed method: Analyzed data and problems

  8. Proposed method: Neural Network Ensemble The introduced method of classifier ensemble design consists of three main steps: • Building the pool of individual classifiers. • Pruning the acquired pool by discarding redundant predictors. • Using a sophisticated trained fuser to deliver the ensemble.

  9. Proposed method: Building the pool of classifiers • Models should be complementary to each other, exhibiting at the same time high accuracy and high diversity. • There is no single optimal approach to the feature selection task, and results obtained with different methods may differ significantly. • Instead of selecting a single best feature selection method, we use several of them to reduce the dimensionality of the feature space.
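
A minimal sketch of this step, assuming scikit-learn and a synthetic stand-in for the blood-test data; the two selectors shown are illustrative placeholders for the eight methods listed later on slide 13:

```python
# Sketch: build a pool of neural networks, each trained on a feature subset
# chosen by a different selection criterion (placeholders for the paper's methods).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 127 samples, 34 parameters (shapes taken from slide 6).
X, y = make_classification(n_samples=127, n_features=34, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

selectors = {
    "anova": SelectKBest(f_classif, k=10),
    "mutual_info": SelectKBest(mutual_info_classif, k=10),
}

pool = []
for name, selector in selectors.items():
    # Each pool member sees only the features its selector kept,
    # which is what makes the individual networks diverse.
    member = make_pipeline(selector, MLPClassifier(hidden_layer_sizes=(7,),
                                                   max_iter=2000, random_state=0))
    member.fit(X, y)
    pool.append((name, member))

print([name for name, _ in pool])
```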

  10. Proposed method: Ensemble pruning • The literature describes several different ways of selecting valuable members for the committee. • An ideal ensemble consists of classifiers of high individual accuracy and high diversity. • Among diversity measures there are two major types: • Pairwise (shows how two classifiers differ from each other). • Non-pairwise (measures the diversity of the whole ensemble). • For measuring the diversity of the whole ensemble we used the entropy measure.
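
For illustration, a sketch of a non-pairwise entropy diversity measure (here in the form given by Kuncheva and Whitaker, which is an assumption about the exact variant used), computed from oracle outputs, i.e. a 0/1 matrix recording which classifiers got which samples right; the matrix below is made up:

```python
import numpy as np

def entropy_diversity(correct):
    """Entropy diversity measure E over a pool of classifiers.

    correct: 0/1 matrix of shape (n_samples, n_classifiers), where 1 means
    the classifier labelled that sample correctly. Returns a value in [0, 1]:
    0 for identical classifiers, 1 for maximal diversity.
    """
    n, L = correct.shape
    hits = correct.sum(axis=1)                    # correct votes per sample
    return float(np.mean(np.minimum(hits, L - hits)) / (L - np.ceil(L / 2)))

# Made-up oracle outputs for a pool of 5 classifiers on 4 samples.
correct = np.array([[1, 1, 0, 1, 0],
                    [1, 0, 0, 1, 1],
                    [0, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1]])
print(entropy_diversity(correct))   # 0.625
```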

  11. Proposed method: Fusion of individual classifiers • Classifier fusion algorithms can make decisions on the basis of class labels given by individual classifiers, or they can construct new discriminant functions on the basis of individual classifier support functions: • The first group includes voting algorithms. • The second group is based on discriminant analysis. • The design of improved fusion classification models, especially trained fusers, is the focus of current research.
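
The difference between the two groups can be shown with made-up outputs of three classifiers for a single object and two classes; note that the two rules need not agree:

```python
import numpy as np

labels = np.array([0, 1, 1])                 # crisp decisions of the three classifiers
supports = np.array([[0.70, 0.30],           # their discriminant (support) values
                     [0.45, 0.55],
                     [0.48, 0.52]])

vote = np.bincount(labels, minlength=2).argmax()   # group 1: majority voting on labels
avg = supports.mean(axis=0).argmax()               # group 2: combining support functions

print(vote, avg)   # 1 0: voting picks class 1, averaged supports pick class 0
```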

  12. Proposed method: Fusion of individual classifiers Assume that we have K classifiers in the pool after the pruning procedure. For a given object x, each individual classifier decides for a class based on the values of its discriminant functions. Let F^(l)(i, x) denote the discriminant function that is assigned to class i for a given value of x and that is used by the l-th classifier. The combined classifier uses the decision rule Ψ̂(x) = argmax_i F̂(i, x), where F̂(i, x) = Σ_{l=1}^{K} w_i^(l) · F^(l)(i, x). The weights can be set dependent on the classifier and the class number: weight w_i^(l) is assigned to the l-th classifier and the i-th class, and for a given classifier the weights assigned to different classes may differ.
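
A small numeric sketch of this decision rule; the support values and weights below are made up (in the method itself the weights come from training the fuser):

```python
import numpy as np

# F[l, i]: support of the l-th classifier for class i on a given object x (made up).
F = np.array([[0.10, 0.20, 0.60, 0.10],
              [0.05, 0.15, 0.55, 0.25],
              [0.30, 0.40, 0.20, 0.10]])

# w[l, i]: weight of the l-th classifier for the i-th class; arbitrary values here,
# normalized per class, whereas in the paper they are produced by the trained fuser.
w = np.array([[0.5, 0.2, 0.4, 0.3],
              [0.3, 0.3, 0.4, 0.4],
              [0.2, 0.5, 0.2, 0.3]])

F_hat = (w * F).sum(axis=0)          # F_hat(i, x) = sum over l of w_i^(l) * F^(l)(i, x)
print(F_hat, int(F_hat.argmax()))    # the combined classifier picks the class with max support
```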

  13. Proposed method: Feature selection algorithms Eight different feature selection algorithms were used, namely: • ReliefF, • Fast Correlation Based Filter, • Genetic Wrapper, • Simulated Annealing Wrapper, • Forward Selection, • Backward Selection, • Quick Branch & Bound, • Las Vegas Incremental. The neural network architecture was as follows: the number of neurons in the input layer was equal to the number of selected features, the number of output neurons was equal to the number of classes, and the number of hidden neurons was equal to half of the sum of the neurons in the two former layers.
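
The sizing rule from the last sentence is easy to express in code; a sketch assuming scikit-learn's MLPClassifier, with an illustrative 12-feature subset and 4 classes (both numbers are examples, not taken from the paper):

```python
from sklearn.neural_network import MLPClassifier

def build_network(n_selected_features, n_classes):
    """Architecture rule from slide 13: the hidden layer holds half as many
    neurons as the input and output layers combined."""
    n_hidden = (n_selected_features + n_classes) // 2
    return MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)

net = build_network(12, 4)   # a 12-feature subset, 4 classes -> 8 hidden neurons
```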

  14. Proposed method: Set-up • As reference methods we selected the most popular ensemble methods: Bagging, Boosting, Random Forest and Random Subspace. • Additionally, we compared our method with the single best classifier from the pool, with all classifiers from the pool, and with simple majority voting. • The combined 5x2 CV F test [1] was carried out to assess the statistical significance of the obtained results.
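
For reference, a sketch of the combined 5x2 CV F test [1] for comparing two classifiers, assuming scikit-learn-style estimators and array inputs; under the null hypothesis the statistic follows an F(10, 5) distribution, so a small p-value indicates a significant difference between the two classifiers:

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import StratifiedKFold

def combined_5x2cv_f_test(clf_a, clf_b, X, y, seed=0):
    """Combined 5x2 CV F test: five repetitions of 2-fold cross-validation,
    collecting the per-fold differences in error between the two classifiers."""
    rng = np.random.RandomState(seed)
    diffs_all, variances = [], []
    for _ in range(5):
        cv = StratifiedKFold(n_splits=2, shuffle=True,
                             random_state=int(rng.randint(10 ** 6)))
        diffs = []
        for train, test in cv.split(X, y):
            err_a = 1.0 - clf_a.fit(X[train], y[train]).score(X[test], y[test])
            err_b = 1.0 - clf_b.fit(X[train], y[train]).score(X[test], y[test])
            diffs.append(err_a - err_b)
        mean = (diffs[0] + diffs[1]) / 2.0
        diffs_all.extend(diffs)
        variances.append((diffs[0] - mean) ** 2 + (diffs[1] - mean) ** 2)
    f_stat = np.sum(np.square(diffs_all)) / (2.0 * np.sum(variances))
    return f_stat, stats.f.sf(f_stat, 10, 5)   # statistic and p-value
```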

  15. Proposed method: Results

  16. Proposed method: Results • The proposed neural network ensemble, based on feature selection methods, outperformed all the MCS previously used for this problem. • The weakest results were returned by the single best model approach, which highlights the usefulness of utilizing more than one classifier to fully exploit the outputs of the feature selection methods. • The second biggest accuracy boost comes from the fuser used: the trained fusion of individual classifiers allows an optimal linear combination of them to be derived. • The pruning step had the smallest, but still statistically significant, impact on the ensemble design.

  17. Conclusions: • The presented paper shows that, despite some problems, it is possible to reach a similar or even lower error level than commercial tests. • It is also worth mentioning that, according to other research, the liver biopsy result is also only a prediction, with a classification error varying from 35% up to 45%, depending on the sample size and count. • We proved that each of the three steps embedded in the proposed committee design has an important impact on the quality of the final prediction and thus should not be omitted.

  18. Thank you for your attention Contact: bartosz.krawczyk@pwr.wroc.pl tomasz.orczyk@us.edu.pl
