Using Real-Valued Meta Classifiers to Integrate Binding Site Predictions
Yi Sun, Mark Robinson, Rod Adams, Paul Kaye, Alistair G. Rust, Neil Davey
University of Hertfordshire, 2005
Outline • Problem Domain • Description of the Datasets • Experimental Techniques • Experiments • Summary
Problem Domain (1) • One of the most exciting and active areas of research in biology is currently understanding the regulation of gene expression. • It is known that many of the mechanisms of gene regulation act directly at the transcriptional, or sequence, level.
Problem Domain (2) • Transcription factors bind to a number of different but related sequences, thereby effecting changes in the expression of genes. The current state-of-the-art algorithms for transcription factor binding site prediction are, in spite of recent advances, still severely limited in accuracy.
Description of the Datasets (1) • The original dataset has 68,910 possible binding sites. • A prediction result for each of 12 algorithms: • Single sequence algorithms (7); • Coregulatory algorithms (3); • Comparative algorithm (1); • Evolutionary algorithm (1). • It includes two classes, labelled as either binding sites or non-binding sites, with about 93% being non-binding sites.
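As an illustration of this layout, here is a minimal synthetic sketch in Python; the array names and the random values are ours, not the paper's. Each row is one candidate position, each column is one algorithm's real-valued prediction, and the labels are binary.

```python
import numpy as np

# Hypothetical illustration of the dataset layout described above.
N_POSITIONS = 68910      # possible binding sites in the original data
N_ALGORITHMS = 12        # 7 single-sequence, 3 coregulatory, 1 comparative, 1 evolutionary

rng = np.random.default_rng(0)

# Real-valued predictions from the 12 algorithms, one column per algorithm (synthetic here).
X = rng.random((N_POSITIONS, N_ALGORITHMS))

# Binary labels: 1 = binding site, 0 = non-binding site (~93% non-binding).
y = (rng.random(N_POSITIONS) < 0.07).astype(int)

print(X.shape, y.mean())   # (68910, 12), roughly 0.07 positives
```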
Description of the Datasets (2) Fig. 1. Organisation of dataset, showing alignment of algorithmic predictions, known information and original DNA sequence data.
Description of the Datasets (3): Windowing Fig. 2. The window size is set to 7 in this study. The label of the middle position of 7 consecutive prediction sites becomes the label of the new windowed input. The length of each windowed input is now 12 × 7 = 84.
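A minimal sketch of this windowing step, assuming the per-position predictions are stored row-wise as in the synthetic layout above; the function name and signature are ours.

```python
import numpy as np

def make_windowed_inputs(X, y, window=7):
    """Slide a window of `window` consecutive positions over the predictions.

    X: (N, 12) array of per-position algorithm predictions.
    y: (N,) array of binary labels.
    Returns windowed inputs of length 12 * window, each labelled with the
    label of the centre position of its window. Positions too close to the
    sequence boundaries are simply dropped in this sketch.
    """
    half = window // 2
    Xw, yw = [], []
    for i in range(half, len(X) - half):
        Xw.append(X[i - half:i + half + 1].ravel())   # 7 rows of 12 -> length 84
        yw.append(y[i])                               # label of the centre position
    return np.asarray(Xw), np.asarray(yw)

# Usage: Xw, yw = make_windowed_inputs(X, y)
```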
Imbalanced Data (93% being Non-binding Sites) Sampling Techniques for Imbalanced Dataset Learning • For under-sampling, we randomly select a subset of data points from the majority class. • The synthetic minority over-sampling technique (SMOTE) proposed by N. V. Chawla et al. is applied for over-sampling (see the sketch below). • For each pattern in the minority class, we search for its K nearest neighbours within the minority class using Euclidean distance. • For continuous features, the difference of each feature between the pattern and its chosen neighbour is taken, multiplied by a random number between 0 and 1, and added to the corresponding feature of the pattern. • For binary features, a majority vote over the corresponding elements of the K nearest neighbours' feature vectors is used.
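The following is a rough sketch of the SMOTE variant described above for mixed continuous and binary features; the function name, parameters, and the brute-force neighbour search are ours, and, as in standard SMOTE, the neighbour used for interpolation is one of the K nearest chosen at random. Under-sampling of the majority class is simply a random subset, e.g. `rng.choice(majority_idx, size=n_keep, replace=False)`.

```python
import numpy as np

def smote_mixed(X_min, continuous_idx, binary_idx, k=5, n_new=1000, rng=None):
    """Sketch of SMOTE for a minority class with continuous and binary features."""
    rng = rng or np.random.default_rng(0)

    # Pairwise Euclidean distances within the minority class (brute force).
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours per pattern

    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))                 # a random minority pattern
        nn = neighbours[i]
        j = rng.choice(nn)                           # one of its k nearest neighbours
        new = X_min[i].copy()
        gap = rng.random()
        # Continuous features: pattern + gap * (neighbour - pattern).
        new[continuous_idx] = X_min[i, continuous_idx] + gap * (
            X_min[j, continuous_idx] - X_min[i, continuous_idx])
        # Binary features: majority vote over the k nearest neighbours.
        new[binary_idx] = (X_min[nn][:, binary_idx].mean(axis=0) >= 0.5).astype(float)
        synthetic.append(new)
    return np.vstack(synthetic)
```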
Experimental Techniques: The Classification Techniques • Majority Voting (MV); • Weighted Majority Voting (WMV); • Single Layer Networks (SLN); • Rule Sets (C4.5-Rules); • Support Vector Machines (SVM).
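As a sketch of how two of these combiners might look in Python; the per-algorithm weights, the use of scikit-learn, and the SVM hyperparameters are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def weighted_majority_vote(P, weights, threshold=0.5):
    """Weighted majority voting over binary predictions.

    P: (N, 12) array of 0/1 predictions, one column per algorithm.
    weights: (12,) per-algorithm weights, e.g. training-set accuracies (assumed).
    """
    weights = np.asarray(weights, dtype=float)
    score = P @ weights / weights.sum()      # weighted fraction of positive votes
    return (score >= threshold).astype(int)

# SVM meta-classifier trained on the real-valued prediction vectors (single or windowed).
svm = SVC(kernel="rbf", class_weight="balanced")   # balanced weights help with ~93% negatives
# svm.fit(X_train, y_train); y_pred = svm.predict(X_test)
```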
Performance Metrics: a confusion matrix
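The confusion matrix itself did not survive the conversion; the metrics in Tables 1 and 2 are derived from its four counts. Below is a sketch of standard derivations; the exact set of metrics reported in the tables is an assumption.

```python
def common_metrics(tp, fp, tn, fn):
    """Common performance metrics derived from a confusion matrix.

    Returned as fractions; multiply by 100 for percentages as in Tables 1-2.
    """
    recall    = tp / (tp + fn)                    # true-positive rate / sensitivity
    precision = tp / (tp + fp)
    fp_rate   = fp / (fp + tn)                    # x-axis of the ROC graphs below
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    f_score   = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision,
            "fp_rate": fp_rate, "accuracy": accuracy, "f_score": f_score}
```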
Experiments (1): Consistent Dataset Table 1: Common performance metrics (%) on the consistent test set of possible binding sites, with single and windowed inputs reported separately.
Experiments (2): Full Dataset Table 2: Common performance metrics (%) on the full test dataset, with single and windowed inputs reported separately.
Experiments (3) Fig. 3. ROC graph: five classifiers applied to the consistent test set with single inputs.
Experiments (4) Fig. 4. ROC graph: three classifiers applied to the full test set with windowed inputs.
Summary • Integrating the 12 algorithms with an SVM meta-classifier considerably improves binding site prediction. • Employing a ‘window’ of consecutive results in the input vector places each prediction in the context of its neighbours, allowing the classifier to exploit the local distribution of the data and further improve binding site prediction.