Lecture 9: Pattern recognition
Bioinformatics Master Course: Bioinformatics Data Analysis and Tools
Patterns: some are easy, some are not
• Knitting patterns
• Cooking recipes
• Pictures (dot plots)
• Colour patterns
• Maps
• Protein structures
• Protein sequences
• Protein interaction maps
Example of algorithm reuse: Data clustering
Many biological data analysis problems can be formulated as clustering problems:
• microarray gene expression data analysis
• identification of regulatory binding sites (similarly: splice junction sites, translation start sites, …)
• (yeast) two-hybrid data analysis (for inference of protein complexes)
• phylogenetic tree clustering (for inference of horizontally transferred genes)
• protein domain identification
• identification of structural motifs
• prediction reliability assessment of protein structures
• NMR peak assignments
• …
Data Clustering Problems
• Clustering: partition a data set into clusters so that data points in the same cluster are “similar” and points in different clusters are “dissimilar”
• Cluster identification: identifying clusters whose features differ significantly from the background
Application Examples
• Regulatory binding site identification: the CRP (CAP) binding site
• Two-hybrid data analysis
• Gene expression data analysis
All are solvable by the same algorithm!
Other Application Examples
• Phylogenetic tree clustering analysis (evolutionary trees)
• Protein side-chain packing prediction
• Assessment of prediction reliability of protein structures
• Protein secondary structures
• Protein domain prediction
• NMR peak assignments
• …
Multivariate statistics – Cluster analysis
Raw table (objects 1–5 × characters C1, C2, C3, C4, C5, C6, …) → similarity criterion → similarity matrix (5×5 scores) → cluster criterion → dendrogram
Human Evolution
Gaps in the knowledge domain…
Comparing sequences – Similarity Score
• Many properties can be used:
• Nucleotide or amino acid composition
• Isoelectric point
• Molecular weight
• Morphological characters
• But: molecular evolution is traced through sequence alignment
Multivariate statistics – Cluster analysis: now for sequences
Multiple sequence alignment (sequences 1–5) → similarity criterion → similarity matrix (5×5 scores) → phylogenetic tree
Lactate dehydrogenase multiple alignment

Human          -KITVVGVGAVGMACAISILMKDLADELALVDVIEDKLKGEMMDLQHGSLFLRTPKIVSGKDYNVTANSKLVIITAGARQ
Chicken        -KISVVGVGAVGMACAISILMKDLADELTLVDVVEDKLKGEMMDLQHGSLFLKTPKITSGKDYSVTAHSKLVIVTAGARQ
Dogfish        -KITVVGVGAVGMACAISILMKDLADEVALVDVMEDKLKGEMMDLQHGSLFLHTAKIVSGKDYSVSAGSKLVVITAGARQ
Lamprey        SKVTIVGVGQVGMAAAISVLLRDLADELALVDVVEDRLKGEMMDLLHGSLFLKTAKIVADKDYSVTAGSRLVVVTAGARQ
Barley         TKISVIGAGNVGMAIAQTILTQNLADEIALVDALPDKLRGEALDLQHAAAFLPRVRI-SGTDAAVTKNSDLVIVTAGARQ
Maizey         TKVSVIGAGNVGMAIAQTILTRDLADEIALVDAVPDKLRGEMLDLQHAAAFLPRTRLVSGTDMSVTRGSDLVIVTAGARQ
Lacto_casei    -KVILVGDGAVGSSYAYAMVLQGIAQEIGIVDIFKDKTKGDAIDLSNALPFTSPKKIYSA-EYSDAKDADLVVITAGAPQ
Bacillus_stea  -RVVVIGAGFVGASYVFALMNQGIADEIVLIDANESKAIGDAMDFNHGKVFAPKPVDIWHGDYDDCRDADLVVICAGANQ
Lacto_plant    QKVVLVGDGAVGSSYAFAMAQQGIAEEFVIVDVVKDRTKGDALDLEDAQAFTAPKKIYSG-EYSDCKDADLVVITAGAPQ
Therma_mari    MKIGIVGLGRVGSSTAFALLMKGFAREMVLIDVDKKRAEGDALDLIHGTPFTRRANIYAG-DYADLKGSDVVIVAAGVPQ
Bifido         -KLAVIGAGAVGSTLAFAAAQRGIAREIVLEDIAKERVEAEVLDMQHGSSFYPTVSIDGSDDPEICRDADMVVITAGPRQ
Thermus_aqua   MKVGIVGSGFVGSATAYALVLQGVAREVVLVDLDRKLAQAHAEDILHATPFAHPVWVRSGW-YEDLEGARVVIVAAGVAQ
Mycoplasma     -KIALIGAGNVGNSFLYAAMNQGLASEYGIIDINPDFADGNAFDFEDASASLPFPISVSRYEYKDLKDADFIVITAGRPQ

Distance matrix
                    1     2     3     4     5     6     7     8     9    10    11    12    13
 1 Human         0.000 0.112 0.128 0.202 0.378 0.346 0.530 0.551 0.512 0.524 0.528 0.635 0.637
 2 Chicken       0.112 0.000 0.155 0.214 0.382 0.348 0.538 0.569 0.516 0.524 0.524 0.631 0.651
 3 Dogfish       0.128 0.155 0.000 0.196 0.389 0.337 0.522 0.567 0.516 0.512 0.524 0.600 0.655
 4 Lamprey       0.202 0.214 0.196 0.000 0.426 0.356 0.553 0.589 0.544 0.503 0.544 0.616 0.669
 5 Barley        0.378 0.382 0.389 0.426 0.000 0.171 0.536 0.565 0.526 0.547 0.516 0.629 0.575
 6 Maizey        0.346 0.348 0.337 0.356 0.171 0.000 0.557 0.563 0.538 0.555 0.518 0.643 0.587
 7 Lacto_casei   0.530 0.538 0.522 0.553 0.536 0.557 0.000 0.518 0.208 0.445 0.561 0.526 0.501
 8 Bacillus_stea 0.551 0.569 0.567 0.589 0.565 0.563 0.518 0.000 0.477 0.536 0.536 0.598 0.495
 9 Lacto_plant   0.512 0.516 0.516 0.544 0.526 0.538 0.208 0.477 0.000 0.433 0.489 0.563 0.485
10 Therma_mari   0.524 0.524 0.512 0.503 0.547 0.555 0.445 0.536 0.433 0.000 0.532 0.405 0.598
11 Bifido        0.528 0.524 0.524 0.544 0.516 0.518 0.561 0.536 0.489 0.532 0.000 0.604 0.614
12 Thermus_aqua  0.635 0.631 0.600 0.616 0.629 0.643 0.526 0.598 0.563 0.405 0.604 0.000 0.641
13 Mycoplasma    0.637 0.651 0.655 0.669 0.575 0.587 0.501 0.495 0.485 0.598 0.614 0.641 0.000
Multivariate statistics – Cluster analysis
Data table (objects 1–5 × characters C1, C2, C3, C4, C5, C6, …) → similarity criterion → similarity matrix (5×5 scores) → cluster criterion → dendrogram/tree
Multivariate statistics – Cluster analysis: why do it?
• Finding a true typology
• Model fitting
• Prediction based on groups
• Hypothesis testing
• Data exploration
• Data reduction
• Hypothesis generation
But you can never prove a classification/typology!
Cluster analysis – data normalisation/weighting
Raw table → normalisation criterion → normalised table
• Column normalisation: x / max
• Column range normalisation: (x − min) / (max − min)
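To make the two normalisation formulas concrete, here is a minimal NumPy sketch; the example table is invented for illustration:

```python
# Column normalisations from the slide above; the raw table is made up.
import numpy as np

raw = np.array([[1.0, 200.0],
                [4.0, 800.0],
                [2.0, 500.0]])   # rows = objects, columns = characters C1, C2

# Column normalisation: x / max (per column)
by_max = raw / raw.max(axis=0)

# Column range normalisation: (x - min) / (max - min) (per column)
rng = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

print(by_max)
print(rng)   # every column now spans [0, 1]
```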
Cluster analysis – (dis)similarity matrix
Raw table → similarity criterion → similarity matrix (5×5 scores)
Minkowski metric: D_ij = ( Σ_k |x_ik − x_jk|^r )^(1/r)
• r = 2: Euclidean distance
• r = 1: city block distance
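A small sketch of the Minkowski metric with the two special cases named above; the example points are invented:

```python
# Minkowski metric D_ij = (sum_k |x_ik - x_jk|^r)^(1/r)
import numpy as np

def minkowski(x, y, r):
    return np.sum(np.abs(x - y) ** r) ** (1.0 / r)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

print(minkowski(x, y, r=2))  # Euclidean distance
print(minkowski(x, y, r=1))  # city block (Manhattan) distance
```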
Cluster analysis – Clustering criteria
Similarity matrix (5×5 scores) → cluster criterion → dendrogram (tree)
• Single linkage – nearest neighbour
• Complete linkage – furthest neighbour
• Group averaging – UPGMA
• Ward
• Neighbour joining – global measure
Cluster analysis – Clustering criteria
• Start with N clusters of 1 object each
• Apply the clustering distance criterion iteratively until you have 1 cluster of N objects
• The most interesting clustering lies somewhere in between: the dendrogram (tree) spans the full range of distances, from N clusters at the bottom to 1 cluster at the top
Single linkage clustering (nearest neighbour)
[Series of scatter plots over Char 1 and Char 2, showing the clusters merging step by step]
The distance from a point to a cluster is defined as the smallest distance between that point and any point in the cluster.
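For reference, single-linkage clustering can be run with SciPy's standard hierarchical clustering routines; the 2-D points ("Char 1" vs "Char 2") here are invented:

```python
# Single-linkage (nearest-neighbour) hierarchical clustering with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

points = np.array([[1.0, 1.0], [1.5, 1.2], [5.0, 5.0],
                   [5.2, 4.8], [9.0, 1.0]])

# method='single': the distance between two clusters is the smallest
# pairwise distance between their points, as defined on the slide above.
Z = linkage(points, method='single')
print(Z)           # each row: the two clusters merged and the merge distance
# dendrogram(Z)    # with matplotlib, this draws the tree
```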
Cluster analysis – Ward’s clustering criterion
Per cluster, calculate the error sum of squares: ESS = Σx² − (Σx)²/n; at each step, make the merge that gives the minimum increase in total ESS.
Suppose five objects with values 1, 2, 7, 9 and 12:

Step  Clustering              Total ESS
1     {1} {2} {7} {9} {12}    0
2     {1,2} {7} {9} {12}      0.5
3     {1,2} {7,9} {12}        2.5
4     {1,2} {7,9,12}          13.1
5     {1,2,7,9,12}            86.8
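The following sketch reproduces the ESS column above, assuming the merge order shown in the reconstructed table ({1,2}, then {7,9}, then {7,9,12}, then everything):

```python
# Ward's criterion: total ESS = sum over clusters of (sum x^2 - (sum x)^2 / n).
def ess(cluster):
    n = len(cluster)
    return sum(x * x for x in cluster) - sum(cluster) ** 2 / n

partitions = [
    [[1], [2], [7], [9], [12]],   # 5 singletons        -> 0
    [[1, 2], [7], [9], [12]],     # merge 1 and 2       -> 0.5
    [[1, 2], [7, 9], [12]],       # merge 7 and 9       -> 2.5
    [[1, 2], [7, 9, 12]],         # merge 12 into {7,9} -> 13.17 (slide: 13.1)
    [[1, 2, 7, 9, 12]],           # one cluster         -> 86.8
]
for p in partitions:
    print(round(sum(ess(c) for c in p), 2))
```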
Partitional Clustering
• Divide instances into disjoint clusters: flat rather than tree structure
• Key issues:
• how many clusters should there be?
• how should clusters be represented?
Partitional Clustering from a Hierarchical Clustering
We can always generate a partitional clustering from a hierarchical clustering by “cutting” the tree at some level.
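A minimal sketch of such a "cut", continuing the SciPy example above; the cut height of 2.0 is an arbitrary choice:

```python
# Cutting a hierarchical clustering to obtain a flat (partitional) clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[1.0, 1.0], [1.5, 1.2], [5.0, 5.0],
                   [5.2, 4.8], [9.0, 1.0]])
Z = linkage(points, method='single')

# Every subtree whose merges all happen below height 2.0 becomes one cluster.
labels = fcluster(Z, t=2.0, criterion='distance')
print(labels)   # one flat cluster label per original point, e.g. [1 1 2 2 3]
```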
K-Means Clustering
• Assume our instances are represented by vectors of real values
• Put k cluster centers in the same space as the instances
• Now iteratively move the cluster centers
K-Means Clustering
• Each iteration involves two steps:
• assignment of instances to clusters
• re-computation of the means
K-Means Clustering
• In k-means clustering, instances are assigned to one and only one cluster
• “Soft” k-means clustering can be done via the Expectation Maximization (EM) algorithm
• each cluster is represented by a normal distribution
• E step: determine how likely it is that each cluster “generated” each instance
• M step: move the cluster centers to maximize the likelihood of the instances
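A compact NumPy sketch of (hard) k-means showing the two alternating steps; the data, k, and initialisation are invented, and the soft EM variant is not shown:

```python
# Hard k-means: alternate assignment and mean re-computation until stable.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data
    for _ in range(n_iter):
        # assignment step: each instance goes to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # re-computation step: each center moves to the mean of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (20, 2)),
               np.random.default_rng(2).normal(3, 0.3, (20, 2))])
print(kmeans(X, k=2)[0])   # two centers, near (0, 0) and (3, 3)
```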
Ecogenomics
An algorithm that maps the observed clustering behaviour of sampled gene expression data onto the clustering behaviour of contaminant-labelled gene expression patterns in the knowledge base:
Sample → compatibility scores against condition 1 (contaminant 1), condition 2 (contaminant 2), condition 3 (contaminant 3), …, condition n (contaminant n)
Genome-Wide Cluster Analysis: the Eisen dataset
• Eisen et al., PNAS 1998
• S. cerevisiae (baker’s yeast)
• all genes (~6200) on a single array
• measured during several processes
• human fibroblasts
• 8600 human transcripts on the array
• measured at 12 time points during serum stimulation
The Eisen Data
• 79 measurements for the yeast data
• collected at various time points during:
• the diauxic shift (shutting down genes for metabolizing sugars, activating those for metabolizing ethanol)
• the mitotic cell division cycle
• sporulation
• temperature shock
• reducing shock
The Data
• Each measurement represents log(Red_i / Green_i), where Red_i is the test expression level and Green_i is the reference level for gene G in the i-th experiment
• The expression profile of a gene is the vector of its measurements across all experiments, (G_1, …, G_n)
The Data
m genes measured in n experiments:

  g_1,1 … g_1,n   ← vector for gene 1
  g_2,1 … g_2,n
    ⋮        ⋮
  g_m,1 … g_m,n
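A minimal sketch of this data layout, assuming base-2 log ratios (the slide does not specify the base) and invented intensities:

```python
# An m-genes x n-experiments matrix of log(Red_i / Green_i) ratios.
import numpy as np

red   = np.array([[120., 300.,  80.], [40., 35., 200.]])   # test channel
green = np.array([[ 60., 150., 160.], [80., 70., 100.]])   # reference channel

M = np.log2(red / green)   # M[g, i]: gene g in experiment i (base 2 assumed)
print(M[0])                # the expression profile (vector) of gene 1
```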
Eisen et al. Results
• Redundant representations of genes cluster together, but individual genes can be distinguished from related genes by subtle differences in expression
• Genes of similar function cluster together, e.g. 126 genes strongly down-regulated in response to stress
Eisen et al. Results
• 126 genes down-regulated in response to stress
• 112 of these genes encode ribosomal and other proteins related to translation
• This agrees with the previously known result that yeast responds to favorable growth conditions by increasing the production of ribosomes
Graphs – definition
Example of a weighted adjacency matrix (first three rows shown):

  0    1    1.5  2    5    6    7    9
  1    0    2    1    6.5  6    8    8
  1.5  2    0    1    4    4    6    5.5
  …

• An undirected graph has a symmetric adjacency matrix
• A digraph typically has a non-symmetric adjacency matrix
A Theoretical Framework
Representation of a set of n-dimensional (n-D) points as a graph:
• each data point is represented as a node
• each pair of points is represented as an edge whose weight is defined by the “distance” between the two points
n-D data points → distance matrix → graph representation
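A short sketch of this construction: invented n-D points are turned into a symmetric distance (adjacency) matrix:

```python
# Points become nodes; each pair of points gets an edge weighted by their
# Euclidean distance, giving a symmetric distance/adjacency matrix.
import numpy as np

points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.5]])
diff = points[:, None, :] - points[None, :, :]
A = np.linalg.norm(diff, axis=2)      # A[i, j] = distance between i and j
print(A)
print(np.allclose(A, A.T))            # True: undirected graph, symmetric matrix
```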
A Theoretical Framework
• Spanning tree: a sub-graph that connects all nodes and contains no cycles
• Minimum spanning tree: a spanning tree with the minimum total distance
Spanning tree
• Prim’s algorithm (graph → tree)
• step 1: select an arbitrary node as the current tree
• step 2: find the external node closest to the tree, and add it with its corresponding edge to the tree
• step 3: repeat step 2 until all nodes are connected in the tree
A sketch of this procedure follows below.
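A minimal implementation of Prim's algorithm over a dense distance matrix, following the three steps above; the 5-node matrix is invented:

```python
# Prim's algorithm: grow the tree one cheapest external edge at a time.
import numpy as np

def prim(A):
    n = len(A)
    in_tree = {0}                      # step 1: start from an arbitrary node
    edges = []
    while len(in_tree) < n:            # step 3: repeat until all connected
        # step 2: cheapest edge from the tree to an external node
        w, i, j = min((A[i][j], i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((i, j, w))
        in_tree.add(j)
    return edges

A = np.array([[0, 4, 8, 9, 9],
              [4, 0, 3, 7, 9],
              [8, 3, 0, 5, 6],
              [9, 7, 5, 0, 4],
              [9, 9, 6, 4, 0]], dtype=float)
print(prim(A))   # minimum spanning tree edges, total weight 16
```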
Spanning tree
• Kruskal’s algorithm
• step 1: consider edges in non-decreasing order of weight
• step 2: if the selected edge does not form a cycle, add it to the tree; otherwise reject it
• step 3: repeat steps 1 and 2 until all nodes are connected in the tree
[Figure: edges accepted in weight order; an edge that would close a cycle is rejected]
A matching sketch follows below.
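And a matching sketch of Kruskal's algorithm, using union-find for the cycle test; it runs on the same invented 5-node graph as the Prim sketch, with the edges listed explicitly:

```python
# Kruskal's algorithm: scan edges by weight, keep those that join components.
def kruskal(n, edges):
    parent = list(range(n))
    def find(x):                       # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):      # step 1: edges in non-decreasing order
        ru, rv = find(u), find(v)
        if ru != rv:                   # step 2: accept only if no cycle forms
            parent[ru] = rv
            tree.append((u, v, w))
        # else: reject, u and v are already connected
    return tree                        # step 3: done when all nodes connected

edges = [(4, 0, 1), (8, 0, 2), (9, 0, 3), (9, 0, 4), (3, 1, 2),
         (7, 1, 3), (9, 1, 4), (5, 2, 3), (6, 2, 4), (4, 3, 4)]
print(kruskal(5, edges))   # same total weight as Prim's tree: 16
```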
Multivariate statistics – Cluster analysis
Data table (objects 1–5 × characters C1, C2, C3, C4, C5, C6, …) → similarity criterion → similarity matrix (5×5 scores) → cluster criterion → phylogenetic tree
Multivariate statistics – Cluster analysis
Cluster both ways: the rows (objects 1–5, via a similarity criterion giving 5×5 scores and a cluster criterion) and the columns (characters C1–C6, via 6×6 scores and a cluster criterion), then use the two dendrograms to make a two-way ordered table.