Gaussian Mixture Model Classification of Multi-Color Fluorescence In Situ Hybridization (M-FISH) Images
Amin Fazel, 2006
Department of Computer Science and Electrical Engineering, University of Missouri – Kansas City
Motivation and Goals
• Chromosomes store genetic information
• Chromosome images can indicate genetic disease, cancer, radiation damage, etc.
• Research goals:
  • Locate and classify each chromosome in an image
  • Locate chromosome abnormalities
Karyotyping
• 46 human chromosomes form 24 types
  • 22 different pairs
  • 2 sex chromosomes, X and Y
• Grouped and ordered by length
[Figures: banding patterns; karyotype]
Multi-spectral Chromosome Imaging
• Multiplex Fluorescence In-Situ Hybridization (M-FISH) [1996]
• Five color dyes (fluorophores)
• Each human chromosome type absorbs a unique combination of the dyes
• 2^5 = 32 possible combinations of dyes distinguish the 24 human chromosome types
[Figure: karyotype of a healthy male]
M-FISH Images
• The 6th dye (DAPI) binds to all chromosomes
[Figures: M-FISH image (5 dyes); DAPI channel (6th dye)]
M-FISH Images
• Images of each dye are obtained with the appropriate optical filter
• Each pixel is a six-dimensional vector
• Each vector element gives the contribution of one dye at that pixel
• Chromosomal origin is distinguishable at a single pixel (unless chromosomes overlap)
• There is no need to estimate length, relative centromere position, or banding pattern
Bayesian Classification
• Based on probability theory
• A feature vector is denoted $x = [x_1, x_2, \ldots, x_D]^T$, where $D$ is the dimension of the vector
• The probability that a feature vector $x$ belongs to class $w_k$ is the posterior $p(w_k \mid x)$, computed via Bayes' rule:

$p(w_k \mid x) = \frac{p(x \mid w_k)\, P(w_k)}{p(x)}, \qquad p(x) = \sum_k p(x \mid w_k)\, P(w_k)$

Here $p(x \mid w_k)$ is the probability density function of class $w_k$ and $P(w_k)$ is the prior probability.
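To make the rule concrete, here is a minimal NumPy sketch of the posterior computation; the likelihood and prior values are invented for illustration and are not from the slides:

```python
import numpy as np

# Class-conditional likelihoods p(x | w_k) evaluated at one feature
# vector x, and class priors P(w_k) -- illustrative numbers only.
likelihoods = np.array([0.02, 0.10, 0.05])
priors = np.array([0.5, 0.3, 0.2])

evidence = np.sum(likelihoods * priors)       # p(x)
posteriors = likelihoods * priors / evidence  # p(w_k | x), sums to 1
print(posteriors)
```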
Gaussian Probability Density Function
• In the D-dimensional space:

$p(x) = \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right)$

• $\mu$ is the mean vector
• $\Sigma$ is the covariance matrix
• The Gaussian distribution carries the assumption that the class model is truly a model of one basic class (a single mode)
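A direct NumPy transcription of this density, as a sketch rather than a library-grade implementation (no numerical safeguards):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Evaluate the D-dimensional Gaussian N(x; mu, sigma)."""
    d = len(mu)
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)

# Standard 2-D Gaussian at the origin evaluates to 1/(2*pi):
print(gaussian_pdf(np.zeros(2), np.zeros(2), np.eye(2)))
```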
Gaussian Mixture Model (GMM)
• A GMM is a set of several Gaussians that represent groups/clusters of data
• The components therefore represent different subclasses inside one class
• The PDF is defined as a weighted sum of Gaussians:

$p(x) = \sum_{c=1}^{C} w_c\, \mathcal{N}(x; \mu_c, \Sigma_c), \qquad \sum_{c=1}^{C} w_c = 1$
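A small sketch of this weighted sum using SciPy's multivariate normal density; the two-component parameters are made up for the example:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_pdf(x, weights, means, covs):
    """p(x) = sum_c w_c * N(x; mu_c, Sigma_c)."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

# Toy 2-component, 2-D mixture (invented parameters).
weights = [0.6, 0.4]
means = [np.zeros(2), np.array([3.0, 3.0])]
covs = [np.eye(2), 2.0 * np.eye(2)]
print(gmm_pdf(np.array([0.5, 0.5]), weights, means, covs))
```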
Gaussian Mixture Models
Equations for GMMs in the multi-dimensional case: the mean $\mu$ becomes a vector $\boldsymbol{\mu}$ and the variance $\sigma^2$ becomes a covariance matrix $\Sigma$. Assume $\Sigma$ is a diagonal matrix:

$\Sigma = \begin{bmatrix} \sigma_{11}^2 & 0 & 0 \\ 0 & \sigma_{22}^2 & 0 \\ 0 & 0 & \sigma_{33}^2 \end{bmatrix}, \qquad \Sigma^{-1} = \begin{bmatrix} 1/\sigma_{11}^2 & 0 & 0 \\ 0 & 1/\sigma_{22}^2 & 0 \\ 0 & 0 & 1/\sigma_{33}^2 \end{bmatrix}$
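The diagonal assumption makes the density cheap to evaluate, since it factorizes into 1-D Gaussians; a minimal sketch assuming the variances are passed as a vector rather than a full matrix:

```python
import numpy as np

def diag_gaussian_pdf(x, mu, var):
    """N(x; mu, diag(var)): with a diagonal covariance, the inverse and
    determinant reduce to element-wise operations on the variances."""
    d = len(mu)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.prod(var))
    return norm * np.exp(-0.5 * np.sum((x - mu) ** 2 / var))

print(diag_gaussian_pdf(np.zeros(3), np.zeros(3), np.ones(3)))  # (2*pi)^(-3/2)
```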
GMM
• A Gaussian Mixture Model (GMM) is characterized by:
  • the number of components
  • the means and covariance matrices of the Gaussian components
  • the weight (height) of each component
GMM
• The GMM has the same dimension as the feature space (here, a 6-dimensional GMM)
• For visualization purposes, 2-dimensional GMMs are shown
[Figure: 2-D GMM surfaces; axes value1 and value2, height is likelihood]
GMM
• These parameters are tuned using an iterative procedure called Expectation Maximization (EM)
• The EM algorithm iteratively updates the distribution of each Gaussian component and the conditional probabilities so as to increase the likelihood of the data
GMM Training Flow Chart (1)
• Initialize the Gaussian means μ_i using the K-means clustering algorithm
• Initialize the covariance matrices from the distance to the nearest cluster
• Initialize the weights to 1/C so that all Gaussians are equally likely
K-means clustering (sketched below):
1. Initialization: random, or by maximum distance
2. Search: for each training vector, find the closest code word and assign the vector to that cell
3. Centroid update: for each cell, compute the centroid of that cell; the new code word is the centroid
4. Repeat (2)-(3) until the average distance falls below a threshold
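A compact K-means sketch for this initialization step; it is not the presenter's code, and the covariances here are simply per-cluster sample covariances, a common variant of the initialization described above:

```python
import numpy as np

def kmeans(X, C, iters=100, tol=1e-4, seed=0):
    """Plain K-means: X is (N, D) training data, C the cluster count."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), C, replace=False)]  # random init
    for _ in range(iters):
        # Search: assign each vector to its closest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Centroid update (keep the old center if a cell is empty).
        new = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(C)])
        if np.linalg.norm(new - centers) < tol:
            break
        centers = new
    return centers, labels

# GMM initialization from the K-means result:
X = np.random.default_rng(1).normal(size=(500, 2))
means, labels = kmeans(X, C=3)
weights = np.full(3, 1 / 3)  # all components equally likely
covs = [np.cov(X[labels == c].T) for c in range(3)]
```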
GMM Training Flow Chart (2)
• E step: compute the conditional expectation of the complete-data log-likelihood, i.e. evaluate the posterior probabilities relating each component to each data point, assuming the current cluster parameters are correct
• M step: find the cluster parameters that maximize the likelihood of the data, assuming the current posterior assignments are correct
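One EM update written out in NumPy/SciPy, as a minimal sketch of the two steps above rather than the presenter's exact implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, weights, means, covs):
    """One EM iteration for a GMM on data X of shape (N, D)."""
    N, C = len(X), len(weights)
    # E step: responsibilities r[n, c] = P(component c | x_n).
    r = np.column_stack([w * multivariate_normal.pdf(X, mean=m, cov=s)
                         for w, m, s in zip(weights, means, covs)])
    r /= r.sum(axis=1, keepdims=True)
    # M step: re-estimate weights, means, covariances from r.
    Nc = r.sum(axis=0)
    weights = Nc / N
    means = [r[:, c] @ X / Nc[c] for c in range(C)]
    covs = [((X - means[c]).T * r[:, c]) @ (X - means[c]) / Nc[c]
            for c in range(C)]
    return weights, means, covs
```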
GMM Training Flow Chart (3)
• Recompute the weights $w_{n,c}$ using the new weights, means, and covariances
• Stop training if $|w_{n+1,c} - w_{n,c}| <$ threshold, or once the number of epochs reaches the specified value; otherwise continue the iterative updates
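A possible training loop wiring the em_step sketch above to this stopping rule; the threshold and epoch limit are assumed values, not from the slides:

```python
max_epochs, tol = 200, 1e-5  # assumed stopping parameters
for epoch in range(max_epochs):
    new_weights, means, covs = em_step(X, weights, means, covs)
    if np.max(np.abs(np.array(new_weights) - np.array(weights))) < tol:
        break  # weight change fell below the threshold
    weights = new_weights
```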
GMM Test Flow Chart
• Present each input pattern $x$ and compute the confidence for each class $k$: $P(c_k)\, p(x \mid c_k)$, where $P(c_k)$ is the prior probability of class $c_k$, estimated by counting the number of training patterns per class
• Classify pattern $x$ as the class with the highest confidence
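A sketch of this decision rule, assuming one trained GMM per class (the helper names here are hypothetical):

```python
import numpy as np
from scipy.stats import multivariate_normal

def class_confidence(x, prior, weights, means, covs):
    """Confidence P(c_k) * p(x | c_k), with p(x | c_k) the class GMM."""
    pdf = sum(w * multivariate_normal.pdf(x, mean=m, cov=s)
              for w, m, s in zip(weights, means, covs))
    return prior * pdf

def classify(x, priors, gmms):
    """gmms: list of per-class (weights, means, covs) parameter triples."""
    confs = [class_confidence(x, p, *g) for p, g in zip(priors, gmms)]
    return int(np.argmax(confs))
```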
Results
[Figure: training input data]
Results
[Tables: classification correctness per true label, one Gaussian vs. two Gaussians per class]
Thanks for your patience!