Segmentation In The Field of Medicine Advanced Image Processing course By: Ibrahim Jubran Presented To: Prof. Hagit Hel-Or
What we will go through today • A little inspiration. • Medical image segmentation methods: - Deformable Models. - Markov Random Fields. • Results.
Why Let A Human Do It, When The Computer Does It Better? • “Image data is of immense practical importance in medical informatics.” • For instance: CAT, MRI, CT, X-Ray, Ultrasound. All are represented as images, and as images they can be processed to extract meaningful information such as volume, shape, motion of organs, and layers, or to detect abnormalities.
Why Let A Human Do It, When The Computer Does It Better? Cont. • Here’s a task for you: look at this image. Could you manually mark the boundaries of the two abnormal regions? Answer: maybe…
And… what if I told you to do it in 3D? Answer? You would probably fail badly.
Common Methods: Deformable Models • Deformable models are curves whose deformations are determined by the displacement of a discrete number of control points along the curve. • Advantage: usually very fast convergence, depending on the predetermined number of control points. • Disadvantage: topology dependence; a single model can capture only one ROI, so in images with multiple ROIs we need to initialize multiple models.
Deformable models • A widely used family of methods in the medical field is Deformable Models, which is divided into two main categories: - Parametric Deformable Models. - Geometric Deformable Models. • We shall discuss each of them briefly.
Geometric Models • Geometric Models use a distance transformation to define the shape from the n-dimensional to an (n+1)-dimensional domain (where n=1 for curves, n=2 for surfaces on the image plane…)
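As a concrete illustration (not part of the original slides), here is a minimal sketch of the distance transformation for a 2D shape: the 1D contour is embedded as a 2D signed-distance surface whose zero level set is the original contour. It assumes NumPy and SciPy's distance_transform_edt; the helper name signed_distance is ours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: negative inside the shape, positive outside,
    (approximately) zero on the contour, which becomes the zero level set."""
    inside = distance_transform_edt(mask)      # distance to the background, for inside pixels
    outside = distance_transform_edt(~mask)    # distance to the shape, for outside pixels
    return outside - inside

# Toy example: a filled circle on a 100x100 grid.
yy, xx = np.mgrid[:100, :100]
mask = (xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2
phi = signed_distance(mask)                    # the (n+1)-dimensional embedding of the contour
```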
Example of a transformation • Here you see a transformation from 1D to 2D.
Geometric Models cont. • Advantages: 1) The evolving interface can be described by a single function, even if it consists of more than one curve. 2) The shape can be defined in a domain with dimensionality similar to the dataset space (for example, for 2D segmentation, a curve is transformed into a 2D surface), which allows a more mathematically straightforward integration of shape and appearance.
In Other Words… • We transform the n-dimensional image into an (n+1)-dimensional one, and then we try to find the best position for a “plane”, called the “zero level set”. • We start from the highest point and descend until the change in the gradient is below a predefined threshold.
And Formally… • The evolution of the distance function: ∂φ/∂t = g(∇I) · |∇φ| · (div(∇φ/|∇φ|) + c) • g is the speed function (close to zero on strong edges), and the zero level set C = {φ = 0} is our evolving contour • The curvature term, which comes from the derivatives of C (C′), forces the boundaries to be smooth.
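The following is a minimal NumPy/SciPy sketch of one explicit evolution step of the equation above. The function names (speed_function, level_set_step) and the exact stopping function g = 1 / (1 + |∇(G_σ * I)|²) are our assumptions from the standard geometric active contour literature, not the presenter's code.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def speed_function(image, sigma=1.0):
    """g = 1 / (1 + |grad(G_sigma * I)|^2): near 0 on strong edges, ~1 in flat regions."""
    grad_mag = gaussian_gradient_magnitude(image.astype(float), sigma)
    return 1.0 / (1.0 + grad_mag ** 2)

def level_set_step(phi, g, c=1.0, dt=0.1, eps=1e-8):
    """One explicit step of d(phi)/dt = g * |grad(phi)| * (curvature + c)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    # Curvature = divergence of the normalised gradient field of phi.
    nyy, _ = np.gradient(gy / norm)
    _, nxx = np.gradient(gx / norm)
    curvature = nxx + nyy
    return phi + dt * g * norm * (curvature + c)
```

Starting from a signed-distance embedding of an initial contour (for example the one from the previous sketch), repeating level_set_step until φ stops changing moves the zero level set towards the boundaries, while the curvature term keeps it smooth.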
Geometric Deformable Models: short demonstration (video).
Parametric Models • Also known as “Active Contours”, or Snakes. Sounds familiar? • The following slides are taken from Saar Arbel’s presentation about Snakes. (Figure: five instances of the evolution of a region-based deformable model.)
What is a snake? • A framework for drawing an object outline from a possibly noisy 2D image. • An energy-minimizing curve guided by external constraint forces and influenced by image forces that pull it towards features (lines, edges). • It represents an object boundary or some other salient image feature as a parametric curve.
Every snake includes: • An external energy function. • An internal energy function. • A set of k points (in the discrete world) or a continuous function that represents the curve.
So... Why snakes? • Snakes are autonomous and self-adapting in their search for a minimal energy state. • They can be easily manipulated using external image forces. • They can be used to track dynamic objects in the temporal as well as the spatial dimensions.
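As an illustration (not taken from Saar Arbel's slides or the presenter's code), here is a minimal sketch of one explicit gradient-descent iteration of a discrete snake. The weights alpha (elasticity), beta (rigidity) and gamma (step size) follow the classical snake formulation, and the simple edge-based external energy is our assumption; production implementations usually prefer the semi-implicit update.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def snake_step(points, image, alpha=0.1, beta=0.1, gamma=1.0, sigma=2.0):
    """One explicit step for a closed snake given as a (k, 2) array of (row, col) points."""
    smooth = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smooth)
    edge = gx ** 2 + gy ** 2                   # external energy is -edge (attracts to edges)
    ey, ex = np.gradient(edge)                 # force pulling the snake towards strong edges

    # Internal forces via discrete derivatives along the closed contour.
    prev, nxt = np.roll(points, 1, axis=0), np.roll(points, -1, axis=0)
    elastic = prev + nxt - 2 * points          # second difference (elasticity)
    prev2, nxt2 = np.roll(points, 2, axis=0), np.roll(points, -2, axis=0)
    rigid = -(prev2 - 4 * prev + 6 * points - 4 * nxt + nxt2)   # minus fourth difference

    # External force sampled at the current (sub-pixel) snake positions.
    fy = map_coordinates(ey, points.T, order=1)
    fx = map_coordinates(ex, points.T, order=1)
    external = np.stack([fy, fx], axis=1)

    return points + gamma * (alpha * elastic + beta * rigid + external)
```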
Common Methods: Learning-Based Classification • Learning-based pixel and region classification is among the popular approaches for image segmentation. • These methods use the advantages of supervised learning (training from examples) to assign to each image site a probability of belonging to the region of interest (ROI).
The MRF & The Cartoon Model (Figure: a cartoon model.)
The Markov Random Field • The name “Markov Random Field” might sound like a hard and scary subject at first… I thought so too when I started reading about it… • Unfortunately I still do.
An unrelated photo of Homer Simpson • Click to watch a demonstration of the MRF • https://www.youtube.com/watch?v=hfOfAqLWo5c
The MRF & The Cartoon Model • The MRF uses a model called the “cartoon model”, which assumes that the “world” consists of regions where low-level features change slowly, but across the boundaries these features change abruptly. • Our goal is to find X, a “cartoon”, which is a simplified version of the input image, with labels attached to the regions.
X & The Cartoon Model • X is modeled as a discrete random variable: the label of each pixel takes values in a finite set of labels.
The Cartoon Model Cont. • The discontinuities between those regions form a curve Γ (the contour). • The pair (X, Γ) forms a segmentation. • We will only focus on finding the best X, because once X is determined, Γ can be easily obtained.
More Cartoon Model Examples (Figure: original images and their labelled cartoons.)
The Probabilistic Approach For Finding The Model • For each possible segmentation / cartoon of the input image G we want to give a probability measure that describes how suitable the cartoon is for this specific image. • Let Ω be the set of all possible segmentations. Note that Ω is finite!
The Probabilistic Approach cont. • Assumptions: in this approach we assume that we have 2 sets of variables: 1) The observation random variables Y. The observation ℱ (a realization of Y) represents the low-level features in the image. 2) The hidden random variables X. The hidden entity X represents the segmentation itself.
Observation and Hidden Variables • Low-level features, for example:
Defining the Parameters needed • First we need to define how well a segmentation X fits the image features ℱ: P(X | ℱ), the image model. • We also want every segmentation to possess a set of desired properties: P(X), the prior, tells us how well X satisfies these properties.
Illustration of P(X | ℱ) (Figure: the original image, a segmentation for which P(X | ℱ) is high, and one for which P(X | ℱ) is low.)
Example • We want the regions to be more homogeneous. For example, in this image P(X) would be a large number
Example cont. • But in this image P(X) would be a very small number
Our Goal • Our goal is to maximize P(X | ℱ): the higher this probability is, the better the segmentation X fits the image features ℱ.
An unrelated photo of Homer Simpson (again) • Click to watch a demonstration of the MRF
A Lesson In Probability • As you might remember (probably not) from Probability lectures, Bayes’ rule gives: P(X | ℱ) = P(ℱ | X) · P(X) / P(ℱ). • Since P(ℱ) is constant for each image, it is dropped; therefore, we are looking for the X that maximizes the posterior, i.e. the X that maximizes P(ℱ | X) · P(X).
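A tiny numeric illustration (with made-up numbers, not from the slides) of why P(ℱ) can be dropped: the candidate that maximizes the posterior is the same one that maximizes P(ℱ | X) · P(X).

```python
import numpy as np

likelihood = np.array([0.20, 0.50, 0.30])   # P(F | X) for three candidate segmentations
prior      = np.array([0.60, 0.10, 0.30])   # P(X) for the same candidates
evidence   = np.sum(likelihood * prior)     # P(F): identical for every candidate

posterior = likelihood * prior / evidence   # Bayes' rule
print(np.argmax(posterior) == np.argmax(likelihood * prior))   # True: P(F) does not change the argmax
```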
Defining the Parameters needed Cont. • In addition to the probability distributions that we defined, our model also depends on certain parameters that we denote by Θ. • In supervised segmentation we assume these parameters are either known or that a training set is available. • In the unsupervised case, we will have to infer both X and Θ from the observable entity ℱ.
The MRF cont. • There are many features that one can take as the observation for the segmentation process: gray level, color, motion, different texture features, etc. • In our lesson we will be using a combination of classical, gray-level-based texture features and color, instead of directly modeling color textures.
Feature extraction • For each pixel s, we define a vector f_s which represents the features at that pixel. • The set of all feature vectors forms a vector field ℱ = { f_s | s ∈ S }, where S is the set of pixels. As you remember, ℱ is the observation, and it will be the input of the MRF segmentation algorithm.
Notes • REMINDER: our features will be texture and color. • We use the CIE-L*u*v* color space, so regions will be formed where both features are homogeneous, while boundaries will be present where there is a discontinuity in either color or texture.
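A minimal sketch (ours, not the presenter's code) of building the observation ℱ as one feature vector per pixel: CIE-L*u*v* color plus a simple local-variance texture measure. The local variance is only a stand-in for whatever texture features the presentation actually used, and the scikit-image/SciPy helpers are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2luv, rgb2gray

def extract_features(rgb_image, window=7):
    """Return an (H, W, 4) vector field F: L*, u*, v* plus local gray-level variance."""
    luv = rgb2luv(rgb_image)                              # (H, W, 3) color features
    gray = rgb2gray(rgb_image)
    mean = uniform_filter(gray, window)
    var = uniform_filter(gray ** 2, window) - mean ** 2   # local variance as a texture cue
    return np.dstack([luv, var[..., None]])               # f_s for every pixel s in S
```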
CIE-L*u*v* vs. RGB (Figure: a CIELUV color histogram compared with an RGB color histogram.)
The Markov Random Field Segmentation Model • Let’s start by defining the prior P(X): we define it in a way that represents the simple fact that a segmentation should be locally homogeneous (neighbouring pixels tend to share the same label). Let’s call this SQUIRREL
Definitions • |Ω| = the number of possible cartoons (segmentations). • X_s = the label of pixel s
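To make "locally homogeneous" concrete, here is a small sketch of a Potts-style prior energy over 4-neighbour pairs; P(X) is proportional to exp(-energy), so labelings with fewer disagreeing neighbours are more probable. The Potts potential and the weight beta are our assumption of the standard choice, not necessarily the exact potential used in the presentation.

```python
import numpy as np

def prior_energy(labels, beta=1.0):
    """Potts-style prior energy: beta times the number of 4-neighbour label disagreements."""
    horizontal = np.sum(labels[:, 1:] != labels[:, :-1])
    vertical = np.sum(labels[1:, :] != labels[:-1, :])
    return beta * (horizontal + vertical)

# A blocky (homogeneous) labelling gets lower energy, hence higher P(X), than a noisy one.
blocky = np.zeros((8, 8), dtype=int)
blocky[:, 4:] = 1
noisy = np.random.randint(0, 2, (8, 8))
print(prior_energy(blocky), prior_energy(noisy))
```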
And now… the FUN part !! • Don’t listen to me, just RUN!
The Image Process • We assume that the feature vector f_s of a pixel belonging to class λ follows a multivariate normal distribution N(μ_λ, Σ_λ).
The Image Process cont. • n = the dimension of our color-texture space. • λ = a pixel class. • μ_λ = the mean vector (the average of all the feature vectors within the class λ). • Σ_λ = the covariance matrix, which describes the correlation between every two features in a given class.
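A short sketch (ours) of this observation term: the Gaussian log-likelihood of every pixel's feature vector for a class (μ_λ, Σ_λ), and the purely pixelwise maximum-likelihood labelling it gives before any spatial prior is applied. A full MRF segmentation would combine these log-likelihoods with the Potts-style prior sketched earlier and optimize the total energy, e.g. with ICM or simulated annealing.

```python
import numpy as np
from scipy.stats import multivariate_normal

def class_log_likelihood(features, mu, cov):
    """log P(f_s | class) for every pixel s, assuming f_s ~ N(mu, cov).
    features: (H, W, n) field, mu: (n,), cov: (n, n)."""
    flat = features.reshape(-1, features.shape[-1])
    logp = multivariate_normal(mean=mu, cov=cov).logpdf(flat)
    return logp.reshape(features.shape[:2])

def pixelwise_labels(features, means, covs):
    """Maximum-likelihood label per pixel (no spatial prior yet)."""
    stacked = np.stack([class_log_likelihood(features, m, c)
                        for m, c in zip(means, covs)], axis=-1)
    return np.argmax(stacked, axis=-1)
```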