Learning Based Hierarchical Vessel Segmentation
• Presenter: Richard Socher (www.socher.org)
• Authors: Richard Socher, Adrian Barbu, Dorin Comaniciu
Overview
• Background
• Machine Learning
• Marginal Space Learning
• Probabilistic Boosting Trees
• Visual Features
• Haar and Steerable Features
• Hierarchical Vessel Segmentation
• Results and Future Work
Marginal Space Learning
• A general framework that tackles the problem of high-dimensional parameter spaces
• The posterior distribution of the parameters lies in a small region of the n-dimensional parameter space
• Idea: start in small marginal spaces and increase the dimensionality of the search space
• Fewer parameters have to be examined
• Large speed-ups
Marginal Spaces of Vessels
• Marginal Space 1: Gradient Candidates
• Marginal Space 2: Cross Segments
• Marginal Space 3: Quadrilaterals
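To make the staged search concrete, here is a minimal Python sketch of an MSL-style pipeline over these three marginal spaces; the propose/score/extend interface and the candidate budget are assumptions for illustration, not the paper's implementation.

```python
def marginal_space_search(frame, stages, n_keep=100):
    """Illustrative MSL-style staged search (the propose/score/extend
    interface is assumed, not the paper's API). Each stage lives in a
    larger marginal space; only the best candidates survive into it."""
    # Stage 1 proposes candidates in the smallest marginal space,
    # e.g. gradient candidates for the vessel border.
    candidates = stages[0].propose(frame)
    for k, stage in enumerate(stages):
        # Keep only the highest-scoring candidates of this marginal space.
        candidates = sorted(candidates,
                            key=lambda c: stage.score(frame, c),
                            reverse=True)[:n_keep]
        # Extend the survivors with the parameters of the next, larger
        # space (cross segments, then quadrilaterals).
        if k + 1 < len(stages):
            candidates = [e for c in candidates
                          for e in stages[k + 1].extend(frame, c)]
    return candidates
```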
Machine Learning: Probabilistic Boosting Trees
• Each node is a strong boosting classifier
• Its output is transformed into a probability
• During training, the samples are divided among the subnodes
• During testing, the probabilities of the subtrees are collected recursively from the top
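The formulas behind these steps appeared as images in the original slides; the following LaTeX is a reconstruction based on the cited PBT paper (Tu, 2005) and should be read as such.

```latex
% Strong (boosted) classifier at each tree node:
H(x) = \sum_{t=1}^{T} \alpha_t h_t(x)

% Its output transformed into a posterior probability:
q(+1 \mid x) = \frac{e^{2H(x)}}{1 + e^{2H(x)}}, \qquad q(-1 \mid x) = 1 - q(+1 \mid x)

% At test time the tree aggregates the subtree posteriors recursively:
\tilde{p}(y \mid x) = \sum_{l_1} \tilde{p}(y \mid l_1, x)\, q(l_1 \mid x)
```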
Visual Features
• Sample features and use them as input to the classifier
• Haar Features
• Thousands of cheap features computed via the integral image
• Steerable Features
• Useful for finding the orientation and scale of an object, given its location
• Intensity, gradient, …
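As a concrete illustration of why Haar features are cheap, here is a minimal NumPy sketch of the integral-image trick; the function names and the particular two-rectangle feature are illustrative choices, not taken from the paper.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns; any rectangle sum can then
    be read off with at most four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] using the integral image ii."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

def haar_two_rect_vertical(ii, y, x, h, w):
    """One example Haar feature: difference between the left and right
    halves of an (h x w) window anchored at (y, x)."""
    left = rect_sum(ii, y, x, y + h, x + w // 2)
    right = rect_sum(ii, y, x + w // 2, y + h, x + w)
    return left - right

# Usage sketch:
# img = np.asarray(frame, dtype=np.float64)
# ii = integral_image(img)
# feature = haar_two_rect_vertical(ii, 10, 10, 8, 8)
```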
Hierarchical Vessel Segmentation
• 1. Learning Based Edge Detection
• 2. Cross-Segment Detection: Width
• 3. Quadrilateral Detection: Length
• 4. Dynamic Programming
Level 1 – Learning Based Edge Detection
[Figures: original frame; large gradients; samples with gradient direction (gy, -gx)]
• Goal: rough estimation of the vessel borders
• Candidates are pixels with a large gradient
• The annotation is used to create positive and negative samples for the PBT learning
• Fast, and imposes few limitations on the higher levels
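A minimal NumPy sketch of how such border candidates could be extracted, assuming a simple gradient-magnitude threshold; the quantile-based threshold and the returned tuple layout are illustrative assumptions, and in the actual method the candidates are further scored by the trained PBT.

```python
import numpy as np

def border_candidates(frame, keep_fraction=0.05):
    """Keep the pixels with the largest gradient magnitude as candidate
    vessel-border points (Level 1 sketch, details assumed). The rotated
    gradient (gy, -gx) is attached to each sample, as in the slides."""
    gy, gx = np.gradient(frame.astype(np.float64))
    mag = np.hypot(gx, gy)
    thresh = np.quantile(mag, 1.0 - keep_fraction)
    ys, xs = np.nonzero(mag >= thresh)
    # Each candidate: position plus its gradient; a PBT trained on the
    # annotated borders then scores these candidates.
    return [(y, x, gy[y, x], gx[y, x]) for y, x in zip(ys, xs)]
```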
Level 2: Cross Segment Detection
• Goal: cross segments, loosely corresponding to the width of the vessel
• Candidates are created by walking in the opposite gradient direction from each Level 1 location until another Level 1 point is hit
• The segments and their Haar features are given to a PBT
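A minimal sketch of this candidate construction under stated assumptions (a boolean mask of Level 1 points and the gradient at the start point); the unit step size and the maximum trace length are illustrative.

```python
import numpy as np

def trace_cross_segment(start, border_mask, grad, max_len=80):
    """From a border candidate, step opposite to its gradient until
    another border candidate is hit; the traversed segment approximates
    the vessel width (Level 2 sketch, details assumed)."""
    y, x = start
    gy, gx = grad                        # gradient at the start point
    step = -np.array([gy, gx]) / (np.hypot(gy, gx) + 1e-9)
    pos = np.array([y, x], dtype=np.float64)
    for _ in range(max_len):
        pos += step
        yi, xi = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= yi < border_mask.shape[0] and 0 <= xi < border_mask.shape[1]):
            return None                  # left the image: no segment
        if border_mask[yi, xi] and (yi, xi) != (y, x):
            return (y, x), (yi, xi)      # cross-segment endpoints
    return None                          # no opposite border point found
```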
Level 3: Quadrilateral Detection
• Goal: find pairs of cross segments that, if connected as a quadrilateral, capture an area of the vessel
• The probability of such a quadrilateral indicates how likely two cross segments are to be connected in the complete vessel
• Steerable features are sampled and used for training a PBT: gradient, grey value, probability map of Level 1, differences in grey value
[Figures: coordinate system for steerable features; positive and negative training samples]
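A minimal sketch of sampling features in a coordinate system aligned with a cross segment; the grid layout, the sampled quantities, and the function signature are illustrative assumptions rather than the paper's exact feature set.

```python
import numpy as np

def steerable_features(frame, gx, gy, p0, p1, n_along=5, n_across=5):
    """Sample a small grid of points in a coordinate system aligned with
    a cross segment (p0 -> p1, given as (y, x) pairs) and read off local
    intensity and gradient values (Level 3 sketch, details assumed).
    The Level 1 probability map could be sampled the same way."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    axis = p1 - p0
    length = np.hypot(*axis) + 1e-9
    u = axis / length                    # direction along the segment
    v = np.array([-u[1], u[0]])          # direction perpendicular to it
    feats = []
    for a in np.linspace(0.0, 1.0, n_along):
        for b in np.linspace(-0.5, 0.5, n_across):
            p = p0 + a * axis + b * length * v
            yi = int(np.clip(round(p[0]), 0, frame.shape[0] - 1))
            xi = int(np.clip(round(p[1]), 0, frame.shape[1] - 1))
            feats.extend([frame[yi, xi], gx[yi, xi], gy[yi, xi]])
    return np.array(feats)
```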
Level 4: Dynamic Programming
• Goal: final vessel segmentation, the most likely connection of cross segments
• Formulation as the lowest-cost path in a weighted graph G = (V, E)
• V = cross segments; E = edges between segments if they form a quadrilateral
• Weight(e(v1, v2)) = log((1 - p)/p), where p = P(Quadrilateral(v1, v2))
• Solved by dynamic programming
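A minimal Python sketch of the path search under the stated weights; it assumes the cross segments can be indexed in order along the vessel so that edges only go forward, which makes a simple forward dynamic program sufficient. The function names and the probability clamping are illustrative.

```python
import math

def best_vessel_path(n_segments, quad_prob, start, end):
    """Lowest-cost path over cross segments 0..n_segments-1, assumed to
    be ordered along the vessel (sketch). quad_prob(i, j) returns the
    PBT probability that segments i and j form a vessel quadrilateral,
    or None if they cannot be connected."""
    cost = [math.inf] * n_segments
    prev = [None] * n_segments
    cost[start] = 0.0
    for i in range(start, end):
        if math.isinf(cost[i]):
            continue
        for j in range(i + 1, end + 1):
            p = quad_prob(i, j)
            if p is None or p <= 0.0:
                continue                  # no quadrilateral, no edge
            p = min(p, 1.0 - 1e-9)        # keep log((1 - p)/p) finite
            w = math.log((1.0 - p) / p)   # low cost for likely quads
            if cost[i] + w < cost[j]:
                cost[j] = cost[i] + w
                prev[j] = i
    # Backtrack from the last segment to recover the vessel path.
    path, k = [], end
    while k is not None:
        path.append(k)
        k = prev[k]
    return list(reversed(path)), cost[end]
```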
Results on the Testing Set
• Training on 134 frames
• Testing on 64 frames
• Detection rate: 90.1%
• False alarm rate: 29%
[Figure: example of a distracting side vessel]
Future Work Extension to full vessel tree through a junction point detector
Conclusion
• Hierarchical learning-based vessel segmentation method
• Highly driven by data: applicable to any tube-like structure
• Generalizes well to lower-quality X-ray images
• New representation of a vessel consisting of three marginal spaces: border points, vessel width, and vessel pieces (quadrilaterals)
• Novel use of MSL and steerable features in the segmentation of objects without a mean shape
• Results for single-vessel segmentation are preliminary but promising
• Work in progress: time consistency, full tree, …
• More details in my thesis: www.socher.org
Thank you! Questions?
• 1. Learning Based Edge Detection
• 2. Cross-Segment Detection: Width
• 3. Quadrilateral Detection: Length
• 4. Dynamic Programming
References
• J. H. Friedman, T. Hastie, and R. Tibshirani, "Additive Logistic Regression: A Statistical View of Boosting," 1998.
• A. Torralba, K. P. Murphy, and W. T. Freeman, "Sharing Features: Efficient Boosting Procedures for Multiclass Object Detection," in Proc. IEEE CVPR, 2004, pp. 762–769. http://people.csail.mit.edu/torralba/shortCourseRLOC/boosting/boosting.html
• T. Zhang, "Convex Risk Minimization," Annals of Statistics, 2004.
• Z. Tu, "Probabilistic Boosting-Tree: Learning Discriminative Models for Classification, Recognition, and Clustering," in Proc. ICCV, 2005, pp. 1589–1596. http://www.stat.ucla.edu/~ztu/publication/tu_z_pbt.pdf
• Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comaniciu, "Fast Automatic Heart Chamber Segmentation from 3D CT Data Using Marginal Space Learning and Steerable Features," in Proc. IEEE ICCV, Rio de Janeiro, Brazil, 2007. http://www.caip.rutgers.edu/~comanici/Papers/HeartSegmentation_ICCV07.pdf
• P. Viola and M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features," in Proc. IEEE CVPR, 2001. http://research.microsoft.com/~viola/Pubs/Detect/violaJones_CVPR2001.pdf