Convolutional Decision Trees
Dmitry Laptev, Joachim M. Buhmann
Machine Learning Lab, ETH Zurich
Connectomics*
* Reconstruction and study of the Connectome: a map of neuron connections
Connectomics
• Hard to automate
• On a whole-brain scale, automation is simply necessary
• Existing accurate techniques:
  • require huge training sets,
  • are very slow to train,
  • are infeasible on one CPU
• Convolutional Decision Trees:
  • trade off quality against speed,
  • make Connectomics fast
Other Computer Vision tasks
Segmentation tasks usually start with computing per-pixel probabilities
General pipeline
Image → Φ → Random Forest → Graph Cuts
Φ is a set of per-pixel features; the problem usually lies here, in the choice of Φ.
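To make the pipeline concrete, here is a minimal Python sketch of the first two stages (a hand-crafted Φ feeding a Random Forest); the function names and the particular filters are illustrative choices, not taken from the talk:

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def phi(image):
    """A toy static feature set Phi: raw intensity plus a few generic filters."""
    return np.stack([
        image,
        ndimage.gaussian_filter(image, sigma=2.0),  # Gaussian blur
        ndimage.sobel(image, axis=0),               # Sobel filters
        ndimage.sobel(image, axis=1),
    ], axis=-1)

def pixel_probabilities(train_img, train_labels, test_img):
    """Per-pixel foreground probabilities; these would then go into graph cuts."""
    rf = RandomForestClassifier(n_estimators=100)
    rf.fit(phi(train_img).reshape(-1, 4), train_labels.ravel())
    proba = rf.predict_proba(phi(test_img).reshape(-1, 4))[:, 1]
    return proba.reshape(test_img.shape)
```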
Static predefined features
• Generic features:
  • HoG features,
  • Gaussian blur,
  • Sobel filters,
  • SIFT features
• Domain-specifically designed:
  • Line Filter Transform for blood vessel segmentation,
  • Context Cue features for synapse detection
Feature learning
• Unsupervised feature learning (+ no feature design, + data-specific features, − not task-specific):
  • Bag of Visual Words,
  • Sparse Coding,
  • Autoencoders
• Supervised feature learning (+ task-specific features, − either a restricted feature class or very slow to train):
  • Sparse coding (quadratic),
  • KernelBoost,
  • Convolutional Neural Networks (CNN)
• Convolutional Decision Trees
Convolutional Decision Trees
• Learn informative features one by one:
  • a tree split is a convolution with some kernel,
  • maximize the relaxed information gain,
  • introduce a smoothness regularization
• Combine these features to form a decision tree (see the sketch below):
  • grow the tree while the performance increases,
  • adjust the regularization while growing
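A high-level sketch, in Python, of how such a tree might be grown; Node, Leaf, and the stopping rule are our own illustrative choices, and learn_kernel is the per-node kernel optimization sketched later in this deck:

```python
from collections import namedtuple
import numpy as np

Node = namedtuple("Node", "w left right")   # internal node: one learned kernel
Leaf = namedtuple("Leaf", "p")              # leaf: foreground probability

def grow_tree(patches, labels, depth=0, max_depth=6):
    """Recursively learn one convolutional split per node (illustrative)."""
    if depth == max_depth or labels.min() == labels.max():
        return Leaf(labels.mean())
    w = learn_kernel(patches, labels)       # sketched after the optimization slide
    right = patches @ w > 0                 # hard split once the kernel is learned
    if right.all() or not right.any():      # degenerate split: stop growing
        return Leaf(labels.mean())
    return Node(w,
                grow_tree(patches[~right], labels[~right], depth + 1, max_depth),
                grow_tree(patches[right],  labels[right],  depth + 1, max_depth))
```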
Relaxed information gain
• Oblique decision tree notation:
  • x — a vectorized image patch,
  • w — a vectorized convolution kernel
• Split predicate: 1[wᵀx > 0]
• Relaxed split predicate: σ(α·wᵀx), a smooth sigmoid surrogate for the hard threshold
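In code, the hard and relaxed predicates differ only in the threshold function; a minimal sketch, assuming a sigmoid relaxation with steepness α (consistent with the α-doubling step shown later in the talk):

```python
import numpy as np

def hard_split(w, x):
    """Hard predicate: route patch x by the sign of the response <w, x>."""
    return float(w @ x > 0)

def relaxed_split(w, x, alpha=1.0):
    """Smooth surrogate sigma(alpha * <w, x>) in (0, 1); differentiable in w."""
    return 1.0 / (1.0 + np.exp(-alpha * (w @ x)))
```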
Relaxed information gain
• The notation of Information Gain is almost the same...
• ...but now smooth!
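A hedged reconstruction of what "almost the same, but smooth" means: the usual information gain with the hard indicator replaced by its sigmoid relaxation (our notation, following the split predicate above):

```latex
% Standard information gain for a split of the sample set S into S_l, S_r:
\[
  IG(w) = H(S) - \frac{|S_\ell|}{|S|}\,H(S_\ell) - \frac{|S_r|}{|S|}\,H(S_r)
\]
% Relaxed version: each patch x_i contributes to the right branch with the
% soft weight \sigma(\alpha\, w^\top x_i) instead of the indicator
% \mathbb{1}[w^\top x_i > 0], so branch sizes and class proportions (and
% hence the entropies H) become smooth functions of the kernel w.
```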
Regularized information gain
• Smoothness term Γ...
• ...makes the functional "more convex"
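One natural form of such a smoothness term, offered as an assumption rather than the paper's exact definition: penalize differences between neighboring entries of the 2-D kernel, which is a positive semi-definite quadratic in w:

```latex
\[
  \Gamma(w) \;=\; \sum_{(i,j)\,\in\,\mathcal{N}} (w_i - w_j)^2 \;=\; w^\top L\, w
\]
% Here N ranges over neighboring pixel positions of the kernel and L is the
% corresponding graph Laplacian; adding \lambda\,\Gamma(w) to the negative
% relaxed gain smooths the learned kernels and convexifies the objective.
```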
Learning one informative feature
Iterate: optimize the relaxed objective for the current kernel with L-BFGS, then double the sigmoid steepness (α := 2·α), so the relaxed split anneals toward the hard split.*
* we refer to the paper for all the details
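A runnable sketch of the loop on this slide, assuming the relaxed split above and a simple 1-D stand-in for the smoothness penalty; neg_regularized_gain and the annealing schedule are our reading of "CDT → L-BFGS → α := 2·α", not the paper's exact objective:

```python
import numpy as np
from scipy.optimize import minimize

def neg_regularized_gain(w, X, y, alpha, lam=0.1):
    """Negative relaxed information gain plus a smoothness penalty (illustrative)."""
    s = 1.0 / (1.0 + np.exp(-alpha * (X @ w)))          # soft right-branch weights
    def entropy(weights):
        p = np.clip(np.average(y, weights=weights), 1e-9, 1 - 1e-9)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))
    gain = (entropy(np.ones(len(y)))
            - s.mean() * entropy(s)
            - (1 - s).mean() * entropy(1 - s))
    return -gain + lam * np.sum(np.diff(w) ** 2)        # crude stand-in for Gamma

def learn_kernel(X, y, n_rounds=8):
    """Learn one kernel: L-BFGS on the relaxed objective, doubling alpha each round."""
    w, alpha = np.random.randn(X.shape[1]) * 0.01, 1.0
    for _ in range(n_rounds):
        w = minimize(neg_regularized_gain, w, args=(X, y, alpha),
                     method="L-BFGS-B").x
        alpha *= 2.0                                    # anneal toward the hard split
    return w
```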
Examples of learned features
Left: static generic features; right: features learned with CDT.
Features are interpretable: there are edge detectors, curvature detectors, etc.
Results: Weizmann Horse dataset
(a): ground truth; (b)–(e): results for different tree depths.
Top: per-pixel probabilities; bottom: graph-cut segmentation.
Results: Weizmann Horse dataset
CDT is better than any general local technique, but worse than task-specific methods.
Results: Drosophila VNC segmentation
From left to right: results of anisotropic RF, CDT, and CNN.
Results: Drosophila VNC segmentation
CDT is 4.5% better than the second-best technique and only 2.2% worse than CNN.
CDT requires 10 hours of training on one CPU, while CNN requires a week on a GPU cluster (roughly a year on one CPU).
Summary
• Convolutional Decision Trees:
  • much faster (an overnight experiment),
  • require no special hardware,
  • trade off between speed and quality
• Consist of three main components:
  • relaxed information gain,
  • strictly convex regularization term,
  • iterative optimization algorithm
Thanks for your attention!
Questions / ideas are welcome