Exploring the Parameter Space of Image Segmentation Algorithms Xiaoyi Jiang Department of Mathematics and Computer Science, University of Münster, Germany
How to deal with parameters? • Typical approaches: • Ignore the problem altogether: “We have experimentally determined the parameter values …” • Supervised: train parameter values on training images with (manually specified) ground truth • Unsupervised: rely on heuristics that measure segmentation quality
How to deal with parameters? • Drawbacks: • “We have experimentally determined …” Who believes that? • Supervised: training of parameter values based on GT; GT is not always available, and trained parameters are not optimal for a particular image • Unsupervised: based on self-judgement heuristics; there is still no good solution for self-judgement
How to deal with parameters? • Basic assumption: • Known reasonable range of good values for each parameter • Our intention: explore the parameter subspace without GT • A: investigate the local behavior of parameters • B: adaptively compute an “optimal” segmentation within a parameter subspace (construction approach) • C: adaptively select an “optimal” parameter setting within a subspace (selection approach)
[Figure: “natural landscape” of a quality measure over the (p1, p2) parameter plane, with a region of optimal parameters]
A. Investigate local behavior of parameters • Belief: • There is a subspace of good parameter values • Reality: • Yes, but there are local outliers within such a subspace!
A. Investigate local behavior of parameters Felzenszwalb / Huttenlocher: Efficient graph-based image segmentation. Int. J. Computer Vision 59 (2004) 167–181
A. Investigate local behavior of parameters [Figure: close-up of two neighboring parameter settings; NMI = 0.70 vs. NMI = 0.26]
A. Investigate local behavior of parameters Deng / Manjunath: Unsupervised segmentation of color-texture regions in images and video. IEEE T-PAMI 23 (2001) 800–810 (JSEG) [Figure: two JSEG results at neighboring parameter settings; NMI = 0.76 vs. NMI = 0.61]
A. Investigate local behavior of parameters Frequency study on the Berkeley image set: strong (weak) outliers = segmentation results whose NMI lies more than 15% (10%) below the maximum NMI of the current image ensemble (5×5 subspace); see the sketch below. [Figure: outlier frequencies for JSEG and FH]
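The outlier criterion can be made concrete. A minimal sketch in Python, assuming the reading above (an outlier drops more than 15% or 10% below the subspace maximum); `nmi_grid` is a hypothetical 5×5 array of NMI scores for one image:

```python
import numpy as np

def count_outliers(nmi_grid, drop=0.15):
    # Outlier: NMI more than `drop` (relative) below the best NMI
    # in the sampled parameter subspace.
    best = nmi_grid.max()
    return int(np.sum(nmi_grid < (1.0 - drop) * best))

# Hypothetical 5x5 grid of NMI scores for one image:
nmi_grid = np.random.uniform(0.3, 0.8, size=(5, 5))
strong = count_outliers(nmi_grid, drop=0.15)
weak = count_outliers(nmi_grid, drop=0.10)
```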
A. Investigate local behavior of parameters • Danger: there are local outliers (salt-and-pepper noise)! • Solution: similar to median filtering (see the sketch below) • S = {S_1, …, S_n}: segmentations around some parameter setting • d(S_i, S_j): distance function between segmentations • Set median: Ŝ = argmin_{S ∈ S} Σ_{i=1}^{n} d(S, S_i)
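A minimal sketch of the set median computation, assuming 1 − NMI as the distance d between label images (the talk uses NMI as its quality measure; the actual distance function may differ):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def nmi_distance(seg_a, seg_b):
    # Distance between two label images: 1 - NMI of the flattened labels.
    return 1.0 - normalized_mutual_info_score(seg_a.ravel(), seg_b.ravel())

def set_median(segmentations, dist=nmi_distance):
    # Return the ensemble member with minimal summed distance to all others.
    n = len(segmentations)
    sums = [sum(dist(segmentations[i], segmentations[j])
                for j in range(n) if j != i)
            for i in range(n)]
    return segmentations[int(np.argmin(sums))]
```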
A. Investigate local behavior of parameters [Figure: FH results: best / worst / set median segmentation]
B: Adaptively compute an “optimal” segmentation • Belief: • There is a reasonable subspace of good parameter values. Some optimal parameter setting can be determined by experiments or training. • Reality: • Yes, but this parameter setting is not optimal for a particular image!
B: Adaptively compute an “optimal” segmentation Exactly the same parameter set applied to two images
B: Adaptively compute an “optimal” segmentation • Segmentation ensemble technique: • Use a sampled parameter subspace to compute an ensemble S of segmentations (see the sketch below) • Compute a final segmentation based on S • This combined segmentation tends to be a good one within the explored parameter subspace
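As an illustration of sampling a parameter subspace, the sketch below builds an ensemble with scikit-image's implementation of the FH algorithm; the sampled scale/sigma values are illustrative, not the ones used in the experiments:

```python
from itertools import product
from skimage import data
from skimage.segmentation import felzenszwalb

image = data.astronaut()  # any RGB test image

# Sample a small (scale, sigma) subspace of the FH parameters;
# the values below are illustrative only.
scales = [50, 100, 200, 400, 800]
sigmas = [0.4, 0.8]
ensemble = [felzenszwalb(image, scale=s, sigma=g, min_size=50)
            for s, g in product(scales, sigmas)]
```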
Excursus: Random walker based segmentation [Figure: (a) a two-region image; (b) user-defined seeds for each region: seeded (labeled) and unseeded (unlabeled) pixels; (c) a 4-connected lattice topology; (d) an undirected weighted graph] Edge weight: similarity between two nodes, based on e.g. intensity gradient or color changes; a low-weight edge marks a sharp color gradient. L. Grady: Random walks for image segmentation. IEEE T-PAMI 28 (2006) 1768–1783
Excursus: Random walker based segmentation The algorithm labels an unseeded pixel in the following steps: Step 1. Calculate the probability that a random walker starting at an unseeded pixel x first reaches a seed with label s. [Figure: per-pixel probabilities of first reaching the red seed and the blue seed]
Excursus: Random walker based segmentation Step 2. Label each pixel with the most probable seed destination. A segmentation corresponding to the region boundary is obtained by biasing the random walker to avoid crossing sharp color gradients. [Figure: per-pixel probability pairs and the resulting two-region labeling]
Excursus: Random walker based segmentation [Figure: original image; seeds indicating four objects; resulting segmentation; probability maps for labels 1–4]
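Grady's algorithm is available in scikit-image; a minimal sketch on a synthetic two-region image with two manually placed seeds (beta and the seed positions are arbitrary choices):

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic noisy two-region image
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[:, 32:] = 1.0
image += rng.normal(scale=0.25, size=image.shape)

# Seeds: 0 = unseeded, 1 = left region, 2 = right region
seeds = np.zeros(image.shape, dtype=np.uint8)
seeds[32, 4] = 1
seeds[32, 60] = 2

# beta biases the walker against crossing sharp gradients;
# return_full_prob=True would expose the Step-1 probability maps.
result = random_walker(image, seeds, beta=130, mode='bf')
```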
B: Adaptively compute an “optimal” segmentation • Connection to random walker based segmentation: • The input segmentations provide strong hints about where to automatically place seeds • Then we are in the same situation as image segmentation with manually specified seeds: apply the random walker algorithm to achieve a final segmentation • Random walker based segmentation ensemble technique: • Generate a graph from the input segmentations • Extract seed regions • Compute a final combined segmentation result
B: Adaptively compute an “optimal” segmentation • Graph generation: • Weight w_ij in G indicates how likely it is that two pixels p_i and p_j belong to the same image region • Solution: count the number n_ij of initial segmentations in which p_i and p_j share the same region label, and define the weight function as w_ij = exp(−β (1 − n_ij / N)) (see the sketch below)
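A sketch of the weight computation for the 4-connected pixel graph, assuming the ensemble is given as a list of integer label maps of equal shape (beta is a free parameter):

```python
import numpy as np

def edge_weights(ensemble, beta=10.0):
    # w_ij = exp(-beta * (1 - n_ij / N)) on the 4-connected pixel graph,
    # where n_ij counts ensemble members giving pixels i and j the same label.
    N = len(ensemble)
    stack = np.stack(ensemble)                               # (N, H, W)
    n_h = (stack[:, :, 1:] == stack[:, :, :-1]).sum(axis=0)  # horizontal pairs
    n_v = (stack[:, 1:, :] == stack[:, :-1, :]).sum(axis=0)  # vertical pairs
    return (np.exp(-beta * (1.0 - n_h / N)),
            np.exp(-beta * (1.0 - n_v / N)))
```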
B: Adaptively compute an “optimal” segmentation • Candidate seed region extraction: • Build a new graph G* by preserving only those edges with weight w_ij = 1 (p_i and p_j have the same label in all initial segmentations) and removing all other edges. All connected subgraphs in G* then form the initial seed regions (see the sketch below). • Grouping candidate seed regions: • Reduce the seed regions by iteratively merging the two closest candidate seed regions until some termination criterion (thresholding) is satisfied. • Optimization of K (number of seed regions): • Based on an approximation of the generalized median segmentation, investigating only the subspace consisting of the combination segmentations for all possible K ∈ [Kmin, Kmax].
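The candidate seed regions, i.e. the connected subgraphs whose pixels agree in all N input segmentations, can be sketched as follows (a rough illustration; the grouping and K-optimization steps are omitted):

```python
import numpy as np
from skimage.measure import label

def candidate_seed_regions(ensemble):
    # Connected regions whose pixels carry the same label in *all*
    # input segmentations, i.e. regions joined by edges with w_ij = 1.
    stack = np.stack(ensemble)                    # (N, H, W)
    flat = stack.reshape(len(ensemble), -1).T     # one label vector per pixel
    _, codes = np.unique(flat, axis=0, return_inverse=True)
    codes = codes.reshape(stack.shape[1:]) + 1    # +1: keep 0 free as background
    # Neighbouring pixels with identical codes agree in every segmentation
    return label(codes, connectivity=1)
```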
B: Adaptively compute an “optimal” segmentation [Figure: graph G, initial seeds, final result (optimal K)]
B: Adaptively compute an “optimal” segmentation [Figure: worst / median / best input segmentation vs. combination segmentation]
B: Adaptively compute an “optimal” segmentation Comparison (per image): worst / best / average input & combination
B: Adaptively compute an “optimal” segmentation f(n): number of images for which the combination result is worse than the best n input segmentations. The ensemble technique outperforms all 24 input segmentations in 78 cases. For 70% (210) of all 300 test images, our solution is beaten by at most 5 input segmentations.
B: Adaptively compute an “optimal” segmentation Comparison: average performance over all 300 test images (for each of 24 parameter settings)
B: Adaptively compute an “optimal” segmentation Dream: the dream must go on!
B: Adaptively compute an “optimal” segmentation • Additional applications: • 2.5D range image segmentation • Detection of double contours by dynamic programming (the intima and adventitia layers, for computing the intima-media thickness)
B: Adaptively compute an “optimal” segmentation • Segmenter combination: • There exists no universal segmentation algorithm that can successfully segment all images, and it is hard to know the optimal algorithm for a particular image. • Instead of looking for the best segmenter, which is hardly possible on a per-image basis, we now look for the best segmenter combiner. • “Instead of looking for the best set of features and the best classifier, now we look for the best set of classifiers and then the best combination method.” (Ho, 2002)
C: Adaptively select an optimal parameter setting • Belief: • There are heuristics to measure segmentation quality • Reality: • Yes, but optimizing such heuristics does not necessarily yield segmentations that humans perceive as good!
C: Adaptively select an optimal parameter setting • Observations: • Different segmenters tend to produce similar good segmentations, but dissimilar bad segmentations (the subspace of bad segmentations is substantially larger than the subspace of good segmentations) • Idea: compare segmentation results of different segmenters and identify good segmentations by means of similarity tests
C: Adaptively select an optimal parameter setting • Outline of the framework: • Compute N segmentations for each segmentation algorithm • Compute an N × N similarity matrix by comparing each segmentation of the first algorithm with each segmentation of the second algorithm • Determine the best parameter setting from the similarity matrix (see the sketch below)
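A sketch of the similarity-matrix step, using NMI as the similarity between two segmentations; the selection rule shown (the row with the highest maximum similarity) is one plausible choice, not necessarily the one used in the talk:

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def similarity_matrix(segs_a, segs_b):
    # M[i, j] = NMI between segmentation i of algorithm A and j of B.
    M = np.empty((len(segs_a), len(segs_b)))
    for i, sa in enumerate(segs_a):
        for j, sb in enumerate(segs_b):
            M[i, j] = normalized_mutual_info_score(sa.ravel(), sb.ravel())
    return M

def best_setting_index(segs_a, segs_b):
    # Choose A's setting whose result agrees best with some result of B.
    return int(similarity_matrix(segs_a, segs_b).max(axis=1).argmax())
```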
C: Adaptively select an optimal parameter setting Weaker segmenter CSC benefits from stronger FH/JSEG
C: Adaptively select an optimal parameter setting Also FH benefits from weaker CSC
C: Adaptively select an optimal parameter setting Also JSEG benefits from weaker CSC
Conclusions • Basic assumption: • Known reasonable range of good values for each parameter • Our intention: explore the parameter subspace without GT • A: investigate the local behavior of parameters • B: adaptively compute an “optimal” segmentation within a parameter subspace • C: adaptively select an optimal parameter setting within a subspace on a per-image basis
Conclusions • We could demonstrate: • A: local outliers can be successfully removed by the set median operator • B: the combination performance tends to reach the best input segmentation; in some cases the combined segmentation even outperforms the entire input ensemble • C: segmenters can help each other in selecting good parameter values
Conclusions • Combination (ensemble) techniques: • Generalized median: strings, graphs, clusterings, … • Multiple classifier systems • … • Combining image segmentations “Three cobblers combined equal the mastermind.” (Chinese proverb) Gracias!