1. 1/38 Signal- und Bildverarbeitung (Signal and Image Processing), 323.014: Image Analysis and Processing. Arjan Kuijper, 05.10.2006. Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenbergerstraße 56, A-4040 Linz, Austria
arjan.kuijper@oeaw.ac.at
2. 2/38 Contents Image analysis & processing deals with the investigation of images and with applying specific operations to them, such as enhancement, denoising, deblurring, and segmentation.
Mathematical methods that are commonly used are presented and discussed.
The focus will be on the axiomatic choice for the models, their mathematical properties, and their practical use.
3. 3/38 Image analysis & processing As image analysis and processing is a mixture of several disciplines, like physics, mathematics, vision, computer science, and engineering, this course is aimed at a broad audience.
Only basic knowledge of analysis is assumed and necessary mathematical tools will be outlined during the meetings.
4. 4/38 Some key words Images & Observations:
Scale space, regularization, distributions.
Filtering:
Edge detection, enhancement, Wiener, Fourier, Sobel, Canny,
Objects:
Differential structure, invariants, feature detection
Deep structure:
Catastrophes & Multi-scale Hierarchy
Variational Methods & Partial Differential Methods:
Perona-Malik, Anisotropic Diffusion, Total Variation, Mumford-Shah.
Curve Evolution:
Normal Motion, Mean Curvature Motion, Euclidean Shortening Flow.
5. 5/38 Contents 05.10.2006 : Introduction, Axioms
12.10.2006 : Gaussian kernel
19.10.2006 : Derivatives
09.11.2006 : Differential structure, invariants
16.11.2006 : Deep structure
23.11.2006 : Perona-Malik
30.11.2006 : Total Variation
07.12.2006 : Mean Curvature Motion
14.12.2006 : Mumford-Shah
11.01.2007 : presentation
18.01.2007 : presentation
25.01.2007 : presentation
6. 6/38 Examination Investigation and public presentation of recent work in image analysis provided at the course:
Front-End Vision and Multi-scale Image Analysis, B. M. ter Haar Romeny, Kluwer Academic Publishers, 2003.
Chapter 17: Optic Flow
Chapter 18: Color Differential Structure
Chapter 19: Steerable kernels
Handbook of Mathematical Models in Computer Vision, edited by N. Paragios, Y. Chen, and O. Faugeras, Springer, 2005.
Chapter 1: Diffusion Filters and Wavelets
Chapter 2: Total Variation Image Restoration
Chapter 3: PDE-Based Image and Surface Inpainting
An oral exam on the contents of the course.
7. 7/38 05.10.2006 : Introduction, Axioms
8. 8/38 Introduction Apertures and the notion of scale
Observations and the size of apertures
Mathematics, physics, and vision
We blur by looking
A critical view on observations
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht, Kluwer Academic Publishers, 2003, Chapter 1.
9. 9/38 Observations and the size of apertures What is a cloud?
Observations are always done by integrating some physical property with a measurement device.
10. 10/38 Measurements A typical image:
11. 11/38 Mathematics, physics, and vision Observations: math vs. physics
Objects have a size.
Points don't exist in reality.
Objects exist over a range of sizes.
They contain several scales.
Objects are measured by some device.
Cameras, the eye, …
Devices are finite.
They have a minimum and a maximum detection range: the inner and the outer scale. These determine the spatial resolution.
The device measures a hierarchy of structures.
12. 12/38 From Wikipedia: Powers of Ten Powers of Ten is a 1977 short documentary film which depicts the relative scale of the Universe in factors of ten (see also logarithmic scale and order of magnitude). It was written and directed by Charles and Ray Eames. The idea for the film appears to have come from the 1957 book Cosmic View by Kees Boeke.
13. 13/38 We blur by looking
14. 14/38 The visual system We see multi-scale:
The images only contain two values (black and white).
We regard them as grey-level images, or see structure in them.
15. 15/38 A critical view on observations Infinite resolution is impossible.
We cannot measure at infinite resolution.
Take uncommitted observations
There is no bias, no knowledge, no memory. We know nothing.
At least, at the first stage. Refine later on.
Allow different scales.
There's more than just pixels.
View all scales.
There is no preferred size.
Noise is part of the measurement.
In a measurement noise can only be separated from the observation if we have a model of the structures in the image, a model of the noise, or a model of both.
16. 16/38 Spurious resolution Don't trust the grid.
17. 17/38 You don't see what you see
18. 18/38 Summary Observations are necessarily done through a finite aperture.
Making this aperture infinitesimally small is physically impossible.
The size of the aperture determines a hierarchy of structures, which occur naturally in (natural) images.
The visual system exploits a wide range of such observation apertures in the front-end simultaneously, in order to capture the information at all scales.
Observed noise is part of the observation.
There is no way to separate the noise from the data if a model of the data, a model of the noise or a model of both is absent.
The aperture cannot take any form.
An example of a wrong aperture is the square pixel so often used when zooming in on images.
Such a representation gives rise to edges that were never present in the original image. This artificial extra information is called 'spurious resolution'.
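The spurious-resolution effect is easy to reproduce. Below is a minimal numpy sketch (the helper names zoom_nearest and gradient_energy are illustrative, not from the course): nearest-neighbour zooming, i.e. rendering square pixels, introduces sharp block edges and multiplies the edge energy without adding any observed information.

```python
import numpy as np

def zoom_nearest(img, factor):
    """Nearest-neighbour zoom: replicate each pixel into a factor x factor
    block, i.e. the 'square pixel' rendering criticized above."""
    return np.kron(img, np.ones((factor, factor)))

def gradient_energy(img):
    """Sum of squared finite differences: a crude measure of edge content."""
    return (np.diff(img, axis=0)**2).sum() + (np.diff(img, axis=1)**2).sum()

rng = np.random.default_rng(0)
img = rng.random((8, 8))           # a tiny 'measured' image
zoomed = zoom_nearest(img, 16)     # blocky zoom: edges that were never measured

print(gradient_energy(img))        # edge content of the measurement
print(gradient_energy(zoomed))     # 16x larger: purely spurious resolution
```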
19. 19/38 05.10.2006 : Introduction, Axioms (Let's have a short break first)
(What about the official Powers of Ten movie?)
20. 20/38 Axioms Foundations of scale space
Constraints for an uncommitted front-end
Axioms of a visual front-end
Axiomatic derivation of the Gaussian kernel
Scale space from causality
Scale space from entropy maximization
Derivatives of sampled, observed data
Scale space stack
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht, Kluwer Academic Publishers, 2003, Chapter 2.
21. 21/38 Constraints for an uncommitted front-end
22. 22/38 Axioms of a visual front-end
Uncommitted assumptions:
scale invariance (no preferred scale or size)
spatial shift invariance (no preferred location)
isotropy (no preferred orientation)
linearity (no memory or model)
separability (for the sake of computational ease)
23. 23/38 Axioms of a visual front-end Physical properties:
L [candela/m²] ↔ x [m]
intensity ↔ spatial position
Pi-theorem (Buckingham):
Physical laws must be independent of the choice of the fundamental parameters.
1. Scale invariance
By the Pi-theorem, the observed spectrum relative to the original can only depend on the dimensionless combination ωσ: L̂/L̂₀ = Ĝ(ωσ).
24. 24/38 An uncommitted front-end 2. Linear shift invariance
The observation is a convolution: L(x;σ) = (L₀ ∗ G)(x;σ).
In the Fourier domain this equals multiplication: L̂(ω;σ) = L̂₀(ω) Ĝ(ωσ) (a numerical check follows after this slide).
3. Isotropy
Ĝ can only depend on the length of the dimensionless vector: Ĝ = Ĝ(‖ωσ‖).
4. Linearity
A cascade of two observations must again be an observation of the same family: Ĝ(ωσ₁) Ĝ(ωσ₂) = Ĝ(ωσ₃).
Which implies an exponential form: Ĝ(ωσ) = exp(α ‖ωσ‖ᵖ).
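The claim that convolution becomes multiplication in the Fourier domain can be spot-checked numerically. A minimal numpy sketch (the FFT realizes circular convolution, so the direct sum below is also circular):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random(256)
kernel = np.exp(-np.linspace(-4.0, 4.0, 256)**2 / 2.0)
kernel /= kernel.sum()             # normalized, Gaussian-shaped aperture

# Multiplication in the Fourier domain ...
via_fourier = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# ... equals circular convolution in the spatial domain:
N = len(signal)
direct = np.array([sum(signal[m] * kernel[(n - m) % N] for m in range(N))
                   for n in range(N)])

print(np.allclose(via_fourier, direct))   # True
```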
25. 25/38 An uncommitted front-end 5. Separability
Separability over the spatial dimensions, together with the Euclidean norm from isotropy, forces p = 2: Ĝ(ωσ) = exp(α (ωσ)²).
Outer scale: for σ → ∞ the observation tends to the image average, so Ĝ must decay.
So α < 0; take α = −1/2 for later convenience.
26. 26/38 An uncommitted front-end Back to the spatial domain, normalizing the kernel: G(x;σ) = exp(−x²/(2σ²)) / √(2πσ²).
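A minimal Python sketch of this kernel as one would sample it in practice (the truncation at 4σ and the renormalization of the discrete weights are implementation choices, not part of the derivation):

```python
import numpy as np

def gaussian_kernel(sigma, truncate=4.0):
    """Sampled 1D Gaussian, cut off at +/- truncate*sigma and renormalized
    so the discrete weights sum to one (no global enhancement)."""
    radius = int(np.ceil(truncate * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

g = gaussian_kernel(2.0)
print(g.sum())          # 1.0: the image average is preserved
print(g.argmax())       # the center index: no preferred location
```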
27. 27/38 Scale space from causality Whatever you do to this image, you don't want to introduce white regions in the black ones.
No new level lines are to be created:
Causality
28. 28/38 Scale space from causality Causality: non-enhancement of local extrema.
Let ΔL = Lxx + Lyy; ΔL equals the sum of the eigenvalues of the Hessian.
Then at a maximum ΔL < 0 and Lt < 0, and at a minimum ΔL > 0 and Lt > 0.
So ΔL · Lt > 0.
Choose Lt = α ΔL with α > 0. With α = 1: Lt = ΔL.
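Causality can be watched at work in a direct discretization. A sketch of an explicit finite-difference scheme for Lt = ΔL (the step size τ and the reflecting boundary are choices of this sketch; τ ≤ 1/4 keeps the scheme stable on a unit grid):

```python
import numpy as np

def diffuse(L, steps, tau=0.2):
    """Explicit scheme for L_t = L_xx + L_yy with reflecting boundaries."""
    L = L.astype(float).copy()
    for _ in range(steps):
        Lp = np.pad(L, 1, mode='edge')
        lap = (Lp[:-2, 1:-1] + Lp[2:, 1:-1] +
               Lp[1:-1, :-2] + Lp[1:-1, 2:] - 4.0 * L)
        L += tau * lap
    return L

rng = np.random.default_rng(2)
img = rng.random((64, 64))
out = diffuse(img, steps=50)

# Non-enhancement of local extrema: the global range can only shrink.
print(img.min() <= out.min() and out.max() <= img.max())   # True
```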
29. 29/38 Scale space from causality Lt(x,y;t) = ΔL(x,y;t)
Obviously, for t → 0, L(x,y;t) → L₀.
The general solution (Green's function) of this diffusion equation is convolution of the original image with a Gaussian:
G(x,y;t) = exp(−(x² + y²)/(4t)) / (4πt)
Note: one rather uses 4t here than 2σ², i.e. t = σ²/2.
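The two parametrizations agree for t = σ²/2, which is quickly verified on a grid; a small numeric sketch:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(x, x)

def G_heat(t):
    """Green's function of L_t = Laplacian(L), written with 4t."""
    return np.exp(-(X**2 + Y**2) / (4.0 * t)) / (4.0 * np.pi * t)

def G_sigma(sigma):
    """The usual spatial parametrization, written with 2 sigma^2."""
    return np.exp(-(X**2 + Y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

sigma = 2.0
print(np.allclose(G_heat(sigma**2 / 2.0), G_sigma(sigma)))   # True
```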
30. 30/38 Scale space from entropy maximization A statistical measure for the disorder of the filter g is its entropy (1D for simplicity): H = −∫ g(x) log g(x) dx.
If the entropy is maximized, nothing is ordered: we know nothing.
Obviously, there are some constraints.
31. 31/38 Scale space from entropy maximization: Constraints
The function must be normalized; no global enhancement: ∫ g(x) dx = 1.
The mean of the measurement is at the location where we measure, say 0: ∫ x g(x) dx = 0.
There is a standard deviation, say σ: ∫ x² g(x) dx = σ².
The function is positive; it is a real aperture: g(x) > 0.
32. 32/38 Scale space from entropy maximization Maximize the entropy under these constraints via Lagrange multipliers: E[g] = ∫ (−g log g + λ₁ g + λ₂ x g + λ₃ x² g) dx.
Set the variational derivative w.r.t. g(x) equal to zero: −log g(x) − 1 + λ₁ + λ₂ x + λ₃ x² = 0.
So g(x) = exp(−1 + λ₁ + λ₂ x + λ₃ x²).
33. 33/38 Scale space from entropy maximization
∫ x g(x) dx = 0 → λ₂ = 0
∫ x² g(x) dx = σ² → λ₃ = −1/(2σ²)
g(x) > 0 → OK
∫ g(x) dx = 1 → λ₁ = log[e/√(2πσ²)]
⇒ g(x) = exp[−1 + 1 − log √(2πσ²) − x²/(2σ²)]
= exp[−x²/(2σ²)] / √(2πσ²)
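As a numerical sanity check (not part of the derivation), one can compare the entropy of this Gaussian against another density with the same variance, e.g. a Laplace density; the Gaussian should come out larger:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
sigma = 1.0

gauss = np.exp(-x**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
b = sigma / np.sqrt(2.0)                 # Laplace scale giving variance sigma^2
laplace = np.exp(-np.abs(x) / b) / (2.0 * b)

def entropy(g):
    """Differential entropy -int g log g dx, approximated on the grid."""
    return float(-np.sum(g * np.log(g)) * dx)

print(entropy(gauss))     # ~1.419 = 0.5*log(2*pi*e*sigma^2): the maximum
print(entropy(laplace))   # ~1.347: strictly smaller, as the derivation predicts
```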
34. 34/38 Derivatives of sampled, observed data The Gaussian kernel and all of its partial derivatives form the unique set of kernels for a front-end visual system that satisfies the constraints:
no preferred location, scale, or orientation, and linearity.
It is a one-parameter family of kernels, where the scale is the free parameter.
The derivative of the observed data is given by ∂x(L₀ ∗ G), which equals L₀ ∗ ∂xG: differentiation commutes with convolution, so one convolves with the derivative of the Gaussian.
35. 35/38 Derivatives of sampled, observed data Derivatives of a Gaussian:
The first-order derivative of an image gives edges
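A minimal 1D illustration (the step signal and σ = 2 are arbitrary choices of this sketch): convolving with the derivative of the Gaussian turns a step edge into a peak at the edge location.

```python
import numpy as np

def gaussian_derivative(sigma, truncate=4.0):
    """Sampled first derivative of a 1D Gaussian: -x/sigma^2 * G(x; sigma)."""
    radius = int(np.ceil(truncate * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    return -x / sigma**2 * g

signal = np.zeros(200)
signal[100:] = 1.0                        # a step edge at position 100
response = np.convolve(signal, gaussian_derivative(2.0), mode='same')
print(response.argmax())                  # ~100: the detected edge position
```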
36. 36/38 Gaussian scale space L(x;σ) = L₀(x) ∗ exp(−‖x‖²/(2σ²)) / (2πσ²)^(D/2), with D the spatial dimension.
L(x; ?) is called the Gaussian scale space image.
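Building the stack takes a few lines; a sketch assuming scipy is available (the image is random stand-in data and the σ samples are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
L0 = rng.random((128, 128))               # stand-in for a measured image

sigmas = [1.0, 2.0, 4.0, 8.0]             # exponentially sampled scales
stack = np.stack([gaussian_filter(L0, s) for s in sigmas])

print(stack.shape)                        # (4, 128, 128): one image, all scales
# Toward the outer scale the image approaches its average:
print([round(float(s.std()), 4) for s in stack])   # monotonically decreasing
```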
37. 37/38 Summary We have specific physical constraints for the early vision front-end kernel.
We are able to set up a 'first principle' framework from which the exact sensitivity function of the measurement aperture can be derived.
There exist many such derivations for an uncommitted kernel, all leading to the same unique result: the Gaussian kernel.
The assumptions of linearity, isotropy, homogeneity, and scale invariance;
The principle of causality;
Maximization of the entropy.
Differentiation of discrete data is done by the convolution with the derivative of the observation kernel.
This means that differentiation can never be done without blurring the data somewhat.
38. 38/38 Next Week The Gaussian kernel
The Gaussian kernel
Normalization
Cascade property, self similarity
The scale parameter
Relation to generalized functions
Separability
Relation to binomial coefficients
The Fourier transform of the Gaussian kernel
Central limit theorem
Anisotropy
The diffusion equation
Differentiation and regularization
Regularization
Regular tempered distributions and test functions
An example of regularization
Relation between regularization and Gaussian scale-space
39. 39/38 Powers of Ten revisited
Short popularized version