Computer Vision: Vision and Modeling
• Lucas-Kanade Extensions
• Support Maps / Layers: Robust Norm, Layered Motion, Background Subtraction, Color Layers
• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork Chap. 1-5)
  - Bayesian Decision Theory
  - Density Estimation
A Different View of Lucas-Kanade
$$E(v) = \sum_i \left( \nabla I(i)^T v - I_t(i) \right)^2 = \left\| \begin{pmatrix} \nabla I(1)^T v - I_t(1) \\ \nabla I(2)^T v - I_t(2) \\ \vdots \\ \nabla I(n)^T v - I_t(n) \end{pmatrix} \right\|^2$$
High gradient has higher weight. (Whiteboard.)
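As a rough stand-in for the whiteboard step, here is a minimal NumPy sketch of this least-squares view for a single global translation. The function name and the sign convention I_t = I0 - I1 are my assumptions, not from the slides; the normal equations make high-gradient pixels dominate, exactly as noted above.

```python
import numpy as np

def lucas_kanade_translation(I0, I1):
    """Least-squares solve for one global translation v = (dx, dy)
    minimizing E(v) = sum_i (grad I(i)^T v - I_t(i))^2."""
    I0 = I0.astype(float)
    I1 = I1.astype(float)
    Iy, Ix = np.gradient(I0)   # spatial gradients (axis 0 = rows = y)
    It = I0 - I1               # sign convention so that grad I . v = I_t
    # Normal equations: high-gradient pixels contribute larger terms
    # to the sums, i.e. they carry higher weight in the solution.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)   # v = (dx, dy)
```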
Constrained Optimization
Constrain $v \in \mathcal{V}$ and minimize the same stacked residual:
$$E(v) = \left\| \begin{pmatrix} \nabla I(1)^T v - I_t(1) \\ \vdots \\ \nabla I(n)^T v - I_t(n) \end{pmatrix} \right\|^2$$
Constraints = Subspaces
Constrain $v \in \mathcal{V}$ and minimize $E(v)$ over the subspace.
• Analytically derived: affine / twist / exponential map
• Learned: linear / non-linear subspaces
Motion Constraints
• Optical flow: local constraints
• Region layers: rigid/affine constraints
• Articulated: kinematic chain constraints
• Nonrigid: implicit/learned constraints
Constrained Function Minimization
Parameterize the flow as $V = M(\theta)$ and minimize
$$E(V) = \sum_i \left( \nabla I(i)^T v_i - I_t(i) \right)^2$$
over the parameters $\theta$.
2D Translation: Lucas-Kanade (2 DOF)
Every pixel shares the same flow: $v_i = (dx, dy)^T$ for all $i$, i.e. $V = (dx, dy, \dots, dx, dy)^T$; minimize $E(V)$ over $(dx, dy)$.
2D Affine: Bergen et al., Shi-Tomasi (6 DOF)
$$v_i = \begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix} + \begin{pmatrix} dx \\ dy \end{pmatrix}$$
Minimize $E(V)$ subject to this constraint.
Affine Extension
• Affine motion model captures:
  - 2D translation
  - 2D rotation
  - scale in x / y
  - shear
(Matlab demo; a Python stand-in follows.)
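The Matlab demo is not reproduced here; as a small Python sketch (parameter names and ordering are my own choice), the 6-DOF affine flow field shows how translation, rotation, scale, and shear all fit one linear model:

```python
import numpy as np

def affine_flow(params, xs, ys):
    """Flow at pixels (x, y) under the 6-DOF affine model
    v_i = [[a1, a2], [a3, a4]] [x_i, y_i]^T + [dx, dy]^T."""
    a1, a2, a3, a4, dx, dy = params
    vx = a1 * xs + a2 * ys + dx
    vy = a3 * xs + a4 * ys + dy
    return vx, vy

# Pure translation is params = (0, 0, 0, 0, dx, dy); a small rotation
# by angle t is (cos t - 1, -sin t, sin t, cos t - 1, 0, 0), since the
# flow of a rotation is v = (R - I) x.
```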
Affine Extension
Affine motion model → Lucas-Kanade (Matlab demo)
2D Affine: Bergen et al., Shi-Tomasi (6 DOF)
Constrain $v \in \mathcal{V}$, the 6-D affine subspace.
K-DOF Models
General case: a K-DOF constraint $V = M(\theta)$; minimize
$$E(V) = \sum_i \left( \nabla I(i)^T v_i - I_t(i) \right)^2$$
over the parameters $\theta$.
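A hedged sketch of how a general K-DOF model plugs into the minimization: Gauss-Newton over theta, chaining the per-pixel residuals through the model Jacobian dM/dtheta. The residual and jacobian callables here are hypothetical placeholders for whatever model M is in play:

```python
import numpy as np

def gauss_newton(residual, jacobian, theta, n_iters=20):
    """Minimize ||r(theta)||^2 for a K-DOF motion model V = M(theta).

    residual(theta): stacked per-pixel residuals, shape (n_pixels,)
    jacobian(theta): residual Jacobian w.r.t. theta, shape (n_pixels, K)
    """
    for _ in range(n_iters):
        r = residual(theta)
        J = jacobian(theta)
        # Linearize and solve the least-squares step J * step = -r.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + step
    return theta
```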
Quadratic Error Norm (SSD): ???
With $V = M(\theta)$, the objective $E(V) = \sum_i (\nabla I(i)^T v_i - I_t(i))^2$ is a quadratic (SSD) norm.
(Whiteboard: what about outliers?)
Support Maps / Layers
• L2 norm vs robust norm
• Dangers of least-squares fitting: under the L2 norm a single outlier can dominate the fit
[plots: the quadratic L2 penalty $d^2$ vs a robust, saturating penalty $\rho(d)$, as functions of the residual $d$]
Support Maps / Layers
• Robust norm: good for outliers
• but it requires nonlinear optimization
[plot: robust penalty $\rho(d)$]
Support Maps / Layers
• Iterative technique: add a weight to each pixel equation (whiteboard)
Support Maps / Layers
• How to compute the weights?
• → From the previous iteration: how well does the warped G match F?
• → Probabilistic distance, e.g. a Gaussian: $w_i = \exp\!\left(-\frac{(G_w(x_i) - F(x_i))^2}{2\sigma^2}\right)$
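A sketch of this reweighting loop in Python (the function names and the choice of sigma are illustrative): Gaussian weights from the previous iteration's residuals, then one weighted Lucas-Kanade step.

```python
import numpy as np

def irls_weights(residuals, sigma):
    """Gaussian weights from the previous iteration's residuals:
    w_i = exp(-r_i^2 / (2 sigma^2)); outliers get weight near zero."""
    return np.exp(-residuals ** 2 / (2.0 * sigma ** 2))

def weighted_translation(Ix, Iy, It, w):
    """One reweighted Lucas-Kanade step: each pixel equation is
    multiplied by its weight w_i before forming the normal equations."""
    A = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
                  [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])
    b = np.array([np.sum(w * Ix * It), np.sum(w * Iy * It)])
    return np.linalg.solve(A, b)
```

Alternating these two steps (solve, then reweight from the new residuals) is the standard iteratively reweighted least-squares recipe for a robust norm.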
Error Norms / Optimization Techniques
• SSD: Lucas-Kanade (1981) [Newton-Raphson]
• SSD: Bergen et al. (1992) [coarse-to-fine]
• SSD: Shi-Tomasi (1994) [good features]
• Robust norm: Jepson-Black (1993) [EM]
• Robust norm: Ayer-Sawhney (1995) [EM + MRF]
• MAP: Weiss-Adelson (1996) [EM + MRF]
• ML/MAP: Bregler-Malik (1998) [twists / EM]
• ML/MAP: Irani + Anandan (2000) [SVD]
Computer Vision: Vision and Modeling
• Lucas-Kanade Extensions
• Support Maps / Layers: Robust Norm, Layered Motion, Background Subtraction, Color Layers
• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork Chap. 1-5)
  - Bayesian Decision Theory
  - Density Estimation
Support Maps / Layers
• Black-Jepson '95
Support Maps / Layers
• More general: layered motion (Jepson/Black, Weiss/Adelson, …)
Support Maps / Layers
• Special cases of layered motion:
  - Background subtraction (see the sketch below)
  - Outlier rejection (equivalent to the robust norm)
  - Simplest case: each layer has uniform color
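As a rough illustration of the background-subtraction special case, a sketch under the assumptions of a static camera, a per-pixel median background model, and an arbitrary threshold (all three are my choices, not from the slides):

```python
import numpy as np

def background_layer(frames, new_frame, thresh=25):
    """Simplest background-subtraction layer: build a per-pixel median
    background model, then mark pixels that deviate as foreground."""
    bg = np.median(np.stack(frames).astype(float), axis=0)
    support = np.abs(new_frame.astype(float) - bg) > thresh
    return support   # True = foreground layer, False = background layer
```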
Support Maps / Layers
• Color layers: P(skin | F(x,y))
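The slide only names the quantity P(skin | F(x,y)). One common way to realize it, sketched here under the assumption of a single Gaussian skin-color model in (r, g) chromaticity space, with mu, cov, and the prior estimated from labeled skin pixels (all assumptions of this sketch):

```python
import numpy as np

def skin_layer(frame_rg, mu, cov, prior=0.5):
    """Color-layer support map: per-pixel skin probability from a
    Gaussian skin-color model. Returns the unnormalized posterior
    P(skin | F(x,y)) ~ p(F | skin) * P(skin)."""
    d = frame_rg - mu                                 # (H, W, 2)
    inv = np.linalg.inv(cov)
    m = np.einsum('hwi,ij,hwj->hw', d, inv, d)        # Mahalanobis dist.
    lik = np.exp(-0.5 * m) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return lik * prior
```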
Computer Vision: Vision and Modeling
• Lucas-Kanade Extensions
• Support Maps / Layers: Robust Norm, Layered Motion, Background Subtraction, Color Layers
• Statistical Models (Duda+Hart+Stork Chap. 1-5)
  - Bayesian Decision Theory
  - Density Estimation
Statistical Models / Probability Theory
• Statistical models: represent uncertainty and variability
• Probability theory: the proper mechanism for reasoning about uncertainty
• Basic facts (whiteboard)
General Performance Criteria
• Optimal Bayes
• With applications to classification
Bayes Decision Theory
Example: character recognition.
Goal: classify a new character so as to minimize the probability of misclassification.
Bayes Decision Theory
• 1st concept: priors
P(a) = 0.75, P(b) = 0.25
Sample: a a b a b a a b a b a a a a b a a b a a b a a a a b b a b a b a a b a a
Bayes Decision Theory
• 2nd concept: conditional probability
[histograms: class-conditional densities P(x | a) and P(x | b), where x = number of black pixels]
Bayes Decision Theory
Examples (reading values off the class-conditional histograms):
• x = 7
• x = 8
• Well… at x = 8, weight the likelihoods by the priors P(a) = 0.75, P(b) = 0.25
• x = 9
Bayes Decision Theory
• Bayes theorem:
$$P(C_k \mid x) = \frac{p(x \mid C_k)\, P(C_k)}{p(x)}$$
$$\text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{normalization factor}}$$
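To make the theorem concrete, a toy version of the character example. The sample counts, bins, and priors below are invented for illustration; the slides' actual densities live in the figures.

```python
import numpy as np

# x = number of black pixels; class-conditional histograms from samples.
x_a = np.array([5, 6, 7, 7, 8, 6, 7, 5, 6, 7, 8, 6])   # samples of 'a'
x_b = np.array([8, 9, 10, 9, 8, 9])                     # samples of 'b'
bins = np.arange(0, 15)
p_x_a, _ = np.histogram(x_a, bins=bins, density=True)   # p(x | a)
p_x_b, _ = np.histogram(x_b, bins=bins, density=True)   # p(x | b)
P_a, P_b = 0.75, 0.25                                   # priors

def classify(x):
    """Bayes rule: pick the class with the larger posterior; the
    normalizer p(x) is common to both classes and can be dropped."""
    post_a = p_x_a[x] * P_a
    post_b = p_x_b[x] * P_b
    return 'a' if post_a >= post_b else 'b'

print(classify(7), classify(9))   # -> a b
```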
Bayes Decision Theory
• Example: compute the posteriors P(a | x) and P(b | x) from the class-conditional densities and the priors.
• For x > 8 the posterior of b dominates → decide class b.
Bayes Decision Theory
Goal: classify a new character so as to minimize the probability of misclassification.
Decision boundaries: [plot of the posteriors with the boundary between the class regions marked]
Bayes Decision Theory
Decision regions: the boundaries partition feature space into regions R1, R2, R3, one per decision.
Bayes Decision Theory
Goal: minimize the probability of misclassification. For two classes:
$$P(\text{error}) = P(x \in R_2, C_1) + P(x \in R_1, C_2) = \int_{R_2} p(x, C_1)\, dx + \int_{R_1} p(x, C_2)\, dx$$
This is minimized by assigning each x to the class with the largest posterior $P(C_k \mid x)$.
Bayes Decision Theory
Discriminant functions:
• class membership is based solely on the relative sizes of the functions
• reformulate the classification process in terms of discriminant functions $y_k(x)$: x is assigned to $C_k$ if
$$y_k(x) > y_j(x) \quad \text{for all } j \neq k$$
Bayes Decision Theory
Discriminant function examples (any monotone function of the posterior works):
$$y_k(x) = P(C_k \mid x), \qquad y_k(x) = p(x \mid C_k)\, P(C_k), \qquad y_k(x) = \ln p(x \mid C_k) + \ln P(C_k)$$
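A small sketch of the log-likelihood-plus-log-prior discriminant for 1-D Gaussian class models; the means, sigmas, and priors are invented to echo the character example, not taken from the slides.

```python
import numpy as np

def gaussian_discriminant(x, mu, sigma, prior):
    """y_k(x) = ln p(x | C_k) + ln P(C_k) for a 1-D Gaussian class model;
    any monotone transform of the posterior is a valid discriminant."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - np.log(sigma * np.sqrt(2 * np.pi)) + np.log(prior))

# Assign x to the class with the largest discriminant value.
x = 8.0
y_a = gaussian_discriminant(x, mu=6.5, sigma=1.0, prior=0.75)
y_b = gaussian_discriminant(x, mu=9.0, sigma=1.0, prior=0.25)
print('a' if y_a > y_b else 'b')
```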