Background Subtraction based on Cooccurrence of Image Variations
Seki, Wada, Fujiwara & Sumi - 2003
Presented by: Alon Pakash & Gilad Karni
Motivation
Detecting foreground objects in dynamic scenes involving swaying trees and fluttering flags.
Background Subtraction so far:
• Dynamic update of the background model
• Stationary background
• Permissible range of image variation
• Cooccurrence…
Permissible range of image variation
[Figure: the background model as a permissible region in feature space; the input image, represented as a vector (chosen pixels, DCT coefficients, …), is tested against it]
The Problem:
The background model is learned from a training set, so it inherits the training set's variance.
BIG variance = detection sensitivity decreases!
The Solution:
Dynamically narrow the permissible range… by using the cooccurrence.
“Cooccurrence”
• What is cooccurrence? Image variations at neighboring image blocks have strong correlation!
Permissible range with Cooccurrence
[Figure: in feature space, a cooccurrence DB of background image variations narrows the background model (compared to the model without cooccurrence) before the input image, as a vector, is tested against it]
Cooccurrence – “Is it really that good?”
• Partition the image into N×N blocks
• At time t, block u is represented by the vector i(u, t)
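To make the block notation concrete, here is a minimal Python/NumPy sketch (names and block size illustrative, not the paper's code) of partitioning a frame into N×N blocks and flattening each block u into its vector i(u, t):

```python
import numpy as np

def block_vectors(frame, N=8):
    """Partition a grayscale frame into N x N blocks and return each
    block flattened into a length-N^2 vector -- the i(u, t) of the
    slides. Block index u is the (row, col) position in the block grid.
    Assumes the frame dimensions are multiples of N (an assumption of
    this sketch)."""
    H, W = frame.shape
    vectors = {}
    for r in range(0, H, N):
        for c in range(0, W, N):
            u = (r // N, c // N)
            vectors[u] = frame[r:r + N, c:c + N].reshape(-1).astype(float)
    return vectors

frame = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)  # toy frame
vecs = block_vectors(frame, N=8)
print(len(vecs))           # 64 blocks
print(vecs[(0, 0)].shape)  # (64,) i.e. 1 x N^2
```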
Illustrating Principal Components Analysis
Our goal: revealing the internal structure of the data in a way which best explains the variance in the data.
Each N×N block is rasterized into a 1×N² vector.
[Figure: the 1×N² block vectors projected onto the principal axes e1 and e2]
Another Example: Tree sway
[Figure: two neighboring blocks, A and B, on a swaying tree]
Cooccurrence – Cont’d
• Also holds for:
  • Higher-dimensional feature spaces
  • Other neighboring blocks in the picture
  • Fluttering flags
• Conclusion: Neighboring image blocks have strong correlation!
Background Subtraction Method
The general idea: narrow the range of background image variations by estimating the background pattern of each block from the neighboring blocks in the input image.
[Figure: trajectories of blocks A and B in their eigenspaces (axes e1, e2, e3): the input pattern Z(A,t) selects nearby training samples ZA in block A's eigenspace, and the co-occurring samples ZB — e.g. Z(B,t1), Z(B,t2), Z(B,t3) — in block B's eigenspace yield the estimate Z*]
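A hedged sketch of the estimation step, under the assumption that the cooccurrence DB stores paired training samples of neighboring blocks in their eigenspaces (all names, the k-nearest averaging, and the threshold rule are illustrative, not the paper's exact procedure):

```python
import numpy as np

def estimate_background(z_A_input, Z_A_train, Z_B_train, k=3):
    """Given the input pattern of neighboring block A in its eigenspace,
    find the k nearest training samples of A and average the
    co-occurring training samples of the focused block B, giving the
    narrowed background prediction Z* for B."""
    d = np.linalg.norm(Z_A_train - z_A_input, axis=1)
    nearest = np.argsort(d)[:k]
    return Z_B_train[nearest].mean(axis=0)

def is_foreground(z_B_input, z_star, thresh):
    """Block B is foreground if its input pattern deviates from the
    predicted background Z* by more than a threshold."""
    return np.linalg.norm(z_B_input - z_star) > thresh

# Toy cooccurrence: B's pattern tracks A's pattern exactly.
rng = np.random.default_rng(0)
Z_A_train = rng.normal(size=(50, 3))
Z_B_train = 0.5 * Z_A_train
z_star = estimate_background(Z_A_train[7], Z_A_train, Z_B_train, k=1)
```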
Advantages
• Since the method utilizes the spatial property of background image variations, it is not affected by quick image variations.
• The method applies not only to background object motion, such as swaying tree leaves, but also to illumination variations.
The experiment procedure
• Number of dimensions?
• Number of neighbors?
Num. of Dimensions
• Determination of the dimension of the eigenspace: increase it until more than 90% of the blocks are “effective”.
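One plausible reading of this criterion, sketched in Python: a block counts as “effective” once its eigenspace captures enough of its variance, and the dimension grows until 90% of blocks qualify. The cumulative-eigenvalue threshold and both fractions are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def choose_dim(blocks_X, var_ratio=0.9, block_frac=0.9):
    """Pick the smallest eigenspace dimension k such that at least
    `block_frac` of the blocks have a cumulative eigenvalue ratio
    >= `var_ratio` ("effective" blocks). Thresholds are illustrative."""
    ratios = []
    for X in blocks_X:                     # one (T, N^2) array per block
        Xc = X - X.mean(axis=0)
        ev = np.linalg.svd(Xc, compute_uv=False) ** 2   # eigenvalues
        ratios.append(np.cumsum(ev) / ev.sum())
    ratios = np.array(ratios)              # (num_blocks, num_eigvals)
    for k in range(1, ratios.shape[1] + 1):
        if np.mean(ratios[:, k - 1] >= var_ratio) >= block_frac:
            return k
    return ratios.shape[1]

# Toy blocks whose temporal variation is rank-1, so one dimension suffices.
rng = np.random.default_rng(0)
blocks = [rng.normal(size=(50, 1)) @ rng.normal(size=(1, 16))
          for _ in range(10)]
k = choose_dim(blocks)
print(k)   # 1
```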
Num. of neighbors
• Determination of the number of neighbors: add neighbors until the error (the Euclidean distance in the eigenspace) is small enough.
[Figure: Z(B,t1), Z(B,t2), Z(B,t3) and the estimate Z* in block B's eigenspace]
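A minimal sketch of this stopping rule, assuming the per-count errors have already been measured (the function name, dict layout, and numbers are illustrative):

```python
def choose_num_neighbors(errors_by_k, max_err):
    """errors_by_k[k] is the mean Euclidean distance in the eigenspace
    between the estimate Z* and the true background pattern when k
    neighboring blocks are used. Return the smallest k whose error is
    below max_err; fall back to the largest k tried."""
    for k in sorted(errors_by_k):
        if errors_by_k[k] <= max_err:
            return k
    return max(errors_by_k)

errors = {1: 0.42, 2: 0.17, 3: 0.06}   # illustrative measured errors
k = choose_num_neighbors(errors, max_err=0.1)
print(k)   # 3
```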
Comparison to other methods
• Method 1: learns in the same feature space for each block and performs background subtraction using Mahalanobis distances.
• Method 2: does not use cooccurrence; relies only on the input pattern in the focused block.
• Method 3: the proposed method.