EECS 274 Computer Vision Tracking
Tracking • Motivation: obtain a compact representation from an image/motion sequence/set of tokens • Should support the application at hand • A broad theory is absent at present • Some slides from R. Collins' lecture notes • Reading: FP Chapter 17
Tracking • Very general model: • We assume there are moving objects, which have an underlying state X • There are measurements Y, some of which are functions of this state • There is a clock • at each tick, the state changes • at each tick, we get a new observation • Examples • object is a ball, state is 3D position + velocity, measurements are stereo pairs • object is a person, state is body configuration, measurements are video frames, and the clock is the camera frame rate (30 fps)
Tracking as induction • Assume data association is done • we'll talk about this later; it is a dangerous assumption • Do correction for the 0th frame • Assume we have a corrected estimate for the ith frame • show we can do prediction for frame i+1, then correction for frame i+1
General model for tracking • The moving object of interest is characterized by an underlying state X • State X gives rise to measurements or observations Y • At each time t, the state changes to Xt and we get a new observation Yt • This is best explained with a graphical model: a chain of states X1 → X2 → … → Xt, where each state Xt emits an observation Yt
Induction step • Given the corrected estimate P(Xi | y0, …, yi): • Prediction: P(Xi+1 | y0, …, yi) = ∫ P(Xi+1 | Xi) P(Xi | y0, …, yi) dXi • Correction: P(Xi+1 | y0, …, yi+1) ∝ P(yi+1 | Xi+1) P(Xi+1 | y0, …, yi)
Linear dynamic models • Use the notation ~ to mean "has the pdf of"; N(a, b) is a normal distribution with mean a and covariance b • Then a linear dynamic model has the form xi ~ N(Di xi-1, Σd,i), yi ~ N(Mi xi, Σm,i) • D: transition matrix, M: observation (measurement) matrix • This is much, much more general than it looks, and extremely powerful
Examples • Drifting points • we assume that the new position of the point is the old one plus noise: xi ~ N(xi-1, Σd,i) • For the measurement model, we may not need to observe the whole state of the object • e.g., for a point moving in 3D, at the 3kth tick we see x, at the (3k+1)th tick we see y, at the (3k+2)th tick we see z • in this case, we can still make decent estimates of all three coordinates at each tick • This property, which does not apply to every model, is called observability
Examples • Points moving with constant velocity • Points moving with constant acceleration • Periodic motion • etc.
Moving with constant velocity • We have ui = ui-1 + Δt vi-1 + εi and vi = vi-1 + ζi (the Greek letters denote noise terms) • Stack position u and velocity v into a single state vector xi = (ui, vi), so that xi ~ N(D xi-1, Σd) with D = [[1, Δt], [0, 1]] and M = [1, 0] • which is the form we had above
Moving with constant acceleration • We have ui = ui-1 + Δt vi-1 + εi, vi = vi-1 + Δt ai-1 + ζi, and ai = ai-1 + ξi (the Greek letters denote noise terms) • Stack (u, v, a) into a single state vector xi, so that xi ~ N(D xi-1, Σd) with D = [[1, Δt, 0], [0, 1, Δt], [0, 0, 1]] • which is the form we had above • (a simulation sketch follows)
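To make the matrices concrete, here is a minimal sketch (Python/NumPy; the variable names and noise values are assumptions, not from the slides) that simulates the constant velocity model, with a comment noting how the constant acceleration case extends it:

```python
import numpy as np

# Simulate x_i ~ N(D x_{i-1}, Sigma_d), y_i ~ N(M x_i, sigma_m^2)
# for the 1D constant velocity model with state x = (u, v).
dt = 1.0
D = np.array([[1.0, dt],
              [0.0, 1.0]])       # u_i = u_{i-1} + dt*v_{i-1}; v_i = v_{i-1}
M = np.array([[1.0, 0.0]])       # we observe position only
Sigma_d = 0.01 * np.eye(2)       # process noise covariance (assumed value)
sigma_m = 0.1                    # measurement noise std (assumed value)
# For constant acceleration, use x = (u, v, a) and
# D = [[1, dt, 0], [0, 1, dt], [0, 0, 1]].

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])         # start at position 0 with velocity 1
for i in range(10):
    x = D @ x + rng.multivariate_normal(np.zeros(2), Sigma_d)
    y = M @ x + sigma_m * rng.standard_normal(1)
    print(f"tick {i}: state {x.round(3)}, measurement {y[0]:.3f}")
```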
Constant velocity dynamic model (figure): the 1st component of the state (position) and the velocity are plotted against time; a second plot shows the measurements together with the 1st component of the state over time.
Constant acceleration dynamic model (figure): position and velocity are plotted against time.
Kalman filter • Key ideas: • Linear models interact uniquely well with Gaussian noise • make the prior Gaussian, and everything else stays Gaussian, so the calculations are easy • Gaussians are really easy to represent • once you know the mean and covariance, you're done
Kalman filter in 1D • Dynamic model: xi ~ N(di xi-1, σd,i2), yi ~ N(mi xi, σm,i2) • Notation: Xi- and (σi-)2 denote the predicted mean and variance (before the ith measurement); Xi+ and (σi+)2 denote the corrected mean and variance (after the ith measurement)
Prediction for 1D Kalman filter • Because the new state is obtained by • multiplying the old state by a known constant • adding zero-mean noise • the predicted mean for the new state is the constant times the corrected mean for the old state: Xi- = di Xi-1+ • and the predicted variance is the sum of the squared constant times the old state variance and the noise variance: (σi-)2 = σd,i2 + di2 (σi-1+)2 • why: the old state is a normal random variable; multiplying a normal rv by a constant multiplies its mean by that constant and its variance by the square of the constant; adding zero-mean noise adds zero to the mean; and adding independent rv's adds their variances
Correction for 1D Kalman filter • Pattern match to identities given in the book • basically, guess the integrals, and get Xi+ = (Xi- σm,i2 + mi yi (σi-)2) / (σm,i2 + mi2 (σi-)2) and (σi+)2 = σm,i2 (σi-)2 / (σm,i2 + mi2 (σi-)2) • Notice: • if the measurement noise is small, we rely mainly on the measurement • if it's large, mainly on the prediction • (see the sketch below)
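A minimal 1D sketch of these two steps (Python; function and variable names are my own, following the formulas above):

```python
def kf1d_predict(mean, var, d, sigma_d):
    """Prediction: the state is multiplied by d and zero-mean noise
    with std sigma_d is added."""
    return d * mean, d * d * var + sigma_d ** 2

def kf1d_correct(mean_pred, var_pred, y, m, sigma_m):
    """Correction: blend prediction and measurement y, weighted by
    their noise levels (small sigma_m -> trust the measurement)."""
    denom = sigma_m ** 2 + m * m * var_pred
    mean = (mean_pred * sigma_m ** 2 + m * y * var_pred) / denom
    var = (sigma_m ** 2 * var_pred) / denom
    return mean, var
```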
Kalman filter: general case • In higher dimensions, the derivation follows the same lines, but isn't as easy • Prediction: Xi- = Di Xi-1+, Σi- = Di Σi-1+ DiT + Σd,i • Correction: Ki = Σi- MiT (Mi Σi- MiT + Σm,i)-1, Xi+ = Xi- + Ki (yi - Mi Xi-), Σi+ = (I - Ki Mi) Σi-
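The matrix form as a sketch in Python/NumPy (names assumed; this follows the standard Kalman gain formulation of the expressions above):

```python
import numpy as np

def kf_predict(x, P, D, Sigma_d):
    # Predicted mean and covariance
    return D @ x, D @ P @ D.T + Sigma_d

def kf_correct(x_pred, P_pred, y, M, Sigma_m):
    S = M @ P_pred @ M.T + Sigma_m        # innovation covariance
    K = P_pred @ M.T @ np.linalg.inv(S)   # Kalman gain
    x = x_pred + K @ (y - M @ x_pred)     # corrected mean
    P = (np.eye(P_pred.shape[0]) - K @ M) @ P_pred  # corrected covariance
    return x, P
```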
(Figure) Here the measurement standard deviation is small, so the state estimates are rather good.
Smoothing • Idea • The forward filter does not give the best estimate of state, because it ignores future measurements • Run two filters, one moving forward and the other backward in time • Now combine the two state estimates • The crucial point here is that we can obtain a smoothed estimate by viewing the backward filter's prediction as yet another measurement for the forward filter • so we've already done the equations • (a 1D combination sketch follows)
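In 1D, combining the forward estimate with the backward filter's prediction reduces to inverse-variance weighting of two independent Gaussians; a minimal sketch (names assumed):

```python
def smooth_combine(mean_f, var_f, mean_b, var_b):
    """Fuse the forward filter's corrected estimate with the backward
    filter's prediction, treated as an extra measurement."""
    var_s = 1.0 / (1.0 / var_f + 1.0 / var_b)
    mean_s = var_s * (mean_f / var_f + mean_b / var_b)
    return mean_s, var_s
```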
Data association • Also known as the correspondence problem • Given that feature detectors (measurements) are not perfect, how can one find the correspondence between measurements and features in the model?
Data association • Determine which measurements are informative (as not every measurement conveys the same amount of information) • Nearest neighbors (NN) • choose the measurement with the highest probability given the predicted state • popular, but can lead to catastrophe • Probabilistic Data Association (PDA) • combine the measurements, weighting each by its probability given the predicted state • gate using the predicted state • (a gating sketch follows)
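A sketch of gated nearest-neighbor selection (Python/NumPy; the gate threshold and names are assumptions):

```python
import numpy as np

def nn_gate(y_pred, S, measurements, gate=9.21):
    """Keep measurements inside a chi-square gate around the predicted
    measurement y_pred (innovation covariance S), then pick the nearest
    by squared Mahalanobis distance. 9.21 is the 99% chi-square value
    for 2 degrees of freedom (an assumed choice)."""
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, gate
    for y in measurements:
        r = y - y_pred
        d2 = float(r @ S_inv @ r)
        if d2 < best_d2:
            best, best_d2 = y, d2
    return best   # None if no measurement falls inside the gate
```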
(Figure) Data association with a constant velocity model: the measurements are plotted with their standard deviation (dashed line).
Data association with NN • The blue track represents the actual measurements of state, and the red tracks are noise • The noise is all over the place, yet the gate around each prediction typically contains only one measurement (the right one) • This means that we can track rather well (the blue track is very largely obscured by the overlaid measurements of state) • Nearest neighbors (NN), picking the best measurement consistent with the prediction, works well in this case • Occasional mis-identification may not cause problems
Data association with NN • If the dynamic model is not sufficiently constrained, choosing the measurement that best matches the prediction may lead to failure
Data association with NN • But in fact the tracker loses track • These problems occur because error can accumulate • it is now relatively easy to continue tracking the wrong point for a long time • and the longer we do this, the less chance there is of recovering the right point
Probabilistic data association • Instead of choosing the region most like the predicted measurement, we can exclude all regions that are too different, and then use the others, weighting them according to their similarity to the prediction (the superscript indicates the region)
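A minimal sketch of the weighting (Python/NumPy; this ignores the clutter and missed-detection terms of full PDA and simply normalizes the likelihoods):

```python
import numpy as np

def pda_weights(y_pred, S, measurements):
    """Weight each gated measurement by its likelihood under the
    predicted measurement density N(y_pred, S)."""
    S_inv = np.linalg.inv(S)
    w = np.array([np.exp(-0.5 * float((y - y_pred) @ S_inv @ (y - y_pred)))
                  for y in measurements])
    return w / w.sum()   # normalized association weights
```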
Nonlinear dynamics • As one repeats iterations of this function (there isn't any noise), points move towards or away from points where sin(x) = 0 • This means that even if one has a quite simple probability distribution on x0, one can end up with a very complex distribution (many tiny peaks) quite quickly • This is most easily demonstrated by running particles through these dynamics, as in the next slide
Nonlinear dynamics • The top figure shows tracks of particle position against iteration for the dynamics of the previous slide, where the particle position was normally distributed in the initial configuration (first graph at the bottom) • As the succeeding graphs show, the number of particles near a point (i.e., the histogram, which estimates p(xn)) very quickly becomes complex, with many tiny peaks • (a reproduction sketch follows)
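This is easy to reproduce; a sketch assuming the map xn+1 = xn + sin(xn) (the slide does not give the exact function, so this is an illustrative choice whose fixed points are where sin(x) = 0):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 4.0, size=10_000)  # normal initial particle positions
for n in range(20):                    # deterministic: no noise added
    x = x + np.sin(x)                  # assumed nonlinear map
# A histogram of x now shows many narrow peaks (near odd multiples of pi
# for this particular map), even though x_0 was a single Gaussian.
hist, edges = np.histogram(x, bins=200)
```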
Factored sampling • Represent the state distribution non-parametrically, by a weighted sample set {s(n), π(n)} • Prediction: sample points s(n) from the prior density for the state, p(x) • Correction: weight the samples according to the observation density, π(n) ∝ p(y | x = s(n))
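A short sketch of factored sampling (Python/NumPy; names and interfaces assumed):

```python
import numpy as np

def factored_sampling(sample_prior, likelihood, N, rng):
    """Draw N samples from the prior p(x), then weight by p(y|x)."""
    s = sample_prior(N, rng)   # s: (N, d) samples from the prior
    pi = likelihood(s)         # unnormalized weights p(y | x = s)
    return s, pi / pi.sum()
```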
Particle filtering • We want to use sampling to propagate densities over time (i.e., across frames in a video sequence) • At each time step, represent the posterior p(xt|yt) with a weighted sample set • The previous time step's sample set, representing p(xt-1|yt-1), is passed to the next time step as the effective prior
Particle filtering • Start with weighted samples from the previous time step • Sample and shift according to the dynamics model • Spread due to randomness; this is the predicted density p(xt|yt-1) • Weight the samples according to the observation density • Arrive at the corrected density estimate p(xt|yt) • M. Isard and A. Blake, "CONDENSATION -- conditional density propagation for visual tracking", IJCV 29(1):5-28, 1998
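A minimal sketch of one such step (Python/NumPy; function names and interfaces are my own, not Isard and Blake's):

```python
import numpy as np

def condensation_step(s, pi, dynamics, likelihood, rng):
    """One select-predict-measure cycle.
    s: (N, d) samples, pi: (N,) normalized weights;
    dynamics(s, rng) samples from p(x_t | x_{t-1});
    likelihood(s) evaluates p(y_t | x_t) for each sample."""
    N = len(pi)
    idx = rng.choice(N, size=N, p=pi)  # select: resample by weight
    s_new = dynamics(s[idx], rng)      # predict: sample and shift + spread
    w = likelihood(s_new)              # measure: weight by observation density
    return s_new, w / w.sum()
```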
Dynamic model p(xt|xt-1) • Can be learned from examples, using ARMA models or a linear dynamical system (LDS) • Can be modeled as a random walk • Often one factorizes xt into independent components
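For instance, a random-walk dynamic model is just added Gaussian noise; this plugs directly into the condensation_step sketch above (sigma is an assumed scale):

```python
import numpy as np

def random_walk_dynamics(s, rng, sigma=0.05):
    # p(x_t | x_{t-1}) = N(x_{t-1}, sigma^2 I)
    return s + sigma * rng.standard_normal(s.shape)
```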