
G-SURF Mid-Presentation


Presentation Transcript


  1. G-SURF Mid-Presentation << What I have studied about filters and their implementation >> ( Focused on Tracking & Detection Technology ) Presentation Date : 2014.07.29 Presented by : Kyuewang Lee, CVL Laboratory, GIST

  2. Theme #01. Filters in Tracking. We are talking about filters in “Signal Processing” • A filter is a device or process that removes some components or features from a signal • Filters can be classified in various ways, and these classifications reflect properties of the signal itself • 01. Linear vs. Non-Linear • 02. Time-Invariant vs. Time-Variant • 03. Causal vs. Non-Causal • 04. Analog vs. Digital • 05. Discrete-Time vs. Continuous-Time • etc. • So why “FILTERS” in Tracking & Detection?

  3. Theme #01. Filters in Tracking. #01. Linear Filter Terminologies: white noise mixes together with the signal! • Low-pass Filter • High-pass Filter • Band-pass Filter • Band-stop Filter ( Notch Filter ) • Comb Filter • All-pass Filter ( only the phase is modified ) • Low-pass filters are important for erasing “white noise” ※ Even if the object is non-linear, we can approximate it with piecewise-linear objects. Think of the Taylor approximation!

  4. Theme #01. Filters in Tracking. So we usually use a Low-pass Filter ( LPF ) to cut the high-frequency noise in camera images, sensor data, etc. But this still does not explain why we use filters when we implement detection.

  5. Theme #01. Filters in Tracking. Batch Filter & Recursive Filter • Batch System : uses an empirical method • Recursive System : uses a deductive method ※ I’ll explain this in more detail

  6. Theme #01. Filters in Tracking. System Difference • You have the math grades of n people, but you forgot to count the last person’s grade • In this situation, the Batch System has to add up all the grades again and divide the sum by n+1 • It needs at least n+1 memory slots, so if n is big, memory is wasted • The Recursive System, however, only needs the previous average value, the last person’s grade, and n to derive the present average value: new_average = ( n*previous_average + last_grade ) / ( n+1 ) • It has better memory usage: only about 3 values are kept, since only 3 terms appear in the calculation

  7. Theme #01. Filters in Tracking. Let’s compare C code • Recursive filters are beneficial in memory • As the number of filter iterations grows, the filter speed does not fall drastically • Also, in terms of the algorithm, the recursive filter’s computational complexity is better than the batch filter’s • Let’s check it right away
  Batch version ( recompute the average from scratch ):
      for (i = 0; i < n; i++) { Tsum += vArray[i]; }
      Tsum += tval;
      Tmean = Tsum / (n + 1);
  Recursive version ( update the previous average ):
      double alpha = (double)n / (n + 1);
      Tmean = alpha * Tmean + (1 - alpha) * tval;
  We just saw that the computational complexity is MUCH MORE ECONOMICAL in the recursive code: the “for” loop in the batch version makes the running time grow at least linearly with the number of samples, while the recursive update is a constant-time operation.
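  ※ A minimal self-contained sketch of the comparison above, using the same variable names ( vArray, tval, Tsum, Tmean ); the grade values and n = 5 are made up for illustration:

      #include <stdio.h>

      int main(void)
      {
          /* n grades that were already averaged, plus one grade (tval) that was missed */
          double vArray[] = { 70.0, 80.0, 90.0, 100.0, 60.0 };
          int n = 5;
          double tval = 85.0;
          int i;

          /* Batch system: re-add every grade and divide by n + 1 */
          double Tsum = 0.0;
          for (i = 0; i < n; i++) {
              Tsum += vArray[i];
          }
          Tsum += tval;
          double batchMean = Tsum / (n + 1);

          /* Recursive system: start from the previous average and fold in tval only */
          double Tmean = 0.0;
          for (i = 0; i < n; i++) {      /* previous average (already known in practice) */
              Tmean += vArray[i] / n;
          }
          double alpha = (double)n / (n + 1);
          Tmean = alpha * Tmean + (1.0 - alpha) * tval;

          printf("batch mean     = %f\n", batchMean);
          printf("recursive mean = %f\n", Tmean);
          return 0;
      }

  Both methods print the same average; only the recursive one avoids re-summing the whole array.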

  8. Theme #01. Filters in Tracking. Why is it important to remember the last data? Okay, now we understand the importance of the recursive filter in both aspects: memory and speed. • Recursive filters have another important property • Since the filter is recursive, it uses the result of the previous iteration • That means it remembers the last data • The most important consequence is that the filter system can LEARN BY ITSELF about the object’s motion

  9. Theme #01. Filters in Tracking. ※ One example of an LPF: the Exponentially Weighted Moving Average Filter, Xx(k) = α*Xx(k-1) + (1-α)*z(k). First, this filter is the fundamental notion behind the famous “Kalman Filter”. It is also classified as a 1st-order low-pass filter and is used in various fields of study, especially in economics. You can see that this equation is similar to choosing a division point between two values; this is why we call it a “weighted” filter. We can decide whether to give more importance to the previous prediction value or to the present measurement value ( see the sketch after the next slide ).

  10. Theme #01. Filters in Tracking • When α is close to 1 • The filter is more sensitive to the previous predicted value • The present predicted value becomes less noisy • When α is close to 0 • The filter is more sensitive to the present measured value • The present predicted value becomes more noisy • Result : the α value determines the shape of the prediction graph
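  ※ A rough sketch of the exponentially weighted moving average in C; the function name ewma_step, the constant ground truth, the noise, and the two α values ( 0.9 and 0.1 ) are illustrative assumptions, not taken from the slides:

      #include <stdio.h>
      #include <stdlib.h>

      /* One step of the exponentially weighted moving average:
         Xx(k) = alpha * Xx(k-1) + (1 - alpha) * z(k) */
      double ewma_step(double prev_estimate, double measurement, double alpha)
      {
          return alpha * prev_estimate + (1.0 - alpha) * measurement;
      }

      int main(void)
      {
          double smooth = 0.0;   /* alpha close to 1: trusts the previous prediction */
          double jumpy  = 0.0;   /* alpha close to 0: trusts the present measurement */
          int k;

          for (k = 0; k < 50; k++) {
              /* noisy measurement: constant ground truth 1.0 plus uniform noise */
              double z = 1.0 + 0.3 * ((double)rand() / RAND_MAX - 0.5);
              smooth = ewma_step(smooth, z, 0.9);
              jumpy  = ewma_step(jumpy,  z, 0.1);
              printf("k=%2d  z=%.3f  alpha=0.9 -> %.3f  alpha=0.1 -> %.3f\n",
                     k, z, smooth, jumpy);
          }
          return 0;
      }

  Running it shows the effect described above: the α = 0.9 output is smooth but reacts slowly, while the α = 0.1 output follows the noisy measurements closely.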

  11. Theme #02. Kalman Filter. The Kalman Filter takes the measurement value z(k) as its input and produces the predicted value Xx(k); five equations make up the Kalman Filter.

  12. Theme #02. Kalman Filter

  13. Theme #02. Kalman Filter. <01> Measurement z(k) • The “NOISY” input signal that is fed into the filter • Can be approximated as “Ground_Truth + N(0, σ)” • ( a signal whose deviation from the ground truth has standard deviation σ ) • N(0, σ) is assumed to be the noise of this measurement value • ( i.e. the noise is assumed to follow a normal distribution with mean 0 )
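  ※ For intuition, such a measurement can be simulated as ground truth plus zero-mean Gaussian noise. A minimal sketch ( the Box-Muller transform, the names gauss_noise / simulate_measurement, and the sine ground truth are illustrative choices, not from the slides ):

      #include <stdio.h>
      #include <stdlib.h>
      #include <math.h>

      #define PI 3.14159265358979323846

      /* Zero-mean Gaussian sample with standard deviation sigma (Box-Muller transform) */
      double gauss_noise(double sigma)
      {
          double u1 = (rand() + 1.0) / (RAND_MAX + 1.0);   /* avoid log(0) */
          double u2 = (rand() + 1.0) / (RAND_MAX + 1.0);
          return sigma * sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
      }

      /* z(k) = ground_truth(k) + N(0, sigma) */
      double simulate_measurement(double ground_truth, double sigma)
      {
          return ground_truth + gauss_noise(sigma);
      }

      int main(void)
      {
          int k;
          for (k = 0; k < 5; k++) {
              double truth = sin(0.1 * k);   /* any ground-truth signal */
              printf("k=%d truth=%.3f z=%.3f\n", k, truth,
                     simulate_measurement(truth, 0.1));
          }
          return 0;
      }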

  14. Theme #02. Kalman Filter. <02> Estimated Value Xx(k) • The estimated value produced by one iteration of the Kalman Filter • This value is fed back into the filter again • It is also called the “Estimated State Value” • On the first iteration, we usually set this value to the object’s initial state, unless no starting condition is available

  15. Theme #02. Kalman Filter. <03> Error Covariance P(k) • It tells us how accurate the estimated value is • ( i.e. if P(k) is small, the estimation error is small ) • P(k) is a symmetric matrix, since it contains covariances • P(k)(ij) = P(k)(ji)

  16. Theme #02. Kalman Filter. <04> System Models. System modeling : x(k) = A*x(k-1) + B*u(k) + w(k), z(k) = H*x(k) + v(k), where u(k) is the control vector • A : state transition model applied to the previous state x(k-1); it determines the dynamic motion of the object, so the matrix A for an object in uniformly accelerated straight-line motion differs from the A for an object in uniform circular motion • B : control-input model which accounts for external factors • H : observation model which maps the true state space x(k) into the observed space z(k) • Q : covariance matrix of the process noise ( state-variable noise ), w(k) ~ N( 0, Q(k) ) • R : covariance matrix of the zero-mean Gaussian white measurement noise, v(k) ~ N( 0, R(k) ) • ( a concrete example follows below )
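  ※ As a concrete ( assumed ) example: for a 1-D object whose position is measured directly and whose velocity is assumed constant, the model matrices could look like the sketch below. The sampling period DT and the noise magnitudes are illustrative, and the control input B*u(k) is omitted:

      #include <stdio.h>

      /* 1-D constant-velocity model, state x = [ position ; velocity ] */
      #define DT 0.1   /* sampling period (illustrative) */

      /* x(k) = A * x(k-1) + w(k),   z(k) = H * x(k) + v(k) */
      double A[2][2] = { { 1.0, DT },
                         { 0.0, 1.0 } };   /* state transition: pos += vel*DT */
      double H[1][2] = { { 1.0, 0.0 } };   /* we observe the position only */
      double Q[2][2] = { { 0.01, 0.0 },
                         { 0.0, 0.01 } };  /* process-noise covariance (assumed) */
      double R = 1.0;                      /* measurement-noise covariance (assumed) */

      int main(void)
      {
          /* one noiseless prediction step as a sanity check: pos 0, vel 1 -> pos 0.1 */
          double x[2] = { 0.0, 1.0 };
          double xp[2];
          xp[0] = A[0][0] * x[0] + A[0][1] * x[1];
          xp[1] = A[1][0] * x[0] + A[1][1] * x[1];
          printf("predicted position = %.2f, velocity = %.2f\n", xp[0], xp[1]);
          return 0;
      }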

  17. Theme #02. Kalman Filter. <The Steps of the Kalman Filter> • 1. Prediction Process • The filter predicts the state value and the error covariance • 2. Correction Process • The filter computes the “Kalman Gain” and calculates the estimated state value and the estimated error covariance

  18. Theme #02. Kalman Filter. Correction Process • K(k) = P_(k)*H'*inv( H*P_(k)*H' + R ) → Kalman Gain K(k) • Xx(k) = Xx_(k) + K(k)*( z(k) - H*Xx_(k) ) → Estimation Xx(k) • P(k) = P_(k) - K(k)*H*P_(k) → Error Covariance P(k) • The estimation equation can be rewritten as Xx(k) = ( I - K(k)*H )*Xx_(k) + K(k)*z(k). What if H is I ( the identity matrix )? Then Xx(k) = ( I - K(k) )*Xx_(k) + K(k)*z(k), which has the same form as the exponentially weighted moving average. Thus computing the estimated value is, in effect, solving this exponentially weighted equation, except that the weight factor ( the Kalman Gain ) differs at every iteration, so the filter can balance the importance by itself!

  19. Theme #02. Kalman Filter. Prediction Process ( Linear Modeling ) • Xx_(k) = A*Xx(k-1) + B*u(k) • P_(k) = A*P(k-1)*A' + Q • This process produces the predicted values for each variable, and these are used in the correction process to yield the estimated values. Prediction plus correction form a single iteration, and we repeat it k times ( a scalar sketch of one full iteration is given below ).
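  ※ Putting the two processes together in the scalar case ( H = 1, no control input ), one full iteration looks like the sketch below. This is only an illustrative 1-D reduction of the equations above; the tracked signal, the noise generator, and the numeric values for A, Q, R and the initial P are assumptions, not values from the presentation:

      #include <stdio.h>
      #include <stdlib.h>
      #include <math.h>

      /* Scalar Kalman filter (H = 1, B*u omitted): state, error covariance, and models */
      typedef struct {
          double Xx;   /* estimated state  Xx(k)       */
          double P;    /* error covariance P(k)        */
          double A;    /* state transition model       */
          double Q;    /* process-noise covariance     */
          double R;    /* measurement-noise covariance */
      } ScalarKalman;

      double kalman_step(ScalarKalman *kf, double z)
      {
          /* Prediction process */
          double Xx_p = kf->A * kf->Xx;                 /* Xx_(k) = A*Xx(k-1)          */
          double P_p  = kf->A * kf->P * kf->A + kf->Q;  /* P_(k)  = A*P(k-1)*A' + Q    */

          /* Correction process */
          double K = P_p / (P_p + kf->R);               /* K(k) = P_(k) / (P_(k) + R)  */
          kf->Xx = Xx_p + K * (z - Xx_p);               /* Xx(k) = Xx_(k)+K*(z-Xx_(k)) */
          kf->P  = (1.0 - K) * P_p;                     /* P(k) = P_(k) - K*P_(k)      */
          return kf->Xx;
      }

      int main(void)
      {
          ScalarKalman kf = { 0.0, 1.0, 1.0, 0.001, 0.04 };   /* assumed values */
          int k;
          for (k = 0; k < 100; k++) {
              double truth = sin(0.1 * k);
              /* crude zero-mean noise standing in for N(0, sigma) */
              double z = truth + 0.2 * ((double)rand() / RAND_MAX - 0.5);
              double est = kalman_step(&kf, z);
              if (k % 10 == 0)
                  printf("k=%3d truth=%.3f z=%.3f estimate=%.3f\n", k, truth, z, est);
          }
          return 0;
      }

  Note how the gain K is recomputed every iteration from P and R, so the filter itself decides how much to trust the prediction versus the measurement.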

  20. Theme #02. Kalman Filter. Actual Implementation ( implemented with MATLAB ) • 01. Sine function tracking: a nonlinear ground truth tracked with a uniform-acceleration linear model A • Plots compared the measured, estimated, and ground-truth signals for the linear and nonlinear cases • The system model matrices ( A, Q, H, R = 50*…, P = 25*… ) were shown as figures; measurement standard deviation ( σ ) : 0.1

  21. Thank you
