
The Profound Impact of Negative Power Law Noise on the Estimation of Causal Behavior

The Profound Impact of Negative Power Law Noise on the Estimation of Causal Behavior. Victor S. Reinhardt Raytheon Space and Airborne Systems El Segundo, CA, USA. 2009 Joint Meeting of the European Frequency and Time Forum and the IEEE International Frequency Control Symposium, Besançon, France.


Presentation Transcript


  1. The Profound Impact of Negative Power Law Noise on the Estimation of Causal Behavior. Victor S. Reinhardt, Raytheon Space and Airborne Systems, El Segundo, CA, USA. 2009 Joint Meeting of the European Frequency and Time Forum and the IEEE International Frequency Control Symposium, Besançon, France

  2. Introduction • It is well known that correlated or systematic noise cannot be properly separated from the true causal behavior embedded in noisy data by any fitting or estimation technique, e.g., a least squares fit (LSQF) or a Kalman filter • But the profound implications of this in dealing with highly correlated negative power law (neg-p) noise have not been fully appreciated • Neg-p → PSD Lp(f) ∝ |f|^p with p < 0 • This paper will investigate these implications
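The unbounded growth of neg-p noise described above can be sketched numerically. Below is a minimal NumPy illustration (not from the paper; the generator, seed, and sizes are illustrative): each f^-2 factor in the PSD is produced by one discrete integration (cumulative sum) of white noise, so p = -2 gives a random walk.

```python
import numpy as np

def neg_p_noise(n, p, rng):
    """n samples of power-law noise with PSD ~ |f|^p (even p <= 0 only).

    Each f^-2 factor comes from one cumulative sum (discrete
    integration) of white noise; p = 0 is plain white noise.
    """
    x = rng.standard_normal(n)
    for _ in range(-p // 2):
        x = np.cumsum(x)
    return x

rng = np.random.default_rng(0)
ens = np.array([neg_p_noise(1000, -2, rng) for _ in range(500)])

# Ensemble variance of f^-2 (random-walk) noise grows with time from
# the start of the process: the non-stationarity discussed above.
print(np.var(ens[:, 99]), np.var(ens[:, 999]))
```

For a unit-step random walk the ensemble variance at sample n is n+1, so the variance at the later sample is roughly ten times the earlier one.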

  3. Introduction • The paper will first show that such neg-p correlations lead these fitting techniques to generate anomalous fit solutions and faulty estimates of the true fit errors • It will then explore the profound consequences of this in a variety of areas

  4. Model We Will Use for Estimating Causal Behavior in Noisy Data [Figure: N samples x(tn) of x(t) over the data observation interval T = N·ts, with tn = n·ts] • x(t) = any data variable representing a measurable output from an instrument • N samples of data x(tn) • x can also be a function of other variables θ(t), but x(t, θ(t)) → x(t) for most of the paper

  5. Model We Will Use for Estimating Causal Behavior in Noisy Data • For example, x(t) = xosc(t) − xosc(t − τ) is the observable output of a 2-way ranging Rx • where xosc(t) is the unobservable true time error of the reference oscillator

  6. Model We Will Use for Estimating Causal Behavior in Noisy Data [Figure: x(t) decomposed into (true) causal behavior xc(t) and (true) noise xr(t) over T = N·ts] • x(t) is the sum of the (true) causal behavior xc(t) plus the (true) noise xr(t): x(t) = xc(t) + xr(t)

  7. Model We Will Use for Estimating Causal Behavior in Noisy Data [Figure: as above, with xa,M(t), the M-parameter estimate of xc(t)] • The fitting process generates xa,M(t), an M-parameter estimate of xc(t)

  8. The Accuracy of the Causal Estimate is xw,M(tn) [Figure: xw,M(tn) = xa,M(tn) − xc(tn) over the data interval T] • Point variance of xw,M(tn) (Kalman, with E xr = 0): σw,M(tn)² = E[xw,M(tn)²], where E = ensemble average • Weighted average variance over T (LSQF): σw,M² = Σn φn σw,M(tn)², with φn = data weighting • xw,M(t) → the only detailed measure of the true accuracy

  9. A Major Thesis of This Paper is • That one cannot substitute other error measures for σw,M(tn)² or σw,M² just because they diverge due to neg-p noise • And that such divergences are in fact indicators of a real inaccuracy problem with the fit solution

  10. But the Accuracy xw,M is Not an Observable Error Measure [Figure: xj,M(tn) between the data x(tn) and the fit xa,M(tn)] • The only true observable is xj,M(tn) = x(tn) − xa,M(tn), the data precision • σj,M(tn)² = E[xj,M(tn)²], σj,M² = Σn φn σj,M(tn)²

  11. But the Accuracy xw,M is Not an Observable Error Measure [Figure: ±2σwj,M(tn) bounds around xa,M(tn)] • From the data precision one can form the fit precision: σwj,M(tn)² = d(tn)·σj,M(tn)², σwj,M² = d·σj,M² • This estimates the accuracy based on the theoretical d(tn) & d

  12. But the Accuracy xw,M is Not an Observable Error Measure • Can show for a uniform LSQF & uncorrelated xr(tn) → do = M/(N − M) • Will show that do & the equivalent do(tn) can generate very misleading accuracy estimates when neg-p noise is present
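The white-noise result do = M/(N − M) can be checked by Monte Carlo. A minimal sketch (assuming a uniform LSQF with a polynomial model and xc = 0, so the fit error is xa,M itself; sizes and seed are illustrative):

```python
import numpy as np

# Monte Carlo check of d_o = M/(N - M) for a uniform (unweighted) LSQF
# on uncorrelated white noise, taking x_c = 0 so that the fit error is
# x_w,M = x_a,M directly.
N, M, trials = 100, 3, 2000            # quadratic model: M = 3 parameters
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, N)

sum_w2 = sum_j2 = 0.0
for _ in range(trials):
    x = rng.standard_normal(N)                      # x_r: white noise
    c = np.polynomial.polynomial.polyfit(t, x, M - 1)
    xa = np.polynomial.polynomial.polyval(t, c)     # fit x_a,M(t_n)
    sum_w2 += np.mean(xa**2)         # accuracy: (x_a - x_c)^2
    sum_j2 += np.mean((x - xa)**2)   # data precision: (x - x_a)^2

print(sum_w2 / sum_j2, M / (N - M))  # ratio approaches d_o = M/(N - M)
```

This works for white noise because the fit captures an expected M/N of the noise power while (N − M)/N remains in the residual; the slides show the same ratio fails badly for neg-p noise.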

  13. Mth Order Δ-Measures over Interval τ [Figure: 1st and 2nd order differences Δ(τ)x(tn), Δ²(τ)x(tn) over the data interval T] • Mth order Δ-measures: Δ(τ)^M x(tn), with σΔ,M(tn)² = E[(Δ(τ)^M x(tn))²] and σΔ,M² = Σn φn σΔ,M(tn)² • σΔ,2(tn)² → Allan variance of the time error • σΔ,3(tn)² → Hadamard variance of the time error • These can be shown to be measures of stability and precision under specific fitting conditions • But one cannot use Δ-measures for accuracy just because the accuracy diverges
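The Δ-measures follow directly from their definition as mean-square Mth-order differences. A minimal NumPy sketch (the 1/(2τ²) and 1/(6τ²) prefactors are the standard Allan and Hadamard conventions for time-error data; the noise type, stride, and seed are illustrative):

```python
import numpy as np

def delta_var(x, m, tau):
    """Mean-square m-th order difference of time-error samples x
    at stride tau (in samples)."""
    d = x
    for _ in range(m):
        d = d[tau:] - d[:-tau]
    return np.mean(d**2)

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(100_000))   # f^-2 time error (white FM)
tau = 10
avar = delta_var(x, 2, tau) / (2 * tau**2)    # Allan variance (m = 2)
hvar = delta_var(x, 3, tau) / (6 * tau**2)    # Hadamard variance (m = 3)
print(avar, hvar)   # both ~ 1/tau for this unit-step random walk
```

For a unit-step random walk the second difference at stride τ has variance 2τ and the third difference 6τ, so both normalized measures come out near 1/τ, even though the WSS accuracy measure for this noise diverges.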

  14. Neg-p Noise xp(t) [Figure: NS picture of neg-p noise, with xp(t) = 0 for t < 0] • xp(t) is generally represented as a wide-sense stationary (WSS) random process • But xp(t) is inherently non-stationary (NS) • NS picture: xp(t) starts at a finite t • WSS picture: xp(t) must start at t = −∞ to maintain time invariance • So the WSS xp(t) → ∞ for all t, since neg-p noise grows without bound as the time from its start → ∞

  15. Neg-p Noise xp(t) [Figure: difference time τ and time tg from the start of the process, with xp(t) = 0 for t < 0] • Only the NS covariance or correlation function Rp(tg, τ) is finite (E xp = 0) • τ = difference time between the covariance arguments • tg = time from the start of the process • Rp(tg, τ) is finite for finite tg & τ • Rp(tg, τ) → ∞ as tg → ∞ for all τ • Because of this, the WSS covariance Rp(τ) is infinite for all τ
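The finite NS covariance and its growth with tg can be seen in a small ensemble experiment (a sketch assuming unit-variance f^-2 noise started at t = 0; sizes, lags, and seed are illustrative). For a unit random walk the analytic NS covariance is Rp(t1, t2) = min(t1, t2), so at fixed lag τ it grows linearly with tg:

```python
import numpy as np

# Ensemble estimate of the NS covariance R_p(tg, tau) of f^-2 noise
# started at t = 0. For a unit random walk, R_p(t1, t2) = min(t1, t2),
# so at fixed lag tau the covariance grows without bound with tg --
# which is why the WSS covariance R_p(tau) is infinite for all tau.
rng = np.random.default_rng(3)
ens = np.cumsum(rng.standard_normal((2000, 1000)), axis=1)

def R(tg, tau):
    return np.mean(ens[:, tg] * ens[:, tg + tau])   # E x_p = 0

print(R(100, 50), R(800, 50))   # grows with tg at fixed lag tau
```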

  16. Neg-p Noise xp(t) • In the NS f-domain there are 3 major spectral functions: the Wigner-Ville function Wp(tg, f), the Loève spectrum, and the ambiguity function • Can define Lp(f) without using Rp(τ), as the limit of Wp(tg, f)

  17. xp(t) Not the Same as xr(t) Because of System Filtering Hs(f) [Figure: a (2nd order) PLL and a Tx/Rx delay mismatch τd = τx − τv, in synchronous & asynchronous timing systems] • xr(t) = hs(t) ∗ xp(t) ↔ Xr(f) = Hs(f)·Xp(f) • Important later because Hs(f) can have HP (highpass) as well as LP (lowpass) filtering properties • Typical HP-filtering Hs(f) for time or phase error: Hs(f) ∝ f^4 for |f| << loop BW (2nd order PLL, in both synchronous & asynchronous timing systems); Hs(f) ∝ f^2 for |f| << 1/τd (delay mismatch, in asynchronous timing systems)

  18. The Effect of Neg-p Noise on LSQF & Kalman Estimation [Figure: LSQF and Kalman fits with white-noise and correlated-noise models applied to f^0, f^-2, and f^-3 noise; curves show x, xc, and xa,M ± 2σwj,M] • What is happening here is that the neg-p noise ensemble member mimics the signature of the causal behavior over T • So part of the noise cannot be separated from the causal behavior

  19. The Effect of Neg-p Noise on LSQF & Kalman Estimation [Figure: same as the previous slide] • Variables that are linearly dependent over T cannot be separated by any solution process • The solution matrix has a zero determinant • Adding correlated xr(t) models when xr(t) and xc(t) are correlated with each other will just generate ill-formed equations
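The noise-mimics-the-causal-signature effect on the two preceding slides can be reproduced in a few lines. A sketch (assuming a linear LSQF, a hypothetical linear causal term, and f^-2 noise; all sizes are illustrative): part of each random-walk realization looks like a drift over T, the fit absorbs it into the "causal" slope, and the true accuracy is far worse than the white-noise factor do = M/(N − M) applied to the observable residuals would suggest.

```python
import numpy as np

# Linear LSQF (M = 2) on data with a known linear causal term plus
# f^-2 (random-walk) noise. Each noise member partly mimics a drift
# over T, so the fit absorbs it: the true accuracy sigma_w^2 is far
# larger than the white-noise estimate d_o * sigma_j^2 suggests.
N, M, trials = 200, 2, 1000
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, N)
xc = 1.0 + 3.0 * t                    # true causal behavior (illustrative)

sum_w2 = sum_j2 = 0.0
for _ in range(trials):
    x = xc + np.cumsum(rng.standard_normal(N))      # x_c + neg-p noise
    c = np.polynomial.polynomial.polyfit(t, x, M - 1)
    xa = np.polynomial.polynomial.polyval(t, c)
    sum_w2 += np.mean((xa - xc)**2)   # true accuracy
    sum_j2 += np.mean((x - xa)**2)    # observable data precision

print(sum_w2 / sum_j2, M / (N - M))  # ratio far above M/(N - M)
```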

  20. Ergodicity & Proper Behavior of a Practical Realization of a Fit [Figure: ensemble members x(1)(t), x(2)(t), x(3)(t), … with ensemble mean E x(t) and finite-time mean <x(n)(t)>T] • Fitting theory is based on ensemble means E[..] • But practical realizations must use <..>T, the finite-time mean over a single ensemble member • The noise must be ergodic-like over T, <..>T ≈ E[..], for a practical realization to work as expected • Strict ergodicity: <..>T→∞ = E[..]

  21. Ergodicity & Proper Behavior of a Practical Realization of a Fit [Figure: same ensemble picture, with correlation time τc << T] • There is a strong connection between the correlation time τc of the noise & ergodic-like behavior • A white process is locally ergodic: <..>T→0 = E[..] • But a correlated process is only intermediate ergodic: <..>T ≈ E[..] • So one must have T >> τc for a practical realization to work as expected when the noise process is correlated

  22. Connection Between T, τc & Anomalous Fitting Behavior [Figure: fits to correlated noise with finite τc for T/τc = 2, 20, 200, 2000; curves show x, xc, and xa,M ± 2σwj,M] • But for neg-p noise one can show that τc = ∞ • Unless an HP filter Hs(f) suppresses the neg-p pole • So neg-p noise will generate anomalous fit behavior for all T • Unless Hs(f) makes τc finite for xr(t)
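The role of T/τc can be sketched with a correlated process whose τc is finite. Below, AR(1) noise stands in for correlated WSS noise (the process choice, τc, and sizes are illustrative, not from the slides); a constant fit (xa,1 = a0) only converges to the ensemble mean when T >> τc, matching the ergodic-like requirement above.

```python
import numpy as np

# Constant fit (x_a,1 = a0) applied to AR(1) noise, a correlated WSS
# process with finite correlation time tau_c. The mean-square fit
# error only becomes small when T >> tau_c.
rng = np.random.default_rng(5)
tau_c = 50
a = np.exp(-1.0 / tau_c)              # AR(1) coefficient

def ar1_ens(trials, n):
    e = rng.standard_normal((trials, n))
    x = np.empty((trials, n))
    x[:, 0] = e[:, 0] / np.sqrt(1.0 - a * a)   # start in steady state
    for i in range(1, n):
        x[:, i] = a * x[:, i - 1] + e[:, i]
    return x

err2 = {}
for T in (100, 1000, 10000):          # T/tau_c = 2, 20, 200
    err2[T] = np.mean(np.mean(ar1_ens(200, T), axis=1) ** 2)
    print(T // tau_c, err2[T])        # fit error shrinks as T/tau_c grows
```

For neg-p noise with no HP filtering there is no such convergence regime, because τc is infinite for every T.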

  23. Calculating d for Neg-p Noise (σwj,M² = d·σj,M², the estimate of σw,M²; β = w or j) • Theoretically d = σw,M²/σj,M² • Have shown previously that σβ,M² = ∫ Kβ,M(f)·Lp(f) df • Kβ,M(f) = spectral kernel representing the fit • Model error adds an extra term to σβ,M² • Model error occurs when xa,M(t) is not complex enough to track xc(t) over T • Can use the above to calculate d when the p-order of the noise is known

  24. Calculating d for Neg-p Noise (σwj,M² = d·σj,M², the estimate of σw,M²) [Figure: Kj,M(f) for a uniform LSQF with N = 1000; knee at fT = M/(2T); slopes f^2, f^4, f^6, f^8, f^10 for M = 1…5] • For xa,M(t) = an (M−1)th order polynomial • Kj,M(f) → a 2Mth order highpass filter • Kw,M(f) → a 2Mth order lowpass filter • Can use the above Kβ,M(f) properties to graphically explain the behavior of d for neg-p noise
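The highpass character of Kj,M(f) can be probed numerically rather than derived: fit unit sinusoids at frequency f and measure how much power survives in the residual. A sketch (uniform LSQF, linear model, probing both quadratures; the probe method and sizes are illustrative): below the knee fT ~ M/(2T) the response falls off like f^(2M), here f^4 for M = 2.

```python
import numpy as np

# Probe the residual spectral kernel K_j,M(f) of a uniform LSQF with
# an (M-1)th order polynomial model: fit unit sinusoids at frequency f
# and measure the power left in the residual.
N, M = 1000, 2                         # linear model: M = 2, so ~f^4
t = np.linspace(0.0, 1.0, N)           # T = 1

def K_j(f):
    power = 0.0
    for probe in (np.cos, np.sin):     # both quadratures of the probe
        x = probe(2 * np.pi * f * t)
        c = np.polynomial.polynomial.polyfit(t, x, M - 1)
        power += np.mean((x - np.polynomial.polynomial.polyval(t, c)) ** 2)
    return power

for f in (0.01, 0.02, 0.1, 10.0):
    print(f, K_j(f))   # ~f^4 rise below the knee, flat above it
```

Doubling f from 0.01 to 0.02 multiplies the residual power by roughly 2^4 = 16, confirming the 2Mth-order highpass behavior; the complementary Kw,M(f) is the matching lowpass.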

  25. Graphing σw,M² & σj,M² → White Noise [Figure: L0(f) overlapped with Kw,M(f) & Kj,M(f); sharp kernel cut-offs at fT; LP cutoff fh] • Kβ,M(f) → sharp cut-offs at fT • Hs(f) → LP cutoff fh • Can show for white noise (uniform LSQF): d = fT/(fh − fT) ≈ M/(N − M), unless an ideal Nyquist LP Hs(f) is used • This is the well-known effect of over-sampling the bandwidth

  26. Graphing σw,M² & σj,M² → Neg-p Noise [Figure: Lp(f) pole at f = 0 overlapped with Kw,M(f) & Kj,M(f)] • For neg-p noise, σw,M² & d → ∞ • While σj,M² is finite • This holds for all neg-p orders (when Hs(f) has no HP properties) • Will later show that this is an indication of a real problem with the fit accuracy, using the NS picture of neg-p noise

  27. Effect of HP Filtering Hs(f) for Neg-p Noise [Figure: Hs(f) with HP knee fl & LP knee fh suppressing the Lp(f) pole; cases fT >> fl (σw,M² < σj,M²) and fT << fl] • When Hs(f) suppresses the neg-p pole at f = 0 → d is finite • When this is true, xr(t) has a finite τc ≈ 1/(2πfl), is WSS, & is intermediate ergodic • Practical fit behavior is OK for fT << fl, i.e., T >> τc • d > do for fT >> fl; d < do for fT << fl

  28. Explaining the Physical Reality of the Infinite WSS σw,M² with the NS Picture [Figure: ensemble members with the noise starting a time t0 before the data interval T; LSQF with xa,1 = a0 and x = xr (xc = 0)] • xr(t0) is excluded from xj,M(tn) because xr(t0) is treated as part of the causal behavior by the fit • σw,M² → ∞ as t0 → ∞, with no indication of this from σj,M² • Thus the WSS σw,M² infinity indicates that the physical σw,M² will truly become very large when t0 >> T (the usual situation) • For example, a PLL will cycle slip when σw,M² → ∞ for the calculated loop phase error

  29. A Simulation Misrepresents the Physical Situation when t0 is not >> τc [Figure: noise & data starting together (t0 = 0) vs noise starting well before the data interval] • Because steady state has not been reached • Note in the above that σw,M ≈ σj,M when t0 = 0, while σw,M >> σj,M in the physical situation when t0 >> T • Thus all neg-p noise simulations are inherently misleading unless Hs(f) makes τc finite
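The t0 = 0 simulation pitfall of the last two slides can be sketched directly (constant fit xa,1 = a0 to pure f^-2 noise with xc = 0, as in the slides; sizes and seed are illustrative). Starting the random walk long before the data window inflates the true accuracy error while leaving the observable precision unchanged:

```python
import numpy as np

# Constant fit x_a,1 = a0 to pure f^-2 noise (x_c = 0), comparing a
# simulation that starts the noise with the data (t0 = 0) against the
# physical case where the noise started long before the data window
# (t0 >> T).
N, trials = 200, 1000
rng = np.random.default_rng(6)

def run(t0):
    w2 = j2 = 0.0
    for _ in range(trials):
        walk = np.cumsum(rng.standard_normal(t0 + N))
        x = walk[t0:]                  # data window, length N
        a0 = np.mean(x)                # uniform LSQF, M = 1
        w2 += a0 ** 2                  # accuracy error (x_c = 0)
        j2 += np.mean((x - a0) ** 2)   # observable data precision
    return w2 / trials, j2 / trials

res = {t0: run(t0) for t0 in (0, 10 * N)}
for t0, (w2, j2) in res.items():
    print(t0, w2, j2)   # j2 barely changes; w2 grows with t0
```

The fit absorbs the accumulated noise level xr(t0) into the "causal" constant, so the residuals (and any d-based accuracy estimate built from them) give no warning of the growing true error.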

  30. σΔ,M(t) as Stability and Precision Measures When xa,M(t) = (M−1)th Order Polynomial [Figure: extrapolation & interpolation pictures; an unweighted LSQF xa,M(t, a) over all samples t0…tM, a fit xa,M(t) passing through t0…tM−1, and a fit xk,M(t, a) passing through t0…tM but not tm] • σj,M = σΔ,M(t0) for "unbiased" φn → stability over τ = T/M • σj,M(t0) ≈ σΔ,M(t0) → precision • Removing "causal" behavior from the data biases σΔ,M(t) as a true random instability measure • σΔ,M(t) generated from xr(t) with no fit is not the same as σΔ,M(t) generated from x(t) with xa,M(t) removed • The fit removes noise that is correlated with the causal behavior • This reduces the true noise contribution for τ ≈ T

  31. Secondary Accuracy after Calibration Can Sidestep the Divergence [Figure: calibration of x(k)(t) at times tn′ against a primary standard when θ(tn′) = θo] • x(t) → x(t, θ(t)), where θ(t) = other independent variables • Calibrate x(t) periodically at tn′ against a primary standard when θ(tn′) = θo • To generate the cal function xa,M′(k)(t) • Now use the secondary (calibrated) data x(k)(t) = x(t) − xa,M′(k)(t) • To determine the θ sensitivity coefficients • & sidestep the xr(t) divergence for θ-coefficient determination • But the divergence is still present for determining the true causal behavior free of noise

  32. Conclusions & Further Implications • Cannot separate neg-p noise from true causal behavior for any T (exception: HP Hs(f)) • Fits generate anomalous results for any T unless sufficiently HP filtered by Hs(f) • Unbiased or pure neg-p behavior is unobservable in data containing causal behavior • x(t) with xa,M(t) removed is not equivalent to xr(t) • This can explain downward biases in the Allan & Hadamard variances when τ ≈ T

  33. Conclusions & Further Implications • Noise whitening is an improper procedure to use when neg-p noise is present • It fits to the correlated noise as well as to xc(t) • Must have T >> τc for a fitting technique to behave as theoretically expected • Problematic for neg-p noise (τc = ∞) unless there is an HP Hs(f)

  34. Conclusions & Further Implications (http://www.ttcla.org/vsreinhardt/) • This also affects M-corner-hat & cross-correlation techniques • These assume that statistical independence implies the <..>T cross-correlations → 0 as N^(−1/2) • When there is no HP filtering Hs(f), expect such anomalies for all T & all neg-p orders • For cross-correlation with a 1st order PLL Hs(f), expect anomalies for f^-3 noise that grow as slowly as log(T) • The above needs further study
