
Negative Power Law Noise, Reality vs Myth

Victor S. Reinhardt, Raytheon Space and Airborne Systems, El Segundo, CA, USA. Precise Time and Time Interval Systems and Applications Meeting, 2009. PSD L_p(f) ∝ f^p for p < 0.



Presentation Transcript


  1. Negative Power Law Noise, Reality vs Myth
Victor S. Reinhardt
Raytheon Space and Airborne Systems, El Segundo, CA, USA
Precise Time and Time Interval Systems and Applications Meeting, 2009

  2. Negative Power Law (Neg-p) Noise, Reality vs Myth
PSD L_p(f) ∝ f^p for p < 0
• Not questioning the reality that Δ-variances, like the Allan and Hadamard variances, are convergent measures of neg-p noise
• But will show it is a myth that neg-p divergences in other variances, like the standard and N-sample variances, are mathematical defects in those variances that should be "fixed" by replacing them with Δ-variances without further action
• Will show each type of variance is a statistical answer to a different error question
• And variance divergences are real indicators of physical problems that can't be glossed over by swapping variances
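The contrast the slide describes can be seen numerically. Below is a minimal pure-Python sketch (not from the original talk; all names are invented here): it generates f^-2 noise as a random walk, then compares the ensemble-averaged standard variance, which grows with record length, against a first-difference two-sample (Allan-like) variance, which stays bounded near 0.5.

```python
import random

def random_walk(n, seed=0):
    """f^-2 (random-walk) noise: cumulative sum of white Gaussian noise."""
    rng = random.Random(seed)
    level, out = 0.0, []
    for _ in range(n):
        level += rng.gauss(0.0, 1.0)
        out.append(level)
    return out

def std_variance(x):
    """Classical N-sample (standard) variance about the mean."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

def two_sample_variance(x):
    """Allan-like two-sample variance built from first differences."""
    d = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    return sum(v * v for v in d) / (2 * len(d))

def avg(stat, n, members=50):
    """Ensemble-average a statistic over independent random-walk members."""
    return sum(stat(random_walk(n, seed=s)) for s in range(members)) / members

# Standard variance grows with record length; the Δ-based measure does not.
allan_like = avg(two_sample_variance, 5000)
std_short, std_long = avg(std_variance, 1000), avg(std_variance, 20000)
```

The point is not that the standard variance is "wrong" here, only that it measures something that genuinely diverges for this noise.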

  3. Negative Power Law Noise, Reality vs Myth
• Will also show it is a myth that one can properly separate polynomial deterministic behavior and (non-highpass-filtered) neg-p noise using least squares and Kalman filters, except under certain conditions
• Will show the reality is that such neg-p noise is infinitely correlated and non-ergodic
• Ensemble averages ≠ time averages
• And this makes neg-p noise act more like systematic error than conventional noise in statistical estimation

  4. Myth 1: Can "Fix" Variance Divergences Just by Swapping Variances
Simple statistical estimation model: N samples of measured data x(t_n) = x_c(t_n) + x_r(t_n), collected over an interval T
• x_c(t) = true or deterministic behavior; x_r(t) = noise
• Generate x_a,M(t), an M-parameter estimate of x_c(t), using a technique like a least squares or Kalman filter
• Note x(t) here is any data variable, not necessarily the time error
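As a concrete instance of this estimation model, here is a minimal pure-Python sketch (illustrative only, not from the talk): an unweighted linear least squares fit producing a 2-parameter estimate x_a(t) = b0 + b1·t from data that is a true line x_c(t) plus white noise x_r(t).

```python
import random

def linear_lsqf(t, x):
    """Unweighted linear least squares fit x_a(t) = b0 + b1*t (M = 2 parameters)."""
    n = len(t)
    st, sx = sum(t), sum(x)
    stt = sum(ti * ti for ti in t)
    stx = sum(ti * xi for ti, xi in zip(t, x))
    den = n * stt - st * st
    b1 = (n * stx - st * sx) / den
    b0 = (sx - b1 * st) / n
    return b0, b1

rng = random.Random(1)
t = list(range(200))
xc = [1.0 + 0.5 * ti for ti in t]               # true deterministic behavior x_c
x = [v + rng.gauss(0.0, 1.0) for v in xc]       # measured data = x_c + white x_r
b0, b1 = linear_lsqf(t, x)                      # M-parameter estimate of x_c
```

For white (uncorrelated) noise this recovers the true parameters well; the later slides examine what goes wrong when x_r is neg-p noise instead.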

  5. Basic Error Measures and the Questions They Address
Δ(τ)x(t) = x(t+τ) − x(t)
• Accuracy x_w,M(t_n): Error of the fit from the true behavior?
• Data precision x_j,M(t_n): Data variation from the fit?
• Mth-order Δ-measures Δ(τ)^M x(t): Stability over τ?
• Can form variances from these error measures
• Point variance: σ(t)² = E[x(t)²] (Kalman); E[..] = ensemble average; assumes E[x(t)] = 0
• Average variance: σ² = a weighted average of σ(t)² over T (LSQF)

  6. Interpreting Δ-Measures as Mth-Order Stability Measures
[Figure: interpolation vs extrapolation error; x_a,M(t) passes through t_0 … t_{M−1} but not t_M, or through t_0 … t_M but not t_m]
• Definition of Mth-order stability: the extrapolation or interpolation error x_j,M(t_m) at an (M+1)th point, after removing a perfect M-parameter fit over only M of those points
• Can show that when x_a,M(t) = an (M−1)th-order polynomial and the points are separated by τ ⇒ x_j,M(t_m) ∝ Δ(τ)^M x(t_0)
• Thus Δ-variances are statistical measures of such stability
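The key algebraic fact behind this interpretation is that the Mth-order increment operator Δ(τ)^M annihilates polynomials of order M−1 and below. A small pure-Python check (illustrative, names invented):

```python
def mth_difference(x, m, tau=1):
    """Apply the M-th order increment operator Delta(tau)^M to samples x."""
    for _ in range(m):
        x = [x[i + tau] - x[i] for i in range(len(x) - tau)]
    return x

# A 2nd-order polynomial sampled at unit spacing:
quadratic = [2.0 - 3.0 * t + 0.5 * t * t for t in range(12)]
d2 = mth_difference(quadratic, 2)   # 2nd difference: constant, poly not yet removed
d3 = mth_difference(quadratic, 3)   # 3rd difference: identically zero
```

This is why an Mth-order Δ-variance is insensitive to an (M−1)th-order polynomial x_a,M(t): the deterministic part drops out and only the residual noise contributes.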

  7. A Derived Error Measure: The Fit Precision (Error Bars)
• Fit precision: a statistical estimate of accuracy based on the measured data-precision variance and correction factors derived from a specific theoretical noise model
σ_w,M(t)² = κ_d(t)·σ_j,M(t)²    σ_w,M² = κ_d·σ_j,M²
• For uncorrelated noise and an unweighted linear least squares fit (LSQF) ⇒ κ_d = M/(N−M)
• Not true for correlated noise
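The correction factor for uncorrelated noise is easy to verify by Monte Carlo. A pure-Python sketch (illustrative; names invented) for the simplest case, an M = 1 fit of a mean to N samples of white noise, where the factor should come out near M/(N−M) = 1/9 for N = 10:

```python
import random

def fit_mean_errors(n_samples, members=4000, seed=2):
    """Monte Carlo estimate of accuracy and data-precision variances for an
    M = 1 (sample-mean) fit to white noise with true behavior x_c = 0."""
    rng = random.Random(seed)
    fit_sq, resid_var = 0.0, 0.0
    for _ in range(members):
        x = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
        m = sum(x) / n_samples                     # fitted parameter (truth is 0)
        fit_sq += m * m                            # squared accuracy error
        resid_var += sum((v - m) ** 2 for v in x) / n_samples  # data precision
    return fit_sq / members, resid_var / members

acc, prec = fit_mean_errors(10)
kappa = acc / prec   # expect roughly M/(N - M) = 1/9 for N = 10, M = 1
```

For correlated (and especially neg-p) noise this simple ratio no longer holds, which is the slide's warning.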

  8. The Neg-p Convergence Properties of Variances for Polynomial x_a,M(t)
[Figure: variance kernel K(f) for data precision, in dB vs log(fT): K(f) ∝ f^2, f^4, f^6, f^8, f^10 for M = 1 … 5]
• For Δ-variances, K(f) ∝ f^{2M} for |fτ| << 1
• Also true for Mth-order data precision
• Both converge for 2M ≥ −p
• But for accuracy, K(f) is a lowpass filter
• Accuracy won't converge unless |H_s(f)|² provides sufficient highpass filtering
• Thus the temptation to "fix" a neg-p variance divergence by swapping variances
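The convergence condition can be made concrete: near f = 0 the variance integrand behaves like f^p·K(f) ~ f^(p+2M), whose integral over (0, f_0] converges iff p + 2M > −1. A small pure-Python sketch (illustrative; the closed-form integral is elementary) evaluates that low-frequency piece with a shrinking cutoff eps:

```python
import math

def low_f_integral(p, M, eps):
    """Integral of f^(p + 2M) over [eps, 1]: the low-frequency part of the
    variance integral for f^p noise seen through a kernel K(f) ~ f^(2M)."""
    e = p + 2 * M
    if e == -1:
        return -math.log(eps)
    return (1.0 - eps ** (e + 1)) / (e + 1)

# f^-2 noise through an M = 1 differencing kernel: bounded as eps -> 0
bounded = [low_f_integral(-2, 1, 10.0 ** -k) for k in (2, 4, 6)]
# f^-2 noise with no differencing (M = 0, lowpass accuracy kernel): diverges
divergent = [low_f_integral(-2, 0, 10.0 ** -k) for k in (2, 4, 6)]
```

The `bounded` sequence settles near 1 while `divergent` grows like 1/eps, which is exactly the temptation the slide describes: the same noise, two kernels, one finite answer and one infinite.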

  9. This Is No "Fix" Because Each Variance Addresses a Different Error Question
• Accuracy: Error of the fit from the true behavior?
• Data precision: Data variation from the fit?
• Fit precision: Estimate of accuracy from data and model?
• Stability: Extrapolation/interpolation error over τ?
• Arbitrarily swapping variances just misleadingly changes the question
• It does not remove the divergence problem for the original question

  10. Then What Does a Neg-p Variance Divergence Mean?
[Figure: f^-2 noise ensemble members starting at t = 0, with data collected over an interval T beginning at t_0]
• The accuracy variance → ∞ as t_0 → ∞ while the precision remains misleadingly finite
• Thus an accuracy-variance infinity is a real indicator of a severe modeling problem
• To truly fix it, one must modify the system design or the error question being asked
• Note: for f^-3 noise, σ_j,1 → ∞ but σ_j,2 remains finite
• 1/f noise is a marginal case: σ_w,M² ∝ ln(f_h·t_0)
• The 1/f contribution can be small even for t_0 = the age of the universe
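This accuracy-vs-precision split can be simulated directly. A pure-Python sketch (illustrative; names invented): f^-2 noise starts at t = 0, an M = 1 (mean) fit is made to a window of length T starting at t_0, and ensemble averages show the squared accuracy error growing with t_0 while the data-precision variance stays flat.

```python
import random

def window_errors(t0, T=50, members=300, seed=3):
    """Ensemble accuracy/precision of an M = 1 (mean) fit to a window
    [t0, t0+T) of f^-2 noise started at t = 0 (true behavior x_c = 0)."""
    rng = random.Random(seed)
    acc, prec = 0.0, 0.0
    for _ in range(members):
        level, window = 0.0, []
        for t in range(t0 + T):
            level += rng.gauss(0.0, 1.0)      # random walk = f^-2 noise
            if t >= t0:
                window.append(level)
        m = sum(window) / T
        acc += m * m                          # squared accuracy error vs truth 0
        prec += sum((v - m) ** 2 for v in window) / T   # data precision
    return acc / members, prec / members

acc_near, prec_near = window_errors(0)
acc_far, prec_far = window_errors(2000)
# acc grows roughly like t0; prec depends only on T, not on t0.
```

The finite precision is the "misleading" part: the data look well-behaved inside the window even as the fit wanders arbitrarily far from the truth.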

  11. Myth 2: Can Use Least Squares & Kalman Filters to Properly Separate
• True polynomial behavior from data containing (non-highpass-filtered) neg-p noise
• Will show that such noise acts like systematic error in statistical estimation, which generates anomalous fit results, except under certain conditions
• Will show this occurs because (non-highpass-filtered) neg-p noise is infinitely correlated and non-ergodic

  12. NS & WSS Pictures of a Random Process x_p(t)
• NS (nonstationary) picture: x_p(t) = 0 for t < 0; x_p(t) starts at a finite time
• Can have R_p → ∞ as t_g → ∞; the Wigner-Ville function is a t_g-dependent PSD
• WSS (wide-sense stationary) picture: x_p(t) ≠ 0 for all t; x_p(t) is active for all t
• x_p(t) must have a bounded steady-state PSD
• Two important theorems connect the NS and WSS pictures
• Notation: τ = time difference; t_g = time from the start of x_p; E = ensemble average; the PSD is the (complex) Fourier transform over τ

  13. The Properties of (Non-Highpass-Filtered) Neg-p Noise
• R_p(t_g, τ) is finite for finite t_g
• R_p(t_g, τ) → ∞ as t_g → ∞ for all τ, so the WSS R_p(τ) is infinite (unbounded) for all τ
• Can define L_p(f) without R_p(τ)
• Its correlation time τ_c is infinite
• It is inherently non-ergodic
• Theorem (Parzen): an NS random process is ergodic (E[..] = ⟨..⟩_T) if and only if R_p(t_g, τ) remains finite as t_g → ∞ and τ_c < ∞
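The growth of R_p(t_g, τ) with t_g is easy to see by simulation for p = −2, where for a random walk with unit-variance steps R_p(t_g, τ) = E[x(t_g)·x(t_g+τ)] grows in proportion to t_g. A pure-Python sketch (illustrative; names invented):

```python
import random

def walk_autocorr(tg, tau=10, members=400, seed=4):
    """Ensemble estimate of R_p(tg, tau) = E[x(tg) x(tg + tau)] for f^-2
    noise started at t = 0; for a random walk this grows in proportion to tg."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(members):
        level, x_tg = 0.0, 0.0
        for t in range(tg + tau + 1):
            level += rng.gauss(0.0, 1.0)
            if t == tg:
                x_tg = level               # remember x(tg)
        total += x_tg * level              # level is now x(tg + tau)
    return total / members

r_early, r_late = walk_autocorr(100), walk_autocorr(1000)
# r_late / r_early is roughly 10: R_p is unbounded as tg grows, for every tau.
```

Since R_p(t_g, τ) never settles to a bounded steady-state value, no WSS autocorrelation exists, which is exactly the non-ergodicity condition the slide invokes.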

  14. Ergodicity and Practical Fitting Behavior
• Fitting theory is based on the ensemble average E; practical fits rely on the time average ⟨..⟩_T
• So the noise must be ergodic-like over T, with ⟨..⟩_T ≈ E, for a practical fit to work as theoretically predicted
• Even if the noise is strictly ergodic (⟨..⟩_{T→∞} = E), one may not have ⟨..⟩_T ≈ E for the T in question
• Most theory assumes ⟨..⟩_{any T} = E for N → ∞; called local ergodicity for T → 0
• For correlated noise, T >> τ_c is also required for ergodic-like behavior (intermediate ergodicity)
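The ⟨..⟩_T vs E distinction shows up immediately in simulation. A pure-Python sketch (illustrative; names invented) computes the per-member time average ⟨x⟩_T for an ensemble of white-noise members and an ensemble of f^-2 (random-walk) members: the white-noise time averages all agree with the ensemble mean, while the random-walk ones scatter wildly.

```python
import random

def time_averages(correlated, members=30, T=2000, seed=5):
    """Per-member time averages <x>_T: white noise (correlated=False) is
    ergodic-like; f^-2 random-walk noise (correlated=True) is not."""
    rng = random.Random(seed)
    means = []
    for _ in range(members):
        level, total = 0.0, 0.0
        for _ in range(T):
            g = rng.gauss(0.0, 1.0)
            level = level + g if correlated else g
            total += level
        means.append(total / T)
    return means

def spread(vals):
    """Sample standard deviation across ensemble members."""
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5

white_spread = spread(time_averages(False))
walk_spread = spread(time_averages(True))
```

For the random walk, each member's time average is dominated by that member's particular low-frequency excursion, so no single realization stands in for the ensemble.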

  15. Practical Examples of Neg-p Non-ergodicity Fitting Effects
[Figure: LSQF and standard/augmented Kalman filter fits to f^0, f^-2, and f^-3 noise, showing x, x_c, and x_a,M ± 2σ_w,j,M]
• A single neg-p ensemble member has polynomial-like behavior over T
• x_p(t) is systematic with respect to polynomial-like x_c(t)
• Even an augmented Kalman filter can only separate the non-systematic parts of x_p(t) and x_c(t)
• Fitting methodologies can only separate linearly independent variables
• Cannot truly separate (non-HP-filtered) neg-p noise and polynomial-like deterministic behavior
• Noise whitening cannot be used in such cases
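The "polynomial-like behavior over T" is demonstrable: fitting a line to noise-only data (true slope zero) yields large, realization-dependent slopes for f^-2 noise, because each member looks like a genuine drift. A pure-Python sketch (illustrative; names invented):

```python
import random

def fitted_slopes(correlated, members=40, n=200, seed=6):
    """Least-squares slopes fitted to noise-only data (true slope = 0).
    f^-2 members each mimic a real drift; white-noise members do not."""
    rng = random.Random(seed)
    t = list(range(n))
    tbar = sum(t) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slopes = []
    for _ in range(members):
        level, x = 0.0, []
        for _ in range(n):
            g = rng.gauss(0.0, 1.0)
            level = level + g if correlated else g
            x.append(level)
        xbar = sum(x) / n
        slopes.append(sum((ti - tbar) * (xi - xbar)
                          for ti, xi in zip(t, x)) / sxx)
    return slopes

def rms(vals):
    return (sum(v * v for v in vals) / len(vals)) ** 0.5

white_rms = rms(fitted_slopes(False))   # slopes near zero
walk_rms = rms(fitted_slopes(True))     # large spurious slopes
```

A fit cannot tell such a realization apart from a true deterministic drift, which is the linear-dependence problem the slide names.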

  16. Non-Ergodic-Like Behavior in Correlated WSS Processes
[Figure: Kalman filter output, measured data, fit precision, and true behavior for T/τ_c = 2, 20, 200, and 2000]
• T/τ_c ≈ N_i, the number of independent samples
• Must have N_i >> 1 for the fit to be meaningful
• For non-HP-filtered neg-p noise, τ_c = ∞
• So no T will produce ergodic-like fit behavior
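The N_i = T/τ_c scaling can be checked with an ordinary correlated WSS process. A pure-Python sketch (illustrative; names invented) uses unit-variance AR(1) noise with correlation time τ_c and shows that the ensemble variance of the time-averaged mean grows roughly like τ_c/T, i.e. like 1/N_i:

```python
import math
import random

def var_of_time_mean(tau_c, T=2000, members=300, seed=7):
    """Ensemble variance of <x>_T for unit-variance AR(1) noise with
    correlation time tau_c; scales roughly as tau_c / T = 1 / N_i."""
    rng = random.Random(seed)
    a = math.exp(-1.0 / tau_c)
    sigma_innov = math.sqrt(1.0 - a * a)   # keeps stationary variance at 1
    total_sq = 0.0
    for _ in range(members):
        x = rng.gauss(0.0, 1.0)            # draw from stationary distribution
        s = 0.0
        for _ in range(T):
            x = a * x + sigma_innov * rng.gauss(0.0, 1.0)
            s += x
        total_sq += (s / T) ** 2
    return total_sq / members

v_short, v_long = var_of_time_mean(1.0), var_of_time_mean(50.0)
# Fewer independent samples N_i = T / tau_c  =>  larger variance of <x>_T.
```

Extrapolating this trend to τ_c = ∞ gives the slide's conclusion: for non-HP-filtered neg-p noise, no finite T yields N_i >> 1.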

  17. Conclusions
• Arbitrarily swapping variances does not really fix a neg-p divergence problem
• Such divergences indicate real problems that must be physically addressed
• Non-HP-filtered neg-p noise acts more like systematic error than conventional noise when fitting to (full-order) polynomials
• The surest way to reduce such problems is to develop frequency standards with lower neg-p noise
• Good news for frequency standard developers
• See http://www.ttcla.org/vsreinhardt/ for preprints and other related material
