ASEN 5070: Statistical Orbit Determination I • Fall 2013 • Professor Brandon A. Jones • Professor George H. Born • Lecture 36: Smoothing and State Accuracy Estimation
Announcements • Homework 11 due on Friday • Sample solutions posted online • Lecture quiz due by 5pm on Wednesday • Final exam posted on Friday • Due December 16 by noon • By 11:59pm for CAETE students • Final project due December 16 by noon • By 11:59pm for CAETE students
Motivation • The batch processor provides an estimate based on the full span of data; without process noise, the batch and sequential (CKF) estimates mapped to a common epoch are equivalent • When we include process noise, we lose this equivalence between the batch and the sequential processors • Is there some way to update the estimated state using information gained from future observations?
Smoothing • Smoothing is a method by which a state estimate (and optionally, the covariance) may be constructed using observations before and after the epoch. • Step 1. Process all observations using a CKF with process noise (SNC, DMC, etc.). • Step 2. Start with the last observation processed and smooth back through the observations.
Notation • As presented in the book, the most common source of confusion for the smoothing algorithm is the notation • The symbol $\hat{\mathbf{x}}_k^{\ell}$ denotes the value (vector or matrix) at the time of the current estimate, $t_k$, based on observations up to and including $t_\ell$
Smoothing visualization • Process observations forward in time: • If you were to process them backward in time (given everything needed to do that):
Smoothing visualization • Smoothing does not literally combine the forward and backward solutions, but that picture is a useful way to conceptualize what smoothing does. • Smoothing results in a much more consistent solution over time, and it yields an optimal estimate that uses all of the observations.
Smoothing • Caveats: • If you use process noise or some other means to inflate the covariance, the optimal estimate at any time is influenced mainly by nearby observations. • While this is good, it also means smoothing doesn't always have a big effect. • Smoothing shouldn't remove the white noise found on the signals. • It's not a "cleaning" function; it's a "use all the data for your estimate" function.
Smoothing of State Estimate • First, we relate the a priori (time-updated) estimate at $t_{k+1}$ to the filtered estimate at $t_k$ and form the smoother gain (relations sketched below) • If Q = 0, the gain reduces to the inverse state transition matrix
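The equations shown on the original slide are not reproduced in this text. As a sketch, assuming the standard Rauch–Tung–Striebel (RTS) form consistent with the book's notation (superscript $\ell$ = last observation used; barred quantities are the time-updated, a priori values from the forward CKF pass, including the SNC contribution):

```latex
% Smoothed state at t_k, using observations through t_l (l > k):
\hat{\mathbf{x}}_k^{\ell} = \hat{\mathbf{x}}_k
  + S_k \left( \hat{\mathbf{x}}_{k+1}^{\ell} - \bar{\mathbf{x}}_{k+1} \right),
\qquad
S_k = P_k \, \Phi^{T}(t_{k+1}, t_k) \, \bar{P}_{k+1}^{-1}
```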
Smoothing of State Estimate • Hence, during the forward CKF pass we store, at each measurement time, the quantities needed to form the smoother gain (a minimal sketch of the two-pass procedure follows)
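As an illustration only (not code from the course), here is a minimal Python sketch of the backward pass; the list names `x_hat`, `P`, `x_bar`, `P_bar`, and `Phi` are assumed containers for the quantities saved during the forward CKF run:

```python
import numpy as np

def rts_smooth(x_hat, P, x_bar, P_bar, Phi):
    """Backward (smoothing) pass over a forward CKF run.

    x_hat[k], P[k]         : filtered estimate/covariance at t_k
    x_bar[k+1], P_bar[k+1] : a priori (time-updated) estimate/covariance at t_{k+1}
    Phi[k+1]               : state transition matrix Phi(t_{k+1}, t_k)
    Returns lists of smoothed estimates/covariances based on all observations.
    """
    N = len(x_hat)
    x_s = [None] * N
    P_s = [None] * N
    x_s[-1], P_s[-1] = x_hat[-1], P[-1]        # last filtered estimate is already "smoothed"
    for k in range(N - 2, -1, -1):             # sweep backward through the data
        S = P[k] @ Phi[k + 1].T @ np.linalg.inv(P_bar[k + 1])    # smoother gain
        x_s[k] = x_hat[k] + S @ (x_s[k + 1] - x_bar[k + 1])      # smoothed state
        P_s[k] = P[k] + S @ (P_s[k + 1] - P_bar[k + 1]) @ S.T    # smoothed covariance (optional)
    return x_s, P_s
```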
Smoothing of Covariance • Optionally, we may also smooth the state error covariance matrix (relation sketched below)
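Again as a sketch under the same assumptions, using the same smoother gain $S_k$ as above:

```latex
% Smoothed covariance at t_k based on observations through t_l:
P_k^{\ell} = P_k + S_k \left( P_{k+1}^{\ell} - \bar{P}_{k+1} \right) S_k^{T}
```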
Smoothing • If we suppose that there is no process noise (Q = 0), then the smoothing algorithm reduces to the CKF mapping relationships, as sketched below
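A sketch of that reduction under the same assumptions: with $Q = 0$ the a priori covariance is $\bar{P}_{k+1} = \Phi(t_{k+1},t_k)\,P_k\,\Phi^{T}(t_{k+1},t_k)$, so

```latex
S_k = P_k \Phi^{T}(t_{k+1},t_k)
      \left[\Phi(t_{k+1},t_k)\, P_k\, \Phi^{T}(t_{k+1},t_k)\right]^{-1}
    = \Phi(t_k, t_{k+1}),
\qquad
\hat{\mathbf{x}}_k^{\ell} = \Phi(t_k, t_{k+1})\,\hat{\mathbf{x}}_{k+1}^{\ell},
\qquad
P_k^{\ell} = \Phi(t_k, t_{k+1})\, P_{k+1}^{\ell}\, \Phi^{T}(t_k, t_{k+1})
```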
An example: 4-41 and 4-42 (book, pp. 283–284)
Smoothing • Say there are 100 observations • We want to construct new estimates based on all of the data, i.e., $\hat{\mathbf{x}}_k^{100}$ for every $k = 1, \dots, 100$ (see the recursion sketched below)
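Sketching the recursion implied here, using the relations assumed above and with observation 100 as the last one processed:

```latex
% Start from the final filtered estimate, then sweep backward:
\hat{\mathbf{x}}_{100}^{100}
\;\rightarrow\;
\hat{\mathbf{x}}_{99}^{100} = \hat{\mathbf{x}}_{99}
   + S_{99}\left(\hat{\mathbf{x}}_{100}^{100} - \bar{\mathbf{x}}_{100}\right)
\;\rightarrow\;
\hat{\mathbf{x}}_{98}^{100} = \hat{\mathbf{x}}_{98}
   + S_{98}\left(\hat{\mathbf{x}}_{99}^{100} - \bar{\mathbf{x}}_{99}\right)
\;\rightarrow\; \cdots \;\rightarrow\; \hat{\mathbf{x}}_{1}^{100}
```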
Factors Influencing Filter Accuracy • Truncation error (linearization) • Round-off error (fixed precision arithmetic) • Mathematical model simplifications (dynamics and measurement model) • Errors in input parameters (e.g., J2) • Amount, type, and accuracy of tracking data
How do we characterize our accuracy? • For the Jason-2 / OSTM mission, the OD fits are quoted to have errors of less than a centimeter (in the radial direction) • How do we assess that accuracy? • Residuals? • Depends on how much we trust the data • They provide information on the fit to the data, but what about solution accuracy? • Covariance matrix? • How realistic is the output covariance matrix? • (Actually, I can make the output covariance nearly whatever I want through process noise or other means.)
Preliminary Discussion – Batch Processor Covariance • Qualitatively, how does the mapped covariance look for the batch processor? (The mapping relation is sketched below.)
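For reference, and assuming the slide's point concerns mapping the epoch covariance forward with the state transition matrix alone (no process-noise inflation), the batch covariance maps as:

```latex
P(t_k) = \Phi(t_k, t_0)\, P_0\, \Phi^{T}(t_k, t_0)
```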
Solution Characterization • Characterization requires a comparison to an independent solution • Different solution methods, models, etc. • Different observation data sets: • Global Navigation Satellite Systems (GNSS) (e.g., GPS) • Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) • Satellite Laser Ranging (SLR) • Deep Space Network (DSN) • Delta-DOR • Others…
Compare to Independent Solution • Jason-2 / OSTM position solutions generated by/at: • JPL – GPS only • GSFC – SLR, DORIS, and GPS • CNES – SLR, DORIS, and GPS • Algorithms/tools differ by team: • Different filters • Different dynamic/stochastic models
Comparison of Jason-2 / OSTM Solutions (image: Bertiger et al., 2010) • 1 cycle ≈ 10 days • Differences are on the order of millimeters
Orbit Overlap Studies • Compare solutions estimated from different (overlapping) fit intervals
Orbit Overlap Studies • Consider the "abutment test," which compares solutions at the boundary between consecutive fit intervals
Example: Jason-2 / OSTM • Each data fit at JPL uses 30 hrs of data, centered at noon • This means that each data fit overlaps the previous/next fit by six hours • Compare the solutions over the middle four hours of each overlap • Why? The solution near the ends of a fit interval is the least constrained by data, so trimming an hour from each end of the overlap avoids these edge effects (a sketch of the comparison follows)
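Purely as an illustration of the bookkeeping (the function, arrays, and sampling below are hypothetical, not from the mission processing): compute the RMS of the position differences over the central four hours of the six-hour overlap between two consecutive fits.

```python
import numpy as np

def overlap_rms(pos_fit_a, pos_fit_b, times_a, times_b, trim_hours=1.0):
    """RMS position difference over the central portion of the overlap.

    pos_fit_a, pos_fit_b : (N, 3) position histories from two consecutive fits [m]
    times_a, times_b     : matching epochs [s past a common reference]
    trim_hours           : hours trimmed from each end of the overlap
    """
    t0 = max(times_a[0], times_b[0]) + trim_hours * 3600.0    # start of central window
    t1 = min(times_a[-1], times_b[-1]) - trim_hours * 3600.0  # end of central window
    mask_a = (times_a >= t0) & (times_a <= t1)
    mask_b = (times_b >= t0) & (times_b <= t1)
    # assumes both fits are sampled at the same epochs inside the window
    diff = pos_fit_a[mask_a] - pos_fit_b[mask_b]
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))
```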
Example: Jason-2 / OSTM (image: Bertiger et al., 2010) • Histogram of daily overlaps for almost one year • Implies a solution consistency of ~1.7 mm • This is an example of why it is called "precise orbit determination" rather than "accurate orbit determination"