ASEN 5070: Statistical Orbit Determination I Fall 2013 Professor Brandon A. Jones Professor George H. Born Lecture 5: Linear Algebra and Linearization
Announcements • Homework 0 & 1 were due today • Homework 2 Posted – Due September 13 • Quiz for Lectures 1-5 active at noon • Due by Monday’s Lecture at 11am • Five Questions (10-15 minutes) • Expect to have one every couple of lectures
Today’s Lecture • Linear Algebra (Appendix B) • Linearization
Matrix Basics • Matrix A is composed of elements a_ij (row i, column j) • The matrix transpose swaps the indices: [A^T]_ij = a_ji
Matrix Basics • The matrix inverse A^-1 is the matrix such that A A^-1 = A^-1 A = I • For the inverse to exist, A must be square (and nonsingular) • We will treat vectors as n×1 matrices
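The slides do not include code, but the transpose and inverse properties above can be checked numerically. A minimal sketch using NumPy (a tooling assumption, not part of the lecture):

```python
import numpy as np

# The transpose swaps indices: (A^T)_ij = A_ji
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
At = A.T  # At[0, 1] == A[1, 0] == 3.0

# The inverse satisfies A @ A_inv = A_inv @ A = I
# (A must be square and nonsingular for it to exist)
A_inv = np.linalg.inv(A)
identity = A @ A_inv  # numerically the 2x2 identity
```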
Matrix Determinant • The square matrix determinant, |A|, indicates whether a linear system has a unique solution: Ax = b is uniquely solvable if and only if |A| ≠ 0 • It also describes the change in area/volume/etc. due to the linear operation: volumes are scaled by the magnitude of |A|
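A quick numerical illustration of both determinant properties (NumPy assumed, as above):

```python
import numpy as np

# |A| = 6: the map stretches the unit square to area 6,
# and Ax = b has a unique solution
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
detA = np.linalg.det(A)

# Singular case: second row is 2x the first, so |A| = 0
# and no unique solution exists
singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])
det_singular = np.linalg.det(singular)
```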
Linear Independence • A set of vectors {v_i}, i = 1, …, n, is linearly independent if none of them can be expressed as a linear combination of the others • In other words, no scalars α_i exist such that, for some v_j in the set, v_j = Σ_{i≠j} α_i v_i
Matrix Rank • The matrix column rank is the number of linearly independent columns of a matrix • The matrix row rank is the number of linearly independent rows of a matrix • The column rank always equals the row rank; rank(A) is this common value (at most the smaller matrix dimension)
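To make the rank definition concrete, here is a small NumPy check (NumPy is an assumption; any linear-algebra tool works) on a matrix with one dependent column:

```python
import numpy as np

# Columns 1 and 2 are independent; column 3 = column 1 + column 2,
# so only two columns are linearly independent and rank(A) = 2
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
rank = np.linalg.matrix_rank(A)
```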
Exercise • What is the rank of the following matrices:
Vector Differentiation • When differentiating a scalar function w.r.t. an n×1 vector, the result is the gradient, whose i-th element is ∂f/∂x_i
Vector Differentiation • When differentiating a function with an m×1 vector output w.r.t. an n×1 vector, the result is the m×n Jacobian matrix whose (i, j) element is ∂F_i/∂x_j
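The Jacobian definition above can be approximated numerically, which is also a useful sanity check on hand derivations. A sketch using forward finite differences (the function `jacobian_fd` and the example F are illustrative, not from the lecture):

```python
import numpy as np

def jacobian_fd(F, x, eps=1e-6):
    """Forward-difference Jacobian: J[i, j] ~ dF_i / dx_j."""
    x = np.asarray(x, dtype=float)
    F0 = np.asarray(F(x))
    J = np.zeros((F0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(F(xp)) - F0) / eps
    return J

# Example: F(x) = [x0^2, x0*x1] has exact Jacobian [[2*x0, 0], [x1, x0]]
F = lambda x: np.array([x[0]**2, x[0] * x[1]])
J = jacobian_fd(F, np.array([1.0, 2.0]))  # ~[[2, 0], [2, 1]]
```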
Matrix Derivative Identities • If A and B are n×1 vectors that are functions of X, the product rule gives ∂(A^T B)/∂X = A^T (∂B/∂X) + B^T (∂A/∂X)
Positive Definite Matrices • The n×n matrix A is positive definite if and only if x^T A x > 0 for all x ≠ 0 • The n×n matrix A is positive semi-definite if and only if x^T A x ≥ 0 for all x
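For symmetric matrices, positive definiteness is equivalent to all eigenvalues being positive, which gives a practical test. A sketch (the helper name is mine, NumPy assumed):

```python
import numpy as np

def is_positive_definite(A):
    """For symmetric A: positive definite iff every eigenvalue is > 0,
    equivalently x^T A x > 0 for every x != 0."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

pd_matrix = np.array([[2.0, 0.0],
                      [0.0, 1.0]])   # eigenvalues 2, 1 -> definite
psd_matrix = np.array([[1.0, 0.0],
                       [0.0, 0.0]])  # eigenvalue 0 -> only semi-definite
```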
Minimum of a function • The point x is a minimum if ∂f/∂x = 0 at x and the Hessian ∂²f/∂x² is positive definite.
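The minimum conditions can be worked through for a quadratic, which is the case that matters for least squares later in the lecture. A sketch (the specific A and b are illustrative):

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x has gradient A x - b and Hessian A.
# Setting the gradient to zero gives the candidate x* = A^{-1} b;
# it is a minimum because the Hessian A is positive definite.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

x_star = np.linalg.solve(A, b)
grad_at_min = A @ x_star - b          # ~0 at the candidate point
hess_eigs = np.linalg.eigvalsh(A)     # all > 0 -> positive definite
```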
Eigenvalues/vectors • Given the n×n matrix A, there are n eigenvalues λ and eigenvectors X ≠ 0 satisfying AX = λX
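A short numerical check of the defining relation AX = λX (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric; eigenvalues are 1 and 3

# Each column X of eigvecs satisfies A @ X = lam * X
eigvals, eigvecs = np.linalg.eig(A)
X0 = eigvecs[:, 0]
residual = A @ X0 - eigvals[0] * X0   # should be ~0
```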
Book Appendix B • Other identities/definitions in Appendix B of the book • Matrix Trace • Maximum/Minimum Properties • Matrix Inversion Theorems • Review the appendix and make sure you understand the material
Estimated State Vector • We want to get the best estimate of X possible • Ex. force model parameters: CD, CR, J2, etc. • Ex. measurement params: station coordinates, observation biases, etc.
Orbit State Vector Dynamics • “Solve-for” parameters are usually constant (but not always…) • More generally, the state evolves according to a system of differential equations, Ẋ = F(X, t)
Observation Vector • Example measurement types: • Range, Range-Rate • Right Ascension/Declination • GPS pseudorange and carrier phase • Star tracker and angular rate gyro • At each epoch t_i we have a measurement model G(X_i, t_i) • ε_i represents the model error in G(X_i, t_i) • May result from statistical uncertainty • Could be a result of modeling error • What are some examples of modeling error?
General Estimation Problem • How do we estimate X ? • How do we estimate the errors εi? • How do we account for force and observation model errors?
Why not use the NR estimator? • It works for HW 1, why don’t we do it in practice? • Assumed the same number of observations as unknowns • What about when we have more observations than unknowns? • Did not rigorously account for observation errors • How do we account for statistical uncertainties?
# Unknowns vs. # Measurements • Known: p×l observations • Unknowns: • n×l unknown state variables • p×l unknown observation errors • (n+p)×l total unknown values • We have more unknowns than observations, what do we do now? • X(t) is a function of X(t0) • Well, now we are down to n+(p×l) unknowns…
Cost Function J(X) • We introduce a “cost function” that we seek to minimize, e.g., the sum of squared observation errors, J(X) = Σ_i ε_i^T ε_i • We now select X to minimize J(X) • No longer estimating the ε_i! • This gives us n + p×l equations and only n unknowns • This is known as: Least Squares Estimation
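The least squares idea on this slide, picking X to minimize the squared observation errors when there are more observations than unknowns, can be sketched for the linear case (the matrices below are illustrative; NumPy is an assumption, not part of the course materials):

```python
import numpy as np

# Overdetermined linear problem: y = H x + eps, with 4 observations
# and only 2 unknowns. Least squares selects the x that minimizes
# J(x) = ||y - H x||^2 rather than estimating each eps_i.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, 2.0]])
x_true = np.array([2.0, -1.0])
y = H @ x_true  # noise-free here, so the estimate recovers x_true

x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
```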