Empirical Orthogonal Functions


Presentation Transcript


  1. Empirical Orthogonal Functions Andy Jacobson and Brad Holcombe July 2006

  2. Variance and Covariance • Notes: • E() is expectation (mean) • M is the number of observations • N is the number of stations (locations with data) • All vectors are arranged in columns unless transposed. • If computing variance/covariance “by hand”, you must remove the mean of the data at each station. • The degrees of freedom are reduced from M to M − 1 because the mean is estimated from the data. • D is the “data matrix”, with M rows (observations) and N columns (stations). • The spatial dimensions of your input data must be “unwrapped”: 2-D grids must be laid out as a 1-D row in D. • C is the covariance matrix: with the station means removed, C = DᵀD / (M − 1), which is N x N.
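A minimal numpy sketch of this computation (the variable names and the random stand-in data are illustrative choices, not from the slides):

import numpy as np

rng = np.random.default_rng(0)
M, N = 100, 5                      # M observations, N stations
D = rng.standard_normal((M, N))    # stand-in data matrix

D = D - D.mean(axis=0)             # remove the mean at each station
C = D.T @ D / (M - 1)              # N x N covariance matrix

# np.cov on the transposed matrix (variables in rows) gives the same result:
assert np.allclose(C, np.cov(D.T))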

  3. Eigenvalue Decomposition • Two covariate time series, x and y. • Generated from two uncorrelated random number sequences by multiplying by a matrix square root (e.g., the Cholesky factor) of a specified covariance matrix.
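A sketch of that construction, assuming a particular 2 x 2 covariance matrix (the 0.8 covariance and the variable names are illustrative, not values from the slides):

import numpy as np

rng = np.random.default_rng(1)
M = 500
C_target = np.array([[1.0, 0.8],
                     [0.8, 1.0]])  # assumed target covariance of (x, y)
L = np.linalg.cholesky(C_target)   # lower-triangular factor: C_target = L @ L.T

z = rng.standard_normal((2, M))    # two uncorrelated unit-variance sequences
x, y = L @ z                       # covariate series with covariance ~ C_target
print(np.cov(x, y))                # sample covariance, close to C_target

Any matrix square root of the target covariance works here; the Cholesky factor is simply the cheapest to compute.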

  4.–11. Eigenvalue Decomposition • Notes: • C is the covariance matrix • E is the matrix of eigenvectors • Λ is the diagonal matrix of eigenvalues • Because C is symmetric, it decomposes as C = E Λ Eᵀ, with orthonormal eigenvectors in the columns of E. (Slides 4–11 develop the decomposition graphically; the figures are not preserved in the transcript.)
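A sketch of the decomposition, reusing the stand-in data matrix from the covariance sketch above; numpy's eigh is the appropriate routine because C is symmetric:

import numpy as np

rng = np.random.default_rng(0)
M, N = 100, 5
D = rng.standard_normal((M, N))    # stand-in data, as before
D = D - D.mean(axis=0)
C = D.T @ D / (M - 1)

lam, E = np.linalg.eigh(C)         # eigenvalues (ascending) and eigenvectors
Lam = np.diag(lam)                 # the diagonal matrix of eigenvalues

assert np.allclose(C, E @ Lam @ E.T)       # C = E Lam E^T
assert np.allclose(E.T @ E, np.eye(N))     # columns of E are orthonormal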

  12. EOFs • Notes: • EOF terminology is not well defined; conflicting definitions are common in the literature. • Projecting the data onto each eigenvector gives a time series of “scores”: A = D E. • A is the matrix of score time series and has dimensions M x N. • “Principal components” often refers to the scores. • “PCA”, however, is sometimes taken to mean an eigendecomposition of the correlation matrix rather than the covariance matrix. • The observations from any given time can be recovered as a weighted sum of the eigenvectors, using that time’s row of scores: D = A Eᵀ.
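A sketch of the scores and the reconstruction, continuing the previous sketches (A is the slides' notation; k and D_approx are my own names):

import numpy as np

rng = np.random.default_rng(0)
M, N = 100, 5
D = rng.standard_normal((M, N))
D = D - D.mean(axis=0)
lam, E = np.linalg.eigh(D.T @ D / (M - 1))

A = D @ E                         # M x N matrix of score time series
assert np.allclose(D, A @ E.T)    # each row of D is a weighted sum of eigenvectors

# Truncated reconstruction keeps only the k leading EOFs (eigh sorts the
# eigenvalues in ascending order, so the leading ones are the last columns):
k = 2
D_approx = A[:, -k:] @ E[:, -k:].T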

  13. EOF Examples • Retrieving the magnitude and time series of two static patterns • Effects of noise, missing data, and few data • Interpretation • Propagating features • NAO example • Exercise: ENSO
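The transcript does not preserve the example figures, so here is a minimal synthetic stand-in for the first item: two assumed static spatial patterns, each modulated by its own random time series, recovered as the leading eigenvectors:

import numpy as np

rng = np.random.default_rng(2)
M, N = 400, 6
p1 = np.ones(N)                              # assumed pattern 1: uniform
p2 = np.array([1., 1., 1., -1., -1., -1.])   # assumed pattern 2: dipole
t1 = 3.0 * rng.standard_normal(M)            # strong amplitude time series
t2 = 1.0 * rng.standard_normal(M)            # weaker amplitude time series

D = np.outer(t1, p1) + np.outer(t2, p2) + 0.1 * rng.standard_normal((M, N))
D = D - D.mean(axis=0)

lam, E = np.linalg.eigh(D.T @ D / (M - 1))
print(E[:, -1])   # ~ p1, up to sign and normalization
print(E[:, -2])   # ~ p2, up to sign and normalization
print(np.corrcoef(D @ E[:, -1], t1)[0, 1])   # ~ +/-1: leading scores track t1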
