A description of the tangent linear normal mode constraint (TLNMC) in GSI
David Parrish
First, I want to thank the Central Weather Bureau and the National Central University for inviting us. It has been especially wonderful to have our jet lag cured by the extra time you allowed us. I speak for Daryl and myself in thanking Wan Shu and her sister Jean for their hospitality and graciousness as they gave us a most excellent tour of southern Taiwan and provided the medicine for a speedy cure of the jet lag. Second, I want to thank Daryl for providing a copy of his PhD qualifying exam presentation, which I have made liberal use of. I invite him (and others) to correct me when I make incorrect statements. CWB
Improving global variational data assimilation at NCEP
AOSC PhD Qualifying Exam Seminar, 18 January 2011
Daryl T. Kleist
Adviser: Dr. Kayo Ide
Acknowledgements: Dr. John Derber and Dr. David Parrish
In this talk, my goal is for everyone here to come away with some understanding of the following expression, the abstract representation of the TLNMC (Tangent Linear Normal Mode Constraint), why it is useful in the GSI system, and also its limitations. CWB
Tangent Linear Normal Mode Constraint
• Analysis state vector after incremental NMI
• C = correction from incremental normal mode initialization (NMI)
• Represents correction to analysis increment that filters out the unwanted projection onto fast modes
• No change necessary for B in this formulation
• Based on:
  • Temperton, C., 1989: “Implicit Normal Mode Initialization for Spectral Models”, MWR, vol 117, 436-451.
• Similar idea developed and pursued independently by Fillion et al. (2007)
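The equation this slide annotates did not survive extraction, so here is a minimal LaTeX sketch of the formulation the bullets imply (my reconstruction; the exact symbols on the original slide may differ). The constrained analysis uses the corrected increment,

\[ x_a = x_b + C x' , \]

and the observation term of the cost function is evaluated with that corrected increment,

\[ J(x') = \tfrac{1}{2}\, x'^T B^{-1} x' + \tfrac{1}{2}\, (H C x' - y_o')^T R^{-1} (H C x' - y_o') + J_c , \]

which leaves B itself unchanged, while C B C^T acts as the effective background error covariance for the balanced increment, as noted later in the talk.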
Before getting to the previous expression, I will try to get us to the basic cost function of variational data assimilation: CWB
Variational Data Assimilation
J : Penalty (fit to background + fit to observations + constraints)
x' : Analysis increment (xa - xb), where xb is a background/first guess
B : Background error covariance
H : Observations (forward) operator
R : Observation error covariance (instrument + representativeness)
yo' : Observation innovations
Jc : Constraints (physical quantities, balance/noise, etc.)
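For reference, the incremental cost function that these symbols define (the equation itself appears only as a figure in the original slides; this LaTeX rendering is consistent with the gradient written out later in the talk) is

\[ J(x') = \tfrac{1}{2}\, x'^T B^{-1} x' + \tfrac{1}{2}\, (H x' - y_o')^T R^{-1} (H x' - y_o') + J_c . \]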
For a long time now, a cost function like this has been the first and main thing shown in data assimilation papers and lectures. But I think many people, especially those outside the data assimilation community, are immediately unable to follow what we are talking about, partly because the notation is abstract and difficult to understand. CWB
To give some meaning to this, I like to use zero-dimensional examples. The example I will use here comes from our adventures last weekend. It was quite warm in the south of Taiwan. Speaking for myself, being from the U.S., my mental reference for experiencing the air temperature is the Fahrenheit scale, where feeling hot for me right now means the temperature is above 86 F. But for most of the world, including here, feeling hot means the temperature is above (5/9)*(86 - 32) = 30 C. CWB
Yesterday, we were warm and sweaty as we left the beautiful Kenting National Park. When we got off the train in Taipei, it felt like winter had arrived. But my internal observation of the temperature, 59 F, didn't match the sign on a tower that read, in bold red, 20 C. So I had to do a mental calculation, which isn't always accurate. CWB
So let us call my perceived temperature estimate when arriving in Taipei x_b = 59 F (the bold type here and in what follows is for vectors and matrices, which here are all of dimension 1, in a space/time of dimension 0). It is not that cold, but my perception is not very accurate, with an estimated standard deviation error of, say, sqrt(B) = 5 F and no bias (I say I am unbiased for simplicity, but that is certainly not correct, and this is true for NWP models also!). CWB
The observed temperature in Taipei is y_o = 20 C and the measurement error (also unbiased for simplicity) is sqrt(R) = 1 C CWB
The simulated observation based on my perception x_b is y_b = H(x_b) = (5/9) (x_b - 32). H(x) is called the full forward operator; it derives a simulated observation from the model (in this case, my perception of temperature) and in general is a nonlinear function of x. CWB
The analysis increment is x' = x_a - x_b, where x_a is the corrected estimate of my perception x_b based on the Taipei observation y_o. The observation innovation is y_o' = y_o - H(x_b). The "tangent linear" of H(x_b) is H = (5/9). CWB
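Plugging in the numbers of this example (a small worked step, not on the original slides):

\[ y_b = H(x_b) = \tfrac{5}{9}(59 - 32) = 15\ \mathrm{C}, \qquad y_o' = y_o - H(x_b) = 20 - 15 = 5\ \mathrm{C} . \]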
So now we have identified all of the components of the basic cost function (except J_c which will be discussed briefly later): CWB
Variational Data Assimilation
J : Penalty (fit to background + fit to observations + constraints)
x' : Analysis increment (xa - xb), where xb is a background/first guess (my estimate of T in deg F, and xa is the improvement on my perception given the Taipei observation yo)
B : Background error covariance (here the error variance, the square of the standard deviation error of my perception = 5^2 = 25 F^2)
H : Observations (forward) operator (tangent linear = 5/9 in this case)
R : Observation error covariance (instrument + representativeness) (here just my guess of 1 C^2 for the tower display)
yo' : Observation innovations (yo - H(xb))
Jc : Constraints (physical quantities, balance/noise, etc.)
Because all vectors and matrices are now just one element, we can trivially solve for the analysis increment. We first take the derivative of J with respect to x' and set it to zero:

0 = B^-1 x' - H^T R^-1 (y_o' - H x')

or

x_a = x_b + (B^-1 + H^T R^-1 H)^-1 H^T R^-1 (y_o - H(x_b))

CWB
Because in this example all quantities are really just scalars, we can rearrange however we want and drop the transposes. Of course, for the real cases we work with, only certain orderings are valid. One that I believe is the basis for the EnKF is

x_a = x_b + B H^T (H B H^T + R)^-1 (y_o - H(x_b))

CWB
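To make the zero-dimensional example fully concrete, here is a small Python sketch (my illustration, not part of the original talk) that evaluates both forms of the analysis equation with the numbers above; they agree, giving roughly 67 F (about 19.4 C), pulled close to the observation because R is much smaller than B.

# Zero-dimensional temperature example from the talk
x_b = 59.0          # background (my perception), F
B = 5.0 ** 2        # background error variance, F^2
y_o = 20.0          # observation (the tower display), C
R = 1.0 ** 2        # observation error variance, C^2

def H_full(x):
    """Nonlinear forward operator: Fahrenheit -> Celsius."""
    return (5.0 / 9.0) * (x - 32.0)

H = 5.0 / 9.0            # tangent linear of H_full
d = y_o - H_full(x_b)    # innovation y_o' = 5 C

# Form derived above: x_a = x_b + (B^-1 + H^T R^-1 H)^-1 H^T R^-1 (y_o - H(x_b))
x_a_1 = x_b + (1.0 / (1.0 / B + H * H / R)) * (H / R) * d

# Rearranged ("EnKF-style") form: x_a = x_b + B H^T (H B H^T + R)^-1 (y_o - H(x_b))
x_a_2 = x_b + B * H * d / (H * B * H + R)

print(x_a_1, x_a_2)      # both ~66.97 F, i.e. about 19.4 C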
“Strong” Constraint Procedure
Cx' = [I - DFT] x'
• Practical considerations:
• C operates on x' only, and is the tangent linear of the NNMI operator
• Only one iteration is needed in practice for good results
• The adjoint of each procedure is needed as part of the minimization/variational procedure
T (n x n): dry, adiabatic tendency model
F (m x n): projection onto m gravity modes (m 2-D shallow water problems)
D (n x m): correction matrix to reduce gravity mode tendencies
(Spherical harmonics used for period cutoff)
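As a structural illustration of the correction and the adjoint mentioned above, here is a toy Python sketch (my own; T, F, and D are random stand-in matrices, not the actual GSI tendency model, gravity-mode projection, or correction operator):

import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 4                       # state size and number of gravity modes (arbitrary toy values)
T = rng.standard_normal((n, n))    # stand-in for the dry, adiabatic tendency model (n x n)
F = rng.standard_normal((m, n))    # stand-in for the projection onto m gravity modes (m x n)
D = rng.standard_normal((n, m))    # stand-in for the correction matrix (n x m)

def apply_C(xp):
    """Single-iteration TLNMC-style correction: x' - D F T x'."""
    return xp - D @ (F @ (T @ xp))

def apply_C_adjoint(xp):
    """Adjoint of the correction, needed inside the variational minimization."""
    return xp - T.T @ (F.T @ (D.T @ xp))

# Adjoint check: <C x, y> should equal <x, C^T y>
x = rng.standard_normal(n)
y = rng.standard_normal(n)
print(np.allclose(apply_C(x) @ y, x @ apply_C_adjoint(y)))   # True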
Tangent Linear Normal Mode Constraint
• Performs correction to increment to reduce gravity mode tendencies
• Applied during minimization to the increment, not as post-processing of analysis fields
• Little impact on speed of minimization algorithm
• CBC^T becomes the effective background error covariance for the balanced increment
• Not necessary to change variable definition/B (unless desired)
• Adds implicit flow dependence
• Requires time tendencies of increment
• Implemented dry, adiabatic, generalized coordinate tendency model (TL and AD)
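A one-line justification for the CBC^T bullet (my gloss, in LaTeX; the reasoning is implied by the slide rather than written on it): if the balanced analysis increment is the corrected control increment, then

\[ x_a' = C x', \qquad \mathrm{cov}(x_a') = C\,\mathrm{cov}(x')\,C^T = C B C^T , \]

so B itself can stay unchanged while the balanced increment effectively carries the covariance CBC^T.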
Vertical Modes
• Gravity wave phase speeds:
  • 1 (313.89 m s^-1)
  • 2 (232.91 m s^-1)
  • 3 (165.45 m s^-1)
  • 4 (120.07 m s^-1)
  • 5 (91.19 m s^-1)
• Global mean temperature and pressure for each level used as reference
• First 8 vertical modes are used in deriving the incremental correction in the global implementation
Single observation test (T observation)
• Magnitude of TLNMC correction is small
• TLNMC adds flow dependence even when using the same isotropic B
[Figure, panels labeled "Isotropic response" and "Flow dependence added": 500 hPa temperature increment (right) and analysis difference (left, along with background geopotential height), valid at 12Z 09 October 2007, for a single 500 hPa temperature observation (1 K O-F and observation error)]
Single observation test (T observation)
[Figure, panels labeled "U wind" and "Ageostrophic U wind", with annotations "From multivariate B" and "Smaller ageostrophic component TLNMC corrects": cross section of zonal wind increment (and analysis difference) valid at 12Z 09 October 2007 for a single 500 hPa temperature observation (1 K O-F and observation error)]
Analysis Difference and Background
500 hPa zonal wind analysis difference (TLNMC - No Constraint; left) after assimilating all observations, and zonal wind background (right), valid at 12Z 09 October 2007
Little Impact on Minimization
Norm of gradient (left) and total penalty (right) at each iteration for the analysis at 12Z 09 October 2007 [jump at iteration 100 is from the outer loop update]; No Constraint (orange) versus TLNMC (green)
Surface Pressure Tendency Revisited
Minimal increase with TLNMC
Zonal-average surface pressure tendency for the background (green), unconstrained GSI analysis (red), and GSI analysis with TLNMC (purple)
Vertical Velocity Difference
Zonal mean difference of the RMS (TLNMC - No Constraint; Pa s^-1) of the derived vertical velocity increment for the analysis valid at 12Z 09 October 2007. Negative values are shaded blue and positive values red.
“Balance”/Noise Diagnostic
• Compute RMS sum of incremental tendencies in spectral space (for vertical modes kept in TLNMC) for the final analysis increment
• Unfiltered: S_uf (all) and S_uf_g (projected onto gravity modes)
• Filtered: S_f (all) and S_f_g (projected onto gravity modes)
• Normalized ratios:
  R_f = S_f_g / (S_f - S_f_g)
  R_uf = S_uf_g / (S_uf - S_uf_g)
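A trivial Python sketch of the normalized ratio defined above (function and variable names are mine; the tendency sums themselves would come from the spectral diagnostics described in the bullets):

def noise_ratio(s_total, s_gravity):
    """Normalized balance/noise ratio: gravity-mode tendency RMS sum
    divided by the remaining (non-gravity) tendency RMS sum."""
    return s_gravity / (s_total - s_gravity)

# made-up example numbers for the unfiltered and TLNMC-filtered increments
r_uf = noise_ratio(s_total=4.0, s_gravity=3.0)   # -> 3.0
r_f  = noise_ratio(s_total=2.0, s_gravity=0.5)   # -> ~0.33
print(r_uf, r_f)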
Impact of TLNMC on 500 hPa AC Scores
No Constraint (control, black) versus TLNMC (red): 500 hPa geopotential height anomaly correlation (AC) scores for the period 01 Dec. 2006 to 14 Jan. 2007
Precipitation Verification
Precipitation equitable threat and bias scores for the period 01 Dec. 2006 to 12 Jan. 2007 (No Constraint: black; TLNMC: red)
Tropical Wind Vector RMS Error
No Constraint (control, black) versus TLNMC (red): 200 hPa and 850 hPa tropical vector wind RMS error for the period 01 Dec. 2006 to 14 Jan. 2007
TLNMC Summary
• A scale-selective dynamic constraint has been developed based upon the ideas of NNMI
• Successful implementation of TLNMC into the global version of GSI at NCEP and GMAO
• Incremental: does not force the analysis (much) away from the observations compared to an unconstrained analysis
• Improved analyses and subsequent forecast skill, particularly in extratropical mass fields
• Work is ongoing to apply TLNMC to regional applications and domains (Dave Parrish, NCEP)
  • Initial attempts based upon Briere, S., 1982: “Nonlinear Normal Mode Initialization of a Limited Area Model”, MWR, vol 110, 1166-1186.
  • Adequate for small domains: success with assimilation of radar radial velocities
  • Apparent issues with larger domains: variation of map factor/Coriolis