IS WRF REALLY IMPROVING? A COMPREHENSIVE VERIFICATION OVER THE PACIFIC NORTHWEST Cliff Mass and David Ovens University of Washington
A lot of effort has been expended… • We have all worked hard over the past ten years transitioning from MM5 to WRF. • In addition, a great deal of effort has gone into improving physics parameterizations and numerics, and into adding new modeling options.
But … • Does WRF, with all its improvements, verify better than MM5 for key case studies and over extended verification periods? • Do we even have the tools and capabilities to monitor the evolving quality of our modeling systems? • Is it possible that some of the “improvements” have actually detracted from modeling system skill when used with other components?
In general, we don’t have satisfactory answers to these questions. • Neither NCEP nor the DTC nor any other national entity appears to have such information. • We need the mechanisms and capabilities in place to evaluate and guide our model development.
What have the results been over the Pacific Northwest, where much of this information is available?
Northwest U.S. MM5 and WRF • Real-time since 1995 • Now running: • MM5 (36-12 km) nested in the NWS NAM • WRF ARW 3.0 (36-12-4 km) nested in the NWS GFS • WRF currently uses Thompson microphysics, the YSU PBL, NOAH LSM, RRTM LW, Dudhia SW, and K-F cumulus. • MM5 uses the MRF PBL and K-F cumulus. • Extensive multi-year verification against quality-controlled observations. • Have run extensive tests of WRF V3.1, MM5 driven by the GFS, and a collection of varying physics options, including runs with and without the LSM.
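For readers who want to reproduce this kind of surface verification, here is a minimal Python sketch of the bias and MAE computation behind the statistics shown later. Station matching and quality control are simplified away, and the numbers are placeholders, not our scores.

```python
import numpy as np

def bias_and_mae(forecast, observed):
    """Mean bias and mean absolute error for matched forecast/observation pairs."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    valid = ~np.isnan(forecast) & ~np.isnan(observed)   # drop missing pairs
    err = forecast[valid] - observed[valid]
    return err.mean(), np.abs(err).mean()

# Placeholder 2-m temperatures (deg C); not actual verification data
bias, mae = bias_and_mae([3.1, -0.5, 7.2, 4.8], [2.4, 0.1, 6.5, 5.3])
print(f"bias = {bias:+.2f} C, MAE = {mae:.2f} C")
```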
The Analysis • Based on this extensive series of runs, let us try to answer (for the NW) the following questions: • What have we gained by moving to WRF? • What have we lost? • What advantages can one realize from V3.1? • Is the NOAH LSM a plus or minus for the key parameters? • Are we making progress?
0000 UTC (5 PM) MAE, Jan-Feb 2009, with LSM
1200 UTC (5 AM) MAE, Jan-Feb 2009, with LSM
What do verification scores tell us about MM5 and WRF? • The LSM greatly improves the dewpoint temperature forecast, so WRF with the LSM is much better for dewpoint than MM5 without it. • For temperature, the LSM helps in the afternoon but hurts in the morning. • WRF is better than MM5 for wind direction. • For precipitation, MM5 is better in summer and WRF in winter. • There is very little difference in wind speed.
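A hedged sketch of how such MM5-versus-WRF statistics can be stratified by season and verification hour; the file and column names are hypothetical, not our actual archive.

```python
import pandas as pd

# Hypothetical matched-pair file: one row per station/time with columns
# model ('MM5' or 'WRF'), valid (UTC timestamp), t2_fcst, t2_obs (deg C)
pairs = pd.read_csv("matched_pairs.csv", parse_dates=["valid"])
pairs["abs_err"] = (pairs["t2_fcst"] - pairs["t2_obs"]).abs()
pairs["hour"] = pairs["valid"].dt.hour
pairs["season"] = pairs["valid"].dt.month.map(
    lambda m: "winter" if m in (12, 1, 2) else "summer" if m in (6, 7, 8) else "other"
)

# MAE stratified by model, season, and verification hour
mae = pairs.groupby(["model", "season", "hour"])["abs_err"].mean().unstack("model")
print(mae.round(2))
```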
Scatter Diagrams Can Reveal the Subtleties of Model Performance
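As a minimal illustration (with synthetic placeholder data, not our verification set), a forecast-versus-observed scatter diagram of 2-m temperature with the 1:1 line for reference; departures from that line reveal the conditional biases that summary scores can hide.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
obs = rng.uniform(-5, 15, 300)                  # placeholder observed 2-m T (deg C)
fcst = obs + rng.normal(1.0, 2.0, obs.size)     # placeholder forecast with a warm bias

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(obs, fcst, s=8, alpha=0.5)
lims = [min(obs.min(), fcst.min()), max(obs.max(), fcst.max())]
ax.plot(lims, lims, "k--", lw=1)                # 1:1 line: perfect forecasts fall here
ax.set_xlabel("Observed 2-m temperature (C)")
ax.set_ylabel("Forecast 2-m temperature (C)")
ax.set_title("Forecast vs. observed 2-m temperature")
plt.show()
```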
A National Effort for WRF Verification is Required to Guide Our Work • We have pieces of the puzzle: • The Developmental Testbed Center (DTC) is a natural center for such activities. • Powerful verification capabilities have been developed (Model Evaluation Tools, MET). • We need a long-term baseline of model performance for the “best” or most promising combinations of model physics options. • The DTC should take on this key responsibility as an “honest” and unbiased evaluator of model performance.
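One possible way to keep such a long-term baseline, sketched here only as an illustration (the archive layout, column names, and numbers are assumptions, and this is not a substitute for MET itself): append each verification period's scores to a running archive so they can be compared across model versions.

```python
import pandas as pd
from pathlib import Path

ARCHIVE = Path("baseline_scores.csv")   # hypothetical running archive

def append_scores(period, model, version, variable, mae):
    """Append one verification period's MAE to the long-term baseline archive."""
    row = pd.DataFrame([{"period": period, "model": model, "version": version,
                         "variable": variable, "mae": mae}])
    row.to_csv(ARCHIVE, mode="a", header=not ARCHIVE.exists(), index=False)

append_scores("2009-01/02", "WRF-ARW", "3.0", "T2", 2.1)   # illustrative numbers only

baseline = pd.read_csv(ARCHIVE)
print(baseline.groupby(["model", "version", "variable"])["mae"].mean())
```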
What do verification scores tell us about WRF? • The LSM greatly improves the dewpoint temperature forecast. • For temperature, the LSM helps in the afternoon but hurts in the morning. • The CAM and new RRTMG radiation schemes have very similar verification scores.
PBL tests with WRF 3.1 • Three new PBL schemes (MYNN, QNSE, Pleim-Xiu) have been tested for a stable case, 17-18 Jan 2009. • Our current scheme: YSU = Yonsei University non-local-K scheme with an explicit entrainment layer and a parabolic K profile. • MYNN = Mellor-Yamada Nakanishi and Niino Level 2.5 PBL; predicts sub-grid TKE terms. • QNSE = Quasi-Normal Scale Elimination PBL, a TKE-prediction option that uses a new theory for stably stratified regions. • Pleim-Xiu = Asymmetric Convective Model with non-local upward mixing and local downward mixing. • BouLac = Bougeault-Lacarrere PBL (new in 3.1, not yet tested), designed for use with the BEP urban model.
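For reference, the namelist codes these schemes select in WRF-ARW; the bl_pbl_physics values below are believed to match the V3.1 Registry but should be verified against your own WRF installation before use.

```python
# Hedged lookup of WRF-ARW bl_pbl_physics namelist codes for the schemes above
# (believed correct for V3.1; verify against the WRF Registry before use).
PBL_OPTIONS = {
    "YSU":     1,   # Yonsei University non-local-K scheme
    "QNSE":    4,   # Quasi-Normal Scale Elimination, TKE prediction
    "MYNN2.5": 5,   # Mellor-Yamada Nakanishi-Niino level 2.5, predicts TKE
    "ACM2":    7,   # Pleim's Asymmetric Convective Model (Pleim-Xiu option above)
    "BouLac":  8,   # Bougeault-Lacarrere, designed for the BEP urban model
}

def namelist_line(scheme):
    """Return the &physics namelist line selecting the given PBL scheme."""
    return f" bl_pbl_physics = {PBL_OPTIONS[scheme]},"

print(namelist_line("MYNN2.5"))
```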
PBL tests with WRF 3.1 • The LSM is a bigger contributor to temperature errors than the PBL schemes. • No obvious improvement has been noted. • Further tests and analysis are needed.
Cases without the LSM compared to the LSM case with the best 2-m temperature score.
2-m temperatures, LSM vs. no LSM
Some Conclusions • The LSM greatly improves dewpoint temperature forecasts. • The LSM improves maximum temperature forecasts but degrades minimum temperature forecasts.
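A short sketch (hypothetical file and column names) of how maximum/minimum temperature errors like these can be derived from hourly matched forecast-observation pairs.

```python
import pandas as pd

# Hypothetical hourly matched pairs: columns station, valid (local time), t2_fcst, t2_obs
pairs = pd.read_csv("matched_pairs.csv", parse_dates=["valid"])
pairs["date"] = pairs["valid"].dt.date

# Daily forecast and observed max/min at each station, and the resulting errors
daily = pairs.groupby(["station", "date"])[["t2_fcst", "t2_obs"]].agg(["max", "min"])
tmax_err = daily[("t2_fcst", "max")] - daily[("t2_obs", "max")]
tmin_err = daily[("t2_fcst", "min")] - daily[("t2_obs", "min")]

print(f"Tmax MAE: {tmax_err.abs().mean():.2f} C")
print(f"Tmin MAE: {tmin_err.abs().mean():.2f} C")
```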