
Northwest Regional Modeling Consortium June 2017

This presentation focused on improving wind prediction accuracy in the consortium's WRF runs, with the goal of reducing a persistent high-speed bias at low wind speeds. It covered methods to improve the subgrid-scale terrain variance and adjust the friction velocity in the PBL scheme, and it gave a recommendation based on experiment statistics for the 4-km and 4/3-km domain configurations.





Presentation Transcript


  1. Northwest Regional Modeling Consortium June 2017

  2. Major Improvement on 2/2/17 • Beginning with the 2017020300 run, switched to diff_opt=2 (3D horizontal diffusion) and turned off 6th-order diffusion (diff_6th_opt=0). • Reduced the problem of vertical mixing near topographic barriers. • The model is much less diffusive now, allowing colder air to remain in valleys and gaps. • Better maintains stagnant air in valleys. • Better wind and snow prediction in the Gorge and Portland.

  3. No Negative Impacts on Skill or Reliability

  4. Dealing with the high-speed bias at low wind speeds • Did not go away with reduced diffusion • No PBL scheme fixes it • Thus, we have returned to adding a surface drag parameterization that accounts for subgrid-scale drag

  5. Lowering the High Wind Speed Bias for Low Wind Speeds • Hypothesis: subgrid-scale terrain roughness is not sufficiently represented, so add it. • We did this before in WRF versions 3.1.1 to 3.4 • Modified the friction velocity, u*, in the PBL scheme • Modified u* only for the momentum flux, not for scalar fluxes • Revisit this approach with WRF 3.7.1 and in light of our changes to diffusion • Test multiple methods.
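To make the idea concrete, here is a minimal sketch (Python, purely illustrative and not the WRF/YSU source) of where such a multiplier would enter bulk surface-layer fluxes: the friction velocity used for the momentum flux is enhanced, while the scalar (heat and moisture) fluxes keep the unmodified u*. The function and variable names and the bulk-flux forms are assumptions made for illustration.

```python
# Illustrative sketch only (not the WRF/YSU source): a drag-enhancement
# factor is applied to the friction velocity used for momentum, while the
# scalar (heat, moisture) fluxes keep the unmodified u*.

def surface_fluxes(ustar, tstar, qstar, rho, cp, vfac):
    """Bulk surface fluxes with a momentum-only u* enhancement.

    ustar        : friction velocity from the surface-layer scheme (m/s)
    tstar, qstar : temperature and moisture scales (K, kg/kg)
    rho, cp      : air density (kg/m^3) and heat capacity (J/kg/K)
    vfac         : subgrid-terrain drag factor (>= 1), momentum only
    """
    ustar_m = vfac * ustar              # enhanced u* for momentum
    tau = rho * ustar_m ** 2            # surface stress uses the enhanced u*
    shf = -rho * cp * ustar * tstar     # sensible heat flux: unmodified u*
    qfx = -rho * ustar * qstar          # moisture flux: unmodified u*
    return tau, shf, qfx
```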

  6. Lowering the High Wind Speed Bias for Low Wind Speeds • Add a multiplication factor on u*, called VFAC • VFAC varies with the subgrid-scale terrain variance • VFAC is set to 1 when: • Model wind speed exceeds 18 knots • Model stability drops below 0.5 K/km in the lowest 500 m (change in virtual potential temperature from the bottom level to the level closest to 500 m) • The idea: if we are well mixed, vertical momentum transport is expected and the effects of subgrid surface features are smaller.
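Building on the sketch above, the on/off gating described on this slide might look like the following. The 18-knot and 0.5 K/km thresholds come from the slide; the function and variable names are illustrative, not from the model code.

```python
# Illustrative sketch of the VFAC on/off logic (names are not from the model code).
KT_TO_MS = 0.514444  # knots -> m/s

def applied_vfac(vfac_terrain, wind_speed, thetav_bottom, thetav_500m, dz=500.0):
    """Return the VFAC actually applied to u* for momentum.

    vfac_terrain  : factor derived from subgrid-scale terrain variance (>= 1)
    wind_speed    : model wind speed (m/s)
    thetav_bottom : virtual potential temperature at the lowest model level (K)
    thetav_500m   : virtual potential temperature at the level closest to 500 m (K)
    dz            : depth between those two levels (m)
    """
    stability = (thetav_500m - thetav_bottom) / (dz / 1000.0)  # K per km
    if wind_speed > 18.0 * KT_TO_MS:    # strong winds: no enhancement
        return 1.0
    if stability < 0.5:                 # weak stability / well mixed: no enhancement
        return 1.0
    return vfac_terrain
```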

  7. Approaches to Lowering the High Wind Speed Bias for Low Wind Speeds • Step 1: Improve subgrid terrain variance • WRF 4-km Standard Deviation of Subgrid-scale Orographic Variance [sqrt(VAR_SSO)] • WRF 4-km Terrain

  8. Approaches to Lowering the High Wind Speed Bias for Low Wind Speeds • WRF 4/3-km Standard Deviation of Subgrid-scale Orographic Variance [sqrt(VAR_SSO)] • WRF 4/3-km Terrain
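For reference, one plausible way to build such a field is to aggregate a high-resolution terrain dataset over each model grid cell, take the variance, and then the square root to get the standard deviation shown on these slides. This is only a conceptual sketch, not the WPS/geogrid implementation; the block-aggregation approach and names are assumptions.

```python
import numpy as np

# Conceptual sketch (not the WPS/geogrid code): estimate the subgrid-scale
# orographic variance of each coarse model cell as the variance of the
# high-resolution terrain heights that fall inside it, then take the square
# root to get the standard deviation plotted on the slides.

def subgrid_terrain_stddev(hires_terrain, block):
    """hires_terrain: 2-D array of fine-resolution elevations (m).
    block: number of fine grid points per coarse model cell in each direction."""
    ny, nx = hires_terrain.shape
    ny_c, nx_c = ny // block, nx // block
    tiles = hires_terrain[:ny_c * block, :nx_c * block].reshape(
        ny_c, block, nx_c, block)
    var_sso = tiles.var(axis=(1, 3))   # variance within each coarse cell
    return np.sqrt(var_sso)            # sqrt(VAR_SSO), the plotted field
```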

  9. Lowering the High Wind Speed Bias for Low Wind Speeds Step 2: Make changes to the YSU PBL scheme: • Introduce a multiplication factor, VFAC, on u* (friction velocity) in the vertical momentum flux • Scale VFAC based on subgrid-scale terrain variability (var_sso) • Set VFAC back to a value of 1.0 for speeds exceeding a threshold value (10 knots and 18 knots tried) • Set VFAC back to a value of 1.0 for lower stability (i.e., unless dθv/dz exceeds 1 K/km in the bottom 500 m) • Step 3: Run experiments and gather statistics

  10. VFAC for 4-km Domains

  11. VFAC for 4/3-km Domains

  12. Subset of Experiments • Nov16 – Current Configuration • Coarse VAR – Uses VFAC based on the coarse VAR field, with stability and wind speed thresholds. • Constant VFAC – Uses a constant value of 1.575 (empirically derived from previous work) for VFAC, along with the stability and wind speed thresholds. • Fine VAR_SSO – Uses the fine-resolution subgrid-scale orographic variance and a range of 1–1.575 for VFAC based on the square root of VAR_SSO, along with the stability and wind speed thresholds.
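The Fine VAR_SSO bullet gives only the end points of the VFAC range; below is a sketch of one possible mapping from sqrt(VAR_SSO) to that range. The linear ramp and the saturation value SIGMA_MAX are assumptions made for this sketch, not taken from the presentation.

```python
# Illustrative mapping from sqrt(VAR_SSO) to a VFAC between 1.0 and 1.575.
# Only the end points come from the presentation; the linear ramp and the
# saturation value SIGMA_MAX are assumptions made for this sketch.

VFAC_MIN, VFAC_MAX = 1.0, 1.575
SIGMA_MAX = 200.0   # hypothetical terrain standard deviation (m) at which VFAC caps

def vfac_from_var_sso(var_sso):
    """var_sso: subgrid-scale orographic variance (m^2) for one grid cell."""
    sigma = var_sso ** 0.5                           # sqrt(VAR_SSO), meters
    frac = min(sigma / SIGMA_MAX, 1.0)               # 0..1, capped at the max
    return VFAC_MIN + frac * (VFAC_MAX - VFAC_MIN)   # 1.0 .. 1.575
```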

  13. Experiment Statistics – Mean Absolute Error [panels: Current, Coarse VAR, Constant VFAC, Fine VAR_SSO]

  14. Experiment Statistics – Mean Absolute Error

  15. Experiment Statistics – Mean Error

  16. More Experiments and Graphics: http://www.atmos.washington.edu/~ovens/winterustar • Mean Error for All Fields, Seasons • Mean Absolute Error for All Fields, Seasons

  17. Current (nov16)

  18. Fine VAR_SSO

  19. Coarse VAR

  20. Constant VFAC

  21. Mean Error All Sites and All Speeds 12Z 4-km Domain Nov16 -- Current Model Configuration

  22. Mean Error All Sites and All Speeds 12Z 4-km Domain Fine VAR_SSO

  23. Constant VFAC

  24. Mean Error All Sites and All Speeds 0Z 4-km Domain Nov16 -- Current Model Configuration

  25. Mean Error All Sites and All Speeds 0Z 4-km Domain Fine VAR_SSO

  26. Mean Error All Sites, Speeds 3 kts or Less 12Z 4-km Domain Nov16 -- Current Model Configuration

  27. Mean Error All Sites, Speeds 3 kts or Less 12Z 4-km Domain Fine VAR_SSO

  28. Mean Error All Sites, Speeds 3 kts or Less 0Z 4-km Domain Nov16 -- Current Model Configuration

  29. Mean Error All Sites, Speeds 3 kts or Less 0Z 4-km Domain Fine VAR_SSO

  30. Recommendation • Fine VAR_SSO – Uses the fine-resolution subgrid-scale orographic variance and a range of 1–1.575 for VFAC based on the square root of VAR_SSO, along with the stability and wind speed thresholds.

  31. Next Major Project: 4-km ensembles • An alternative could be adding the effects of wildfire smoke to WRF. • For the ensemble, propose running once a day (0000 UTC cycle), out to 84 h. • Would be ready by the time folks come in during the morning.

  32. How many can we fit in?

  33. 4-km Ensemble Timing Tests – 4-km Domain Complete to Hour 84 *Time available between the 4/3-km run finishing and the 36/12/4-km run starting varies from 5 hours 45 minutes to 6 hours 30 minutes, so frequent delays to the next cycle are possible.

  34. Proposal • Start with 12 members, 4-km, 84 h for the 0000 UTC run • Initializations: GFS, UKMET, NAM, CMC, GASP, JMA, GEFS • Stochastic physics • Ready by 7 AM • Variety of products • Could expand later if the consortium finds it useful.

  35. The End

  36. Extra Slides

  37. Mean Error All Sites and All Speeds 12Z 4-km Domain Constant VFAC

  38. Mean Error All Sites and All Speeds 0Z 4-km Domain Constant VFAC

  39. Mean Error All Sites, Speeds 3 kts or Less 12Z 4-km Domain Constant VFAC

  40. Mean Error All Sites, Speeds 3 kts or Less 0Z 4-km Domain Constant VFAC

  41. Ensemble • Here are the numbers I got from my 84-hour 36/12/4-km tests based on the 2017042412 case. The 4-machine, 96-cpu results got me all confused, since they were slower than the 4-machine, 80-cpu results. I have not rerun these tests since David Warren's cache-clearing fix.

  Machines           # of CPUs   Timings
  --------           ---------   -------
  n70                24          actual = 5:26, projected = 5:33
  n59                20          actual = ????, projected = 6:04 (2 x 00-42)
  n60                20          actual = ????, projected = 6:04 (2 x 00-42)
  n60,n59            40          actual = 3:16:53
  n70,n69            48          actual = 2:58:19
  n60,n59,n58,n57    80          4861, 1:21:20
  n70,n69,n68,n67    96          5994, 1:41:10 (?? more cpus, slower ??)
  n70,n66,n65,n64    96          6016, 1:41:37 (this is confirmed)
  various            168         1:30 (8 on n51, 20 on n52-n59)

  Our window from when the 0Z 4/3-km run finishes (around 1:51 am/pm to 2:45 am/pm) until the 12Z 36/12/4-km run starts (8:30 am/pm) ranges from about 6 hours 40 minutes down to only 5 hours 45 minutes. By using a single machine for a single ensemble member, we should just barely have enough time to finish 20 members on the 0Z cycle.

  We may, from time to time during heavy rain periods (when the microphysics and explicit precipitation slow down the 4/3-km domain by up to 30 minutes), need to delay the start of the 12Z run by 15-30 minutes. But I don't think that would happen more than about once a month, and I should actually be able to work around that by switching the 36/12/4-km run to the newer, faster 24-cpu machines. I don't use all machines for the 36/12/4-km run, since this creates halo issues that slow the model down. – David
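As a back-of-the-envelope check of the scheduling argument in the note above, the sketch below uses the projected single-machine (24-cpu) member runtime and the window lengths quoted in the note; the one-member-per-machine layout comes from the note, everything else is illustrative.

```python
from datetime import timedelta

# Back-of-the-envelope check of the scheduling argument above, using the
# projected single-machine (24-cpu) member runtime and the window lengths
# quoted in the note; one ensemble member per machine.

member_runtime = timedelta(hours=5, minutes=33)   # projected 24-cpu, single machine
window_short   = timedelta(hours=5, minutes=45)   # tightest gap before the 12Z run
window_long    = timedelta(hours=6, minutes=40)   # most generous gap

n_members, n_machines = 20, 20
waves = -(-n_members // n_machines)               # ceiling division: waves of runs
ensemble_walltime = waves * member_runtime        # one wave -> one member runtime

print("ensemble wall time:", ensemble_walltime)
print("fits tightest window:", ensemble_walltime <= window_short)   # True, barely
print("fits longest window:", ensemble_walltime <= window_long)     # True
```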

  42. Model Reliability Statistics – 4/3-km Domain Complete to Hour 72 • From 0Z December 1, 2016 through 0Z May 30, 2017 • nov16 configuration • 361 total runs • On Time is defined as done by 1:45 am/pm PST or 2:45 am/pm PDT
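As a quick sanity check, the 361-run total is consistent with one 4/3-km cycle every 12 hours (0Z and 12Z, which is an assumption here) over that period:

```python
from datetime import datetime, timedelta

# Sanity check: 361 runs matches one 4/3-km cycle every 12 hours (0Z and 12Z,
# assumed here) from 0Z December 1, 2016 through 0Z May 30, 2017, inclusive.

start = datetime(2016, 12, 1, 0)
end = datetime(2017, 5, 30, 0)
n_cycles = int((end - start) / timedelta(hours=12)) + 1  # count both endpoints
print(n_cycles)  # 361
```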
