Application of a global probabilistic hydrologic forecast system to the Ohio River Basin
Nathalie Voisin1, Florian Pappenberger2, Dennis Lettenmaier1, Roberto Buizza2, and John Schaake3
1 University of Washington  2 ECMWF  3 National Weather Service – NOAA
European Geosciences Union General Assembly, May 5, 2010
Background
Existing flood alert systems in mostly ungauged basins:
• Limpopo, 2000: Early Flood Alert System for Southern Africa (Artan et al. 2001)
• South Asia, 2000: Mekong River Commission – basin-wide approach for flood forecasting
• Bangladesh, 2004 (Hopson and Webster 2010)*
• Horn of Africa, 2004 (Thiemig et al. 2010, EU-AFAS)*
• Zambezi, 2001, 2007, 2008 (EU-AFAS, in progress)*
* Ensemble flow forecasting
Objective
Develop a medium-range probabilistic quantitative hydrologic forecast system applicable globally:
• Using only (quasi-) globally available tools:
  • General Circulation Model (GCM) ensemble weather forecasts
  • high spatial resolution satellite-based remote sensing
• Using a semi-distributed hydrology model:
  • applicable to different basin sizes, not basin-dependent
  • flow forecasts at several locations within large ungauged basins
• Daily time steps, up to 2 weeks lead time
• Reliable and accurate enough for potential real-time decisions in areas with no flood warning system, sparse in situ observations (radars, gauge stations, etc.), or no regional atmospheric model
Forecast scheme (Voisin et al. 2010, in review)
[Schematic; labels: initial state, today (forecast start)]
Science Questions
• What is the forecast skill of the system?
• How do errors in the calibrated and downscaled weather forecasts translate into hydrologic forecast errors?
• Does the forecast skill differ for basins of different sizes?
Ensemble precipitation forecast calibration and downscaling
Analog method vs. interpolation:
• maintained resolution and discrimination
• slightly lower predictability
• BUT largely improved reliability
• smaller mean error
• more realistic precipitation patterns
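The slide does not spell out the analog procedure, so the sketch below is only a minimal illustration of the general analog idea, not the calibration used in the study: for each coarse-grid forecast field, find the historical date whose coarse-grid pattern is most similar and reuse the fine-grid (e.g., TMPA-resolution) field of that date. The function and array names are illustrative assumptions.

```python
import numpy as np

def analog_downscale(fcst_coarse, archive_coarse, archive_fine):
    """Downscale one coarse-resolution precipitation forecast by analog selection.

    fcst_coarse    : (ny_c, nx_c) coarse-grid forecast field
    archive_coarse : (ndays, ny_c, nx_c) historical fields on the coarse grid
    archive_fine   : (ndays, ny_f, nx_f) the same dates on the fine (e.g., 0.25 deg) grid

    Returns the fine-grid field of the historical date whose coarse-grid pattern is
    closest (RMS difference) to the forecast. Minimal sketch of the analog idea only.
    """
    # RMS difference between the forecast and every archived coarse-grid field
    rms = np.sqrt(((archive_coarse - fcst_coarse) ** 2).mean(axis=(1, 2)))
    best = int(np.argmin(rms))        # index of the closest analog date
    return archive_fine[best]         # its fine-grid field is the downscaled forecast

# Usage with random placeholder data (shapes only for illustration)
rng = np.random.default_rng(0)
fcst = rng.gamma(2.0, 1.0, size=(10, 10))
arch_c = rng.gamma(2.0, 1.0, size=(365, 10, 10))
arch_f = rng.gamma(2.0, 1.0, size=(365, 40, 40))
print(analog_downscale(fcst, arch_c, arch_f).shape)   # (40, 40)
```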
Forecast experiment setup
Reference (substitute for observations; climatology):
• ECMWF analysis fields with TMPA precipitation, daily, 2003–2007 period, 0.25 degree
Precipitation forecasts (climatology and null-precipitation baselines):
1. Deterministic 15-day daily forecast: days 1–15 zero precipitation
2. ECMWF EPS forecast interpolated to 0.25°
3. ECMWF EPS forecast calibrated and downscaled (analog method)
4. 15-member ensemble, 15-day daily forecast: days 1–10 ECMWF EPS forecast, days 11–15 zero precipitation
Hydrology and routing:
• VIC run over the 2003–2007 period provides the initial hydrologic state and the daily 2003–2007 simulated runoff, soil moisture, and SWE (substitute for observed runoff)
• Each precipitation input drives a 15-day VIC simulation, giving deterministic and 15-member ensemble 15-day distributed runoff forecasts
• The routing model run over the 2003–2007 period provides the initial flow conditions and the 2003–2007 simulated daily flow (substitute for observations)
• Each runoff forecast drives a 15-day routing-model simulation, giving deterministic and 15-member ensemble 15-day flow forecasts at 4 stations with different drainage areas
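To make the data flow above concrete, here is a minimal, heavily assumed sketch of the forecast loop. run_vic, route_flow, and the station names are hypothetical placeholders, not the actual VIC or routing-model interfaces, and none of this reflects the study's implementation.

```python
# Hypothetical stand-ins for the VIC hydrology model and the routing model; the real
# models are separate codes with their own I/O, states, and parameters.
def run_vic(init_state, precip_forcing, ndays=15):
    """Return a 15-day distributed runoff series (placeholder)."""
    return [None] * ndays

def route_flow(init_flow, runoff_series, stations):
    """Return a 15-day flow series at each station (placeholder)."""
    return {s: [None] * len(runoff_series) for s in stations}

STATIONS = ["station_1", "station_2", "station_3", "station_4"]  # 4 gauges, names illustrative

def make_flow_forecast(precip_members, init_state, init_flow):
    """One forecast date: drive VIC with each precipitation member, then route the runoff."""
    member_flows = []
    for precip in precip_members:                 # 1 member (deterministic) or 15 (ECMWF EPS)
        runoff = run_vic(init_state, precip)      # 15-day distributed runoff forecast
        member_flows.append(route_flow(init_flow, runoff, STATIONS))
    return member_flows

# init_state / init_flow would come from the 2003-2007 VIC and routing reference runs
# valid on the forecast start date; precip_members is one of the four inputs listed above.
ens_precip = [None] * 15
flows = make_flow_forecast(ens_precip, init_state=None, init_flow=None)
print(len(flows), sorted(flows[0]))               # 15 members, flow series at the 4 stations
```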
Calibration of the hydrology and routing models
→ Use "simulated observed flow" as the reference (ECMWF analysis with TMPA precipitation)
→ Focus on weather forecast errors:
• no flow observation uncertainties
• no hydrology model or routing model uncertainties (structure, parameter estimation)
Verification of ensemble runoff forecasts
Ohio River Basin, 2003–2007
1826 15-day forecasts (10-day forecast + 5 days of zero precipitation)
848 0.25° grid cells
Ensemble flow forecast verification
Ensemble reliability at Metropolis and Elizabeth
Conclusions
A preliminary probabilistic quantitative hydrologic forecast system for global application was developed and evaluated:
• Skill out to 10 days for spatially distributed runoff
• Skill for 1–12+ day flow forecasts, depending on the concentration times at the flow forecast locations:
  • for small basins: skill out to 10 days, with good reliability at short lead times
  • for larger basins: skill out to 10 days plus the concentration time
• Ensemble weather forecasts need to be calibrated:
  • for better probabilistic hydrologic forecasts (reliability)
  • for better forecast accuracy at sub-basin locations
• Will incorporate PUB and HEPEX results and ideas (PUB: Predictions in Ungauged Basins; HEPEX: Hydrologic Ensemble Prediction Experiment)
Forecast Verification
Which forecasts?
• Spatially distributed ensemble runoff forecasts
• Ensemble flow forecasts at 4 locations
Verification:
Deterministic forecast skill measures:
• Bias (accuracy, mean error)
• RMSE (accuracy)
• Correlation (accuracy, predictability)
Probabilistic forecast skill measures:
• Continuous Ranked Probability Skill Score (accuracy, reliability, resolution, predictability)
• Rank histograms (ensemble spread, i.e. probabilistic forecast reliability)
For forecast categories: what can I expect when a forecast falls in a certain forecast category? (oriented toward real-time decisions)
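As a purely illustrative sketch of the deterministic measures listed above, the snippet below computes bias, RMSE, and correlation of a forecast series against the simulated reference flow; the data are synthetic and the names are assumptions, not the study's verification code.

```python
import numpy as np

def deterministic_scores(forecast, reference):
    """Bias, RMSE, and Pearson correlation of a forecast against a reference series.

    forecast, reference : 1-D arrays of matched values (e.g., daily flow at one station
    and one lead time, with the reference being the simulated "observed" flow).
    """
    forecast = np.asarray(forecast, dtype=float)
    reference = np.asarray(reference, dtype=float)
    bias = np.mean(forecast - reference)                  # mean error (accuracy)
    rmse = np.sqrt(np.mean((forecast - reference) ** 2))  # accuracy
    corr = np.corrcoef(forecast, reference)[0, 1]         # predictability
    return bias, rmse, corr

# Example with synthetic data
rng = np.random.default_rng(1)
ref = rng.gamma(2.0, 500.0, size=365)            # pseudo daily flow, arbitrary units
fc = ref * 1.1 + rng.normal(0.0, 100.0, 365)     # a biased, noisy forecast
print(deterministic_scores(fc, ref))
```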
Calibration of the hydrology and routing models
• Differences between TMPA and observed precipitation
• Daily flow fluctuations due to navigation, flood control, and hydropower generation
• Uncertainties in the VIC and routing models (physical processes, structure, and parameters)
→ Use "simulated observed flow" as the reference
→ Focus on weather forecast errors
Ensemble forecast verification
Relative Operating Characteristic (ROC)
Plot of hit rate vs. false alarm rate for a set of increasing probability thresholds used to make the yes/no decision.
• Diagonal = no skill; skill if above the 1:1 line
• Measures resolution (discrimination)
• A biased forecast may still have good discrimination.
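As a hedged illustration of the ROC construction described above (not the study's code), the sketch below turns ensemble exceedance probabilities into yes/no forecasts at increasing probability thresholds and computes the hit and false alarm rates; all names and the event threshold are illustrative.

```python
import numpy as np

def roc_points(ens_forecasts, observations, event_threshold, prob_thresholds):
    """Hit rate and false alarm rate for yes/no decisions at several probability thresholds.

    ens_forecasts   : (n_forecasts, n_members) ensemble values
    observations    : (n_forecasts,) verifying values
    event_threshold : value defining the event (e.g., flow above some level)
    prob_thresholds : increasing probabilities used to turn the ensemble into a yes/no forecast
    """
    ens_forecasts = np.asarray(ens_forecasts, float)
    observations = np.asarray(observations, float)
    # forecast probability of the event = fraction of members exceeding the threshold
    p_event = (ens_forecasts > event_threshold).mean(axis=1)
    occurred = observations > event_threshold
    hits, false_alarms = [], []
    for p in prob_thresholds:
        yes = p_event >= p                                   # forecast "yes" at this threshold
        hits.append(np.mean(yes[occurred]) if occurred.any() else np.nan)
        false_alarms.append(np.mean(yes[~occurred]) if (~occurred).any() else np.nan)
    return np.array(false_alarms), np.array(hits)

# Example with synthetic 15-member forecasts
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 500.0, size=200)
ens = obs[:, None] + rng.normal(0.0, 300.0, size=(200, 15))
far, hr = roc_points(ens, obs, event_threshold=1500.0, prob_thresholds=np.linspace(0, 1, 11))
print(np.round(far, 2), np.round(hr, 2))
```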
Ensemble Forecast Verification
Ensemble reliability:
• Reliability plot (PROBABILISTIC forecasts):
  • Choose an event (event-specific).
  • Each time the event was forecast with a given probability (20%, 40%, etc.), how often did it occur (observation >= chosen event)?
  • A sharpness diagram is needed to give the confidence in each point; a reliable forecast lies on the 1:1 line.
• Talagrand (rank) diagram (PROBABILISTIC QUANTITATIVE forecasts):
  • Rank the observation with respect to the ensemble forecast (0 if the observation is below all ensemble members, Nmember + 1 if it is above all of them).
  • The histogram is uniform if the ensemble spread is reliable, U-shaped (inverse U-shaped) if the ensemble spread is too small (too large), and asymmetric if there is a systematic bias.
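A minimal rank-histogram sketch follows, assuming ranks run from 0 (observation below every member) to n_members (above every member), i.e. n_members + 1 bins, which matches the counting described above up to labeling. Names are illustrative; this is not the study's verification code.

```python
import numpy as np

def rank_histogram(ens_forecasts, observations):
    """Talagrand (rank) histogram counts for an ensemble of size n_members.

    ens_forecasts : (n_forecasts, n_members) ensemble values
    observations  : (n_forecasts,) verifying values
    Rank = number of members below the observation, giving n_members + 1 bins.
    """
    ens_forecasts = np.asarray(ens_forecasts, float)
    observations = np.asarray(observations, float)
    n_members = ens_forecasts.shape[1]
    ranks = (ens_forecasts < observations[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=n_members + 1)

# Example: a reliable ensemble (observations drawn from the same distribution as members)
rng = np.random.default_rng(3)
members = rng.normal(0.0, 1.0, size=(2000, 15))
obs = rng.normal(0.0, 1.0, size=2000)
print(rank_histogram(members, obs))   # roughly flat counts across the 16 bins
```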
Continuous Ranked Probability Score
• Probabilistic quantitative forecast verification
• Measures the difference between the predicted and observed cumulative distribution functions: resolution, reliability, predictability
• For one forecast (grid cell, lead time, t):
  CRPS = \int_{-\infty}^{+\infty} \left[ F_{\mathrm{fcst}}(x) - H(x - x_{\mathrm{obs}}) \right]^2 dx
  where F_fcst is the forecast cumulative distribution function and H is the Heaviside step function at the observed value x_obs.
[Figure: forecast probability (ProbFcst, 0 to 1) vs. magnitude, with probability increments ΔP and member values d_1 … d_Nmember illustrating the area between the forecast and observed CDFs.]
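As a hedged illustration (not the study's code), the sketch below computes the CRPS of a single ensemble forecast using the standard identity CRPS = E|X − y| − ½ E|X − X′|, which is exact for the empirical ensemble CDF; the inputs are synthetic and the names are assumptions.

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of one ensemble forecast against one observation.

    Uses CRPS = E|X - y| - 0.5 * E|X - X'|, with X, X' independent draws from the
    empirical distribution of `members` and y the observation.
    """
    members = np.asarray(members, float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Example: a sharp, well-centred 15-member forecast scores better (lower CRPS)
rng = np.random.default_rng(4)
obs = 1000.0
good = rng.normal(1000.0, 50.0, 15)
poor = rng.normal(1300.0, 300.0, 15)
print(crps_ensemble(good, obs), crps_ensemble(poor, obs))
```

The skill score named on the verification slide (CRPSS) is then typically 1 − CRPS_forecast / CRPS_reference, computed against a climatological reference forecast.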