Probabilistic Prediction Cliff Mass University of Washington
Uncertainty in Forecasting • Most numerical weather prediction (NWP) today and most forecast products reflect a deterministic approach. • This means that we do the best job we can for a single forecast and do not consider uncertainties in the model, initial conditions, or the very nature of the atmosphere. • However, the uncertainties are usually very significant and information on such uncertainty can be very useful.
A Fundamental Issue • The work of Lorenz (1963, 1965, 1968) demonstrated that the atmosphere is a chaotic system, in which small differences in the initialization, well within observational error, can have large impacts on the forecasts, particularly for longer forecasts. • In a series of experiments he found that small errors in initial conditions can grow so that all deterministic forecast skill is lost at about two weeks.
Butterfly Effect: a small change at one place in a complex system can have large effects elsewhere
Uncertainty Extends Beyond Initial Conditions • There is also uncertainty in our model physics, such as the microphysics and boundary layer parameterizations. • Further uncertainty is introduced by our numerical methods.
Probabilistic NWP • To deal with forecast uncertainty, Epstein (1969) suggested stochastic-dynamic forecasting, in which forecast errors are explicitly considered during model integration. • Essentially, uncertainty estimates are added to each term in the primitive equations. • This stochastic method was not and still is not computationally practical.
Probabilistic-Ensemble Numerical Prediction (NWP) • Another approach, ensemble prediction, was proposed by Leith (1974), who suggested that prediction centers run a collection (ensemble) of forecasts, each starting from a different initial state. • The variations in the resulting forecasts could be used to estimate the uncertainty of the prediction. • But even the ensemble approach was not possible at that time due to limited computer resources. • It became practical in the late 1980s as computer power increased.
Ensemble Prediction • Ensembles can be used to estimate the probability that some weather feature will occur. • The ensemble mean is more accurate, on average, than any individual ensemble member. • The forecast skill of the ensemble mean is related to the spread of the ensemble members: • When the ensemble forecasts are similar (small spread), ensemble mean skill tends to be higher. • When the forecasts differ greatly (large spread), ensemble mean skill tends to be lower.
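The bullets above can be sketched in a few lines of code. This is a minimal illustration with synthetic member values (not real model output): the ensemble mean and spread are just the mean and standard deviation across members, and an event probability is the fraction of members predicting the event.

```python
import numpy as np

# Hypothetical illustration: 20 ensemble members forecasting temperature (degC)
# at one location. Member values are synthetic, not real model output.
rng = np.random.default_rng(0)
members = 12.0 + 2.5 * rng.standard_normal(20)

ens_mean = members.mean()          # best single estimate, on average
ens_spread = members.std(ddof=1)   # spread: std. deviation across members

# Probability that some weather feature occurs, estimated as the
# fraction of members predicting it (e.g., temperature below freezing):
p_freeze = np.mean(members < 0.0)
```

Small spread (members agreeing) signals a more trustworthy ensemble mean; large spread signals lower expected skill.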
Deterministic Forecasting [figure: schematic of the forecast and true trajectories diverging in phase space, with 12-h through 48-h forecasts (T) and the corresponding observations] • A point in phase space completely describes an instantaneous state of the atmosphere (pressure, temperature, etc., at all points at one time). • The true state of the atmosphere exists as a single point in phase space that we never know exactly. • An analysis produced to run an NWP model is somewhere in a cloud of likely states; any point in the cloud is equally likely to be the truth. • Nonlinear error growth and model deficiencies drive apart the forecast and true trajectories (i.e., chaos theory).
Ensemble Forecasting, a Stochastic Approach [figure: analysis region (analysis PDF) in phase space evolving into a 48-h forecast region (forecast PDF)] • An ensemble of likely analyses leads to an ensemble of likely forecasts. • Ensemble forecasting: • Encompasses truth • Reveals flow-dependent uncertainty • Yields an objective stochastic forecast
Probability Density Functions • As a first step, we usually fit the distribution of ensemble members with a Gaussian or other reasonably smooth theoretical distribution.
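A minimal sketch of that first step, using made-up member values: fit a Gaussian by taking the ensemble mean and standard deviation as its location and scale, then read smooth threshold probabilities off the fitted CDF instead of the jumpy raw member counts.

```python
import numpy as np
from math import erf, sqrt

# Illustrative, synthetic ensemble forecasts of one variable (e.g., temp in degC)
members = np.array([4.1, 5.0, 5.3, 5.8, 6.2, 6.4, 7.0, 7.9])

mu = members.mean()            # Gaussian location = ensemble mean
sigma = members.std(ddof=1)    # Gaussian scale = ensemble spread

def gaussian_cdf(x, mu, sigma):
    """Cumulative probability of the fitted normal distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Smooth probability that the variable falls below a threshold:
p_below_5 = gaussian_cdf(5.0, mu, sigma)
```

With only 8 members, the raw fraction below 5.0 can only take values in steps of 1/8; the fitted PDF interpolates between them.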
We Need to Create Probability Density Functions (PDFs) of Each Variable That Have These Characteristics • A critical issue is the development of ensemble systems that create probabilistic guidance that is both reliable and sharp.
Elements of a Good Probability Forecast: • Sharpness (also known as resolution) • The width of the predicted distribution should be as small as possible. [figure: probability density functions (PDFs) for some forecast quantity, one sharp and one less sharp]
Elements of a Good Probability Forecast • Reliability (also known as calibration) • A probability forecast p ought to verify with relative frequency p. • Forecasts from climatology are reliable (by definition), so calibration alone is not enough. [figure: reliability diagram]
Verification Rank Histogram (a.k.a. Talagrand Diagram): Another Measure of Reliability • Over many trials, record the verification's position (the "rank") among the ordered EF members. [figures: rank histograms (frequency vs. rank) for a reliable EF, an under-spread EF, and an over-spread EF; EF PDF (curve) with 8 sample members (bars) vs. true PDF (curve) with verification value (bar), for cumulative precipitation (mm)]
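The rank histogram described above is simple to build. In this synthetic sketch, truth is drawn from the same distribution as the members, so the histogram should come out roughly flat (reliable); a U shape would indicate an under-spread ensemble, a dome an over-spread one.

```python
import numpy as np

# Sketch of a verification rank histogram (Talagrand diagram).
# Synthetic setup: 8-member ensembles over many trials.
rng = np.random.default_rng(1)
n_trials, n_members = 5000, 8

ranks = np.zeros(n_trials, dtype=int)
for t in range(n_trials):
    members = rng.standard_normal(n_members)  # ensemble forecasts
    truth = rng.standard_normal()             # verifying observation
    # Rank = how many ordered members fall below the verification
    ranks[t] = np.sum(members < truth)        # 0 .. n_members

counts = np.bincount(ranks, minlength=n_members + 1)
# Reliable ensemble: counts roughly equal in all n_members+1 bins.
```

With 8 members there are 9 possible ranks, so a reliable ensemble puts about 1/9 of the verifications in each bin.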
Brier Skill Score (BSS): directly examines reliability, resolution, and overall skill

Continuous Brier Score:
• BS = (1/M) * sum_j (p_j - o_j)^2
• M : number of forecast/observation pairs
• p_j : forecast probability {0.0 ... 1.0}
• o_j : observation {0.0 = no, 1.0 = yes}
• BS = 0 for perfect forecasts; BS = 1 for perfectly wrong forecasts

Decomposed Brier Score (by discrete, contiguous bins):
• BS = (1/N) * sum_i N_i*(p_i - o_i)^2 - (1/N) * sum_i N_i*(o_i - obar)^2 + obar*(1 - obar)
  (first term: reliability, rel; second: resolution, res; third: uncertainty, unc)
• I : number of probability bins (normally 11)
• N_i : number of data pairs in bin i (N = total number of pairs)
• p_i : binned forecast probability (0.0, 0.1, ... 1.0 for 11 bins)
• o_i : observed relative frequency for bin i
• obar : sample climatology (total occurrences / total forecasts)

Brier Skill Score:
• BSS = 1 - BS/BS_climatology = (res - rel)/unc
• BSS = 1 for perfect forecasts; BSS < 0 for forecasts worse than climatology
• ADVANTAGES: 1) No need for a long-term climatology. 2) Can be computed and visualized in a reliability diagram.
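The decomposition above can be checked numerically. This sketch uses synthetic forecast/observation pairs whose forecast probabilities fall exactly on the 11 bin values, so the identity BS = rel - res + unc holds exactly.

```python
import numpy as np

# Sketch of the Brier score and its reliability/resolution/uncertainty
# decomposition over 11 discrete probability bins. Data are synthetic.
rng = np.random.default_rng(2)
M = 10000
p = rng.integers(0, 11, M) / 10.0         # forecast probs in {0.0, ..., 1.0}
o = (rng.random(M) < p).astype(float)     # obs roughly consistent with forecasts

bs = np.mean((p - o) ** 2)                # continuous Brier score

obar = o.mean()                           # sample climatology
rel = res = 0.0
for pi in np.unique(p):                   # one bin per forecast value
    idx = p == pi
    Ni, oi = idx.sum(), o[idx].mean()
    rel += Ni * (pi - oi) ** 2            # reliability (want small)
    res += Ni * (oi - obar) ** 2          # resolution (want large)
rel, res = rel / M, res / M
unc = obar * (1.0 - obar)                 # uncertainty

bss = (res - rel) / unc                   # = 1 - BS/BS_climatology
```

Because the synthetic observations are drawn consistently with the forecast probabilities, rel comes out near zero and the BSS is positive.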
Probabilistic Information Can Produce Substantial Economic and Public Protection Benefits
Decision theory tells us how to use probabilistic information for economic savings: • C = cost of protection • L = loss if a damaging event occurs • Decision theory says you should protect whenever the probability of occurrence is greater than C/L.
Decision Theory Example
• Critical event: surface winds > 50 kt
• Cost (of protecting): $150K
• Loss (if damage): $1M
• C/L = 0.15 (15%), so the optimal protection threshold = 15%

Forecast? | Observed? YES | Observed? NO
YES       | Hit: $150K    | False alarm: $150K
NO        | Miss: $1000K  | Correct rejection: $0K
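The example above reduces to a one-line rule, sketched here with the slide's numbers: comparing the expected cost of protecting (always C) against not protecting (p * L) shows that protection pays exactly when p exceeds C/L.

```python
# Cost/loss decision rule using the slide's numbers.
C = 150_000    # cost of protecting ($150K)
L = 1_000_000  # loss if a damaging event occurs unprotected ($1M)
threshold = C / L  # 0.15 -> protect when P(winds > 50 kt) >= 15%

def expected_cost(p, protect):
    """Expected cost of a decision given event probability p."""
    return C if protect else p * L

# At p = 0.20 (above the threshold) protecting is cheaper on average:
p = 0.20
cost_protect = expected_cost(p, True)    # $150K
cost_ignore = expected_cost(p, False)    # 0.20 * $1M = $200K
```

At p = 0.10, below the threshold, the comparison flips and not protecting is the cheaper choice.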
Early Forecasting Started Probabilistically!!! • Early forecasters, faced with large gaps in their young science, understood the uncertain nature of the weather prediction process and were comfortable with a probabilistic approach to forecasting. • Cleveland Abbe, who organized the first forecast group in the United States as part of the U.S. Signal Corps, did not use the term "forecast" for his first prediction in 1871, but rather used the term "probabilities," resulting in him being known as "Old Probabilities" or "Old Probs" to the public.
"Ol' Probs" • Professor Cleveland Abbe issued the first public "Weather Synopsis and Probabilities" on February 19, 1871. • A few years later, the term "indications" was substituted for "probabilities," and by 1889 the term "forecasts" received official approval (Murphy 1997).
History of Probabilistic Prediction • The first modern operational probabilistic forecasts in the United States were produced in 1965. These forecasts, for the probability of precipitation, were produced by human weather forecasters and thus were subjective probabilistic predictions. • The first objective probabilistic forecasts were produced as part of the Model Output Statistics (MOS) system that began in 1969.
NOTE: Model Output Statistics (MOS) • Based on multiple linear regression with 12 predictors: • Y = a0 + a1X1 + a2X2 + a3X3 + a4X4 + ...
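A minimal sketch of the regression behind MOS, fit by least squares on synthetic data (operational MOS screens real model-output predictors; these are fake, and the coefficient values are illustrative only).

```python
import numpy as np

# MOS-style multiple linear regression: Y = a0 + a1*X1 + ... + aK*XK
rng = np.random.default_rng(3)
N, K = 500, 4                    # 500 cases, 4 predictors for brevity
X = rng.standard_normal((N, K))  # predictors (e.g., model temp, RH, wind)
true_a = np.array([2.0, 1.5, -0.7, 0.3, 0.9])   # a0..a4 used to fake data
y = true_a[0] + X @ true_a[1:] + 0.1 * rng.standard_normal(N)

A = np.column_stack([np.ones(N), X])            # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # recovers [a0, a1, ..., a4]

y_hat = A @ coef                                # regression forecast
```

Given enough cases, the fitted coefficients converge on the values used to generate the data; real MOS works the same way, with observed weather as Y and model output as the X's.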
Ensemble Prediction • Ensemble prediction began at NCEP in the early 1990s. ECMWF rapidly joined the club. • During the past decades the size and sophistication of the NCEP and ECMWF ensemble systems have grown considerably, with the medium-range global ensemble system becoming an integral tool for many forecasters. • Also during this period, NCEP has constructed a higher-resolution, short-range ensemble system (SREF) that uses breeding to create initial condition variations.
Example: NCEP Global Ensemble System • Begun in 1993 with the MRF (now GFS). • First tried "lagged" ensembles as the basis, using runs with various initialization times verifying at the same time. • Then used the "breeding" method to find perturbations to the initial conditions of each ensemble member. • Breeding adds random perturbations to an initial state, lets them grow, reduces their amplitude back down to a small level, lets them grow again, etc. • Gives an idea of what type of perturbations are growing rapidly in the period BEFORE the forecast. • Does not include physics uncertainty. • Now replaced by the Ensemble Transform Filter approach.
NCEP Global Ensemble • 20 members at 00, 06, 12, and 18 UTC, plus two control runs for each cycle • 28 levels • T190 resolution (roughly 80 km) • Out to 384 hours • Uses stochastic physics to get some physics diversity
ECMWF Global Ensemble • 50 members and 1 control • 60 levels • T399 (roughly 40 km) through 240 hours, T255 afterwards • Singular vector approach to creating perturbations • Stochastic physics
Several Nations Have Global Ensembles Too! • China, Canada, Japan and others! • And there are combinations of global ensembles like: • TIGGE: THORPEX Interactive Grand Global Ensemble, from ten national NWP centers • NAEFS: North American Ensemble Forecasting System, combining the U.S. and Canadian global ensembles
Ensemble Spread Chart • "best guess" = high-resolution control forecast or ensemble mean • ensemble spread = standard deviation of the members at each grid point • Shows where the "best guess" can be trusted (i.e., areas of low or high predictability) • Details unpredictable aspects of waves: amplitude vs. phase [figure: Global Forecast System (GFS) ensemble spread chart, http://www.cdc.noaa.gov/map/images/ens/ens.html]
Meteograms Versus "Plume Plots" [figures: current deterministic meteogram; FNMOC Ensemble Forecast System (EFS) plume plot, https://www.fnmoc.navy.mil/efs/efs.html] • Data range = meteogram-type trace of each ensemble member's raw output • Excellent tool for point forecasting, if calibrated • Can easily (and should) calibrate for model bias • Calibrating for ensemble spread problems is difficult • Must use box & whisker or confidence interval plots for large ensembles
Box and Whisker Plots http://www.weatheroffice.gc.ca/ensemble/index_naefs_e.html
AFWA Forecast Multimeteogram [figure: JME wind speed (kt) and wind direction traces for Misawa AB, Japan; cycle 11 Nov 06, 18Z; runway 100/280; 15-km resolution; valid times 11/18 through 14/06 UTC; traces show extreme max, mean, and extreme min, with the gray shaded area marking the 90% confidence interval (CI)]
Postage Stamp Plots [figure: verification plus 13 SLP and wind forecasts: 1: cent, 2: eta, 3: ukmo, 4: tcwb, 5: ngps, 6: cmcg, 7: avn, 8: eta*, 9: ukmo*, 10: tcwb*, 11: ngps*, 12: cmcg*, 13: avn*] • Reveals high uncertainty in storm track and intensity • Indicates low probability of a Puget Sound wind event
A Number of Nations Are Experimenting with Higher-Resolution Ensembles
UK Met Office MOGREPS • 24-km resolution • Uses the ETKF (Ensemble Transform Kalman Filter) for initial-condition diversity (a form of breeding) • Stochastic physics
NCEP Short-Range Ensembles (SREF) • Resolution of 32 km • Out to 87 h twice a day (09 and 21 UTC initialization) • Uses both initial condition uncertainty (breeding) and physics uncertainty. • Uses the Eta and Regional Spectral Models and recently the WRF model (21 total members)
SREF Current System

Model    | Res (km) | Levels | Members   | Cloud Physics | Convection
RSM-SAS  | 45       | 28     | Ctl, n, p | GFS physics   | Simplified Arakawa-Schubert
RSM-RAS  | 45       | 28     | n, p      | GFS physics   | Relaxed Arakawa-Schubert
Eta-BMJ  | 32       | 60     | Ctl, n, p | Op Ferrier    | Betts-Miller-Janjic
Eta-SAT  | 32       | 60     | n, p      | Op Ferrier    | BMJ with moist profile
Eta-KF   | 32       | 60     | Ctl, n, p | Op Ferrier    | Kain-Fritsch
Eta-KFD  | 32       | 60     | n, p      | Op Ferrier    | Kain-Fritsch with enhanced detrainment

PLUS: • NMM-WRF control and 1 perturbation pair • ARW-WRF control and 1 perturbation pair
The UW Ensemble System • Perhaps the highest-resolution operational ensemble systems are those running at the University of Washington: • UWME: 8 members at 36- and 12-km resolution • UW EnKF system: 60 members at 36- and 4-km resolution