Probabilistic Weather and Climate Forecasting Nir Krakauer Department of Civil Engineering and CUNY Remote Sensing of the Earth Institute, The City College of New York nkrakauer@ccny.cuny.edu
In this talk • Motivating probabilistic forecasts • Quantifying forecast skill • Applications to • seasonal forecasting • solar forecasts
Three kinds of forecasts • Deterministic (point) forecasts • "Partly cloudy, high of …" • How much confidence should we have in this? The forecast doesn't tell us; we must rely on our intuition/experience. • Partly probabilistic forecasts • "40% chance of precipitation" • How much? When? • Fully probabilistic forecasts • Distribution functions or an ensemble of possible outcomes • If well calibrated, can be used directly in scenario modeling and optimization
Information in a probabilistic forecast • How much would we need to be told so that we know the outcome? • Information theory (Shannon 1948): • Suppose one of n outcomes must happen, to which we assign probabilities pi • If we learn that outcome i did happen, we've learned log2(1/pi) bits • Averaged over the possible outcomes, the expected missing information (the entropy) is H = –Σi pi log2(pi)
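As a minimal sketch (Python, not part of the slides; the function name is illustrative), the expected missing information can be computed directly from a probability vector:

```python
import numpy as np

def entropy_bits(p):
    """Expected missing information H = -sum(p_i * log2(p_i)), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # outcomes with p_i = 0 contribute nothing to the sum
    return -np.sum(p * np.log2(p))

# A fair coin leaves 1 bit of missing information; a certain outcome, 0 bits
print(entropy_bits([0.5, 0.5]))  # 1.0
print(entropy_bits([1.0]))
```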
How useful is a forecast? • Suppose that we learn that outcome i took place • Under our baseline ignorance (e.g. climatology), the probability of i was pi • Suppose a forecaster had given the outcome a probability qi instead. Intuitively, the forecast proved useful if qi > pi. • The information gain from the forecast is log(qi / pi)
A forecaster's track record • Across multiple forecast verifications, the average information content of the forecasts is the average log2(qi / pi) • Best case: assign probability 1 to something that does happen, gaining log2(1 / pi) bits • Assigning zero probability to something that does happen ruins a forecaster's track record [log(0) = –∞] • Information (in bits) can be converted to a forecast skill score (1 for a perfect forecast)
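A sketch of the track-record calculation (Python; the function name and the tercile example are illustrative, not from the slides):

```python
import numpy as np

def mean_info_gain_bits(p_clim, q_fcst):
    """Average log2(q_i / p_i) over verified outcomes.

    p_clim, q_fcst: the climatological and forecast probabilities that
    had been assigned to the outcomes that actually occurred.
    Any q_i = 0 yields -inf: the ruined track record from the slide.
    """
    p = np.asarray(p_clim, dtype=float)
    q = np.asarray(q_fcst, dtype=float)
    return np.mean(np.log2(q / p))

# Hypothetical example: tercile climatology (p = 1/3); the forecaster put
# probability 0.5 on the tercile that verified, in each of two forecasts
print(mean_info_gain_bits([1/3, 1/3], [0.5, 0.5]))  # log2(1.5) ~ 0.585 bits
```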
Generalization to continuous variables • If x is the outcome and q, p are the forecast and baseline probability densities, the information gain is log(q(x)/p(x)) • If the forecast is Gaussian with mean m and SD σ, and the climatology is Gaussian with mean m0 and SD σ0, the information gain is (z0² – z²)/2 – log(σ/σ0), where z = (x – m)/σ and z0 = (x – m0)/σ0
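The Gaussian formula can be checked numerically against the direct density ratio log(q(x)/p(x)) (Python sketch; the example numbers are arbitrary):

```python
import numpy as np

def gaussian_info_gain_bits(x, m, sd, m0, sd0):
    """Information gain (in bits) of a Gaussian forecast N(m, sd) over a
    Gaussian climatology N(m0, sd0), given observed outcome x:
    (z0^2 - z^2)/2 - log(sd/sd0), converted from nats to bits."""
    z = (x - m) / sd
    z0 = (x - m0) / sd0
    nats = (z0**2 - z**2) / 2 - np.log(sd / sd0)
    return nats / np.log(2)

def logpdf(x, m, sd):
    """Log density of N(m, sd) at x."""
    return -0.5 * ((x - m) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

# Cross-check: the closed form equals log2(q(x)/p(x)) computed directly
x, m, sd, m0, sd0 = 1.2, 1.0, 0.5, 0.0, 1.0
direct = (logpdf(x, m, sd) - logpdf(x, m0, sd0)) / np.log(2)
print(abs(gaussian_info_gain_bits(x, m, sd, m0, sd0) - direct) < 1e-12)  # True
```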
Seasonal forecasting • Based on known sources of persistence, particularly ENSO • Probabilistic tercile forecasts of temperature and precipitation for the USA issued by NOAA CPC since the 1990s • Potentially valuable for agriculture and water management
Diagnosing bias • Confidence is how much skill a forecast claims to have (relative to climatology) • If the forecast is well calibrated, this should be similar to the information gain estimated by comparing forecasts to outcomes • It turns out CPC temperature forecasts are overconfident (claiming 0.024 bits of information gain but delivering 0.014 bits), though with geographic variability
Improving on existing forecasts • It turns out that CPC's forecasts underestimate the impact of warming and precipitation change • A naive Bayesian combination of CPC's probabilities with a trend estimate based on an exponentially weighted moving average yielded much higher skill and more consistency across regions • Other model-combination techniques are now being tested
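The combination step can be sketched as follows (Python; illustrative only, not the operational method — the function, the example probabilities, and the assumption that the two sources act as conditionally independent evidence relative to a shared climatological prior are all hypothetical):

```python
import numpy as np

def combine_naive_bayes(q1, q2, prior):
    """Naively combine two probabilistic tercile forecasts q1, q2 that share
    a common climatological prior, treating them as conditionally
    independent evidence: posterior proportional to q1 * q2 / prior."""
    w = np.asarray(q1) * np.asarray(q2) / np.asarray(prior)
    return w / w.sum()

# Hypothetical example: CPC tercile probabilities combined with a
# warming-trend forecast, both relative to equal climatological terciles
cpc   = np.array([0.30, 0.33, 0.37])  # below / near / above normal
trend = np.array([0.20, 0.30, 0.50])  # e.g. from an EWMA trend estimate
clim  = np.array([1/3, 1/3, 1/3])
print(combine_naive_bayes(cpc, trend, clim))
```

Note how the combined forecast shifts more probability to the above-normal tercile than either input alone would relative to climatology, since both sources lean the same way.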
From a point forecast to a distribution • Unconditional probability distribution (climatology) • Climatology conditioned on e.g. modeled cloudy conditions (sharper/more informative than unconditional to the extent the model and reality have considerable mutual information)
Example application • New York City area (41°N, 74°W) • Observations: cloud optical depth (COD) from a satellite product (ISCCP, 30 km resolution) • Model: NCEP NAM analyses and 24-hour forecasts (12 km grid) • Every 3 hours, daytime, 2005–2007 • Consider the cloudiness index 1 – exp(–COD), discretized into 10 categories
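The cloudiness transform can be sketched as follows (Python; equal-width binning of 1 – exp(–COD) is an assumption here, since the slides don't specify how the 10 categories were defined):

```python
import numpy as np

def cloudiness_category(cod, n_cat=10):
    """Map cloud optical depth to a cloudiness index in [0, 1] via
    1 - exp(-COD), then discretize into n_cat equal-width categories
    (an assumed binning; clamped so index 1.0 falls in the top bin)."""
    c = 1.0 - np.exp(-np.asarray(cod, dtype=float))
    return np.minimum((c * n_cat).astype(int), n_cat - 1)

# Clear sky maps to category 0; optically thick cloud saturates at 9
print(cloudiness_category([0.0, 0.5, 2.0, 50.0]))  # [0 3 8 9]
```

The 1 – exp(–COD) transform compresses the long upper tail of optical depth, so the categories discriminate best among the thin-to-moderate clouds that matter most for surface insolation.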
Preliminary results • [Figure: observed cloudiness distributions — unconditional (climatology) vs. conditional on clear and on cloudy model predictions] • Some (but surprisingly limited) ability of forecasts to inform cloudiness expectations
Next steps • Better observations, more relevant to the power output of the installations of interest • More extensive forecast information (atmospheric profiles, aerosols); cloud tracking for very-short-term forecasts • Generating ensembles of possible spatial insolation fields
Summary • Probabilistic forecasts provide explicit measures of uncertainty, necessary for various complex management applications • More work needed to make use of existing forecast systems in a probabilistic framework "A person with a clock always knows what time it is; a person with two clocks is never sure."
Questions? nkrakauer@ccny.cuny.edu