Diagnostic verification and verification of extremes



Presentation Transcript


  1. Diagnostic verification and verification of extremes • Hodge-podge? • Goal: Provide verification information that is meaningful for a variety of users • Special methods for extremes • Emerging area; several new approaches

  2. Presentations • Laurie Wilson: “Approaches to Diagnostic Verification” • Barbara Casati: “Verification of Extremes”

  3. Questions for discussion • Aggregation of verification results can give more robust summary results, while segregation of the samples into more homogeneous sub-samples can tease out the forecast performance for different regimes or types of events. Is there a best way to segregate and aggregate? How many samples are needed to give robust results? • Some information on uncertainty (e.g., confidence intervals or significance tests) should accompany verification results, especially when comparing forecast systems. What is the best way to obtain and present this information?
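The confidence-interval question above is often answered in practice with a percentile bootstrap. A minimal sketch in Python/NumPy (function names and the RMSE score are illustrative choices, not from the presentation; a plain i.i.d. resample is used, so serially correlated data would need a block bootstrap instead):

```python
import numpy as np

rng = np.random.default_rng(42)

def rmse(f, o):
    """Root-mean-square error of forecasts f against observations o."""
    return np.sqrt(np.mean((f - o) ** 2))

def bootstrap_ci(forecasts, observations, score_fn, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for a verification score.

    Resamples forecast/observation pairs with replacement, which assumes
    independent cases; spatially or temporally autocorrelated samples
    would require a block bootstrap.
    """
    n = len(forecasts)
    scores = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample pairs with replacement
        scores[i] = score_fn(forecasts[idx], observations[idx])
    return np.quantile(scores, [alpha / 2, 1 - alpha / 2])
```

The same resampling loop works for any score that is a function of matched forecast/observation pairs, which is one reason the bootstrap is popular when analytic sampling distributions are unavailable.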

  4. How can uncertainty estimates be generated for very large datasets? How should spatial and temporal autocorrelation be handled, and are there obvious situations when this isn’t an issue? • For verifying rare events there is a trade-off between representing the severity/rarity of the event using some threshold, and having enough samples to give robust verification results. How can we best choose an appropriate threshold for a rare event?
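The threshold trade-off for rare events is usually explored by forming a 2x2 contingency table at each candidate threshold and watching how scores and event counts behave as the threshold rises. A sketch, assuming threshold-exceedance events and using the equitable threat score as one common example (function names are illustrative):

```python
import numpy as np

def contingency_counts(fcst, obs, threshold):
    """2x2 contingency counts for the event {value >= threshold}."""
    f = fcst >= threshold
    o = obs >= threshold
    hits = int(np.sum(f & o))
    misses = int(np.sum(~f & o))
    false_alarms = int(np.sum(f & ~o))
    correct_neg = int(np.sum(~f & ~o))
    return hits, misses, false_alarms, correct_neg

def equitable_threat_score(hits, misses, false_alarms, correct_neg):
    """ETS: threat score adjusted for hits expected by chance.
    1 is perfect, 0 is no skill over random forecasts."""
    n = hits + misses + false_alarms + correct_neg
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0
```

As the threshold is pushed toward rarer events, `hits + misses` shrinks and the score's sampling variability grows, which is exactly the robustness trade-off the question raises.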

  5. What role does verification of quantiles play in diagnostic and extreme/rare event verification? • How can extreme value theory be used to improve the verification of extreme/rare events?
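Quantile forecasts have a natural proper scoring rule, the pinball (quantile) loss, which is one concrete way quantile verification enters diagnostic and extreme-event work. A minimal sketch (the function name is illustrative; EVT-based tail modelling, e.g. fitting a generalized Pareto distribution above a high threshold, would be a separate step):

```python
import numpy as np

def quantile_score(forecast_q, obs, tau):
    """Pinball loss for a forecast of the tau-quantile.

    Lower is better; its expectation is minimized when forecast_q
    is the true tau-quantile of the observation distribution,
    penalizing under-forecasts by tau and over-forecasts by 1 - tau.
    """
    diff = obs - forecast_q
    return float(np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff)))
```

The asymmetric penalty is what makes the score informative for extremes: for tau = 0.9, failing to reach an observed high value costs nine times more per unit than overshooting it.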

  6. Should cost/loss and value issues be considered in the evaluation of forecasts for extreme/rare events? If so, how should this be done? • What granularity of forecast and observation data (i.e., spatial and temporal resolution) is required to apply diagnostic approaches and methods for evaluation of extremes?
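The cost/loss framing of slide 6 is commonly quantified as the relative economic value of a forecast system for a user with cost/loss ratio C/L, computed from the same 2x2 contingency counts. A sketch of the standard static cost/loss formulation (the function name is illustrative; expenses are expressed in units of the loss L):

```python
def relative_value(hits, misses, false_alarms, correct_neg, cost_loss_ratio):
    """Relative economic value of a deterministic forecast.

    V = 1 for a perfect forecast, V = 0 for no value over the
    climatological strategy (always or never protect), and V can be
    negative for users whose C/L ratio the forecast does not serve.
    """
    n = hits + misses + false_alarms + correct_neg
    h, m, f = hits / n, misses / n, false_alarms / n
    s = h + m                       # base rate of the event
    r = cost_loss_ratio             # C/L, in (0, 1)
    e_forecast = (h + f) * r + m    # mean expense when acting on the forecast
    e_climate = min(r, s)           # cheaper of always / never protecting
    e_perfect = s * r               # expense with a perfect forecast
    denom = e_climate - e_perfect
    return (e_climate - e_forecast) / denom if denom else 0.0
```

Plotting V against the cost/loss ratio shows which users benefit from the forecast, which is one way the slide's question about evaluating extreme-event forecasts for different users can be made concrete.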
