
Overview of Satellite Based Products for Global ET

Presentation Transcript


  1. Overview of Satellite Based Products for Global ET Matthew McCabe, Carlos Jimenez, Bill Rossow, Sonia Seneviratne, Eric Wood and numerous data providers & contributors…

  2. GEWEX LANDFLUX PROJECT
  “Produce a GLOBAL multi‐decadal surface flux product”
  • Jimenez et al. (2011) “Global intercomparison of 12 land surface heat flux estimates”, JGR, 116, D02102
  • Mueller et al. (2011) “Evaluation of global evapotranspiration datasets: first results of the LandFlux-EVAL project”, GRL, 38, L06402

  3. AVAILABLE PRODUCTS & SCALES

  4. PRODUCT SIMILARITIES
  • Monin-Obukhov Similarity Theory (MOST)
  • Considering or ignoring stability correction terms + other assumptions lead to different ET formulations (Penman, Priestley-Taylor, etc.) – see the bulk formulation sketched below
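As a pointer to where the formulations diverge, the bulk MOST expression for sensible heat is sketched below in its generic textbook form (not any one product's exact implementation); the stability correction functions Ψm and Ψh are the terms that individual retrievals carry in full, approximate, or drop under near-neutral assumptions, with latent heat then often taken as the energy-balance residual.

\[
H \;=\; \frac{\rho\, c_p\, k^2\, u\, (\theta_0 - \theta_a)}
{\left[\ln\frac{z-d_0}{z_{0m}} - \Psi_m\!\left(\frac{z-d_0}{L}\right) + \Psi_m\!\left(\frac{z_{0m}}{L}\right)\right]
\left[\ln\frac{z-d_0}{z_{0h}} - \Psi_h\!\left(\frac{z-d_0}{L}\right) + \Psi_h\!\left(\frac{z_{0h}}{L}\right)\right]},
\qquad
\lambda E \;\approx\; R_n - G - H
\]

Here L is the Obukhov length, z0m and z0h the momentum and heat roughness lengths, and d0 the displacement height; setting Ψm = Ψh = 0 recovers the neutral (uncorrected) case.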

  5. PRODUCT DIFFERENCES
  • Spatial – what does 0.25° (or 1°) ET represent?
  • Temporal – how are values scaled (consistency in EF?) – see the EF upscaling sketch below
  • Forcing clearly plays a major role in flux variation
  • Note that no products use the same forcing!
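One common, though not universal, answer to the temporal-scaling question is the constant evaporative fraction (EF) assumption: the EF at satellite overpass is taken to hold over the day. A minimal Python sketch of that upscaling (variable names are illustrative, not from any product's code):

    def daily_et_from_overpass(le_inst, rn_inst, g_inst, rn_daily, g_daily=0.0):
        """Upscale instantaneous LE to daily ET assuming a constant evaporative fraction.

        le_inst, rn_inst, g_inst : instantaneous LE, net radiation, soil heat flux [W m-2]
        rn_daily, g_daily        : daily mean available-energy terms [W m-2]
        Returns ET in mm per day (latent heat of vaporisation ~2.45 MJ kg-1).
        """
        ef = le_inst / (rn_inst - g_inst)        # evaporative fraction at overpass time
        le_daily = ef * (rn_daily - g_daily)     # assume EF is constant through the day
        return le_daily * 86400.0 / 2.45e6       # W m-2 sustained over a day -> mm of water

Whether EF really is conserved through the day (and across a 0.25° or 1° pixel) is precisely the consistency question the slide raises.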

  6. PRODUCTION & EVALUATION
  Outcome of the EGU-Vienna meeting (April 2011):
  • Consensus amongst several product participants
  • Version 0 will be a number of ‘competing’ products
  • Agreement to run using common forcing – but yet to agree on that format (some problematic variables)
  • Likely SRB 4.0 radiation and associated met inputs
  • 3-hourly, 0.5 degree
  • LE & H (common methodology for G?? – one candidate parameterization is sketched below)
  • When? Need to compile forcings (timing of SRB 4.0)
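On the "common methodology for G" question, one candidate (quoted only as an illustration, not as the agreed choice) is a SEBS-style interpolation of soil heat flux between bare-soil and full-canopy fractions of net radiation:

\[
G \;=\; R_n\,\bigl[\Gamma_c + (1 - f_c)\,(\Gamma_s - \Gamma_c)\bigr],
\qquad \Gamma_c \approx 0.05,\;\; \Gamma_s \approx 0.315
\]

with fc the fractional vegetation cover; the coefficient values are those commonly quoted for that scheme, and other products handle G quite differently, which is why a common methodology still needs agreeing.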

  7. PRODUCT EVALUATION PLANS
  Product assessment requires analysis over a range of regional and global scales:
  • Wood@Princeton: have compared SEBS, PM & PT using common Aqua forcing for 3 years; 1984-2007 ensemble approach globally
  • Fisher@JPL (planned): 5 RS-ET models and 3 LSMs using identical forcing data and model protocols; 1 km resolution, validated against 253 sites (La Thuile)
  • McCabe@UNSW (just started): GEWEX-RHP scale multi-model comparison with common WRF forcing
  • Sonia@ETH + Carlos@P-O (just heard)

  8. MULTI-MODELS OVER THE MDB
  Using common WRF forcing: a) PM, b) SEBS, c) WRF-NOAH – shows inherent model differences that need investigation
  [Figure: three latent heat flux maps, colour scale 0-400 W/m2]

  9. [Figure: map, colour scale 0-600]

  10. MULTI-MODELS AT TUMBARUMBA
  Using common forcing – PM, SEBS, WRF

  11. MDB INTERCOMPARISON

  12. MDB INTERCOMPARISON Dry pixel, 10km resolution

  13. MDB INTERCOMPARISON Dry pixel, 30km resolution

  14. MDB INTERCOMPARISON Dry pixel, 50km resolution

  15. MDB INTERCOMPARISON Dry pixel, 90km resolution

  16. MDB INTERCOMPARISON Dry, Semi, Wet pixels (top to bottom), 50km resolution

  17. MDB INTERCOMPARISON Observations, Tumbarumba

  18. MULTI-SENSOR ESTIMATES
  Multi-model/multi-sensor runs from the Princeton group:
  • SEBS, PM and modified-PT approach
  • ISCCP and SRB for forcing (and PU forcing)
  • Issues in ISCCP… so used 2 SRB data sets (SRB/SRB-QC)
  • Daily time scale / 0.5 degree / 1984-2007
  • Input data initially evaluated against the PU 50-yr forcing data… resulting in a composite dataset

  19. MULTI-SENSOR ESTIMATES

  20. MULTI-SENSOR ESTIMATES

  21. MULTI-SENSOR ESTIMATES

  22. MULTI-SENSOR ESTIMATES
  Multi-model/multi-sensor runs from the Princeton group:
  • Issues in Ts-Ta between data sets (which is right??)
  • Noted issues in humidity and temperature trends (compared to the PU data set)
  • Uncertainty between models > uncertainty in radiation forcing
  • Largest uncertainty (absolute) in the humid tropics
  • Largest uncertainty (mean) in transitional regions

  23. MODEL EVALUATION
  Range of evaluation efforts being undertaken:
  • Tower data (where, when, number)
  • Basin water balance (P-ET-dS/dt) – which P, which dS/dt, accuracy in R (see the residual form below)
  • Atmospheric water budgets
  • Hydrological consistency
  • Need a co-ordinated approach to product evaluation
  • Requires identification of high-quality data-sets
  • Agreement on spatial and temporal scales
  • Metrics for assessment
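For the basin check, writing out the residual form and its uncertainty (assuming independent errors) makes the slide's point explicit: the choice and accuracy of the precipitation, runoff (R) and storage-change products propagate directly into the "reference" ET that the retrievals are judged against.

\[
ET \;=\; P - R - \frac{dS}{dt},
\qquad
\sigma_{ET} \;\approx\; \sqrt{\sigma_P^2 + \sigma_R^2 + \sigma_{dS}^2}
\]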

  24. CONTINUED EVALUATION
  Need to examine issues of:
  • Model interdependencies (forcing and formulation)
  • Sensitivity of approaches to forcing variables (a perturbation sketch follows below)
  • Regional scale (MDB) and daily evaluation of retrievals
    - do products represent diurnal variability (3-hourly)?
    - are spatial patterns reflected at the regional scale?
    - how consistent are the retrievals with other data?
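A minimal way to probe sensitivity to forcing is a one-at-a-time perturbation experiment. The sketch below assumes a hypothetical et_model callable that maps a dict of forcing arrays to an ET field; it is an outline of the idea, not any group's actual protocol.

    import numpy as np

    def forcing_sensitivity(et_model, forcing, fields, perturb=0.10):
        """Relative change in mean ET when each forcing field is scaled by (1 + perturb).

        et_model : callable, dict of forcing arrays -> ET array (hypothetical)
        forcing  : dict of baseline forcing arrays, e.g. {"rn": ..., "ta": ..., "q": ...}
        fields   : keys in `forcing` to perturb one at a time
        """
        base = float(np.mean(et_model(forcing)))
        sensitivity = {}
        for name in fields:
            perturbed = dict(forcing)
            perturbed[name] = forcing[name] * (1.0 + perturb)  # additive offsets may suit temperature better
            sensitivity[name] = (float(np.mean(et_model(perturbed))) - base) / base
        return sensitivity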

  25. NEEDED DISCUSSIONS
  • Significant uncertainties in radiation
  • Accuracy of all data sets (ET as a Level 4 product)
  • Consistency in ‘stable’ forcing (LULC, basin/land masks)
  • Aerodynamic and surface properties are key: techniques to derive these? (one rule-of-thumb sketch follows below)
  • Spatial and temporal scales – who are the users?
  • Accuracy in evaluations (runoff & rainfall data-sets)
  • How to include snow and interception
  • Dual-source models and GEWEX independent datasets
  • Regional scale intercomparisons (diurnal scale)
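On the aerodynamic-properties point, the rule-of-thumb relations often used when only a canopy height is available are sketched below (d ≈ 0.67 h, z0m ≈ 0.123 h, z0h ≈ 0.1 z0m), together with the neutral aerodynamic resistance they imply; operational products use a range of more elaborate schemes, so this is illustrative only.

    import math

    def aerodynamic_resistance(u_ref, z_ref, canopy_height, k=0.41):
        """Neutral-stability aerodynamic resistance r_a [s m-1] from canopy-height rules of thumb.

        u_ref         : wind speed at reference height z_ref [m s-1]
        z_ref         : reference/measurement height [m]
        canopy_height : vegetation height h [m]
        """
        d = 0.67 * canopy_height          # zero-plane displacement height
        z0m = 0.123 * canopy_height       # momentum roughness length
        z0h = 0.1 * z0m                   # roughness length for heat
        return (math.log((z_ref - d) / z0m) * math.log((z_ref - d) / z0h)) / (k ** 2 * u_ref)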

  26. Overview of Satellite Based Products for Global ET Matthew McCabe, Carlos Jimenez, Bill Rossow, Sonia Seneviratne, Eric Wood and numerous data providers & contributors…

  27. BREAKDOWN OF APPROACHES
  Penman-Monteith assumptions:
  • sufficient upwind fetch
  • a uniform saturated surface
  • the canopy satisfies the big-leaf assumption
  • the slope of the saturation vapor pressure curve is approximated at ambient temperature and water vapor
  Priestley-Taylor assumptions:
  • no advection, a wet surface and long fetch, so the wind (aerodynamic) term of the Penman equation tends to zero
  (Both combination equations are written out below.)
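For reference, the two combination equations behind these assumption lists, in their usual textbook form:

\[
\lambda E_{PM} \;=\; \frac{\Delta\,(R_n - G) + \rho_a c_p\,(e_s - e_a)/r_a}{\Delta + \gamma\,(1 + r_s/r_a)},
\qquad
\lambda E_{PT} \;=\; \alpha\,\frac{\Delta}{\Delta + \gamma}\,(R_n - G),\;\; \alpha \approx 1.26
\]

The Priestley-Taylor form is what remains when the aerodynamic (vapor pressure deficit) term is dropped and its typical contribution over a wet surface is absorbed into the coefficient α.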

  28. GEWEX LANDFLUX PROJECT
  What has been determined so far?
  • Geographic patterns broadly consistent
  • Large range in some regions, esp. tropics & rainforests
  • Range in IPCC model simulations not markedly different from other data
  • More detailed analysis and interpretation is required!
  • No common data set for algorithm inter-comparison
  • No benchmark with which to compare data
  • Inter-dependence of data sets is an issue

  29. MDB INTERCOMPARISON
  [Figure: panel grid comparing SEBS, CLM, NOAH, AMSR-E and WRF estimates]

  30. MULTI-SENSOR ESTIMATES
