Overview of Satellite Based Products for Global ET Matthew McCabe, Carlos Jimenez, Bill Rossow, Sonia Seneviratne, Eric Wood and numerous data providers & contributors…
“Produce a GLOBAL multi‐decadal surface flux product” • Jimenez et al. (2011) “Global inter-comparison of 12 land surface heat flux estimates” JGR, 116, D02102 • Mueller et al. (2011) “Evaluation of global evapotranspiration datasets: first results of the LandFlux-EVAL project” GRL, 38, L06402 GEWEX LANDFLUX PROJECT
Monin-Obukhov Similarity Theory (MOST) • considering or ignoring the stability correction terms, plus other assumptions, leads to different ET formulations (Penman, Priestley-Taylor, etc.) PRODUCT SIMILARITIES
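To make the stability-correction point concrete, here is a hedged sketch (textbook MOST log-profile forms with illustrative tower values, not any LandFlux product's code) of the bulk aerodynamic resistance with and without the integrated stability corrections Ψ_m and Ψ_h:

```python
import math

K = 0.41  # von Karman constant (dimensionless)

def aerodynamic_resistance(u, z, d, z0m, z0h, psi_m=0.0, psi_h=0.0):
    """Bulk aerodynamic resistance r_a (s/m) from MOST log profiles.

    u: wind speed (m/s) at height z (m); d: displacement height (m);
    z0m, z0h: roughness lengths for momentum and heat (m);
    psi_m, psi_h: integrated stability corrections. Setting both to
    zero recovers the neutral form used when corrections are ignored.
    """
    return ((math.log((z - d) / z0m) - psi_m)
            * (math.log((z - d) / z0h) - psi_h)) / (K ** 2 * u)

# Same tower conditions, with and without an (illustrative) unstable correction
ra_neutral = aerodynamic_resistance(u=3.0, z=10.0, d=0.7, z0m=0.1, z0h=0.01)
ra_unstable = aerodynamic_resistance(u=3.0, z=10.0, d=0.7, z0m=0.1, z0h=0.01,
                                     psi_m=1.0, psi_h=1.0)
# Unstable conditions lower r_a, so a formulation that ignores the
# correction infers a different turbulent flux from the same forcing
```

The contrast between `ra_neutral` and `ra_unstable` is one source of the divergence between otherwise similar ET formulations noted on this slide.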
• Spatial – what does a 0.25° (or 1°) ET value represent? • Temporal – how are values scaled (is EF consistency assumed)? • Forcing clearly plays a major role in flux variation • Note that no two products use the same forcing! PRODUCT DIFFERENCES
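One concrete form of the temporal-scaling question above is evaporative-fraction upscaling. A minimal sketch (illustrative values, not any specific product's scheme), assuming EF at satellite overpass holds for the whole day:

```python
def daily_le_from_ef(le_inst, h_inst, rn_minus_g_daily):
    """Upscale instantaneous latent heat flux (W/m2) to a daily mean
    by assuming the evaporative fraction EF = LE / (LE + H) observed
    at satellite overpass time is constant over the day."""
    ef = le_inst / (le_inst + h_inst)
    return ef * rn_minus_g_daily

# Overpass fluxes LE = 300, H = 100 W/m2 give EF = 0.75;
# with an assumed daily mean available energy (Rn - G) of 160 W/m2:
le_daily = daily_le_from_ef(300.0, 100.0, 160.0)  # 0.75 * 160 = 120 W/m2
```

Whether EF really is conserved through the day is exactly the consistency question the slide raises; products that scale differently will diverge even from identical overpass retrievals.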
Outcome of EGU-Vienna meeting (April 2011): • Consensus amongst several product participants • Version 0 will be a number of ‘competing’ products • Agreement to run using common forcing – but yet to agree on that format (some problematic variables) • Likely SRB 4.0 radiation and associated met inputs • 3 hourly, 0.5 degree • LE & H (common methodology for G?) • When? Need to compile forcings (timing of SRB 4.0) PRODUCTION & EVALUATION
Product assessment requires analysis over a range of regional and global scales: • Wood@Princeton: have compared SEBS, PM & PT using common Aqua forcing for 3 years; 1984-2007 ensemble approach globally • Fisher@JPL (planned): 5 RS-ET models and 3 LSMs using identical forcing data and model protocols; 1 km resolution, validated against 253 sites (La Thuile) • McCabe@UNSW (just started): GEWEX-RHP scale multi-model comparison with common WRF forcing • Sonia@ETH + Carlos@P-O (just heard) PRODUCT EVALUATION PLANS
MULTI-MODELS OVER THE MDB – Using common WRF forcing: a) PM, b) SEBS, c) WRF-NOAH shows inherent model differences that need investigation [figure: latent heat flux maps, scale 0-400 W/m2]
MULTI-MODELS AT TUMBARUMBA – Using common forcing: PM, SEBS, WRF [figure: flux series, scale 0-600 W/m2]
MDB INTERCOMPARISON [figure slides]: Dry pixel at 10 km, 30 km, 50 km and 90 km resolution; Dry, Semi and Wet pixels (top to bottom) at 50 km resolution; Observations at Tumbarumba
Multi-model/multi-sensor runs from Princeton group: • SEBS, PM and modified-PT approach • ISCCP and SRB for forcing (and PU forcing) • Issues in ISCCP, so two SRB data sets were used (SRB / SRB-QC) • daily time scale / 0.5 degree / 1984-2007 • Input data initially evaluated against the PU 50-year forcing data, resulting in a composite dataset MULTI-SENSOR ESTIMATES
Multi-model/multi-sensor runs from Princeton group: • Issues in Ts-Ta between data sets (which is right?) • Noted issues in humidity and temperature trends (compared to the PU data set) • Uncertainty between models exceeds uncertainty in radiation forcing • Largest uncertainty (absolute) in the humid tropics • Largest uncertainty (mean) in transitional regions MULTI-SENSOR ESTIMATES
Range of evaluation efforts being undertaken: • Tower data (where, when, how many) • Basin water budget (P - ET - dS/dt) – which P, which dS/dt, accuracy in R • Atmospheric water budgets • Hydrological consistency • Need a coordinated approach to product evaluation • Requires identification of high-quality data sets • Agreement on spatial and temporal scales • Metrics for assessment MODEL EVALUATION
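The basin water-budget check listed above is a one-line residual, which is exactly why the bullet asks "which P, which dS/dt": every input term's error maps straight into the ET estimate. A minimal sketch with illustrative numbers (assumed units of mm/month):

```python
def basin_et(p, r, ds_dt):
    """Basin-scale ET as the water-budget residual ET = P - R - dS/dt.

    p: precipitation, r: runoff, ds_dt: storage change (e.g. from
    GRACE), all in consistent units (here mm/month). Errors in any
    term propagate one-to-one into the ET estimate."""
    return p - r - ds_dt

et = basin_et(p=80.0, r=25.0, ds_dt=5.0)  # 80 - 25 - 5 = 50.0 mm/month
```

Swapping in a different precipitation or storage product shifts `et` by the full difference between the inputs, which is why agreement on high-quality reference data sets comes first.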
Need to examine issues of: • Model interdependencies (forcing and formulation) • Sensitivity of approaches to forcing variables • Regional-scale (MDB) and daily evaluation of retrievals: – do products represent diurnal variability (3-hourly)? – are spatial patterns reflected at the regional scale? – how consistent are the retrievals with other data? CONTINUED EVALUATION
Significant uncertainties in radiation • Accuracy of all data sets (ET as a Level 4 product) • Consistency in ‘stable’ forcing (LULC, basin/land masks) • Aerodynamic and surface properties are key: what techniques to derive these? • Spatial and temporal scales – who are the users? • Accuracy in evaluations (runoff & rainfall data sets) • How to include snow and interception? • Dual-source models and GEWEX independent datasets • Regional-scale intercomparisons (diurnal scale) NEEDED DISCUSSIONS
Penman-Monteith assumptions: • sufficient upwind fetch • a uniform saturated surface • the canopy satisfies the big-leaf assumption • the slope of the saturation vapor pressure curve is approximated using ambient temperature and water vapor • Priestley-Taylor assumptions: • no advection, a wet surface and long fetch, so the wind function in the Penman equation tends to zero BREAKDOWN OF APPROACHES
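The "wind function tends to zero" limit can be checked numerically. A hedged sketch of the standard combination equations (textbook FAO-56-style slope formula; psychrometric constant fixed at a nominal sea-level value; not the exact forms used by any product here): with the aerodynamic term set to zero, the Penman combination collapses to Priestley-Taylor with alpha = 1.

```python
import math

GAMMA = 0.066  # psychrometric constant (kPa/K), nominal sea-level value

def delta_svp(t_c):
    """Slope of the saturation vapor pressure curve (kPa/K),
    evaluated at ambient air temperature per the PM assumption."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def penman_combination(rn_minus_g, t_c, aero_term):
    """Penman combination form: radiative term plus an aerodynamic
    (wind-function) term, both divided by (delta + gamma)."""
    d = delta_svp(t_c)
    return (d * rn_minus_g + aero_term) / (d + GAMMA)

def priestley_taylor(rn_minus_g, t_c, alpha=1.26):
    """Priestley-Taylor: equilibrium evaporation scaled by alpha,
    with the wind function dropped entirely."""
    d = delta_svp(t_c)
    return alpha * d / (d + GAMMA) * rn_minus_g

# No advection / zero wind term: Penman reduces to PT with alpha = 1
assert abs(penman_combination(100.0, 25.0, 0.0)
           - priestley_taylor(100.0, 25.0, alpha=1.0)) < 1e-9
```

The empirical alpha = 1.26 then re-inflates the equilibrium rate to account, on average, for the advection the PT derivation drops, which is one reason the two formulations diverge under dry, advective conditions.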
What has been determined so far? • Geographic patterns broadly consistent • Large range in some regions, especially the tropics & rainforests • Range in IPCC model simulations not markedly different from other data • More detailed analysis and interpretation is required! • No common data set for algorithm inter-comparison • No benchmark against which to compare data • Inter-dependence of data sets is an issue GEWEX LANDFLUX PROJECT
MDB INTERCOMPARISON [figure: panels comparing SEBS, CLM, NOAH, AMSR-E and WRF]