GPP/RE discussion • Who am I: • Ankur Desai • National Center for Atmospheric Research • What we’re doing • Initial results • What we have • What should we do • Plan of action
What we’re doing • Using the same dataset as the NEE gap-filling comparison (12 site-years, 51 scenarios/site) to compare GPP/RE across methods • 10 of 15 methods produce GPP/RE • Neural networks and look-up tables do not? • 9 of these analyzed so far • With variants, currently at 19 analyzed • Unlike NEE, no benchmark • However, the BETHY model is part of the group • Can use BETHY as a benchmark • Because I’m lazy
What we’re doing • Hypotheses: • Intramethod variability in GPP/RE < intermethod variability for any site • i.e., insensitive to most gaps • Site GPP/RE estimates vary < 20% across methods • Using BETHY as the model benchmark, the mean of the other methods is similar to BETHY GPP/RE • More sophisticated methods differ less from BETHY GPP/RE than simpler methods • Variability in GPP/RE < GPP or RE • Others?
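The first hypothesis can be sketched as a simple variance comparison: intramethod variability (spread across the 51 gap-scenario replicates of one method) versus intermethod variability (spread of the method means). This is an illustrative sketch, not the study's actual code; the GPP values below are made up.

```python
# Sketch of hypothesis 1: intramethod variability < intermethod variability.
# Hypothetical annual GPP (gC m-2 yr-1); real data would have 51 replicates
# per method from the gap scenarios.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# {method: [replicate GPP estimates]} -- illustrative values only
gpp = {
    "A": [1200, 1210, 1195, 1205],
    "B": [1350, 1340, 1360, 1345],
    "C": [1100, 1090, 1110, 1105],
}

# Intramethod variability: mean of the within-method replicate variances
intra = sum(variance(v) for v in gpp.values()) / len(gpp)

# Intermethod variability: variance of the method means
means = [sum(v) / len(v) for v in gpp.values()]
inter = variance(means)

print(intra < inter)  # hypothesis 1 expects True
```

The same means can also feed the 20% test (spread of method means relative to the grand mean).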
Initial results • To date, all datasets have been processed and put in a common binary format • Daily and annual sums of GPP and RE computed • Mean and variance computed across the 51 replicates and across methods • Diagnostic plots made • Box plots show GPP/RE across methods (letters), replicates (gray bars), mean and std. dev. (+) across methods, and range (box) • Colors delineate method type
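The daily/annual aggregation step can be sketched as below, assuming half-hourly fluxes in µmol CO2 m⁻² s⁻¹ (the usual eddy-covariance units); the conversion constants and flux values are illustrative, not taken from the study's data.

```python
# Sketch of the daily-sum step: integrate half-hourly GPP (umol CO2 m-2 s-1)
# into daily totals in gC m-2 d-1. 1 umol CO2 carries 12.011e-6 g of carbon,
# and each half-hourly record spans 1800 s.

UMOL_TO_GC = 12.011e-6   # grams of carbon per umol CO2
HALF_HOUR_S = 1800       # seconds per half-hourly record

def daily_sums(halfhourly_flux):
    """Sum 48 half-hourly fluxes per day into daily totals (gC m-2 d-1)."""
    days = []
    for i in range(0, len(halfhourly_flux), 48):
        day = halfhourly_flux[i:i + 48]
        days.append(sum(f * HALF_HOUR_S * UMOL_TO_GC for f in day))
    return days

# Two synthetic days of a constant 5 umol m-2 s-1 flux
flux = [5.0] * 96
daily = daily_sums(flux)
annual_total = sum(daily)  # over a full year this would be the annual sum
```

Annual sums are then just the sum of the daily values, and the mean/variance across the 51 replicates follow from stacking these per-replicate series.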
Initial results • Other plots • GPP/RE plots • Cumulative 2-week-smoothed GPP (negative values) and RE (positive values) to see when methods diverge at a site • Line is the method mean, shadow is the variance across replicates • Similar plots made for the growing season (mid-May to mid-Sept) and dormant season (all other months) • Benchmark plots show the same data as percent difference from the full BETHY run
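The 2-week smoothing behind the cumulative curves can be sketched as a centered 14-day moving average over the daily sums. This is a sketch under assumptions (the actual smoother and its edge handling are not specified on the slide; this version shrinks the window near the series ends), with synthetic data.

```python
# Sketch: centered 14-day moving average, then a cumulative curve with GPP
# plotted as negative values per the slide's sign convention.

def smooth(daily, window=14):
    """Centered moving average; window shrinks near the series edges."""
    half = window // 2
    out = []
    for i in range(len(daily)):
        lo = max(0, i - half)
        hi = min(len(daily), i + half)
        out.append(sum(daily[lo:hi]) / (hi - lo))
    return out

daily_gpp = [float(d % 10) for d in range(60)]  # synthetic daily GPP series
smoothed = smooth(daily_gpp)

# Cumulative GPP curve (negative by convention); RE would accumulate positive
cumulative, total = [], 0.0
for v in smoothed:
    total += v
    cumulative.append(-total)
```

Plotting one such curve per method, with a shaded band for the across-replicate variance, reproduces the divergence plots described above.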
What do we have • A lot of messy plots! • Data reduction is hard (5 GB of data!) • Tables might work better? • Have not done hypothesis testing yet • Generally, intermethod variability exceeds 20% in some cases, and some methods show biases relative to BETHY • Intramethod var. << intermethod var. • Will we be able to tease mechanistic reasons for method differences out of this analysis? • Can we make any recommendations?
What should we do • Other hypotheses • Other kinds of benchmarks/models • Other kinds of comparisons • Artificial data? • Technical approach • Different kinds of figures • Different analysis techniques / stats • Philosophical questions • Is this worthy of a manuscript?
Plan of action • Get all data! • Mixed-gap runs - late Oct. • No il1, it3_2001 • Jens to give Dave BETHY data (all sites), Dave to corrupt, Antje to gapify (10 mixed scenarios, 35% missing + 0% missing) - next 3-4 months • 10 mixed gaps only + r0 • Fill and decompose as you would when publishing GPP/RE for your sites - final sets Spring ’07 • Run other benchmarks and tests if needed • Run corrupted data through methods • Seasonal diurnal plots • ANOVA of GPP or RE for site x method • Find independent data (chamber, inventory, etc.) • Share data • Discuss - Desai to create a wiki • Write a manuscript - Dec. • Delegate tasks
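The proposed ANOVA step can be sketched with a one-way F-statistic testing whether GPP differs across methods at a single site (the full site x method design would extend this to two factors). A sketch with made-up replicate values, not the study's data:

```python
# Sketch: one-way ANOVA F-statistic across methods at one site.
# F = between-group mean square / within-group mean square.

def one_way_f(groups):
    """F-statistic for k groups of replicate values."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical annual GPP replicates for three methods at one site
f_stat = one_way_f([[1200, 1210, 1195],
                    [1350, 1340, 1360],
                    [1100, 1090, 1110]])
# Compare f_stat against the F(k-1, n-k) critical value; a large value
# indicates the method effect dominates replicate (gap-scenario) noise
```

In practice scipy.stats.f_oneway would give the same statistic plus a p-value; the hand-rolled version just makes the between/within decomposition explicit.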