Cloud Clearing Round Robin Claire Bulgin, Chris Merchant University of Reading
Session Introduction • Meeting Schedule • GlobTemperature – Cloud Clearing Round Robin (CCRR) Objectives and Design • Aerosol CCI – Round Robin Objectives • Cloud CCI – Benefits from participating in CCRR • Discussion
GlobTemperature CCRR Context • Cloud clearing is a fundamental pre-processing step in land surface temperature (LST) estimation from satellite data. • Cloud detection algorithms are being developed by a number of groups using a variety of approaches. • A cloud clearing round robin encourages competition between groups to develop the best cloud detection algorithm over land. • This can then be applied in the generation of a LST climate data record from the Along Track Scanning Radiometers.
CCRR Participation • Participation from members of other projects also needing to develop or apply cloud detection techniques, e.g. Aerosol CCI, Cloud CCI, GlobSnow, GlobAlbedo. • Identification of potential participants as part of the User Requirements Survey for LST. • Raising awareness of the CCRR at meetings and conferences. • Literature review of current cloud detection techniques using visible and infrared data.
CCRR Recruitment and Communication • Invitations to participate in the cloud clearing round robin will be communicated amongst the scientific community in the following ways: • GlobTemperature Website • Events • Presentations • Science Networks (e.g. EarthTemp) • Direct invitation to groups actively involved in developing cloud detection over land.
Cloud Clearing Round Robin Protocol • This will define how the Cloud Clearing Round Robin will be run, and will cover the following elements: • Documents for participants. • Schedule for participation. • Rules of participation. • Protocol for algorithm selection. • Metrics for algorithm selection. • Content of the Round Robin Data Package distributed to all participants.
CCRR Objective Procedure [reconstructed from flow diagram] • 1st Data Release: Development/Training Dataset (cloud mask supplied) – participants apply and develop their algorithms. • 2nd Data Release: Test Dataset and Selection Dataset (blind data) – participants apply their algorithms and submit results. • Submissions are assessed through comparisons for algorithm selection.
CCRR Scene Selection and Classification • Cloud masks will be determined by expert inspection of nadir and forward view ATSR imagery using multi-channel and stereo methods. • Three datasets will be created: • Development/training • Test • Selection • Datasets will include a range of land surface types (desert, vegetated, snow, ice), atmospheric conditions (high/low water vapor, clear and aerosol-rich) and surface temperatures.
CCRR Data Provision • For each scene, we will provide all of the information required for a full Bayesian cloud detection scheme; algorithm developers may use some or all of these data. The fields provided will include: • ATSR Reflectance Channels • ATSR Brightness Temperatures • NWP Surface Temperature • NWP Total Column Water Vapor • Manual Cloud Mask (with the exception of the selection dataset) • Land/sea mask • Simulated reflectances and brightness temperatures (RTTOV 11.1) • Data quality information • Land cover information
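To illustrate how these fields combine in a Bayesian scheme, the sketch below computes a single-channel clear-sky probability from an observed brightness temperature and an RTTOV-simulated clear-sky brightness temperature. The Gaussian clear-sky likelihood, the flat cloudy-sky likelihood, the noise level and the prior are all placeholder assumptions for illustration, not the GlobTemperature implementation:

```python
import numpy as np

def bayesian_clear_probability(obs_bt, sim_bt, sigma=0.5, prior_clear=0.7):
    """Toy single-channel Bayesian clear-sky probability.

    obs_bt:      observed brightness temperature (K)
    sim_bt:      simulated clear-sky brightness temperature (K), e.g. from RTTOV
    sigma:       assumed combined NWP/forward-model/instrument noise (K) - placeholder
    prior_clear: assumed prior probability of clear sky - placeholder

    Bayes' theorem:
      P(clear|y) = P(y|clear)P(clear) / [P(y|clear)P(clear) + P(y|cloud)P(cloud)]
    """
    # Clear-sky likelihood: Gaussian centred on the simulated clear-sky BT.
    p_y_clear = np.exp(-0.5 * ((obs_bt - sim_bt) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # Cloudy-sky likelihood: uninformative flat PDF over a 100 K interval (placeholder).
    p_y_cloud = 1.0 / 100.0
    numerator = p_y_clear * prior_clear
    return numerator / (numerator + p_y_cloud * (1.0 - prior_clear))
```

An observation matching the clear-sky simulation yields a probability near one; a much colder observation (e.g. an opaque cloud top) drives it towards zero.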
Performance Metrics • Code for calculating the performance metrics (which will be used in the final algorithm selection exercise) will be provided to all participants to use in their developmental work, as part of the Round Robin Data Package. Potential performance metrics include: • Percentage of perfect classification • Hit Rate • False Alarm Rate • True Skill Score • LST Impacts • Performance under clear conditions vs. high aerosol loading • Performance over cold/reflective surfaces, e.g. snow, ice, desert.
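The first four metrics listed above have standard contingency-table forms, sketched below for a binary cloud mask compared against an expert truth mask. The exact definitions adopted in the CCRR metric software may differ:

```python
import numpy as np

def cloud_mask_metrics(predicted, truth):
    """Contingency-table skill scores for a binary cloud mask.

    predicted, truth: boolean arrays, True = cloudy.
    """
    predicted = np.asarray(predicted, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    hits = np.sum(predicted & truth)              # cloud flagged as cloud
    misses = np.sum(~predicted & truth)           # cloud flagged as clear
    false_alarms = np.sum(predicted & ~truth)     # clear flagged as cloud
    correct_negatives = np.sum(~predicted & ~truth)
    total = hits + misses + false_alarms + correct_negatives
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return {
        "perfect_classification": (hits + correct_negatives) / total,
        "hit_rate": hit_rate,
        "false_alarm_rate": false_alarm_rate,
        # True Skill Score (Hanssen-Kuipers discriminant): hit rate minus false alarm rate.
        "true_skill_score": hit_rate - false_alarm_rate,
    }
```

A perfect mask gives a true skill score of 1; a mask with no skill (equal hit and false alarm rates) gives 0.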
Round Robin Data Package • This will be distributed to all participants and will include: • A description of the round robin protocol. • The round robin datasets (distributed according to the schedule). • A description of the datasets. • Software for calculating performance metrics (identical to that to be employed in the algorithm selection process). • Distribution will be done via a zipped tar file from a FTP site.
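Retrieving and unpacking the zipped tar file might look like the sketch below; the FTP URL and file names are placeholders, not the actual distribution details:

```python
import tarfile
import urllib.request
from pathlib import Path

# Placeholder URL - the real FTP address will be given in the data package documentation.
PACKAGE_URL = "ftp://ftp.example.org/ccrr/round_robin_data_package.tar.gz"

def fetch_package(url, dest_dir):
    """Download the zipped tar package into dest_dir and return the archive path."""
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive = dest_dir / "round_robin_data_package.tar.gz"
    urllib.request.urlretrieve(url, archive)  # urllib supports ftp:// URLs
    return archive

def unpack_package(archive, dest_dir):
    """Extract the gzipped tar archive into dest_dir."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest_dir)
```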
Development and Submission Support • Development • Following the distribution of the Round Robin Data Package, participants will be given support in accessing and using the data. • Responses to questions will be documented on a webpage or in an ‘FAQ’ document (where not already covered in the data package documentation) to facilitate troubleshooting. • Submission • The test and selection datasets will be disseminated together, in accordance with the timing specified in the schedule. • The time period between this dissemination and the submission date will be short to prevent further tuning and development of algorithms. • Re-submission will only be allowed in the event of formatting problems or corrupt data, in order to maintain objectivity in the algorithm selection process.
A note on internal algorithm submission… • Algorithms submitted to the round robin from internal partners will be subject to the same protocol as external submissions. • Development will only be possible using the training dataset. • Development, test and selection datasets will be disseminated at the same time to all participants and internal parties will not have access to the selection data cloud mask to ensure objectivity.
CCRR – Algorithm Selection • A full analysis of algorithm performance, covering all scenes, algorithms and metrics, will be extensive and will therefore be presented online. • The selected algorithm will be implemented within the LST climate data record production chain. • The performance of the selected algorithm in large-scale application will be reviewed in case problematic circumstances arise outside the range of situations covered by the round robin exercise.
Discussion Points • Scene selection – what range of scenes is required to generate results applicable to all projects? • Does there need to be an expert aerosol mask as well as a cloud mask? • Are more metrics required that are aerosol/cloud specific? • How will the workload be spread between the GlobTemperature and Aerosol CCI projects? • Are other auxiliary data fields required for aerosol classification? • Are other avenues of communication/invitation required for the Aerosol and Cloud CCIs?