Building the BaBarGrid for Data Analysis
From Distributed Computing to Grid Computing

[Architecture diagram: the user's terminal (Globus client, AFS client, AFS cache) submits work to a remote site through a Farm Gatekeeper; on the compute farm a script sets up BFROOT and the job runs on a node with its own AFS cache and NFS disks; AFS tokens are obtained via GSI and SSL (GSI-Klog dæmon) and via Kerberos; the AFS server holds the experimental data, executables, libraries and results.]

The Data & Analysis Problem

The BaBar experiment is expanding our understanding of antimatter. Running at an accelerator at Stanford, it gathers data on the decays of rare B particles. We have now recorded over 100 million B decays. These are real events, not simulated ones (though we have those too!), and hundreds more arrive every minute. Each provides a lot of data (20 Mb+). This data is studied by a team of over 500 physicists from many countries, so the data and the computers to process it are distributed across the world. There is a strong contingent from the UK.

Data is distributed across many sites, and it may be duplicated. Processing must be spread across the available resources: we use dedicated CPU farms at 9 UK institutes, and also at Lyon, Karlsruhe and Bologna as well as Stanford. The CPU farms and disk arrays do not all 'belong to BaBar'; they belong to the universities and research institutes and are shared.

Mission

Grid technology is clearly the way forward. It promises a system in which the user specifies the data they want to analyse and how they want to analyse it, and a job submission system locates the relevant files (choosing between alternatives if there are multiple copies), runs the analysis jobs on those files using locally available CPUs, retrieves the various outputs and combines them seamlessly before returning them to the user. Grid authorisation and authentication tools can enable BaBar collaborators to use facilities at any BaBar site without bureaucratic hindrance.

Grid Technology & Demonstrator

Step-1: Data Specification and Location

A physicist selects events of a particular type that they are interested in, and uses a web browser to ask which sites have files of this type. Sites are listed in order of preference: the physicist may want to use data at their local site and go to remote sites only for files that do not exist locally, or may choose to use the least-used computational resource. The system handles this prioritisation (a minimal sketch of the ordering follows Step-4).

Step-2: Job Submission

There are two methods available for job submission. The first is web-portal submission: the physicist makes a few simple choices and finishes by downloading a tar file containing an index of metadata describing the location of the data files. The second is gsub submission:

$ gsub -site BABAR-MAN babar-exe tcl-file

gsub uses AFS to create the entire input and output sandboxes. All references to the necessary libraries, steering files and user input files are imported on demand, and output can also be placed within the well-tested AFS framework (a scripted example follows Step-4).

Step-3: Status Monitoring and Alibaba

gsub and the submitted jobs communicate with Alibaba via GSI-authenticated HTTPS. This enables the general farm status to be published, individual farm statuses to be displayed, and specifically tailored farm statuses to be shown to authenticated viewers (a polling sketch follows Step-4).

Step-4: Output Retrieval

The histograms produced by each separate job are written to the user's home directory, again using AFS, where they can be displayed and studied (a merging sketch follows below). Work is also continuing on user documentation and interoperability measurements.
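The site ordering described in Step-1 is simple to express. The Python sketch below illustrates the idea (local data first, then the least-loaded remote site); the site names, load figures and function are illustrative assumptions, not the actual BaBarGrid interface.

# Hypothetical illustration of the Step-1 site ordering: prefer the
# site that already holds the data locally, then the least-used
# remote resource. Names and load numbers are made up.
def rank_sites(sites, local_site):
    """Order candidate sites: local first, then by ascending load."""
    def key(site):
        is_remote = 0 if site["name"] == local_site else 1
        return (is_remote, site["load"])
    return sorted(sites, key=key)

if __name__ == "__main__":
    catalogue = [
        {"name": "BABAR-MAN", "load": 0.40},   # local site
        {"name": "BABAR-RAL", "load": 0.15},
        {"name": "BABAR-SLAC", "load": 0.80},
    ]
    for site in rank_sites(catalogue, local_site="BABAR-MAN"):
        print(site["name"], site["load"])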
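For Step-2, the gsub command can be scripted when a user has many tcl steering files to submit. The sketch below simply shells out to gsub for each file; it assumes gsub is on the PATH and takes the arguments exactly as in the example above, and the directory and file names are placeholders.

# Submit one gsub job per tcl steering file.
# Assumes `gsub -site <site> <exe> <tcl>` as shown above;
# the steering/ directory is a placeholder.
import glob
import subprocess

SITE = "BABAR-MAN"
EXECUTABLE = "babar-exe"

for tcl in sorted(glob.glob("steering/*.tcl")):
    result = subprocess.run(["gsub", "-site", SITE, EXECUTABLE, tcl])
    print(tcl, "submitted" if result.returncode == 0 else "failed")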
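Step-3 publishes farm status over GSI-authenticated HTTPS. The sketch below polls a status page using the user's grid proxy as a client certificate; the URL and proxy location are assumptions, as the poster does not give the real Alibaba endpoints or payload format.

# Poll a (hypothetical) Alibaba farm-status page over HTTPS,
# authenticating with the user's grid proxy certificate.
# STATUS_URL and the proxy path are illustrative assumptions.
import os
import requests

STATUS_URL = "https://alibaba.example.org/farm-status"   # hypothetical
PROXY = os.environ.get("X509_USER_PROXY", "/tmp/x509up_u%d" % os.getuid())

response = requests.get(STATUS_URL, cert=PROXY, timeout=30)
response.raise_for_status()
print(response.text)   # e.g. a plain-text or HTML farm status report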
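Once the Step-4 histograms have arrived in the user's AFS home directory, they usually need to be combined. The sketch below gathers the per-job output files and merges them with ROOT's hadd utility; the directory layout, file naming and the use of ROOT files are assumptions, since the poster does not specify the histogram format.

# Collect per-job histogram files from the AFS home directory and
# merge them with ROOT's `hadd` tool. Paths and naming are
# illustrative assumptions.
import glob
import os
import subprocess

outputs = sorted(glob.glob(os.path.expanduser("~/babar-jobs/hist_*.root")))
if outputs:
    subprocess.run(["hadd", "-f", "merged.root"] + outputs, check=True)
    print("merged", len(outputs), "files into merged.root")
else:
    print("no job output found yet")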