SRB @ CC-IN2P3
Jean-Yves Nief, CC-IN2P3
KEK-CCIN2P3 meeting on Grids, September 11th – 12th, 2006
Overview.
• 3 SRB servers: 1 Sun V440, 1 Sun V480 (Ultra Sparc III), 1 Sun V20z (AMD Opteron).
• OS: Solaris 9 and 10.
• Total disk space: ~ 8 TB.
• HPSS driver (non DCE) from 2003, using HPSS 5.1.
• MCAT: Oracle 10g.
• Environment with multiple OSes for clients and other SRB servers:
  • Linux: RedHat, Scientific Linux, Debian.
  • Solaris.
  • Windows.
  • Mac OS.
• Interfaces:
  • Scommands invoked from the shell (scripts based on them; see the sketch after this list).
  • Java APIs.
  • Perl APIs.
  • Web interface mySRB.
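Not in the original slides: a minimal sketch of what a shell script based on the Scommands looks like. The collection path and file names are invented for illustration; a session is assumed to be already configured in ~/.srb/.MdasEnv.

```sh
# Minimal Scommands session (illustrative paths and names).
Sinit                                                 # open a session using ~/.srb/.MdasEnv
Smkdir /home/nief.ccin2p3/demo                        # create a collection in the MCAT
Sput run2006.dat /home/nief.ccin2p3/demo/run2006.dat  # store and register a local file
Sls /home/nief.ccin2p3/demo                           # list the collection
Sget /home/nief.ccin2p3/demo/run2006.dat ./copy.dat   # retrieve the file
Sexit                                                 # close the session
```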
Who is using SRB @ CC-IN2P3 ? (in green = pre-production)
• High Energy Physics:
  • BaBar (SLAC, Stanford).
  • CMOS (International Linear Collider R&D).
  • Calice (International Linear Collider R&D).
• Astroparticle:
  • Edelweiss (Modane, France).
  • Pierre Auger Observatory (Argentina).
• Astrophysics:
  • SuperNovae Factory (Hawaii).
• Biomedical applications:
  • Neuroscience research.
  • Mammography project.
  • Cardiology research.
BaBar, SLAC & CC-IN2P3.
• BaBar: High Energy Physics experiment located close to Stanford (California).
• SLAC and CC-IN2P3 were the first sites opened to the BaBar collaborators for data analysis.
• Both held complete copies of the data (Objectivity).
• Now only SLAC holds a complete copy of the data.
• Natural candidates for testing and deployment of grid middleware.
• Data should be available within 24 to 48 hours.
• SRB: chosen for the distribution of hundreds of TBs of data.
BaBar architecture: 2 zones (SLAC + Lyon).
[Diagram: SRB servers and an MCAT at SLAC (Stanford, CA) and at CC-IN2P3 (Lyon), each in front of the local HPSS; transfer steps (1), (2) and (3) move data between the two zones.]
Extra details (BaBar).
• Hardware:
  • SUN servers (Solaris 5.8, 5.9, 5.10): NetraT V240, V440, V480, V20z.
• Software:
  • Oracle 10g for the SLAC and Lyon MCATs.
• MCAT synchronization: only users and physical resources.
• The contents of the two MCATs are compared to determine which data to transfer (see the sketch below).
• Steps (1), (2), (3) multithreaded under client control: very little latency.
• Advantage: an external client can pick up data from SLAC or Lyon without interacting with the other site.
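The catalogue-comparison step can be pictured with a hedged sketch: list the BaBar collection in each zone, diff the listings, and copy whatever Lyon is missing. Zone and collection names and the flat layout are assumptions; the real procedure is multithreaded and driven by a dedicated client.

```sh
# Hedged sketch: find files registered at SLAC but not yet at Lyon, then copy them.
# Zone and collection names are invented; a flat collection is assumed.
Sinit
Sls /slacZone/home/babar.data | sort > slac.list
Sls /lyonZone/home/babar.data | sort > lyon.list
comm -23 slac.list lyon.list > missing.list        # lines only in slac.list

while read -r f; do
    Sget "/slacZone/home/babar.data/$f" "/scratch/stage/$f"   # pull from SLAC
    Sput "/scratch/stage/$f" "/lyonZone/home/babar.data/$f"   # push into Lyon
    rm -f "/scratch/stage/$f"
done < missing.list
Sexit
```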
Overall assessment for BaBar.
• A lot of application development time saved thanks to SRB.
• Transparent access to data:
  • Very useful in a hybrid environment (disk, tape).
  • Easy to scale the service (adding/removing servers on the fly).
  • Client applications do not depend on changes of the data's physical location.
• Fully automated procedure on both sides.
• Easy for SLAC to recover corrupted data.
• 300 TB (530,000 files) shipped to Lyon.
• Up to 3 TB / day from tape to tape (minimum latency); now going up to 5 TB / day.
ESNET traffic with one server on both sides (April 2004).
[Chart: top ESnet flows; SLAC (US) → IN2P3 (FR) tops the list at 1 Terabyte/day, ahead of flows such as Fermilab (US)–CERN, SLAC (US)–INFN Padova (IT), Fermilab (US)–U. Chicago (US), U. Toronto (CA)–Fermilab (US), Helmholtz-Karlsruhe (DE)–SLAC (US), CEBAF (US)–IN2P3 (FR), Fermilab (US)–JANET (UK), Argonne (US)–Level3 (US), Argonne–SURFnet (NL), and other DOE Lab pairs.]
CMOS, Calice: ILC.
• CMOS: 5 to 10 TB / year.
• Calice: 2 to 5 TB / year.
[Diagram: user PCs and IReS (Strasbourg, 2 TB) feeding SRB servers at CC-IN2P3, backed by HPSS/Lyon.]
SuperNovae Factory.
• Telescope data stored in SRB, processed in Lyon (almost online; see the sketch below).
• Collaborative tool + backup (files exchanged between French and US users).
• SRB needed for the « online »!
[Diagram: Hawaii telescope sending a few GBs / day to SRB servers at CC-IN2P3 (HPSS/Lyon), with HPSS/NERSC Berkeley as a projected second archive.]
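Not from the slides: one way to picture the near-online flow is a cron-driven push on the Hawaii side. All directory and collection names below are invented.

```sh
# Hedged sketch of a cron job shipping each new telescope file into SRB.
Sinit
for f in /data/snifs/incoming/*.fits; do
    [ -e "$f" ] || continue                       # no new files tonight
    if Sput "$f" "/home/snf.ccin2p3/raw/$(basename "$f")"; then
        mv "$f" /data/snifs/shipped/              # keep only unsent files in incoming/
    fi
done
Sexit
```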
Neuroscience research.
[Diagram: Siemens MAGNETOM Sonata Maestro Class 1.5 T MRI scanner; the acquisition console (Siemens Celsius Xeon, Windows NT) sends DICOM images to an export PC (Dell PowerEdge 800), which pushes them into SRB.]
• FTP,
• File sharing,
• …
Neuroscience research (II).
• Goal: make SRB invisible to the end user.
• More than 500,000 files registered.
• Data pushed from the Lyon and Strasbourg hospitals:
  • Automated procedure including anonymization (see the sketch below).
  • Now interfaced within the MATLAB environment.
  • ~ 1.5 FTE for 6 months…
• Next step: an ever-growing community (a few TBs / year).
• Goal: join the BIRN network (US biomedical network).
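A hedged sketch of the automated push: anonymize each DICOM file, then store it in SRB. "anonymize_dicom" stands in for whatever anonymization tool the production chain actually uses; all paths are invented.

```sh
# Anonymize-then-push loop (illustrative names only).
Sinit
for f in /export/dicom/*.dcm; do
    out="/tmp/anon/$(basename "$f")"
    anonymize_dicom "$f" "$out"                  # strip patient identity (hypothetical tool)
    Sput "$out" "/home/neuro.ccin2p3/mri/$(basename "$f")"
    rm -f "$out"
done
Sexit
```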
Mammography.
• Database of X-ray pictures (Florida) stored in SRB:
  • Reference pictures of various types of breast cancer.
• To analyze an X-ray picture of a breast:
  • A job is submitted within the EGEE framework.
  • The picture is compared with the reference ones, picked up from SRB (see the sketch below).
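A hedged sketch of the worker-node side of such a job: fetch a reference picture from SRB and run the comparison locally. "compare_mammo" and all paths are invented for illustration.

```sh
# What an EGEE job could do on the worker node (illustrative only).
Sinit
Sget /home/mammo.ccin2p3/reference/case_0042.dcm ref.dcm   # reference picture from SRB
compare_mammo patient.dcm ref.dcm > result.txt             # hypothetical analysis step
Sexit
```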
Cardiology.
• PACS (the hospital's internal information system): invisible from the outside world.
• Being interfaced with SRB at the hospital using the SRB/DICOM driver (thanks to CR4I, Italy!).
• PACS data published into SRB are anonymized on the fly.
• Possibility to exchange data in a secure way.
• Deployed but needs more testing.
[Diagram: PACS at the Lyon hospital connected to SRB servers at CC-IN2P3.]
GGF Data Grid Interoperability Demonstration.
• Goals:
  • Demonstrate federation of 14 SRB data grids (shared name spaces).
  • Demonstrate authentication, authorization, shared collections, remote data access.
• CC-IN2P3 is part of it.
• Organizers: Erwin Laure (Erwin.Laure@cern.ch), Reagan Moore (moore@sdsc.edu), Arun Jagatheesan (arun@sdsc.edu), Sheau-Yen Chen (sheauc@sdsc.edu).
GGF Data Grid Interoperability Demonstration (II).
• A few tests with KEK, RAL, IB (UK + New Zealand).
Summary.
• Lightweight administration for the entire system.
• Fully automated monitoring of the system health (a minimal sketch follows this list).
• For each project:
  • Training of the project's administrator(s).
  • Proposing the architecture.
  • User support and « consulting » on SRB.
• Different projects = different needs: various aspects of SRB are used.
• Over 1 million files in some catalogs very soon.
• More projects coming to SRB:
  • Auger: CC-IN2P3 as Tier 0, import from Argentina, distribution of real data and simulation.
  • MegaStar project (Eros, astro): usage of the HDF5 driver?
  • BioEmergence.
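Not from the slides: a hedged health-check sketch in the spirit of the automated monitoring, where a cron job tries a trivial catalogue operation and alerts on failure. The collection path and mail address are invented.

```sh
# Alert if a trivial SRB catalogue operation fails (illustrative names).
if ! Sls /home/srbAdmin.ccin2p3 > /dev/null 2>&1; then
    echo "SRB/MCAT not responding at $(date)" | mail -s "SRB alert" admin@example.org
fi
```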
What's next ?
• Monitoring tools of the SRB systems are needed for the users (like what Adil and Roger Downing did for CCLRC).
• Build with Adil some kind of European forum on SRB:
  • Contacts already exist in Italy, the Netherlands and Germany.
  • Gather everybody's experience with SRB.
  • Pool the tools and scripts developed.
  • Adil will host the first meeting in the UK.
  • Big party in his new apartment: everybody welcome!
• SRB-DSI.
Involvement in iRODS.
• Many possibilities, some examples:
  • Interface with MSS:
    • HPSS driver.
    • Improvement of the compound resources (rules for migration, etc.).
    • Mixing compound and logical resources.
    • Containers (see Adil @ CCLRC).
  • Optimization of the transfer protocol over long-distance networks with respect to SRB (?).
  • Database performance (RCAT, DAI).
  • Improvement of data encryption services.
  • Web interface (PHP ?).