Xrootd @ CC-IN2P3
Jean-Yves Nief, CC-IN2P3
HEPiX, SLAC, October 11th – 13th, 2005
Overview
• CC-IN2P3: Tier A for BaBar since 2001.
• Xrootd deployed primarily for BaBar (2003).
• Smooth transition from the Objectivity architecture:
  • The two systems run on the same servers.
• Hybrid storage (disks + tapes):
  • Tapes: master copy of the files.
  • Disks: temporary cache.
• Interfaced with the Mass Storage System (HPSS) using RFIO in Lyon.
Lyon architecture
[Diagram: a client asks the Xrootd/Objectivity master servers (2 servers) for a file, e.g. T1.root; the masters load-balance the request (steps 1–2) and redirect the client to one of the Xrootd/Objectivity slave data servers (20 data servers, 70 TB of disk cache). If the file is not on disk, it is staged dynamically from HPSS (steps 4–5; 140 TB of ROOT data, 180 TB of Objectivity data). The client then reads the file from the data server with random access (step 6).]
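From the client's point of view, all of this redirection and staging is transparent: the application simply opens a root:// URL pointing at the master server. Below is a minimal client-side sketch as a ROOT macro; the host name and file path are hypothetical placeholders, not the actual CC-IN2P3 endpoints.

// Minimal client-side view of the architecture above (sketch only).
// Host name and path are invented placeholders.
#include "TFile.h"

void open_via_xrootd()
{
   // The master load-balances and redirects to a slave data server;
   // if the file is only in HPSS it is first staged to the disk cache.
   TFile *f = TFile::Open("root://ccxroot.example.in2p3.fr//babar/data/T1.root");
   if (!f || f->IsZombie()) {
      printf("could not open file\n");
      return;
   }
   printf("opened %s, size %lld bytes\n", f->GetName(), f->GetSize());
   f->Close();
   delete f;
}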
Xrootd for other experiments
• Master copy of the data kept in HPSS for most of the experiments.
• Transparent access to these data.
• Automatic management of the cache resource.
• Used on a daily basis within the ROOT framework (up to 1.5 TB of disk cache used) by the following experiments (see the sketch after this list):
  • D0 (HEP).
  • AMS (astroparticle physics).
  • INDRA (nuclear physics).
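As an illustration of this daily ROOT-framework usage, an analysis job only needs the root:// URL of its input file; cache hits and stage-ins from HPSS are handled by the service. The sketch below is hypothetical: host, path and tree name are invented for the example.

#include "TFile.h"
#include "TTree.h"

void read_remote_tree()
{
   // Hypothetical host, path and tree name.
   TFile *f = TFile::Open("root://xrootd.example.in2p3.fr//d0/data/run1234.root");
   if (!f || f->IsZombie()) return;

   TTree *t = (TTree *) f->Get("events");   // invented tree name
   if (t) {
      printf("tree 'events' holds %lld entries\n", t->GetEntries());
      t->GetEntry(0);                        // entry reads go over the Xrootd protocol
   }
   f->Close();
   delete f;
}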
Assessment …
• Very happy with Xrootd!
• Fits our needs really well:
  • Random access between the client and the data server.
  • Sequential access between the MSS and the servers.
• Lots of freedom in the configuration of the service.
• Administration of the servers is very easy (fault tolerance).
• No maintenance needed, even under heavy usage (more than 600 clients in parallel).
• Scalability: very good prospects.
… and outlook
• Going to deploy it for ALICE and also CMS (A. Trunov):
  • Xrootd / SRM interface.
• Usage outside the ROOT framework:
  • I/O for some projects (e.g. astrophysics) can be very stressful compared to regular HEP applications.
  • Needs transparent handling of the MSS.
  • Using the Xrootd POSIX client APIs for reading and writing (see the sketch after this list).
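A minimal sketch of such non-ROOT usage is given below. It assumes the Xrootd POSIX compatibility layer (e.g. a preloaded client library) is in place so that standard open/pread/close calls on a root:// path are served by the Xrootd client; the host name and file path are invented for the example.

// Sketch only: assumes the Xrootd POSIX compatibility layer maps standard
// POSIX calls on root:// paths onto the Xrootd client library.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
   // Hypothetical host and path, for illustration only.
   const char *url = "root://xrootd.example.in2p3.fr//astro/catalog.dat";

   int fd = open(url, O_RDONLY);                 // handled by the POSIX layer
   if (fd < 0) { perror("open"); return 1; }

   char buf[4096];
   ssize_t n = pread(fd, buf, sizeof(buf), 0);   // random-access read at offset 0
   if (n >= 0)
      printf("read %zd bytes from %s\n", n, url);

   close(fd);
   return 0;
}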
Xrootd vs dCache
[Plot: I/O profile for an Orca client — file offset versus time.]
• Doing comparison tests between the two protocols:
  • I/Os taken from a CMS application (Orca).
  • Pure I/O (random access).
  • Stress test using up to 100 clients accessing 100 files (see the sketch after this list).
• Sorry! Preliminary results cannot be revealed…
• To be continued…
• STRONGLY ENCOURAGING PEOPLE TO DO SOME TESTING!
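The sketch below shows the kind of pure random-access read test described above; it is not the actual Orca-derived test. Offsets, read size and the root:// path are invented, and it assumes the same POSIX compatibility layer as the previous example. Launching many such processes in parallel against different files approximates the 100-clients / 100-files stress test.

// Random-access read test against one remote file (sketch only).
// Assumes the Xrootd POSIX compatibility layer; path and parameters invented.
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main(int argc, char **argv)
{
   const char *url = (argc > 1) ? argv[1]
                                : "root://xrootd.example.in2p3.fr//cms/test0.root";
   int fd = open(url, O_RDONLY);
   if (fd < 0) { perror("open"); return 1; }

   off_t fileSize = lseek(fd, 0, SEEK_END);      // file size, to bound the offsets
   const int    nReads   = 1000;
   const size_t readSize = 64 * 1024;            // 64 kB per read, arbitrary choice
   if (fileSize <= (off_t) readSize) { close(fd); return 1; }
   char *buf = new char[readSize];

   srand(time(0));
   timeval t0, t1;
   gettimeofday(&t0, 0);
   for (int i = 0; i < nReads; ++i) {
      off_t offset = (off_t) ((double) rand() / RAND_MAX * (fileSize - readSize));
      pread(fd, buf, readSize, offset);          // read at a random offset
   }
   gettimeofday(&t1, 0);
   double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
   printf("%d reads of %zu bytes in %.2f s\n", nReads, readSize, seconds);

   delete [] buf;
   close(fd);
   return 0;
}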
Issues for the LHC era
• Prospects for CC-IN2P3:
  • 4 PB of disk space foreseen in 2008.
  • Hundreds of disk servers needed!
  • Thousands of clients.
• Issues:
  1. The choice of protocol is not without cost implications (€, $, £, CHF).
  2. Need to be able to cluster hundreds of servers.
• Point 2 is a key issue and has to be addressed!!
• Xrootd is able to answer it.