Experiences of DCache at RAL (UK HEP Sysman, 11/11/04)

Explore the evolution of DCache at RAL, from origins and current deployments to unanswered questions and the need for improved documentation.


Presentation Transcript


  1. Experiences of DCache at RAL. UK HEP Sysman, 11/11/04. Steve Traylen, s.traylen@rl.ac.uk

  2. DCache Origins • Developed and maintained by Fermilab and DESY. • Used extensively by ZEUS/H1 at DESY. • Designed primarily to sit in front of a tape store.

  3. Why DCache • Gives you a single virtual filesystem spanning many underlying filesystems, optionally across several nodes. • Allows replication within the filesystem to increase redundancy. • Allows a tape system to be interfaced at the back, increasing redundancy further. • Data protocols are scalable; one GridFTP interface per server is easy and transparent.
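
  As a minimal illustration of the virtual-filesystem point: on a node with the pnfs namespace NFS-mounted, every pool appears under one directory tree. The path below is a hypothetical example; the real layout is site-specific.

      # The pnfs namespace presents all pools as a single tree.
      ls /pnfs/example.rl.ac.uk/data/cms/
      # A file listed here may physically live on any pool node;
      # dCache resolves the actual location when the data is read.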

  4. DCache Doors • Doors can be created into the system: • GridFTP • SRM • GSIDCAP • A POSIX-style library interface is available on top of this. • All of these are GSI-enabled, but Kerberos doors also exist. • Everything remains consistent regardless of the door that is used. • A big advantage over the edg-se.
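
  As a sketch of how the same logical file can be reached through different doors (hostnames, ports and paths below are hypothetical; globus-url-copy comes from the VDT, while srmcp and dccp ship with the dCache client tools; exact URL forms vary between client versions):

      # GridFTP door
      globus-url-copy gsiftp://dcache.example.rl.ac.uk:2811/pnfs/example/data/dteam/file1 \
          file:///tmp/file1

      # SRM door
      srmcp srm://dcache.example.rl.ac.uk:8443/pnfs/example/data/dteam/file1 \
          file:///tmp/file1

      # GSIDCAP door
      dccp gsidcap://dcache.example.rl.ac.uk:22128/pnfs/example/data/dteam/file1 /tmp/file1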

  5. History of DCache at RAL • Mid 2003 • We deployed a non-grid version for CMS. It was never used in production. • End of 2003/start of 2004 • RAL offered to package a production-quality DCache. • The effort stalled due to bugs, which were fed back to the dCache and LCG developers.

  6. DCache today at RAL • September 2004 • Redeployed DCache into the LCG system for the CMS and DTeam VOs. • In production, but yet to be proven. • Deployed within the JRA1 testing infrastructure for gLite I/O daemon testing.

  7. Current Deployment at RAL

  8. Transfer into DCache

  9. Transfer into DCache
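
  For reference, a transfer into the system through the GridFTP door might look like this (hostname and pnfs path are placeholders; a valid grid proxy is assumed):

      # Create a proxy certificate first
      grid-proxy-init

      # Copy a local file in via the GridFTP door; dCache chooses
      # which pool stores it, transparently to the client.
      globus-url-copy file:///tmp/testfile \
          gsiftp://dcache.example.rl.ac.uk:2811/pnfs/example/data/dteam/testfile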

  10. Upcoming Deployments • Both LHCb and ATLAS want a DCache setup. • This is happening now. • LHCb also wants the system backed by the ADS tape store. • This has not yet been investigated, but will be by the Tier-1 or the storage group.

  11. Extreme Deployment.

  12. Current Installation Technique • The Tier-1 now has its own cookbook to follow, but it is not yet generic. • Prerequisites: • VDT for certificate infrastructure. • edg-mkgridmap for the grid-mapfile. • J2RE. • Host certificate for all nodes with a GSI door.
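
  A rough pre-flight check for these prerequisites might look as follows; the grid-security paths are the conventional ones, and the edg-mkgridmap invocation is an assumption based on its usual options:

      # Host certificate/key, needed on every node running a GSI door
      ls -l /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem

      # Java runtime (J2RE) present?
      java -version

      # Build the grid-mapfile from the VO membership lists
      edg-mkgridmap --output=/etc/grid-security/grid-mapfile --safe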

  13. RPM Packages • Head node • d-cache-client, d-cache-lcg, d-cache-core, d-cache-opt and pnfs. • Pool node • d-cache-lcg and d-cache-core.
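
  Assuming the RPMs have been fetched locally, installation then splits by node role as on the slide (versions and the pnfs database dependencies are omitted):

      # Head node: full server stack plus the pnfs namespace
      rpm -Uvh pnfs-*.rpm d-cache-core-*.rpm d-cache-opt-*.rpm \
          d-cache-lcg-*.rpm d-cache-client-*.rpm

      # Pool node: just the core services and the LCG glue
      rpm -Uvh d-cache-core-*.rpm d-cache-lcg-*.rpm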

  14. Configuration • Change a few files and run some scripts with arguments. • The scripts do a lot, so fixing any problems at this stage is difficult and has required expert help.
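
  A sketch of the flow, assuming the layout used by dCache 1.x releases of this era (file names such as node_config and install.sh may differ from the RAL cookbook):

      # Declare this node's role (admin/head or pool) and basic settings
      vi /opt/d-cache/etc/node_config
      vi /opt/d-cache/config/dCacheSetup

      # The install script then does a great deal in one go, which is
      # why problems at this stage are hard to unpick without expert help.
      /opt/d-cache/install/install.sh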

  15. Comments • Documentation is severely lacking for the operation of DCache. • E.g. there is an admin interface accessible via ssh where pools can be marked read-only, possibly drained, etc. • Trial and error is the standard method. • With the LCG release an admin guide is promised.
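
  For the admin interface mentioned above, later dCache 1.x documentation describes an ssh door along the following lines; the port and cell commands are indicative rather than definitive:

      # Connect to the admin door (legacy ssh1 protocol, blowfish cipher)
      ssh -c blowfish -p 22223 -l admin dcache-head.example.rl.ac.uk

      # Inside the admin shell (not bash): enter a pool cell and
      # mark it read-only, e.g.
      #   cd pool1_01
      #   pool disable -rdonly
      #   ..
      #   logoff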

  16. Unanswered Questions • How do we drain a node for maintenance? • CHEP papers suggest this is possible. • How do we use one dCache for multiple VOs? • Is there a concept of defining quotas for groups? • Can we tie one group to specific nodes?
