LHCb & GRID in the Netherlands • NIKHEF/VU Amsterdam • Kors Bos • Jo van den Brand • Henk Jan Bulten • David Groep • Sander Klous • Jeff Templon • NIKHEF/VU ICT groups
Current Status (hardware) • 2*20 CPUs for LHCb testbed 1 • dual Pentium III 933 MHz • 10 nodes at NIKHEF, 10 at VU • 1 GB RAM, 45 GB IDE disk per node • 2 Dell servers, 90 GB IDE disk, Fast Ethernet switches • The nodes are currently being assembled at NIKHEF; the switches have been ordered • Network • SURFnet NIKHEF-VU 1 Gbit/s • upgrade to 10 Gbit/s (2002) • NIKHEF-Switzerland 155 Mbit/s • upgrade to 1 Gbit/s (2004)
[Network diagram: NIKHEF LAN, SURFnet, VU LAN]
Status Software • On a Linux system we installed and tested • Globus toolkit • LHCb toolkit (straightforward) • Gaudi • not necessary for MC production • with AFS easy (but lots of redundant material is installed) • AFS-Globus integration is not yet clear due to Kerberos • without AFS (package installation) a few weeks of work • unresolved links (NA48, ATLAS, ALICE) • Objectivity and the HTL package required • Apache web server + Tomcat servlet engine • servlets required to start up Monte Carlo jobs and to update the CERN-based database • PBS batch system
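The servlets hand Monte Carlo jobs to the PBS batch system. As an illustration of that handoff only (the actual code ran as Java servlets in Tomcat; the function names, script contents, and walltime value here are hypothetical), a minimal Python sketch of building a PBS batch script and passing it to `qsub`:

```python
import subprocess
import tempfile

def make_pbs_script(job_name, command, walltime="24:00:00"):
    """Build a minimal PBS batch script for one Monte Carlo job.

    The directives shown (-N for the job name, -l walltime=...) are
    standard PBS options; real production scripts would add more.
    """
    return (
        "#!/bin/sh\n"
        f"#PBS -N {job_name}\n"
        f"#PBS -l walltime={walltime}\n"
        f"{command}\n"
    )

def submit(job_name, command):
    """Write the script to a temporary file and hand it to qsub.

    qsub prints the new job identifier on stdout, which is returned
    so a caller (e.g. a servlet) could track the job.
    """
    script = make_pbs_script(job_name, command)
    with tempfile.NamedTemporaryFile("w", suffix=".pbs",
                                     delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(["qsub", path],
                            capture_output=True, text=True)
    return result.stdout.strip()
```

Submitting through a generated script rather than stdin keeps the exact job definition on disk, which helps when reconstructing what a failed production job actually ran.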
Current developments • In order to automate massive Monte Carlo data production we developed • integration job submission servlet with PBS • servlet to copy data to CASTOR • automatically every night (cron job) • guaranteed delivery by checking filesize after transfer • generic tool usable from anywhere, in line with grid-ftp • most effort went into creating a robust and reliable environment • Currently we start working on data-quality verification tools • Outlook: clusters should be ready for month-9 and testbed 1 • software environment is well suited for tests in testbed 1 • Data processing both from tape and disk to test efficiency of the Event Data Service • Future: NIKHEF wants to be tier-1 center (together with SARA)
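The guaranteed-delivery step above (verify the file size after transfer, retry on mismatch) can be sketched as follows. This is a simplified stand-in using a local copy in place of the actual CASTOR transfer; the function name and retry count are illustrative assumptions:

```python
import os
import shutil

def copy_verified(src, dst, retries=3):
    """Copy src to dst and confirm delivery by comparing file sizes.

    A size mismatch indicates a truncated or failed transfer, so the
    copy is retried up to `retries` times before giving up. The real
    tool performed the same check against CASTOR after each transfer.
    """
    for _ in range(retries):
        shutil.copyfile(src, dst)
        if os.path.getsize(src) == os.path.getsize(dst):
            return True
    return False
```

A size check catches truncated transfers cheaply; it does not detect bit corruption, for which a checksum comparison would be needed.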
Grid philosophy • Our personal viewpoints: • Monte Carlo data distributed over the Tier-1s • jobs moved to the data • Minimal grid requirements: • grid authentication and authorization at all Tier-1s for job submission • possibility to copy data between Tier-1s (via grid tools) • alternatively: the servlet strategy (same functionality, but no grid involved)