Shipping High-Energy Physics data, be it simulated or measured, requires strong national and trans-Atlantic networking. Within the Netherlands, SURFnet, the academic and research Internet service provider, boasts a 10 Gbit/s national backbone. SURFnet interconnects with Géant at 2.5 Gbit/s and runs several links to New York and to Chicago's STARtap. SURFnet also strongly supports network research: in their effort towards all-optical networks, they provide a 2.5 Gbit/s link, or lambda, to StarLight (Chicago). This link brings us close to major US laboratories such as Fermilab and is an important piece in the research of the EU DataTAG project. It also provides excellent facilities for the experiments running at FNAL.

Grid Computing at NIKHEF

NIKHEF collaborates with various universities and research labs, both in the Netherlands and abroad. We have operated Condor clusters since 1991. Grid research started in 1999 as part of the WTCW Virtual Laboratory project, in collaboration with the University of Amsterdam, SARA and AMOLF. Current projects include the LHC Computing Grid, DataGrid, DataTAG, CrossGrid and the Virtual Lab for E-Science (VL-E).

DutchGrid is the platform for large-scale distributed computing in the Netherlands. It includes KNMI, UvA, the Free University, SARA, CWI, AMOLF, CMBI, LUMC, Leiden, TU Delft, UU, and others.

[Figure: data shipped over our link to …]

www.nikhef.nl/grid — www.dutchgrid.nl

Rev. 20020719-2, David Groep, NIKHEF, <grid.support@nikhef.nl>
More than a decade ago, NIKHEF was one of the first five sites in the world to be on the Web. Developed by Tim Berners-Lee at CERN, Geneva, the web was meant to ease communications within the large High-Energy Physics collaborations emerging at that time. Nowadays, NIKHEF is one of the first five DataGrid centres. Where the web was limited to sharing information, the Grid goes further: sharing computational resources, data storage and large databases without bothering the users with the how and when. Once plugged in to the Grid, all resources in the world are instantly available to the user.

The Grid is widely regarded as the most viable solution to the LHC Computing Challenge. When the Large Hadron Collider (LHC) becomes operational in five years, an unprecedented amount of data not only has to be stored, but also analysed. The estimated amount of data is a few petabytes (10^15 bytes) every year. We estimate that around 50 000 computers will be needed to process these data. No single computer centre in the world will be able to handle the amount of data produced by the LHC. Therefore, the EU DataGrid project has embarked on a research and development effort to solve the computational and storage problems on the Grid.

As of March 2002, a functional test bed for Grid computing has been deployed around Europe. This test bed was extremely successful at the first Review of the DataGrid project by the EU Commissioners. The current application test bed spans five institutions: CERN (Geneva), NIKHEF (Amsterdam), RAL (UK), IN2P3 (Lyon) and CNAF (Bologna). For this test bed, NIKHEF offered a full suite of services: data storage, gatekeepers to accept jobs, a user interface machine for our users and a network monitoring system. The back-end farm consists of 22 worker CPUs, with more machines on order. In total, NIKHEF can draw on more than 500 CPUs and 200 TByte of permanent storage.
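The 50 000-computer figure can be checked with a rough back-of-envelope calculation. All per-event numbers below are illustrative assumptions chosen for the sketch, not official LHC parameters:

```python
# Back-of-envelope check of the ~50 000 CPU estimate.
# The event size and CPU time per event are assumed round
# numbers for illustration, not official LHC figures.

data_per_year = 3e15          # bytes/year ("a few petabytes")
event_size = 1e6              # bytes per recorded event (assumed)
cpu_seconds_per_event = 500   # CPU time to analyse one event (assumed)
seconds_per_year = 3.15e7     # roughly one year of wall-clock time

events_per_year = data_per_year / event_size
cpus_needed = events_per_year * cpu_seconds_per_event / seconds_per_year
print(f"{cpus_needed:.0f} CPUs")  # on the order of 50 000
```

With these assumptions the result comes out near 48 000 CPUs, consistent with the estimate quoted above; the point is that the answer is driven by the ratio of per-event CPU time to event size, not by any single precise input.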
The Grid Application Cluster is in use for DØ Monte-Carlo production and for "external" use by the EU DataGrid (EDG) test bed. This farm is fully installed using EDG fabric-management tools: adding a brand-new system is as easy as pressing the power button. A new system, complete with all application software, is up and running in 15 minutes.

[Figure: EU DataGrid sites participating in Test Bed 1]