This conference presentation provides an overview of the status of GRID initiatives in the Nordic region, including updates on the SweGrid testbed for production, NorduGrid, and the Nordic DataGrid Facility. It also discusses the challenges and potential problems for collaborative GRID projects in the Nordic countries.
GRID-initiatives in the Nordic region: status and future challenges
Conference xxx - August 2003
Anders Ynnerman, EGEE Nordic Federation Director
Swedish National Infrastructure for Computing, Linköping University, Sweden
Outline of presentation
• Status in the Nordic countries
• Denmark, Finland, Norway, Sweden
• SweGrid testbed for production
• NorduGrid
• Nordic DataGrid Facility
• Nordic Grid Consortium
• North European Grid
• Identifying potential problems for Nordic Grid collaborations
Workshop on eInfrastructure (Internet and Grids) Rome Dec-03 - 2
DENMARK: HPC-infrastructure and GRID-initiatives
• HPC program initiated 2001: Danish Center for Scientific Computing (DCSC)
• Distributed program: location of facilities based on scientific evaluation of participating groups, not on computer centers or infrastructure
• Århus, Odense, DTU, Copenhagen University
• Budget: 2.1 MEuros (2001), 1.6 MEuros (2002), 1.5 MEuros (2003), 1.4 MEuros (2004)
• Copenhagen University is a partner in NorduGrid
• Danish Center for Grid Computing (DCGC) has just been established
• Virtual center located at 5 sites; main location at the Niels Bohr Institute, Copenhagen University
• 0.75 MEuros/year
• Operations and support of a Danish Grid
• Research agenda with 10 senior researchers and 5 PhD students
• Strong ties to NorduGrid and the Nordic DataGrid Facility
• Participation in EGEE through NBI
NORWAY: HPC-infrastructure and GRID-initiatives
• NOTUR – The Norwegian High-Performance Computational Infrastructure
• Trondheim, Oslo, Bergen, Tromsø
• Budget: 168 MEuros (2000-2003); 50% from research councils, Statoil, DNMI; in-kind contributions
• A new program similar to the Swedish program is being initiated and placed under the NREN (UNINETT)
• Metacenter goal – to establish uniform and as seamless as possible access to HPC resources
• Parallab at Bergen is participating in the Nordic Grid Consortium with PDC and CSC
• Parallab is contributing to the EGEE security team
• Bergen and Oslo are partners in NorduGrid
• Plans for a "NORGRID" are being developed
FINLAND: HPC-infrastructure and GRID-initiatives
• Center for Scientific Computing (CSC) is the (main) center in Finland
• The largest HPC center in Scandinavia
• Runs services for academic users all over Finland and FMI
• Budget directly from the ministry of education
• Smaller university HPC-systems (clusters) are beginning to appear
• CSC is one of the founding partners in the Nordic Grid Consortium
• CSC is contributing resources to NorduGrid
• CSC is a partner in DEISA
• Helsinki Institute of Physics is running several GRID projects and is contributing to the EGEE security team
• A Finnish Grid dedicated to Condensed Matter Physics is being built
SWEDEN: HPC-infrastructure and GRID-initiatives
• Swedish National Infrastructure for Computing (SNIC) formed during 2003
• 6 participating centers: Umeå, Uppsala, Stockholm, Linköping, Göteborg, Lund
• National resource allocation (SNAC)
• Budget: 4.5 MEuros/year (government), 4.0 MEuros/year (private foundations)
• Lund and Uppsala are partners in NorduGrid
• Largest resource contributions to NorduGrid are SNIC clusters at Umeå and Linköping
• SweGrid production Grid is being built: 600 CPUs over 6 sites, 120 TB storage
• SNIC is hosting the EGEE Nordic Regional Operations Center
• Stockholm (PDC) is coordinating the EGEE security activity
• PDC is one of the founders of the Nordic Grid Consortium
• PDC is a partner in the European Grid Support Center
SweGrid
• 3 parts
• Research and development
• Operations
• Hardware
• Total budget 3.6 MEuro
• 6 GRID nodes:
• 600 CPUs: IA-32, 1 processor/server
• 875P chipset with 800 MHz FSB and dual memory busses
• 2.8 GHz Intel P4
• 2 GByte memory per server
• Gigabit Ethernet
• 12 TByte temporary storage
• FibreChannel for bandwidth
• 14 x 146 GByte 10000 rpm
• 1 Gigabit direct connection to SUNET (10 Gigabit)
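A quick back-of-the-envelope check of these figures can be sketched as below. Illustrative only: the per-node numbers come from the slide, but the assumption that the 14 x 146 GByte disk set is per site, so that the 12 TByte of temporary storage is the aggregate over 6 sites, is mine.

```python
# Back-of-the-envelope aggregate figures for the SweGrid hardware
# described above. The even split of the 600 CPUs and the
# 14 x 146 GByte disk sets across the 6 sites is an assumption
# consistent with the slide's totals.

SITES = 6
TOTAL_CPUS = 600          # IA-32, 1 processor per server
MEM_PER_SERVER_GB = 2     # 2 GByte per server
DISKS_PER_SITE = 14
DISK_SIZE_GB = 146

cpus_per_site = TOTAL_CPUS // SITES
total_memory_gb = TOTAL_CPUS * MEM_PER_SERVER_GB
storage_per_site_gb = DISKS_PER_SITE * DISK_SIZE_GB
total_storage_tb = SITES * storage_per_site_gb / 1000

print(f"CPUs per site:     {cpus_per_site}")
print(f"Aggregate memory:  {total_memory_gb} GByte")
print(f"Storage per site:  {storage_per_site_gb} GByte")
print(f"Aggregate storage: {total_storage_tb:.1f} TByte")  # ~12 TByte, matching the slide
```

The aggregate works out to roughly 12.3 TByte, which matches the "12 TByte temporary storage" on the slide if that figure is the total over all six sites.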
Persistent storage on SweGrid?
[Diagram: open questions around persistent storage – bandwidth, availability, size, administration]
The NorduGrid project
• Initiated by several Nordic universities
• Copenhagen, Lund, Stockholm, Oslo, Bergen, Helsinki; later: Linköping and Umeå
• Started in January 2001, funded by NorduNet-2
• Simultaneously with EU DataGrid, but with only a two-year budget and 3 new positions
• Initial goal: to deploy DataGrid middleware to run the "ATLAS Data Challenge"
• Cooperation with DataGrid
• Common Certification Authority and Virtual Organization tools, Globus2 configuration
• Common applications (high-energy physics research)
• Switched from deployment to R&D in February 2002
• Deployed a light-weight yet reliable and robust Grid solution in time for the ATLAS DC tests in May 2002
• Continuation
• NorduGrid was one of the driving forces behind the creation of the "North European Grid Federation", together with the Dutch Grid, Belgium and Estonia
• Is an option for middleware for the "Nordic Data Grid Facility"
• …as well as for the Swedish Grid facility SweGrid
The resources
• Currently available resources:
• 4 dedicated test clusters (3-4 CPUs)
• Some junkyard-class second-hand clusters (4 to 80 CPUs)
• A few university production-class facilities (20 to 60 CPUs)
• Two world-class clusters in Sweden, listed in the Top500 (230 and 440 CPUs)
• Other resources come and go
• Canada, Japan – test set-ups
• CERN, Russia – clients
• It's open: anybody can join or leave
• People:
• The "core" team grew to 7 persons
• Local sysadmins are only called upon when users need an upgrade
A NorduGrid snapshot
Reflections on NorduGrid
• Bottom-up project driven by an application-motivated group of talented people
• Middleware adaptation and development has followed a flexible and minimally invasive approach
• HPC centers are currently "connecting" large resources since it is good PR for the centers
• As soon as NorduGrid usage of these resources increases, they will be disconnected. There is no such thing as free cycles!
• Motivation of resource allocations is missing
• NorduGrid lacks an approved procedure for resource allocation to VOs and individual user groups based on scientific reviews of proposals
• Funding for the continuation of the project is today unclear
• NorduGrid may remain a middleware project that delivers solutions to other Nordic production Grids such as SweGrid
Nordic DataGrid Facility
• Based on a working paper by the NOS-N working group (April 4, 2002): a road map towards the Nordic Grid Facility
• Project started 2003-01-01
• Builds on interest from
• Biomedical sciences
• Earth sciences
• Space and astro sciences
• High energy physics
• Gradual build-up
• Project leader and 4 postdocs
• Projects in middleware and application interfaces
Nordic DataGrid Facility
• Prototype period 2002-2004: scope 0.54 MEuro
• Capacity building 2004-2007: scope 15 MEuro
NDGF challenges
[Diagram: the current HPC setup – in each of four countries, national users access their own HPC centers, each funded by its national funding agency]
[Diagram: a possible NDGF model – users organised in virtual organisations (VO 1-3) submit proposals to a common GRID management layer handling allocations, accounting, middleware and relations to other GRIDs, connected through MoUs and SLAs to the HPC centers, which remain funded by their national funding agencies]
Nordic Grid Consortium
• PDC, CSC and Parallab have signed a Memorandum of Understanding
• The vision is to "create a Grid for the benefit of the users of the facilities"
• The purpose is to:
• Explore Grid production from an HPC-center point of view
• Explore means of exchanging resource allocations across national borders
• NGC is an inclusive organisation and will expand to include all major HPC centers in Scandinavia
Conclusions
• A large number of GRID initiatives in the Nordic region
• HPC centers and GRID initiatives are rapidly merging
• Strong need for a mechanism for exchange of resources over the borders
• Very few existing users belong to VOs
• Most cycles at HPC centers are used for chemistry
• How will these users become GRID users?
• There is a need for authorities that grant resources to projects (VOs)
• Locally
• Regionally
• Nationally
• Internationally
North European Grid
• NEG is an EGEE federation consisting of
• Belgium, Denmark, Estonia, Finland, the Netherlands, Norway, Sweden
• NEG is responsible for security in EGEE
• NEG is hosting an EGEE Regional Operations Center
• Two lead sites: SARA and SNIC/PDC
• An NEG executive committee has been formed
• One representative per partner in the federation, plus a national representative per country
• Phone meetings every second Friday to deal with common issues
• NEG will play an important role in the coordination of Grid efforts in the federation
The NorduGrid philosophy
• No single point of failure
• Resource owners have full control of the contributed resources
• Installation details should not be dictated
• Method, OS version, configuration, etc.
• As little restriction on site configuration as possible
• Computing nodes should not be required to be on the public network
• Clusters need not be dedicated
• NorduGrid software should be able to use the existing system and Globus installation
• Patched Globus RPMs are provided
• Start with something simple that works and proceed from there
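The "no single point of failure" principle can be illustrated with a small sketch: instead of one central broker, a client queries each cluster's own information system independently and simply skips clusters that fail to answer. The cluster names and status replies below are stand-ins for illustration, not the actual NorduGrid information system.

```python
# Illustrative sketch of decentralized resource discovery in the spirit
# of the NorduGrid philosophy: every cluster publishes its own status,
# the client polls them all, and any cluster failing to answer is
# skipped -- there is no central service whose failure takes the Grid
# down. The clusters and their replies here are made up.

def query_cluster(name, registry):
    """Ask one cluster's info system for its free CPUs; None on failure."""
    try:
        return registry[name]()     # each cluster answers independently
    except Exception:
        return None                 # a dead cluster does not stop discovery

def discover(registry):
    """Poll every known cluster and collect the ones that respond."""
    status = {}
    for name in registry:
        free = query_cluster(name, registry)
        if free is not None:
            status[name] = free
    return status

def bergen_info():
    # Stand-in for a cluster whose information system is unreachable.
    raise TimeoutError("no answer")

# Stand-in "information systems": two healthy clusters and one down.
registry = {
    "lund": lambda: 24,
    "oslo": lambda: 80,
    "bergen": bergen_info,
}

print(discover(registry))   # {'lund': 24, 'oslo': 80} -- bergen is skipped
```

The design choice mirrors the bullet list above: sites keep full control and can disappear at any time, so the client, not a central service, must tolerate partial answers.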
[Diagram: user communities (Nordic Atlas User Group, Nordic PigMap members, Dalton users, …) linked through MoUs to infrastructures and projects (NorduGrid, SweGrid, DCGC, NDGF, Notur, CSC, …)]