
RDCCG – Regional Computing and Data Center Germany




Presentation Transcript


  1. Grid-Computing Infrastructure & Services (GIS, H. Marten) [Architecture diagram of the RDCCG – Regional Computing and Data Center Germany, dated 23-Nov-2001: compute-node groups (CPU-intensive; I/O-intensive; communication-intensive; testing/R&D with batch and background work invisible to users on a private network), data servers 1..n plus an R&D data server on a SAN fabric backed by PByte-scale tape archives (including the FZK tape archive), software servers with interactive login per community (ALICE, ATLAS, other science, test/R&D), all linked by the FZK backbone and the GRID backbone; infrastructure boxes cover the certification authority, software installation service, licence service, firewall, Grid services, data pool volume management, user support, training and education, system monitoring & management, documentation, and network address translation; data import/export connects to Tier-0/1/2 over the WAN.]

  2. RDCCG Evolution (available capacity): 30% rolling upgrade each year after 2007. Networking Evolution 2002 - 2005: 1) RDCCG to CERN/FermiLab/SLAC (permanent point-to-point): 1 GBit/s - 10 GBit/s; 2 GBit/s could be arranged on a very short timescale. 2) RDCCG to the general Internet: 34 MBit/s - 100 MBit/s; this is the current situation and generally less affordable than 1). FTE Evolution 2002 - 2005: Support: 5 - 30; Development: 8 - 10. New office building to accommodate 130 FTE in 2005. What these link speeds mean in practice is sketched below.
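A quick back-of-the-envelope calculation shows how long data transfers take at the quoted link speeds; the 1 TB dataset size is a hypothetical example, not a figure from the slides:

def transfer_time_hours(size_gbytes: float, link_gbit_s: float) -> float:
    """Time to move size_gbytes over a link of link_gbit_s, ignoring protocol overhead."""
    size_gbit = size_gbytes * 8          # bytes -> bits
    return size_gbit / link_gbit_s / 3600

DATASET_GB = 1000.0                      # 1 TB sample, chosen for illustration

for name, gbit_s in [("CERN link, lower bound", 1.0),
                     ("CERN link, upper bound", 10.0),
                     ("general Internet, 34 MBit/s", 0.034),
                     ("general Internet, 100 MBit/s", 0.1)]:
    print(f"{name}: {transfer_time_hours(DATASET_GB, gbit_s):.1f} h")

At 1 GBit/s the sample moves in about 2.2 hours; over the 34 MBit/s general-Internet link the same transfer takes roughly 65 hours, which is why the dedicated point-to-point links matter.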

  3. Management Tools • Installation and Update of OS: the idea is to set up a dedicated installation server for each rack of CPUs; currently evaluating the Rocks toolkit (a minimal sketch of the per-rack mapping follows below). • Monitoring: no use of dedicated monitoring hardware and/or a management bus; the Tivoli management suite is in broad use for already existing operations. -> Further study and deployment of tools necessary.
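As a minimal sketch of the per-rack installation idea, assuming a hypothetical hostname scheme and rack size (neither is specified on the slide):

RACK_SIZE = 32  # hypothetical number of compute nodes per rack

def install_server_for(node_id: int) -> str:
    """Map a compute node to the dedicated installation server of its rack."""
    rack = node_id // RACK_SIZE
    return f"install-rack{rack:02d}.example.fzk.de"   # invented hostname scheme

# Nodes 0..31 install from rack 0's server, 32..63 from rack 1's, and so on.
assert install_server_for(0) == install_server_for(31)
assert install_server_for(31) != install_server_for(32)
print(install_server_for(40))  # -> install-rack01.example.fzk.de

Keeping one installation server per rack bounds the fan-out of an OS rollout and keeps install traffic off the inter-rack network.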

  4. Certification Authority • Hardware for CA available • Delivery of certificates tested • CP and CPS draft document available: “FZK-Grid CA, Certificate Policy and Certification Practice Statement” • Organization of German virtual organizations • FZK-Grid CA known to Globus and accepted by DataGrid • Policy for acceptance of foreign certificates needs to be discussed. Authorization and Registration Procedure (a code sketch of these rules follows below) • One account manager for each experiment • WWW form to be filled out, signed by the experiment account manager and sent to FZK by fax • One experiment-specific account per user • One system administrator account per experiment • No super user privileges, sorry
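A hedged sketch of the registration rules as code; the field names and the experiment_user account-naming scheme are invented for illustration, and only the policy checks come from the slide:

from dataclasses import dataclass

EXPERIMENTS = {"alice", "atlas"}  # experiments named on the slides

@dataclass
class RegistrationRequest:
    user_name: str
    experiment: str
    manager_signature: bool   # faxed form signed by the experiment account manager
    wants_superuser: bool

def create_account(req: RegistrationRequest) -> str:
    """Apply the slide's policy checks, then return the new account name."""
    if req.experiment not in EXPERIMENTS:
        raise ValueError(f"unknown experiment: {req.experiment}")
    if not req.manager_signature:
        raise PermissionError("form must be signed by the account manager")
    if req.wants_superuser:
        raise PermissionError("no super user privileges, sorry")
    # one experiment-specific account per user, e.g. atlas_jsmith (invented scheme)
    return f"{req.experiment}_{req.user_name}"

print(create_account(RegistrationRequest("jsmith", "atlas", True, False)))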

  5. Tier-2 Centers • GSI Darmstadt: the Tier-2 will perform e.g. interactive analysis on subsamples (PROOF) • Offer support for countries w/o a dedicated Tier-1, e.g. Russia (MSU), … • The Tier-2 business needs to be further discussed and developed within Germany. LHC Data Challenges • Commitment to fully take part in all data challenges • MoU to integrate non-LHC hardware upon actual need; currently exercised with ALICE: use the BaBar farm for the MC challenge -> synergetic effect until 2004

  6. Department for Grid-Computing and e-Science (GES, M.Kunze): • Middleware and Application Development • Special interest in ROOT core development and support • Meta-data catalogue • Definition and implementation of interfaces to Globus and .NET • Evolution of the object service • Transaction-based processing • Interactive analysis (PROOF) • Special interest in database technology • Data-mining techniques (agents and neural networks) • Deployment of intelligent server-side procedures • Deployment of XML and SOAP interface technology (a sketch follows below) • Involvement in • BaBar MC production and distributed analysis (GanaTools) • LHC test beds (currently ALICE, ATLAS coming up) • CrossGrid • National Grid projects (e.g. DAS-Grid w/ ZIB, UniDo, PC2, FZ Jülich)
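To make the XML/SOAP point concrete, here is a minimal sketch of a SOAP 1.1 request to a hypothetical meta-data catalogue service; the namespace, operation, field names, and dataset path are assumptions, since the slide names the technology but no concrete interface:

import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
CAT_NS = "urn:example:metadata-catalogue"   # hypothetical service namespace

def build_query(dataset: str) -> bytes:
    """Wrap a catalogue lookup in a SOAP 1.1 envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    query = ET.SubElement(body, f"{{{CAT_NS}}}FindReplicas")   # invented operation
    ET.SubElement(query, f"{{{CAT_NS}}}DatasetName").text = dataset
    return ET.tostring(env, encoding="utf-8", xml_declaration=True)

print(build_query("alice/mc/2002/run0042").decode())   # example dataset path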

  7. SAN [Architecture sketch: Grid compute nodes act as I/O clients for nfs, cifs, root, scp, gass, ftp, s-ftp, http, https, ...; a 128 GBit/s Grid backbone connects them to I/O servers speaking the same protocols; behind the servers, a SAN fabric (SANergy, qfs, gfs, ...) with an MDC ties together tape robotics and disk subsystems.] A High Throughput Cluster with direct SAN access (>300 MB/s) is good for interactive data mining: 1 TB in a few minutes; see the arithmetic below.
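Rough arithmetic behind the 1-TB claim, assuming the >300 MB/s figure is a single stream and the 128 GBit/s backbone can be saturated in aggregate (the slide does not state either explicitly):

TB = 1e12  # bytes

single_stream = TB / 300e6            # seconds at 300 MB/s
backbone = TB / (128e9 / 8)           # seconds at 128 GBit/s aggregate

print(f"one 300 MB/s stream : {single_stream / 60:.0f} min")   # ~56 min
print(f"128 GBit/s aggregate: {backbone / 60:.1f} min")        # ~1 min

So a single client needs about an hour for 1 TB; "1 TB in a few minutes" implies many clients reading in parallel across the backbone.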
