
pps-dCache @ gridKa



  1. pps-dCache @ gridKa. Doris Ressmann, Silke Halstenberg, Jos van Wezel. Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing, P.O. Box 3640, D-76021 Karlsruhe, Germany. http://gridka.de

  2. pps-dCache 1.8 headnodes
     • pps-srm-fzk.gridka.de: 2x 2.8 GHz, 4 GB mem, 134 GB disk, SL4, kernel 2.6.9-42 / 64 bit
       srmDomain, InfoProvider, SRM postgres DB
     • pps-dcacheh1: 2x 2.8 GHz, 4 GB mem, 134 GB disk, SL4, kernel 2.6.9-42 / 64 bit
       dCap, dirDomain, utilityDomain, lmDomain, dCacheDomain, admin-door, gPlazma, httpdDomain, StatisticDomain, xrootdDomain, pnfsServer, PnfsManager, pnfs and companion DB
     • pps-se-fzk.gridka.de: LCG info system (GIIS), gLite-SE
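     A quick way to check that the domains and cells listed above are actually up is the dCache admin interface; a minimal sketch, assuming the default ssh admin door (port 22223, user admin) on pps-dcacheh1:

       ssh -c blowfish -p 22223 -l admin pps-dcacheh1
       (local) admin > cd PoolManager        # enter the PoolManager cell
       (PoolManager) admin > psu ls pool     # list the configured pools
       (PoolManager) admin > psu ls pgroup   # list the pool groups
       (PoolManager) admin > ..              # back to the local prompt
       (local) admin > logoff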

  3. Storage Pools (3.2 GHz, 2 GB mem, kernel 2.6.9-34, 64 bit, SL4)
     • fileserver with 5.2 TB for pools
       • 1 pool, 1.7 TB, tape connected, for all VOs
       • 1 pool, 1.7 TB, for tape tests (only used by GridKa staff)
       • 1 pool, 1.7 TB, disk-only
     • fileserver with 5.2 TB for pools
       • 1 pool, 2.6 TB, tape connected (did not come up after the update)
       • 1 pool, 2.6 TB, disk-only
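     A minimal PoolManager.conf sketch of how these pools could be grouped for the link setup on the next slide; the pool names are placeholders, only the psu commands themselves are standard syntax:

       psu create pool pps-pool1-tape
       psu create pool pps-pool3-disk
       psu create pgroup tape-pools
       psu addto pgroup tape-pools pps-pool1-tape
       psu create pgroup disk-only-pools
       psu addto pgroup disk-only-pools pps-pool3-disk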

  4. VO support
     • ops, dteam, alice, atlas, cms, lhcb; no dedicated pools for a VO
     • 3 link groups (Test-link, All-tape-link, All-disk-only-link), e.g.:
       psu create link all-tape-link any-protocol all-store world-net
       psu set link all-tape-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
       psu add link all-tape-link tape-pools
       psu set linkGroup attribute all-tape-link-group HSM=TSM
       psu addto linkGroup all-tape-link-group all-tape-link
     • Is this enough for all SRM 2.2 cases, e.g. custodial, nearline, ...? Space Manager configuration? (see the sketch below)
     • all-store covers SRM 1.1 directory tags!!!
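     For the SRM 2.2 cases (custodial / nearline goes through a space reservation in a link group) the SrmSpaceManager has to be switched on and the link groups made writable per VO. This is a sketch only, assuming the dCache 1.8 dCacheSetup variable names and the LinkGroupAuthorization.conf format; the FQANs are placeholders:

       # /opt/d-cache/config/dCacheSetup (variable names assumed from the dCache 1.8 book)
       srmSpaceManagerEnabled=yes
       srmImplicitSpaceManagerEnabled=yes
       SpaceManagerLinkGroupAuthorizationFileName=/opt/d-cache/etc/LinkGroupAuthorization.conf

       # /opt/d-cache/etc/LinkGroupAuthorization.conf (FQANs are placeholders)
       LinkGroup all-tape-link-group
       /atlas/Role=production
       /cms/Role=production

       LinkGroup all-disk-only-link-group
       */Role=*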

  5. To do
     • gPlazma authentication with VOMS: different roles, e.g. for space reservation (a mapping sketch follows below)
     • more tests of the new tape connection
     • get ATLAS tests going
     • check LHCb requirements ...
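     For the gPlazma / VOMS role item, the role based mapping would look roughly like the sketch below; file locations follow the dCache defaults, and the FQANs, usernames, uids/gids and pnfs paths are placeholders, not GridKa's actual mappings:

       # /etc/grid-security/grid-vorolemap  ("DN" "FQAN" username)
       "*" "/atlas/Role=production" atlasprd
       "*" "/atlas" atlas001

       # /etc/grid-security/storage-authzdb  (username access uid gid home root fsroot)
       version 2.1

       authorize atlasprd read-write 34001 34001 / /pnfs/gridka.de/atlas /
       authorize atlas001 read-write 34002 34002 / /pnfs/gridka.de/atlas /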

  6. Info Provider
     • In dCache 1.7: Pool Manager
       psu create pgroup ops
       psu addto pgroup ops f01-015-128-e_ops
     • GIIS and dCache SRM on the same host:
       infoProviderStaticFile=/opt/lcg/var/gip/ldif/static-file-SE.ldif
       link to /opt/d-cache/jobs/infoDynamicSE-provider-dcache, the same for the plugin (see the sketch below)
     • In dCache 1.8 a pool belongs to one and only one pgroup!!!!
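     The GIIS wiring above in shell form; a sketch only, the GIP provider/plugin directories and the plugin script name are assumed from the LCG layout of that release and are not taken from the slide:

       # static ldif: point dCacheSetup at the file the GIP already serves
       infoProviderStaticFile=/opt/lcg/var/gip/ldif/static-file-SE.ldif

       # dynamic provider: link the script shipped with dCache into the GIP provider
       # directory, and the same for the plugin script (names assumed analogous)
       ln -s /opt/d-cache/jobs/infoDynamicSE-provider-dcache \
             /opt/lcg/var/gip/provider/infoDynamicSE-provider-dcache
       ln -s /opt/d-cache/jobs/infoDynamicSE-plugin-dcache \
             /opt/lcg/var/gip/plugin/infoDynamicSE-plugin-dcache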

  7. Open issues
     • accounting / info system: separation of SE and SRM
     • Is it still possible to have pools shared by different VOs in 1.8?
     • Flavia's DN (Emailaddress=...):
       • without gPlazma it is OK, the DN is used as (E=...)
       • with gPlazma the requests come in with (Emailaddress=...)
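     To make the DN spelling issue concrete: the same certificate subject arrives in two forms, so a mapping file would have to carry both; purely illustrative, the DN and account below are placeholders:

       # /etc/grid-security/grid-mapfile (illustrative placeholders only)
       "/O=SomeCA/OU=SomeUnit/CN=Some User/E=user@example.org"            dteam001
       "/O=SomeCA/OU=SomeUnit/CN=Some User/Emailaddress=user@example.org" dteam001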

  8. Questions?
