WP5 Status
John Gordon, Budapest, September 2002
Summary • Status of TB1.2 • Near future • Status of TB2 • Future developments • Issues
Status of TB1.2 • GDMP staging to MSS – CERN, Lyon, RAL, (NIKHEF/SARA) • Manual staging to MSS – CERN, Lyon, RAL, (NIKHEF/SARA) • Local direct MSS access – CERN, Lyon, (RAL, NIKHEF/SARA) • Grid authorisation for local direct MSS access – CERN, Lyon • Command-line interface to SE metadata • Users can query the total free space on an SE, but not per-VO free space • The RB cannot yet schedule to a CE near an SE with a given amount of free space
Near future (i.e. pre-TB2.0) • GLUE • SE schemas drafted, based on the existing EDG schema • Chance to add improvements • More discussion needed with other middleware WPs • SE auto-staging • So the user doesn't have to stage new files manually in GDMP • SE disk space management • Housekeeping to keep disk space available on the TB1.2 SEs • GetSECosts • File access time estimates for Optor (WP2); the idea is sketched after this slide • Exists in TB1.3, to be integrated with Optor
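The GetSECosts idea can be illustrated with a small sketch. This is not the TB1.3 interface: the function name, parameters and timing constants below are all hypothetical, chosen only to show the kind of estimate Optor would consume. A file already in the disk cache is cheap to access, while a file that must be staged from MSS carries a large, queue-dependent penalty.

```python
# Hypothetical sketch of a GetSECosts-style estimate; names and
# constants are illustrative, not the EDG WP5 interface.

DISK_ACCESS_S = 0.1        # assumed cost to open a file already on disk
STAGE_BASE_S = 120.0       # assumed fixed cost of a tape mount + stage
STAGE_RATE_B_PER_S = 10e6  # assumed sustained tape-to-disk rate (bytes/s)

def get_se_cost(file_size_bytes: int, on_disk: bool,
                stage_queue_len: int) -> float:
    """Estimate seconds until the file is readable on the SE disk cache.

    A file already in the disk cache is essentially free; otherwise we
    charge one staging operation per file ahead of us in the queue,
    plus our own stage time.
    """
    if on_disk:
        return DISK_ACCESS_S
    stage_time = STAGE_BASE_S + file_size_bytes / STAGE_RATE_B_PER_S
    return (stage_queue_len + 1) * stage_time

if __name__ == "__main__":
    # Optor (WP2) would compare such estimates across candidate SEs.
    print(get_se_cost(2_000_000_000, on_disk=True, stage_queue_len=0))
    print(get_se_cost(2_000_000_000, on_disk=False, stage_queue_len=3))
```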
Summary of SE service • Release 1.0 (demo end Sept, quality end Oct) • Control interface to handle registration, pinning, reservation(*), staging MSS->disk, ACLs (sketched below) • Implemented on disk and at least Castor • Release 2.0 (end Nov) • More MSS support – Lyon, RAL, SARA, (WP9, WP10) • Tarballs – allow writing large numbers of small files to MSS efficiently • Disk cache management • Direct GridFTP access to Castor (omitting the SE disk cache) • Release 3.0 (end Dec) • GridFTP done through the SE • Supporting arbitrary MSS • ACLs applied to transfers too • Date less certain – still investigating several server solutions • RFIO through the SE • Posix I/O through the SE • Should remove the need for NFS, giving greater robustness and efficiency, and allowing true space reservation
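The Release 1.0 control interface can be pictured as a small set of operations on the SE. The sketch below is an assumption about its shape, not the published WP5 API: every class, method and field name is invented for illustration.

```python
# Illustrative sketch only: every name below is an assumption about
# the shape of the Release 1.0 control interface, not the WP5 API.
from dataclasses import dataclass, field

@dataclass
class SEControl:
    """Toy model of the control operations listed above: registration,
    pinning, best-efforts reservation, staging MSS->disk, and ACLs."""
    free_bytes: int
    catalogue: dict = field(default_factory=dict)  # SE filename -> metadata

    def register(self, name: str, size: int, acl=None) -> None:
        self.catalogue[name] = {"size": size, "on_disk": False,
                                "pinned": False, "acl": acl or []}

    def stage(self, name: str) -> None:
        """Stand-in for the MSS->disk copy (Castor in Release 1.0)."""
        self.catalogue[name]["on_disk"] = True

    def pin(self, name: str) -> str:
        """Pin a file, staging it first if necessary, and return a URL
        where the disk copy can be fetched over GridFTP."""
        if not self.catalogue[name]["on_disk"]:
            self.stage(name)
        self.catalogue[name]["pinned"] = True
        return f"gsiftp://se.example.org/{name}"

    def reserve(self, nbytes: int) -> bool:
        """Best-efforts reservation (the '(*)' above): refuse only if
        the space does not currently exist."""
        if nbytes > self.free_bytes:
            return False
        self.free_bytes -= nbytes
        return True

se = SEControl(free_bytes=50 * 2**30)
se.register("/atlas/file001.dat", size=2**30)
print(se.pin("/atlas/file001.dat"))
print(se.reserve(10 * 2**30))
```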
SE Release 1.0 1. SE ‘service manager’ will provide a basic subset of SRM functionality • Get and Put, possibly Copy • For MSS, Get results in the file being staged to disk 2. Hierarchical directory structure possible even if the MSS is flat 3. SE filenames can be longer than the underlying MSS allows (one possible name-mapping scheme is sketched below) 4. Remote access to staged files using GridFTP • Thus provides GridFTP access to MSS (in Castor at least) 5. ACL control of access through the SE using GACL The combination of 1 and 4 above is used by the RLS and/or users
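Points 2 and 3 (hierarchical, arbitrarily long SE names over a flat, length-limited MSS namespace) amount to a name-mapping problem. Here is a minimal sketch of one way to do it, assuming a hypothetical hash-based scheme rather than whatever WP5 actually implemented; the reverse map would live in the SE metadata catalogue.

```python
import hashlib

# Sketch of mapping long, hierarchical SE filenames onto a flat MSS
# namespace with short names. The hashing scheme is an assumption,
# not the WP5 implementation.

MSS_NAME_LIMIT = 32  # assumed MSS filename length limit

def se_to_mss_name(se_path: str) -> str:
    """Collapse an SE path like '/cms/run7/event042.dat' into a short,
    flat MSS name. Collisions are astronomically unlikely with SHA-1."""
    digest = hashlib.sha1(se_path.encode()).hexdigest()
    return digest[:MSS_NAME_LIMIT]

# The SE keeps the reverse mapping in its metadata catalogue:
catalogue = {}
mss = se_to_mss_name("/cms/run7/event042.dat")
catalogue[mss] = "/cms/run7/event042.dat"
print(mss, "->", catalogue[mss])
```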
SE Example in Release 1.0 • User has an LFN existing in the SE • User pins the file • SE (brings it onto disk and) returns a URL where it can be found • User uses GridFTP to transfer the file (to any site) • OR the user accesses the file directly (e.g. by NFS, from a local job only) A walk-through of these steps is sketched below.
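Expressed as client-side pseudo-steps, the example looks as follows. Every function here is a named placeholder for one step in the slide, not a real EDG command or API call.

```python
# Hypothetical walk-through of the Release 1.0 example; all helper
# names are invented stand-ins for the steps, not real EDG tools.

def lookup_se_filename(lfn: str) -> str:
    """Stand-in for the RC/RLS lookup of an LFN known to the SE."""
    return lfn.removeprefix("lfn:")

def pin(se_filename: str) -> str:
    """Stand-in for the SE pin call: stages the file to the SE disk
    cache if needed and returns a URL where it can be fetched."""
    return f"gsiftp://se.example.org/{se_filename}"

def gridftp_copy(url: str, dest: str) -> None:
    """Stand-in for a GridFTP transfer (e.g. a globus-url-copy)."""
    print(f"copy {url} -> {dest}")

# 1. user has an LFN that exists in the SE
lfn = "lfn:/atlas/prod/file001.dat"
# 2. user pins the file; 3. SE returns a URL for the staged copy
url = pin(lookup_se_filename(lfn))
# 4. user transfers the file with GridFTP to any site (or reads it
#    directly, e.g. via NFS from a local job)
gridftp_copy(url, "/tmp/file001.dat")
```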
Migration Plan • Timing driven by WP2 • SE 1.0 will allow the ReplicaLocationService to transfer files between SEs, including MSS (at least Castor) • Register existing files in the SE; the PFN in the RC/RLS is unchanged (see the sketch below) • GDMP can then stop staging files to MSS • The SE has the choice of staying a disk-only SE or becoming a cache for the MSS • Can then move SE files to MSS and register MSS files in the RLS
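The "register existing files, PFN unchanged" step can be pictured as a one-way catalogue update. This sketch is hypothetical: the catalogue shapes and function name are invented, and the only point is that the RC/RLS entry is read, never rewritten.

```python
# Hypothetical migration step: make the SE aware of files already in
# the MSS without changing the PFNs stored in the RC/RLS.

rls_catalogue = {  # stand-in for the RC/RLS: LFN -> PFN (unchanged)
    "lfn:/atlas/file001.dat": "castor://cern.ch/atlas/file001.dat",
}

se_catalogue = {}  # SE-side metadata: SE filename -> backing MSS path

def register_existing(lfn: str) -> None:
    """Record the existing MSS file in the SE catalogue. The RLS entry
    is only read, so clients keep resolving the same PFN as before."""
    pfn = rls_catalogue[lfn]
    se_name = lfn.removeprefix("lfn:")
    se_catalogue[se_name] = {"mss_path": pfn, "on_disk": False}

register_existing("lfn:/atlas/file001.dat")
print(se_catalogue)
```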
Future developments • R-GMA information provider • Quotas • OGSA • Further SRM functionality • slashGrid
Questions/Problems (1) • Space reservation • An API for reservation exists… • …but it cannot be honoured unless all access is via the SE • This requires Posix I/O, planned for a future release • SE-only access may never be acceptable to local sites • The current solution is best-efforts reservation – the SE will not allow a reservation if the space does not exist. It could also return a ‘quality of reservation’ (e.g. the fraction of free space reserved); see the sketch below
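A minimal sketch of this best-efforts scheme, assuming a hypothetical interface: refuse any reservation that exceeds the current free space, and report the 'quality of reservation' as the fraction of free space the grant would consume. Since nothing enforces the grant until all access goes through the SE, a fraction near 1.0 should be read as a weak promise.

```python
# Hypothetical best-efforts reservation; names are illustrative,
# not the WP5 API.

class ReservationError(Exception):
    pass

def reserve(requested_bytes: int, free_bytes: int) -> float:
    """Grant a best-efforts reservation and return its 'quality':
    the fraction of currently free space it would consume. The SE
    cannot truly honour the grant unless all access is via the SE,
    so callers should treat a fraction near 1.0 with suspicion."""
    if requested_bytes > free_bytes:
        raise ReservationError("not enough free space on the SE")
    return requested_bytes / free_bytes

print(reserve(10 * 2**30, 100 * 2**30))  # 0.1: comfortable reservation
print(reserve(90 * 2**30, 100 * 2**30))  # 0.9: likely to be squeezed
```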
Questions/Problems (2) • Authorisation • Is VOMS sufficient? • The SE can use LCAS functions in a similar way to the EDG gatekeeper • We allow ACLs and can set up a standard set for standard roles (sketched below), but who applies them, and how are they managed externally? • Replicas seem clear enough, but other files do not • How does our model of different software at different sites fit with the EDG Software Release Model? • How do we handle external packages in the new regime?
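One way to picture "a standard set for standard roles" is a default ACL template applied at file registration. The sketch below is an assumption about how that could look; note that real GACL is an XML format, and the role names and permission model here are invented for illustration.

```python
# Illustrative sketch of default ACLs for standard roles; this is not
# GACL syntax (GACL is XML) and the roles/permissions are assumptions.

STANDARD_ROLE_ACLS = {
    "production-manager": {"read", "write", "list", "admin"},
    "vo-member":          {"read", "list"},
    "anonymous":          set(),
}

def default_acl(owner_dn: str, vo: str) -> dict:
    """Build the standard ACL applied when a file is registered.
    Who maintains these templates, and how they are managed outside
    the SE, is exactly the open question on this slide."""
    return {
        owner_dn: {"read", "write", "list", "admin"},
        f"voms:/{vo}/Role=production-manager":
            STANDARD_ROLE_ACLS["production-manager"],
        f"voms:/{vo}": STANDARD_ROLE_ACLS["vo-member"],
    }

def allowed(acl: dict, credential: str, op: str) -> bool:
    """Check one credential against the ACL for one operation."""
    return op in acl.get(credential, set())

acl = default_acl("/O=Grid/CN=John Gordon", "atlas")
print(allowed(acl, "voms:/atlas", "read"))   # True
print(allowed(acl, "voms:/atlas", "write"))  # False
```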