Slide 1: Data Management
Paul Millar GridPP 8th Collaboration meeting 22-23 September 2003
Slide 2: Outline
- Data management
  - Contribution to TB2.0
  - Contribution to TB2.1
  - The next six months
  - The future: EGEE and beyond
- Storage
  - Releases
  - Collaboration
  - Deployment
  - Installation
  - Limitations
  - Development priorities
  - The future
Slide 3: Contribution to TB2.0
- Java-based services (running under either Tomcat or Oracle 9iAS)
- Web-services communications infrastructure
- Clients for many languages auto-generated from WSDL (Java, C/C++, ...)
- Persistent service data stored in MySQL or an Oracle DB
- Replication service framework (RM, RLS, RMC, ROS)
- Java security package (GSI complement/replacement)
Slide 4: Replication Service Framework
[Diagram: the user's view of the replication service framework]
Slide 5: TestBed-2.1
- The EDG RLI is currently being integrated (yes, right this second!)
- Integration includes the interaction between the replica manager and the RLI
- The new security framework will also be integrated: we can deploy secure services, but C++ clients are still under development
Slide 6: The next six months...
- Support the 2.0 release for production, including bug fixes
- Support the Oracle-based LRC for LCG
- Bugfix the 2.1 release (but at a lower priority)
- Run performance and scalability tests
- Complete our documentation deliverables
- Demo data management in the 2.1 release
Slide 7: EGEE and beyond
- Interface with the CERN EGEE cluster for data management
- New projects: catalogue proxies, Replica Management Service, file prefetch, local data management, more advanced optimisation
- Need a clearly defined procedure for handing over code to service people
- Work with Globus to integrate the two RLS systems
Slide 8: Storage Element Status
GridPP8 Bristol, 22-23 September 2003
Slide 9: Releases
- Release for Testbed 2.0:
  - Web service in secure or insecure mode (or both!)
  - Access control being integrated
  - Storage back-ends: disk, CASTOR, HPSS, Atlas DataStore
  - Access protocols: GridFTP, NFS, RFIO
- Release for Testbed 2.1 (2.0 plus):
  - SRM version 1.1 interface
  - Proper queuing system
  - Compiled with gcc 3.2.2
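The "proper queuing system" listed for the 2.1 release can be illustrated with a minimal sketch: a bounded FIFO queue that serialises transfer requests and refuses work beyond a fixed backlog. The class and method names here (`TransferQueue`, `submit`, `next`) are hypothetical illustrations, not the EDG Storage Element API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of a bounded FIFO request queue for an SE, assuming
// requests are handled oldest-first and excess load is rejected rather
// than accepted unboundedly. Illustrative only, not the EDG code.
public class TransferQueue {
    private final Queue<String> pending = new ArrayDeque<>();
    private final int maxBacklog;

    public TransferQueue(int maxBacklog) {
        this.maxBacklog = maxBacklog;
    }

    /** Enqueue a request id; returns false if the backlog is full. */
    public synchronized boolean submit(String requestId) {
        if (pending.size() >= maxBacklog) {
            return false;               // reject: backlog limit reached
        }
        pending.add(requestId);
        return true;
    }

    /** Take the oldest pending request, or null if the queue is empty. */
    public synchronized String next() {
        return pending.poll();
    }
}
```

A worker thread would loop on `next()` and perform the actual GridFTP/NFS/RFIO transfer; bounding the backlog is what distinguishes a queuing system from simply spawning a thread per request.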
Slide 10: Collaborations
- DataGrid Storage Element:
  - Integrates with WP2 Data Replication Services (Reptor)
  - Jobs running on worker nodes in a ComputingElement cluster may read or write files on an SE
- SRM (Storage Resource Manager):
  - Collaboration between Lawrence Berkeley, FermiLab, Jefferson Lab, CERN and Rutherford Appleton Lab
Slide 11: Deployment
- Working Storage Elements:
  - CERN: CASTOR and disk
  - UAB Barcelona: CASTOR
  - RAL: Atlas DataStore and disk
  - ESA/ESRIN: disk
  - CC-IN2P3: HPSS
- Testing:
  - ESA/ESRIN: planned tape MSS (AMS)
  - NIKHEF: disk
- Others:
  - SARA: building from source on SGI
  - INSA/WP10: to build support for DICOM servers
Slide 12: Installation
- RPMs and source available
- Source compiles with gcc 2.95.x and 3.2.2
- Configures using LCFG-ng
- Tools available to build and install SEs without LCFG:
    ./configure --prefix=/opt/edg
    make
    make install
    ./edg-se-configure-all mss-type=disk
Slide 13: Limitations
- Disk cache:
  - Not much protection
  - On disk-only SEs, files are copied into the disk cache
  - No proper disk cache management yet (e.g. pinning)
- Users can only be members of one VO; once VOMS is supported this limitation will go away
- VOs not properly supported for insecure (anonymous) access
- ACLs are fixed and can only be modified by the SE admin
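The missing disk-cache management with pinning can be sketched as an LRU cache whose entries may be pinned so eviction skips them. Everything here (`PinnedLruCache`, a capacity counted in files rather than bytes) is an illustrative assumption, not the Storage Element's design.

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Set;

// Sketch of a pin-aware LRU disk cache. The LinkedHashMap is in
// access order, so its first unpinned key is the least-recently-used
// eviction candidate. Capacity is a file count here for simplicity;
// a real cache would track bytes.
public class PinnedLruCache {
    private final LinkedHashMap<String, Long> files =
            new LinkedHashMap<>(16, 0.75f, true);   // true = access order
    private final Set<String> pinned = new HashSet<>();
    private final int capacity;

    public PinnedLruCache(int capacity) {
        this.capacity = capacity;
    }

    /** Add a file; evict unpinned LRU entries while over capacity. */
    public void add(String name, long sizeBytes) {
        files.put(name, sizeBytes);
        while (files.size() > capacity) {
            String victim = files.keySet().stream()
                    .filter(f -> !pinned.contains(f))
                    .findFirst().orElse(null);
            if (victim == null) {
                break;                  // everything pinned: cache overfull
            }
            files.remove(victim);
        }
    }

    public void pin(String name)   { pinned.add(name); }
    public void unpin(String name) { pinned.remove(name); }
    public boolean contains(String name) { return files.containsKey(name); }
}
```

A pinned file survives eviction even when it is the least recently used, which is exactly the guarantee a job needs while it is still reading a cached replica.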
Slide 14: Development Priorities
- EDG TrustManager adopted for web-services authentication: done
- Proper queuing system: done
- Delete and exists operations (not part of SRM): done
- SRM v1.1 interface: being integrated
- Access control using GACL: being integrated
- Improved disk cache management (including pinning): not yet done
- VOMS support: not yet done
Slide 15: Future Directions post-EDG
- Guaranteed reservations
- SRM2 recommendations: volatile, durable and permanent files and space
- Full SRM version 2.1
- Scalability:
  - Will be achieved by making a single SE distributed
  - Not hard to do
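The volatile/durable/permanent distinction from the SRM2 recommendations can be sketched as a lifetime rule: volatile entries expire and may be garbage-collected, durable entries expire but must not be silently deleted, and permanent entries never expire. The enum and methods below are an illustration of that rule under those assumptions, not text from the SRM 2.1 specification.

```java
// Sketch of SRM2-style storage classes. Names (SpaceEntry, collectable)
// are hypothetical; only the three-way volatile/durable/permanent
// distinction comes from the slide above.
public class SpaceEntry {
    public enum Type { VOLATILE, DURABLE, PERMANENT }

    private final Type type;
    private final long createdMillis;
    private final long lifetimeMillis;   // ignored for PERMANENT

    public SpaceEntry(Type type, long createdMillis, long lifetimeMillis) {
        this.type = type;
        this.createdMillis = createdMillis;
        this.lifetimeMillis = lifetimeMillis;
    }

    /** True if the entry's lifetime has elapsed (never for PERMANENT). */
    public boolean expired(long nowMillis) {
        if (type == Type.PERMANENT) {
            return false;
        }
        return nowMillis - createdMillis >= lifetimeMillis;
    }

    /** Only expired VOLATILE entries may be reclaimed automatically;
     *  expired DURABLE entries need explicit operator action. */
    public boolean collectable(long nowMillis) {
        return type == Type.VOLATILE && expired(nowMillis);
    }
}
```

Guaranteed reservations then fall out naturally: space backing a durable or permanent entry can never be reclaimed behind the user's back.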