HPSS Features and Futures
Presentation to SCICOMP4
Randy Burris, ORNL’s Storage Systems Manager
Table of Contents
• Background – design goals and descriptions
  • General information
  • Architecture
  • How it works
  • Infrastructure
• HPSS 4.3 – current release (as of Sept. 1)
• HPSS 4.5
• HPSS 5.1
  • Background
  • Main features
HPSS is…
• File-based storage system – software only.
• Extremely scalable, targeting:
  • Millions of files;
  • Multiple-petabyte capacity;
  • Gigabyte-per-second transfer rates;
  • Single files ranging to terabyte size.
• Distributed:
  • Multiple nodes;
  • Multiple instances of most servers.
• Winner of an R&D 100 award (1997).
HPSS is…
• Developed by LLNL, Sandia, LANL, ORNL, NERSC, and IBM
• Used in more than 40 very large installations:
  • ASCI (Livermore, Sandia, and Los Alamos labs)
  • High-energy physics sites (SLAC, Brookhaven, other US sites, and sites in Europe and Japan)
  • NASA
  • Universities
• Examples at ORNL:
  • Archiving system – ARM
  • Backup system – backups of servers, O2000
  • Active repository – climate, bioinformatics, …
[Diagram] Example of the type of configuration HPSS is designed to support: HPSS core and secondary servers control a parallel RAID disk farm and a parallel tape farm over a HIPPI/GigE/ATM network whose throughput is scalable to the GB/s region; sequential systems, workstation clusters or parallel systems, visualization engines, and frame buffers attach locally, while client hosts reach the system via HSI, NFS, FTP, and DFS over LANs, WANs, and the Internet.
[Diagram] HPSS Software Architecture: common infrastructure (communications, security, transaction manager, metadata manager, logging, infrastructure services, 64-bit math libraries); clients (Client API, PFS, applications, data management, and system daemons – HSI, FTP & PFTP, NFS, DFS); core servers (name servers, bitfile servers, storage servers, movers, physical volume library, physical volume repositories, location servers); storage system management spanning all components; and other modules (migration/purge, repack, NSL UniTree migration, installation). Components defined in the IEEE Mass Storage Reference Model are shown in green.
How does it work?
• The user stores a file using hsi, ftp, parallel ftp, or nfs.
• The file is assigned to a particular Class of Service (COS) depending upon user selection or defaults.
• The default COS specifies a hierarchy with disk at the top level and tape below it.
• So the file is first stored on disk (the HPSS cache).
• When enough time elapses or the cache gets full enough, the file is automatically copied to the next level – tape – and purged from disk (a sketch of this flow follows below).
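To make the cache behavior concrete, here is a minimal sketch of the migrate-then-purge flow described above. It is not HPSS code: the age limit, fill thresholds, and names are illustrative assumptions; only the overall logic (files age in the disk cache, get copied to tape, and disk copies are purged when the cache fills) follows the description on this slide.

```python
import time

# Minimal sketch of a disk-to-tape hierarchy. The thresholds and names are
# illustrative assumptions, not actual HPSS policy values.
MIGRATE_AGE_SECONDS = 3600        # copy to tape after an hour in the cache
PURGE_START_THRESHOLD = 0.90      # begin purging when the cache is 90% full
PURGE_STOP_THRESHOLD = 0.70      # stop purging once usage drops to 70%

class CacheFile:
    def __init__(self, name, size):
        self.name = name
        self.size = size
        self.created = time.time()
        self.on_tape = False      # has the file been copied to the next level?

def migrate_and_purge(cache_files, cache_used, cache_capacity, copy_to_tape):
    """One pass over the disk cache: migrate aged files, purge when too full."""
    now = time.time()

    # Migration: copy files that have aged in the disk cache down to tape.
    for f in cache_files:
        if not f.on_tape and now - f.created >= MIGRATE_AGE_SECONDS:
            copy_to_tape(f)       # the file now exists at both levels
            f.on_tape = True

    # Purge: when the cache is too full, drop disk copies that are already on
    # tape, oldest first, until usage falls back below the stop threshold.
    if cache_used / cache_capacity >= PURGE_START_THRESHOLD:
        for f in sorted(cache_files, key=lambda f: f.created):
            if f.on_tape:
                cache_files.remove(f)
                cache_used -= f.size
                if cache_used / cache_capacity <= PURGE_STOP_THRESHOLD:
                    break
    return cache_used
```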
HPSS Infrastructure
• HPSS depends upon (i.e., is layered over):
  • Operating system (AIX or Solaris for core servers)
  • Distributed Computing Environment (DCE):
    • Security – authentication and authorization
    • Name service
    • Remote Procedure Calls
  • Encina Structured File System (SFS) – a flat-file system used to store metadata such as file names and segment locations. Encina is built upon DCE.
  • GUI – the Sammi product from Kinesix
  • Distributed File System (DFS) – for some installations. DFS is built upon DCE.
HPSS 4.3 (newest released version)
• Support for new hardware:
  • StorageTek 9940 tape drives
  • IBM Linear Tape Open (LTO) tape drives and robots
  • Sony GY-8240 tape drives
• Redundant Arrays of Independent Tapes:
  • An ASCI PathForward project contracted with StorageTek
  • Target is multiple tape drives striped with parity
HPSS 4.3 (continued)
• Mass configuration:
  • Previously, each device or server had to be configured individually through the GUI.
  • That could be tedious and error-prone for installations with hundreds of drives or servers.
  • Mass configuration takes advantage of the command line interface (new with HPSS 4.2).
  • It allows scripted configuration of devices and various types of servers (see the sketch below).
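A short sketch of what scripted mass configuration can look like. The command name hpss_config, its options, and the CSV layout are hypothetical stand-ins, not the actual HPSS command line interface; the point is simply that a device list plus a loop replaces hundreds of GUI dialogs.

```python
import csv
import subprocess

# "hpss_config" and its arguments are hypothetical; substitute the real HPSS CLI
# invocation for your installation. The CSV format is also an assumption.
HPSS_CLI = "hpss_config"

def configure_devices(device_list_csv):
    """Read rows of (name, type, node, path) and configure each device in turn."""
    with open(device_list_csv, newline="") as f:
        for name, dev_type, node, path in csv.reader(f):
            cmd = [HPSS_CLI, "create-device",
                   "--name", name, "--type", dev_type,
                   "--node", node, "--path", path]
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                print(f"failed to configure {name}: {result.stderr.strip()}")

if __name__ == "__main__":
    configure_devices("tape_drives.csv")   # e.g. hundreds of drives in one file
```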
HPSS 4.3 (continued)
• Support for IBM High Availability configurations:
  • HACMP (High Availability Cluster Multi-Processing) hardware feature
  • HACMP supporting AIX software
  • Handles node and network interface failures
  • Essentially a controlled failover to a spare node
  • Initiated manually
HPSS 4.3 (continued)
• Other features:
  • Support for Solaris 8
  • Client API ported to Red Hat Linux
  • Support for NFS v3
• By the way:
  • In our Probe testbed, we’re running HPSS 4.3 on AIX 5L on our S80.
  • Not certified – just trying it to see what happens.
HPSS 4.5 – target date 7/1/2002
• Features:
  • Implement an efficient, transparent interface for users to access their HPSS data
  • Uses HPSS as an archive
  • Available freely for Linux (no licensing fee)
• Key requirements:
  • Support HPSS access via XFS using DMAPI
  • XFS/HPSS filesystems shall be accessible via NFS for transparent access
  • Support archived filesets (rename/delete)
  • Support on Linux
HPSS 4.5 (continued)
• Provide migration and purge from XFS based on policy
• Stage data from HPSS when the data has been purged from XFS (see the sketch below)
• Support whole and partial file migration
• Support utilities for the following:
  • Create/delete XFS fileset metadata in HPSS
  • List HPSS filenames in an archived fileset
  • List XFS names of files
  • Compare archive dumps from HPSS and XFS
  • Delete all files from the HPSS side of an XFS fileset
  • Delete files older than a specified age from the HPSS side
  • Recover files deleted from XFS filesets but not yet deleted from HPSS
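The stage-on-access side of this design can be pictured with a small control-flow sketch. This is Python pseudocode under stated assumptions, not the real C DMAPI interface: the event source, hpss_read, restore_blocks, and resume are hypothetical names, but the flow (a read hits a purged region, a DMAPI-style event fires, the daemon copies the data back from HPSS, then the read resumes transparently) matches the requirements above.

```python
# Hypothetical objects: event_source delivers DMAPI-style "data missing" events,
# hpss_archive reads from HPSS, xfs_fileset writes blocks back into the XFS file.
def stage_daemon(event_source, hpss_archive, xfs_fileset):
    """Stage purged XFS data back from the HPSS archive on demand."""
    while True:
        event = event_source.next_read_event()       # a read hit a purged region
        offset, length = event.offset, event.length  # partial-file staging possible

        # Fetch only the missing region from HPSS (the whole file if it was
        # migrated and purged in its entirety).
        data = hpss_archive.hpss_read(event.path, offset, length)

        # Refill the purged blocks in the XFS file ...
        xfs_fileset.restore_blocks(event.path, offset, data)

        # ... then let the blocked user read continue transparently.
        event.resume()
```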
HPSS 5.1 – release date Jan. 2003
• Background:
  • HPSS was designed in 1992/1993 as a total rewrite of NSL UniTree.
  • Goal – achieve speed using many parallel servers.
  • The Distributed Computing Environment (DCE) was a prominent and promising infrastructure product.
  • Encina’s Structured File System (SFS) was the only product supporting distributed, nested transactions.
  • The management GUI was mandated to be Sammi, from Kinesix, because of anticipated reuse of NSL UniTree screens.
HPSS 5.1 Background (continued)
• Today:
  • DCE – future in doubt
  • Encina’s Structured File System:
    • Future in doubt
    • Performance problems
    • Nested transactions are no longer needed, nor are distributed transactions
  • Sammi – relatively expensive and feature-poor
HPSS 5.1 Features
• New basic structure:
  • DCE still used – there is still no alternative.
  • Designing a “core” server that combines the name server, the bitfile server, the storage server, and parts of the Client API.
  • Replacing SFS with a commercial DBMS – DB2 – but the design and coding goal is easy replacement of the DBMS (see the sketch below).
• Expect considerable speed improvement:
  • Oracle and DB2 were both ~10 times faster than SFS in a model run in ORNL’s Probe testbed.
  • There is reduced communication between servers.
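One way to read the "easy replacement of the DBMS" goal is a thin metadata interface that hides the backend. This is a minimal sketch, not HPSS 5.1 code; the class and method names are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class MetadataStore(ABC):
    """What the core server needs from the metadata layer, independent of the DBMS.
    (Names are hypothetical; the actual HPSS metadata operations are far richer.)"""

    @abstractmethod
    def insert_bitfile(self, bitfile_id: str, attributes: dict) -> None: ...

    @abstractmethod
    def lookup_bitfile(self, bitfile_id: str) -> dict: ...

class Db2MetadataStore(MetadataStore):
    """DB2-backed implementation; only this class contains DB2-specific calls."""

    def __init__(self, connection):
        self.conn = connection                 # e.g. a DB2 connection handle

    def insert_bitfile(self, bitfile_id, attributes):
        ...                                    # DB2-specific INSERT would go here

    def lookup_bitfile(self, bitfile_id):
        ...                                    # DB2-specific SELECT would go here

# Swapping the DBMS means adding another subclass (an OracleMetadataStore, say) and
# changing only the line that constructs the store; the core server is untouched.
```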
[Diagram] HPSS Software Architecture (the same architecture diagram shown earlier).
New Java Admin Interface
• User benefits:
  • Fast
  • Immediately portable to Unix, Windows, Macintosh
  • Picking up various manageability improvements
• Developer benefits:
  • Object oriented
  • Much code sharing:
    • Central communication and processing engine
    • Different presentation engines (see the sketch below):
      • GUI
      • ASCII for the command-line interface
      • A third one, a Web interface, would be easy to add later
  • Overall maintenance much easier – code generated from HPSS C structures
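The engine-plus-presenters split is what drives the code sharing. The actual interface is written in Java; the sketch below stays in Python only to keep all the examples here in one language, and every name in it is hypothetical.

```python
from abc import ABC, abstractmethod

class AdminEngine:
    """Central communication and processing engine shared by every front end."""

    def list_servers(self):
        # The real engine would query HPSS; canned data keeps the sketch runnable.
        return [{"name": "mover01", "state": "up"}, {"name": "bfs01", "state": "up"}]

class Presenter(ABC):
    """A presentation engine: GUI, ASCII command line, or (later) a Web interface."""

    @abstractmethod
    def show_servers(self, servers): ...

class AsciiPresenter(Presenter):
    """The command-line presentation engine; a GuiPresenter or WebPresenter would
    implement the same interface and reuse the same AdminEngine."""

    def show_servers(self, servers):
        for s in servers:
            print(f"{s['name']:10s} {s['state']}")

if __name__ == "__main__":
    AsciiPresenter().show_servers(AdminEngine().list_servers())
```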
Future futures
• These topics are under discussion; no guarantees.
• In each case, a gating factor is the availability of staff to do the development.
• Modification of HPSS’s parallel ftp to comply with the GridFTP specs – interest from ASCI, Argonne, and others.
• GPFS/HPSS interface:
  • Participants – LLNL, LBNL, Indiana University, and IBM
  • Seeking further help
• SAN exploitation – a gleam in the eye right now.
Questions?
• http://www4.clearlake.ibm.com/hpss/ – HPSS home page
• http://www.sdsc.edu/hpss/hpss1.html – HPSS tutorial
• http://www.ccs.ornl.gov – Center for Computational Sciences
• http://www.csm.ornl.gov – Computer Science and Mathematics Division
• http://www.csm.ornl.gov/PROBE – Probe testbed