TeraGrid: Logical Site Model Chaitan Baru Data and Knowledge Systems San Diego Supercomputer Center
National Science Foundation TeraGrid • Prototype for Cyberinfrastructure (the “lower” levels) • High Performance Network: 40 Gb/s backbone, 30 Gb/s to each site • National Reach: SDSC, NCSA, CIT, ANL, PSC • Over 20 teraflops of compute power • Approx. 1 PB of rotating storage • Extending by 2-3 sites in Fall 2003
SDSC Focus on Data: A Cyberinfrastructure “Killer App” • Over the next decade, data will come from everywhere • Scientific instruments • Experiments • Sensors and sensornets • New devices (personal digital devices, computer-enabled clothing, cars, …) • And be used by everyone • Scientists • Consumers • Educators • General public • SW environment will need to support unprecedented diversity, globalization, integration, scale, and use [Diagram: data from sensors, instruments, simulations, and analysis]
SDSC Machine Room Data Architecture • .5 PB disk • 6 PB archive • 1 GB/s disk-to-tape • Support for DB2/Oracle • Enable SDSC to be the grid data engine [Diagram: WAN (30 Gb/s) and LAN (multiple GbE, TCP/IP) connect Blue Horizon, a Power 4 DB server with ~10 TB of DBMS disk, a Power 4 HPSS server, a Sun F15K, and a 4 TF Linux cluster with 50 TB of local disk; a SAN (2 Gb/s, SCSI/IP or FC/IP; 30 MB/s per drive, 200 MB/s per controller) links 100 TB of FC GPFS disk and a 400 TB FC disk cache to the database engine, data miner, and vis engine, and to tape silos (6 PB, 32 tape drives, 1 GB/s disk to tape)]
The TeraGrid Logical Site View • Ideally, applications/users would like to see: • One single computer • Global everything: filesystem, HSM, database system • With highest possible performance • We will get there in steps • Meanwhile, the TeraGrid Logical Site View provides a uniform view of sites • A common abstraction supported by every site
Logical Site View • The Logical Site View is currently provided simply as a set of environment variables • Can easily become a set of services (see the sketch below) • This is the minimum required to enable a TG application to easily make use of TG storage resources • However, for “power” users, we also anticipate the need to expose the mapping from logical to physical resources at each site • Enables applications to take advantage of site-specific configurations and obtain optimal performance
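As a minimal, hedged sketch (not part of the TG specification), the snippet below shows how an application could read the Logical Site View through one small accessor; the same accessor could later be backed by a per-site information service without changing callers. The function name and the choice of Python are illustrative assumptions; the variable names are those listed on a later slide.

import os

# Logical names advertised by the TG Logical Site View (subset; see the
# environment-variable slide).
_LOGICAL_KEYS = (
    "TG_NODE_SCRATCH",
    "TG_CLUSTER_SCRATCH",
    "TG_GLOBAL_SCRATCH",
    "TG_CLUSTER_HOME",
    "TG_GLOBAL_HOME",
    "TG_STAGING",
    "TG_PFS",
)

def get_site_view():
    """Return the Logical Site View as a mapping of logical name -> physical path.

    Today this only reads environment variables; a later implementation could
    answer the same query from a site information service.
    """
    return {key: os.environ.get(key) for key in _LOGICAL_KEYS}

if __name__ == "__main__":
    for name, path in sorted(get_site_view().items()):
        print(name, "->", path)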
Basic Data Operations • The Data WG has stated as a minimum requirement: • The ability for a user to transfer data from any TG storage resource to memory on any TG compute resource – possibly via an intermediate storage resource • The ability to transfer data between any two TG storage resources (see the sketch below)
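A hedged sketch of these two operations, assuming a GridFTP client (globus-url-copy) is available on the TG machines; the hostnames, paths, and the optional intermediate hop are invented for illustration and are not prescribed by the Data WG.

import subprocess

def transfer(src_url, dst_url, via=None):
    """Copy data between two TG storage resources with globus-url-copy.

    If `via` is given, the data is staged through an intermediate storage
    resource first, then forwarded to the destination.
    """
    hops = [(src_url, via), (via, dst_url)] if via else [(src_url, dst_url)]
    for source, dest in hops:
        subprocess.run(["globus-url-copy", source, dest], check=True)

# Hypothetical example: pull a file from an archive at one site into cluster
# scratch at another, staging through a disk cache along the way.
transfer(
    "gsiftp://archive.sitea.teragrid.org/hpss/user/input.dat",
    "file:///scratch/cluster/user/input.dat",
    via="gsiftp://cache.siteb.teragrid.org/staging/user/input.dat",
)

Reading the staged file into memory on a compute node then completes the storage-to-memory case.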
Logical Site View [Diagram: a “network” connecting the site's compute clusters, HSM, DBMS, and collection management, each with an associated staging area, plus scratch space]
Environment Variables • TG_NODE_SCRATCH • TG_CLUSTER_SCRATCH • TG_GLOBAL_SCRATCH • TG_SITE_SCRATCH…? • TG_CLUSTER_HOME • TG_GLOBAL_HOME • TG_STAGING • TG_PFS • TG_PFS_GPFS, TG_PFS_PVFS, TG_PFS_LUSTRE • TG_SRB_STAGING
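A brief, hedged example of how a job might consume these variables; the fallback order and function names are illustrative choices, not TG policy.

import os

def pick_scratch():
    """Pick a working directory, preferring the most local scratch available."""
    for var in ("TG_NODE_SCRATCH", "TG_CLUSTER_SCRATCH", "TG_GLOBAL_SCRATCH"):
        path = os.environ.get(var)
        if path and os.path.isdir(path):
            return path
    raise RuntimeError("no TG scratch space advertised at this site")

def staging_dir():
    """Directory where input data should be pre-staged, if the site defines one."""
    return os.environ.get("TG_SRB_STAGING") or os.environ.get("TG_STAGING")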
Issues Under Consideration • Suppose a user wants to run a computation, C, on data, D • The TG middleware should automatically figure out (see the sketch below): • Whether C should move to where D is, or vice versa • Whether data, D, should be pre-fetched or “streamed” • Whether output data should be streamed to persistent storage, or staged via intermediate storage • Whether prefetch/staging time ought to be “charged” to the user or not
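To make these questions concrete, here is a toy placement heuristic of the kind such middleware might apply; every threshold, bandwidth figure, and name below is an invented assumption, not an actual TG policy.

def plan_execution(data_gb, code_gb, wan_gbps=30.0, scratch_gb=1000.0):
    """Toy heuristic answering the placement questions on this slide."""
    plan = {}
    # Move whichever is cheaper to ship across the WAN link.
    plan["move"] = "computation" if code_gb < data_gb else "data"
    # Prefetch into scratch only if the input actually fits; otherwise stream.
    plan["input"] = "prefetch" if data_gb <= scratch_gb else "stream"
    # Rough transfer-time estimate (hours), useful when deciding whether
    # staging time is large enough that charging for it matters.
    plan["transfer_hours"] = data_gb * 8 / wan_gbps / 3600
    return plan

print(plan_execution(data_gb=5000, code_gb=2))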