
Facilities and How They Are Used ORNL/Probe


Presentation Transcript


  1. Facilities and How They Are Used ORNL/Probe • Randy Burris • Dan Million – facility administrator • Mike Gleicher – development contractor • Florence Fowler – network research and Linux support

  2. Overview • Background • Facilities for Data Analysis • Facilities for file transfers and Grid • Facilities for file system integration and improved access to tertiary storage

  3. Background – Goals • Provide an environment for SDM ISIC activities • Provide an environment for SciDAC applications • Climate • Terascale Supernova Initiative • High Energy Nuclear Physics • Provide an environment that is appropriate for production deployment • Support and perform R&D on improved tertiary storage access

  4. Background – Supplying the environments • Predict equipment needs, assuming: • Large memory • Large storage capacity – disk and tape • Good bandwidth – network and storage • Moderate CPU power • AIX or RedHat Linux 7.2 operating systems • Provide basic software (compiler) • Respond to subsequent requests

  5. Facilities for Data Analysis (diagram) • External networks: ESnet OC12 and OC192 • Production HPSS and Probe HPSS • AIX S80: 6 processors (467 MHz RS64-II), 2 GB • AIX RS/6000 p630: 4 processors (1 GHz POWER4), 4 GB • AIX RS/6000 H70: 4 processors (340 MHz RS64-II), 4 GB • Linux node: 4 processors (1.4 GHz Xeon), 8 GB • Alice cluster (OSCAR; PVFS): each node a dual P-III with 240 GB • Argonne cluster (PVFS): each node one Athlon with 480 GB • Storage: four 1 TB FibreChannel arrays, 200 GB SCSI RAID, 360 GB internal disk, 360 GB T3 FibreChannel, and two 180 GB arrays on a FibreChannel switch

  6. Data Analysis Software Environment – AIX and Linux • Java • R data analysis • GGobi data visualization • GNU Fortran and C • AIX Fortran and C

  7. Facilities for file transfers and Grid (diagram) • External networks: ESnet OC12 and OC192 • IBM B80: 2 processors, 1 GB, AIX • IBM 44P: 1 processor, 512 MB, AIX • Two Linux nodes: 2 processors, 2 GB each • Sun E450: 512 MB, Solaris (Grid node) • Sun E250: 512 MB, Solaris (Grid node) • Some capacity unassigned or floating • Storage: three 1 TB FibreChannel arrays, two 300 GB SCSI arrays, and 180 GB FibreChannel • Software: Globus 2.0 on AIX and Solaris; HRM research on Solaris
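The Grid nodes above run Globus 2.0, so bulk transfers between them can be scripted around the toolkit's globus-url-copy client. A minimal sketch follows; the hostnames, paths, and stream count are illustrative assumptions, not actual Probe endpoints.

    # Sketch only: drive a GridFTP transfer between two Grid nodes by calling
    # the Globus 2.0 globus-url-copy client. Endpoints below are hypothetical.
    import subprocess

    def gridftp_copy(src_url: str, dst_url: str, streams: int = 4) -> None:
        """Copy one file between GridFTP servers using parallel TCP streams."""
        subprocess.run(
            ["globus-url-copy", "-p", str(streams), src_url, dst_url],
            check=True,  # raise if the transfer fails
        )

    if __name__ == "__main__":
        gridftp_copy(
            "gsiftp://grid-node-a.example.gov/probe/data/run042.dat",   # hypothetical source
            "gsiftp://grid-node-b.example.gov/probe/cache/run042.dat",  # hypothetical destination
        )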

  8. HRM Research Environment (diagram) • ORNL Production HPSS and ORNL Probe HPSS, each with an STK library and STK tape drive • NERSC Production HPSS with disk cache • "Sleepy" (Sun E250) running pftp, hsi, GridFTP, and Globus 2.0 • Two-stripe transfers across two 180 GB Sun FibreChannel RAID arrays

  9. HRM Research Environment, soon (diagram) • ORNL Production HPSS and Probe HPSS, each with an STK library • NERSC Production HPSS with disk cache • Brookhaven Production HPSS holding selected STAR data • "Sleepy" (Sun E250) running pftp, hsi, GridFTP, and Globus 2.0 • 3 TB of tape storage and a 1 TB FibreChannel RAID array

  10. Improving tertiary access • Improve end-user access (hsi) • Investigate linking PVFS and HPSS • Study HPSS I/O involving Unix files • Investigate use of HPSS linux movers • Monitor (and evaluate when available) HPSS I/O redirection • Coordinate with a proposed project • Shared very large disk storage • Shared file system • HPSS integrated with file system and sharing the disk capacity
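One concrete way to exercise the PVFS/HPSS link mentioned above is to walk a PVFS mount and archive its files into HPSS with hsi. This is a sketch under assumptions: the mount point, HPSS directory, and the choice to shell out to hsi are illustrative, not the project's actual design.

    # Sketch only: archive files from a PVFS mount into HPSS via hsi.
    # PVFS_MOUNT, HPSS_DEST, and the one-file-per-hsi-call approach are
    # assumptions for illustration.
    import os
    import subprocess

    PVFS_MOUNT = "/pvfs/scratch"        # hypothetical PVFS mount point
    HPSS_DEST = "/home/probe/archive"   # hypothetical HPSS directory

    def archive_file(local_path: str) -> None:
        """Store one PVFS-resident file into HPSS using hsi's put command."""
        hpss_path = f"{HPSS_DEST}/{os.path.basename(local_path)}"
        # hsi accepts a command string; "put <local> : <hpss>" stores under the given HPSS name
        subprocess.run(["hsi", f"put {local_path} : {hpss_path}"], check=True)

    for name in os.listdir(PVFS_MOUNT):
        path = os.path.join(PVFS_MOUNT, name)
        if os.path.isfile(path):
            archive_file(path)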

  11. HSI improvements • Added Grid authentication service • Implemented partial file transfers • Added scheduling mechanism • Find all files to be sent • Sort by tape and location on tape
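The scheduling idea in the last two bullets can be illustrated with a small sketch: collect the pending retrievals, then sort by tape volume and by position on the tape so each cartridge is mounted once and read front to back. The FileRequest fields and sample values are assumptions, not hsi's internal data structures.

    # Sketch only: order HPSS retrievals by (tape volume, position on tape)
    # to minimize tape mounts and seeks. Field names and values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class FileRequest:
        hpss_path: str
        tape_volume: str   # cartridge holding the file
        tape_offset: int   # file's position on that cartridge

    def schedule(requests):
        """Return retrievals sorted so each tape is mounted once and read in order."""
        return sorted(requests, key=lambda r: (r.tape_volume, r.tape_offset))

    pending = [
        FileRequest("/star/run7.dat", "VOL012", 884),
        FileRequest("/star/run3.dat", "VOL007", 120),
        FileRequest("/star/run5.dat", "VOL012", 35),
    ]
    for r in schedule(pending):
        print(r.tape_volume, r.tape_offset, r.hpss_path)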

  12. File system integration (diagram) • Probe HPSS • Alice cluster (OSCAR; PVFS): each node a dual P-III with 240 GB • "Argonne" cluster (PVFS): each node a dual Athlon with 480 GB • Linux node: 2 processors, 2 GB • Three 1 TB FibreChannel arrays

  13. HPSS, Unix files and Linux (diagram) • Alice cluster (OSCAR; PVFS): each node a dual P-III with 240 GB • Argonne cluster (PVFS): each node a dual Athlon with 480 GB • Linux node: 2 processors, 2 GB • RAIDzone: 2 processors, 1 GB, 2 TB RAID, XFS, Linux • Three 1 TB FibreChannel arrays • RAIDzone is used to study HPSS I/O involving Unix files, the HPSS Linux mover, and NFS with an HPSS archive

  14. Since 10 PM yesterday… • To support • SciDAC SDM and SciDAC applications • Research (I/O to/from Cray X1) • Production • New equipment on order; hope to get it this month • Two more Dell 2650 Linux servers • 2.4 GHz, 2 GB, five 73 GB disks • Four more 1 TB FibreChannel RAID arrays • Two more IBM servers • Tape cartridges sufficient for 30 TB using existing tape drives

  15. Questions and answers • What goals/accomplishments by Feb 2003? • All equipment installed and operational • HPSS/RAIDzone Linux mover testing complete • Link between PVFS and HPSS (Linux mover and/or hsi) tested • STAR data residing in ORNL HPSS (Probe or production) • Some data passed over the OC192 link • Why are the goals important (and to whom)? • Data analysis staff need a big-resource node • Visualization activities in support of TSI need disk cache and flexible computational resources • HPSS I/O of Unix files is central to many HPSS plans • Users of PVFS should have an archive capability • HRM needs three data sites for thorough testing of selection algorithms • OC192 bandwidth can satisfy a lot of bulk WAN transfer needs

  16. Questions and answers, continued • Which scientific domain? HENP and TSI, mostly • Why is the work significant? Who will use it? • Visualization of massive data is necessary for scientific research. This facility targets storage and movement of massive data. • Application scientists and ISIC researchers • Are others doing the same work? No. • Status at end of three years? Achievable? • HPSS and filesystems will be better integrated • Access to HPSS will be faster and easier • WAN throughput will be greatly improved • Will unsolved problems remain? Proposals? • There is never enough bandwidth or effective throughput • Filesystems and equipment are evolving rapidly; keeping up is tough • Distributed operations – data analysis, visualization – will remain challenges

  17. Discussion?
