HPC USER FORUM I/O PANEL April 2009 Roanoke, VA

  1. HPC USER FORUM I/O PANEL, April 2009, Roanoke, VA
  Panel questions: 1 response per question; limit length to 1 slide

  2. Q1. Parallel NFS finally is here!
  • With the formalization of Parallel NFS as a standard, what steps are being provided to enable it to be hosted on current (and future) platform choices?
  • We feel that pNFS will be complementary to GPFS and Lustre until it has demonstrated experience on multiple petascale systems.
  • It is complementary in that it provides ubiquitous access for non-Linux systems
  • RDMA optimizations
  • Utilization of drive and SSD media
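To make the "ubiquitous access" point concrete, the sketch below shows what pNFS looks like from a Linux client: an ordinary NFSv4.1 mount, after which applications use plain POSIX I/O. This is a minimal sketch; the server name, export path, mount point, and address are all hypothetical, and in practice one would use mount(8), which resolves the server and passes the required addr= option, rather than a raw syscall.

```c
/* Minimal sketch: mounting an NFSv4.1 (pNFS-capable) export on Linux
 * via the mount(2) syscall. Roughly equivalent to:
 *   mount -t nfs4 -o minorversion=1 server:/export /mnt/pnfs
 * Server, export, mount point, and address are hypothetical; run as root. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* "minorversion=1" selects NFSv4.1, the protocol revision that
     * carries the pNFS layout operations. The kernel's text-based NFS
     * mount also expects the server address in "addr=" (normally
     * filled in by mount.nfs); 192.0.2.1 is a placeholder. */
    const char *opts = "addr=192.0.2.1,minorversion=1";

    if (mount("server:/export", "/mnt/pnfs", "nfs4", 0, opts) != 0) {
        perror("mount");
        return 1;
    }
    printf("mounted NFSv4.1 export at /mnt/pnfs\n");
    return 0;
}
```

Clients without pNFS support can still mount the same export over plain NFSv4, which is what makes it complementary to GPFS and Lustre rather than a replacement.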

  3. Q2. Parallel NFS – implementation details…
  • What tools are available to help optimize this, from the application layer all the way to the archival stage? What is missing, and who should provide it?
  • We feel that pNFS is more of an access layer than a file system, so the enablement will come from software
  • Missing:
    • Scaling
    • Data management: backups, replication
    • MPI I/O
    • RDMA
  • Windows support is still unclear, and in many ways the Windows community needs this more than the Linux community
  • Who should provide it: the community and the commercial arena
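Since MPI I/O is listed as a missing piece above, here is a minimal sketch of the collective write pattern a pNFS client stack would have to serve well. The file name and block size are made-up; the code assumes any MPI implementation that provides MPI-IO.

```c
/* Minimal MPI-IO sketch: each rank writes its block of doubles to a
 * shared file with a collective call. Compile with mpicc, run with
 * mpirun; "out.dat" and N are placeholder choices. */
#include <mpi.h>
#include <stdlib.h>

#define N 1024  /* doubles per rank (assumption) */

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;
    double *buf = malloc(N * sizeof(double));

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < N; i++)
        buf[i] = rank;  /* dummy payload */

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Collective write: rank r owns the byte range starting at
     * r * N * sizeof(double). Because the call is collective, the MPI
     * library can aggregate per-rank pieces into large contiguous
     * requests, exactly the access pattern a pNFS layout would need
     * to handle efficiently. */
    MPI_Offset off = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, off, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```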

  4. Q3. Physical media interconnects…
  • We are all facing complexity and cost issues. With IB or 10 GbE (40/100 GbE): where should the HPC community focus its resources for all I/O?
  • All I/O over Ethernet, but this is impractical because Ethernet is by definition slower than IB, and always will be, because the industry moves slowly.
  • As long as HPC keeps breaking the mold, we will have to support both
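A rough way to quantify the "slower than IB" point, circa 2009: 10 GbE moves about 1.25 GB/s, while 4x QDR InfiniBand moves about 4 GB/s of data (40 Gb/s signaling less 8b/10b encoding). The sketch below runs that arithmetic for a hypothetical 1 TB checkpoint, ignoring protocol overhead.

```c
/* Back-of-envelope sketch: time to drain a checkpoint over 10 GbE
 * versus 4x QDR InfiniBand, using approximate data rates and ignoring
 * protocol overhead. The checkpoint size is a made-up example. */
#include <stdio.h>

int main(void)
{
    const double checkpoint_gb = 1000.0;      /* 1 TB, hypothetical       */
    const double gbe10_gbps    = 10.0 / 8.0;  /* ~1.25 GB/s line rate     */
    const double ib_qdr_gbps   = 32.0 / 8.0;  /* ~4 GB/s data rate:
                                                 40 Gb/s signaling with
                                                 8b/10b encoding          */

    printf("10 GbE : %6.0f s\n", checkpoint_gb / gbe10_gbps);   /* ~800 s */
    printf("IB QDR : %6.0f s\n", checkpoint_gb / ib_qdr_gbps);  /* ~250 s */
    return 0;
}
```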

  5. Q4. Layer protocols above the interconnects
  • Too many standards: interconnects and media layers are issues today. iSCSI/FCoE/FCoCEE/FCoIB have all been touted as the solution(s). Is this even relevant in the HPC arena? Is fragmentation the only choice?
  • Probably going to be an issue for a long time
  • The DDN SFA platform will eliminate much of the need for SCSI; however, support for NAS protocols will still be there
  • So it is somewhat relevant, but SCSI is somewhat needless (HPC is mostly file I/O, not block I/O), and many vendors (like DDN) will accelerate and simplify.

  6. Q5. I/O issues not yet addressed?
  • What do you consider to be the top 3 main (technical or human) issues in HPC I/O?
  • Efficient utilization of media (spinning or otherwise)
  • Data integrity
  • Quality of HPC file storage (not just file systems, but the alignment of HPC file systems with proper HPC storage devices)
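On the data integrity point, below is a minimal sketch of the end-to-end checksum idea using zlib's crc32(): the writer computes a checksum that travels with the data, the reader recomputes it after read-back, and a mismatch exposes silent corruption anywhere in the I/O path. The block contents here are dummy data; production file systems such as ZFS integrate this per block.

```c
/* End-to-end integrity sketch: checksum a block before write-out and
 * verify it after read-back. Uses zlib's crc32(); link with -lz.
 * The buffer is a stand-in for a real file block. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    unsigned char block[4096];
    memset(block, 0xAB, sizeof(block));   /* dummy file block */

    /* Checksum computed by the writer and stored alongside the data. */
    uLong written = crc32(0L, Z_NULL, 0);
    written = crc32(written, block, sizeof(block));

    /* ... block travels through client, network, server, and disk ... */

    /* Checksum recomputed by the reader; a mismatch flags corruption. */
    uLong readback = crc32(0L, Z_NULL, 0);
    readback = crc32(readback, block, sizeof(block));

    puts(written == readback ? "block intact" : "CORRUPTION detected");
    return 0;
}
```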
