
ENFS I/O structure on Cplant™



Presentation Transcript


  1. Cplant™ I/O: A Quick Discussion and How-To. Ruth Klundt (rklundt@sandia.gov), Lee Ward (lee@sandia.gov), Dept. 9223

  2. ENFS I/O structure on Cplant™ (diagram). Compute partitions: Ross/Ross2 (~1500 nodes), West (256), and Alaska (256), connected over Myrinet to I/O nodes running the ENFS daemon: 12 on Ross/Ross2, 9 on West, 4 on Alaska. The I/O nodes reach an SGI file server over Gigabit Ethernet. The file system is also mounted on the service nodes. SRN: Endeavor, cross-mounted to Tesla; SON: Discovery.

  3. Yod I/O structure on Cplant™ (diagram). Compute nodes reach the service partition over Myrinet; yod on a service node performs the I/O over Ethernet and Gigabit Ethernet to the SGI file server (NFS /home). This path is used when accessing /home, or /enfs/tmp without the enfs: prefix.

  4. ENFS - Extended NFS
  • A scalable I/O solution running on Cplant clusters
  • ENFS daemons run on dedicated I/O nodes
  • Each I/O node is an independent data path within the cluster, enabling parallel file access
  • The ENFS daemons on the I/O nodes collect the external file server(s) into one tree under /enfs and present it to Cplant compute and service nodes
  • I/O rates are significantly enhanced while a unified name space is maintained

  5. Advantages
  • Aggregate transfer rate can be up to 20 times better than yod I/O for jobs with 16 or more processes, depending on traffic
  • Typically faster than yod I/O even for a 1-processor job
  • Avoids serializing I/O through the service node, which can negatively impact interactive response
  • Files are also available on the compile platforms and visualization machines (SGI platforms)
  • More space available, no quotas

  6. Disadvantages
  • No synchronization of operations issued from different compute nodes
  • Overlapping writes are not supported
  • No file locking
  • File size is limited to 2 GB by the current Linux kernel version; this limit also applies to yod I/O
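Because ENFS offers no cross-node synchronization, no locking, and no overlapping writes, a common workaround is to give each process its own output file. A minimal sketch in C (the directory, rank, and message are placeholders; `write_rank_file` is a hypothetical helper, not part of ENFS, and the rank would normally come from the parallel runtime):

```c
#include <stdio.h>

/* Write a message to a per-process file so that no two processes
 * ever touch the same file, sidestepping the unsupported
 * overlapping-write and missing-locking cases listed above.
 * Returns 0 on success, -1 on error. */
int write_rank_file(const char *dir, int rank, const char *msg)
{
    char path[256];

    /* e.g. dir = "enfs:/enfs/tmp/username", rank = 3
     * gives "enfs:/enfs/tmp/username/out.3" */
    snprintf(path, sizeof path, "%s/out.%d", dir, rank);

    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%s\n", msg);
    fclose(f);
    return 0;
}
```

Each process calls the helper with its own rank, so every write lands in a distinct file and no coordination between compute nodes is needed.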

  7. Using ENFS in Parallel Mode
  From a service node, a compile node, or tesla/discovery:
  #> cd /enfs/tmp
  #> mkdir username
  Ordinary Unix commands should work here.
  Within the code, specify the filename as:
  enfs:/enfs/tmp/username/path_to_file/filename
  That’s it…
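The filename convention above can also be assembled at run time. A small sketch (the `enfs_path` helper and its arguments are illustrative, not part of any ENFS API):

```c
#include <stdio.h>

/* Build the ENFS filename described on this slide:
 * "enfs:/enfs/tmp/<username>/<relative path>".
 * Without the "enfs:" prefix, the same path would instead be
 * reached through yod I/O. Returns buf for convenience. */
const char *enfs_path(char *buf, size_t len,
                      const char *username, const char *relpath)
{
    snprintf(buf, len, "enfs:/enfs/tmp/%s/%s", username, relpath);
    return buf;
}
```

The resulting string is passed directly to open, fopen, or a Fortran OPEN statement, as the following slides show.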

  8. Fortran Example

C -----------------------------------------------------
C Very short example of opening a file on /enfs/tmp
C -----------------------------------------------------
      program main
      open(11, file='enfs:/enfs/tmp/rklundt/ftest')
      write(11,*) 'Hello fortran'
      close(11)
      stop
      end

  9. C Example

/*
** Very short example of opening a file on /enfs/tmp
*/
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *file = fopen("enfs:/enfs/tmp/rklundt/ctest", "w");
    fprintf(file, "Hello C\n");
    fclose(file);
    return 0;
}

  10. This website has more extensive Fortran and C example codes: http://www.cs.sandia.gov/cplant/doc/io/ENFS_User_Doc.html
