Non-Blocking Collective I/O Routines

Presentation Transcript


  1. Non-Blocking Collective I/O Routines

  2. Introduction • I/O is the main bottleneck in HPC applications. • To mitigate this bottleneck, several approaches have been taken: • Collective I/O for aggregation • Non-blocking individual I/O • Higher-level libraries (PHDF5, ADIOS, etc.)

  3. Motivation • Routines for non-blocking individual I/O operations already exist (MPI_File_iread(_at), MPI_File_iwrite(_at)). • Non-blocking point-to-point (existing) and collective (to be added) communication operations have demonstrated benefits. • Split collective I/O operations have restrictions and limitations: only one split collective operation may be active per file handle, and no request object is returned, so completion cannot be polled (see the sketch below). • What’s keeping us from adding non-blocking collective I/O operations? • Implementation
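
For context, here is a minimal sketch (in C, with illustrative function and buffer names) of the two existing options the motivation refers to: non-blocking individual I/O, which returns a pollable request, and split collective I/O, whose begin/end pairing allows neither polling nor more than one active operation per file handle.

    #include <mpi.h>

    /* Existing option 1: non-blocking *individual* I/O. The call returns a
       request that can be tested/waited on like any non-blocking operation. */
    void individual_nonblocking(MPI_File fh, double *buf, int count)
    {
        MPI_Request req;
        MPI_File_iwrite_at(fh, 0, buf, count, MPI_DOUBLE, &req);
        /* ... overlap with computation ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    /* Existing option 2: *split* collective I/O. No request is returned, so
       completion cannot be polled with MPI_Test, and the standard allows only
       one active split collective per file handle. */
    void split_collective(MPI_File fh, double *buf, int count)
    {
        MPI_File_write_all_begin(fh, buf, count, MPI_DOUBLE);
        /* ... overlap with computation (no way to poll for completion) ... */
        MPI_File_write_all_end(fh, buf, MPI_STATUS_IGNORE);
    }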

  4. New Routines
  • MPI_File_iread_all(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *req);
  • MPI_File_iwrite_all(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *req);
  • MPI_File_iread_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Request *req);
  • MPI_File_iwrite_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Request *req);
  • As non-blocking operations, these return a request rather than taking a status argument; the status is delivered on completion by MPI_Wait/MPI_Test (usage sketched below).
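
A sketch of the intended usage, mirroring the existing non-blocking individual calls (function and variable names are illustrative; these signatures match the routines later standardized in MPI-3.1):

    #include <mpi.h>

    /* Hypothetical checkpoint step: every process of the file's communicator
       must make the collective call, but each returns immediately with a
       request instead of blocking until the write completes. */
    void checkpoint(MPI_File fh, MPI_Offset my_offset, double *buf, int count)
    {
        MPI_Request req;
        MPI_File_iwrite_at_all(fh, my_offset, buf, count, MPI_DOUBLE, &req);
        /* ... continue computing while the collective write proceeds ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }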

  5. Implementation • Major difference between collective communication and collective I/O operations: each process is allowed to provide a different volume of data to a collective I/O operation, without knowledge of the volumes provided by other processes. • The entire I/O operation therefore can’t be posted to disk directly. • Data has to be read/written in cycles. • Aggregation is required (sketched below).
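
A minimal sketch of cycle-based aggregation, assuming a single aggregator (rank 0) and contiguous, rank-ordered blocks in the file; all names and the window size are illustrative, and production implementations (e.g. ROMIO's two-phase I/O) use multiple aggregators and handle non-contiguous accesses.

    #include <mpi.h>
    #include <stdlib.h>

    #define WINDOW (4 * 1024 * 1024)   /* bytes staged per cycle (assumed) */

    /* Each process contributes `mybytes` bytes to one contiguous, rank-ordered
       region of the file; volumes may differ and are not known globally. */
    static void aggregated_write(MPI_File fh, const char *buf, int mybytes,
                                 MPI_Comm comm)
    {
        int rank, nprocs;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &nprocs);

        /* Step 1: exchange data volumes -- unlike collective communication,
           no process knows the others' contributions in advance. */
        int *counts = malloc(nprocs * sizeof(int));
        MPI_Allgather(&mybytes, 1, MPI_INT, counts, 1, MPI_INT, comm);

        /* File offsets fall out as the prefix sums of the volumes. */
        long *offs = malloc((nprocs + 1) * sizeof(long));
        offs[0] = 0;
        for (int i = 0; i < nprocs; i++) offs[i + 1] = offs[i] + counts[i];
        long total = offs[nprocs];

        int *cnts  = malloc(nprocs * sizeof(int));
        int *disps = malloc(nprocs * sizeof(int));
        char *stage = rank == 0 ? malloc(WINDOW) : NULL;

        /* Step 2: the staging buffer cannot hold everything, so the file
           range is covered in cycles of WINDOW bytes. */
        for (long lo = 0; lo < total; lo += WINDOW) {
            long hi = lo + WINDOW < total ? lo + WINDOW : total;
            for (int i = 0; i < nprocs; i++) {  /* slice of [lo,hi) per rank */
                long s = offs[i]     > lo ? offs[i]     : lo;
                long e = offs[i + 1] < hi ? offs[i + 1] : hi;
                cnts[i]  = e > s ? (int)(e - s) : 0;
                disps[i] = cnts[i] ? (int)(s - lo) : 0;
            }
            /* Phase 1: ship this cycle's slices to the aggregator ... */
            long mystart = offs[rank] > lo ? offs[rank] : lo;
            const char *src = cnts[rank] ? buf + (mystart - offs[rank]) : buf;
            MPI_Gatherv(src, cnts[rank], MPI_BYTE,
                        stage, cnts, disps, MPI_BYTE, 0, comm);
            /* Phase 2: ... which issues one large contiguous write. */
            if (rank == 0)
                MPI_File_write_at(fh, (MPI_Offset)lo, stage, (int)(hi - lo),
                                  MPI_BYTE, MPI_STATUS_IGNORE);
        }
        free(counts); free(offs); free(cnts); free(disps); free(stage);
    }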

  6. Implementation • Need non-blocking collective communication, which fortunately will be available. • Integrate with the progress engine. • Test/Wait on the request like other non-blocking operations (sketched below). • Explicit or implicit progress? • Support for different collective I/O algorithms
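
A sketch of the test/wait pattern under explicit progress, where the I/O cycles only advance inside MPI calls; with a progress thread (implicit progress) the polling loop would be unnecessary. do_some_work() is a hypothetical placeholder returning nonzero while computation remains.

    #include <mpi.h>

    extern int do_some_work(void);   /* hypothetical application kernel */

    void overlap_read(MPI_File fh, double *buf, int count)
    {
        MPI_Request req;
        int done = 0;

        MPI_File_iread_all(fh, buf, count, MPI_DOUBLE, &req);

        /* Under explicit progress, each MPI_Test call also drives the
           collective's internal communication and I/O cycles forward. */
        while (!done && do_some_work())
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);

        /* Complete the operation like any other non-blocking request. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }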

  7. Conclusion • The need for non-blocking collective I/O is fairly high. • The implementation is the hard part. • The performance benefits can be substantial.
