A Workflow-Aware Storage System
Emalayan Vairavanathan, Samer Al-Kiswany, Lauro Beltrão Costa, Zhao Zhang, Daniel S. Katz, Michael Wilde, Matei Ripeanu
Workflow Example - ModFTDock
• Protein docking application: simulates a more complex protein model from two known proteins
• Applications: drug design, protein interaction prediction
Background – ModFTDock on the Argonne BG/P
• 1.2 M docking tasks driven by the workflow runtime engine
• File-based communication through the backend file system (e.g., GPFS, NFS) generates a large IO volume
• Scale: 40,960 compute nodes; an aggregate IO rate of 8 GBps works out to roughly 51 KBps per core (8 GBps across 40,960 nodes × 4 cores)
[Figure: application tasks with node-local storage communicating through the shared backend file system]
Background – Backend Storage Bottleneck
• Storage is one of the main bottlenecks for workflows
[Figure: Montage workflow on 512 BG/P cores with a GPFS backend file system spends about 40% of its time on scheduling and idle waits. Source: Zhao et al.]
Intermediate Storage Approach
• Scale: 40,960 compute nodes
• Application tasks access a shared intermediate storage layer through a POSIX API; data is staged in from, and staged out to, the backend file system (e.g., GPFS, NFS)
[Figure: compute nodes with local storage aggregated into an intermediate storage layer between the workflow runtime engine and the backend file system. Source: Zhao et al., MTAGS 2008]
Research Question How can we improve the storage performance for workflow applications?
IO Patterns in Workflow Applications – identified by Justin Wozniak et al., PDSW'09 – and the matching storage optimizations:
• Pipeline: locality and location-aware scheduling
• Broadcast: replication
• Reduce: collocation and location-aware scheduling
• Scatter and gather: block-level data placement
IO Patterns in ModFTDock
• Stage 1: broadcast pattern
• Stage 2: reduce pattern
• Stage 3: pipeline pattern
• 1.2 M Dock, 12,000 Merge and Score instances in a large run
• Average file size: 100 KB – 75 MB
Research Question: How can we improve the storage performance for workflow applications?
Our Answer: workflow-aware storage, i.e., optimizing the storage system for the IO patterns
• Traditional approach: one size fits all
• Our approach: file- / block-level optimizations
Integrating with the Workflow Runtime Engine
• The workflow runtime engine passes application hints to the storage (e.g., indicating access patterns)
• The workflow-aware storage (shared across compute nodes) exposes storage hints back to the runtime engine (e.g., location information)
• Application tasks access the storage through a POSIX API; data is staged in from and out to the backend file system (e.g., GPFS, NFS)
[Figure: compute nodes running application tasks on a shared workflow-aware storage layer, exchanging hints with the workflow runtime engine]
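To make the bi-directional hint channel concrete, here is a minimal sketch in Python. It assumes the hints travel as POSIX extended attributes on files; the attribute names (`user.wass.pattern`, `user.wass.location`) and the helper functions are illustrative assumptions, not the system's documented interface.

```python
# Sketch: exchanging hints through POSIX extended attributes (Linux only).
# Attribute names are hypothetical; they illustrate the idea of a
# bi-directional hint channel, not the actual WASS interface.
import os

def tag_access_pattern(path: str, pattern: str) -> None:
    """Workflow runtime -> storage: declare how a file will be accessed."""
    os.setxattr(path, b"user.wass.pattern", pattern.encode())

def read_file_location(path: str) -> str:
    """Storage -> workflow runtime: ask which node holds the file's data."""
    return os.getxattr(path, b"user.wass.location").decode()

# Example: mark an intermediate file as part of a pipeline, then let the
# runtime schedule the consumer task on the node that already stores it.
# tag_access_pattern("/wass/stage1/out.dat", "pipeline")
# node = read_file_location("/wass/stage1/out.dat")
```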
Outline • Background • IO Patterns • Workflow-aware storage system: Implementation • Evaluation
Implementation: MosaStore
• Each file is divided into fixed-size chunks
• Chunks are stored on the storage nodes
• The manager maintains a block-map for each file
• A POSIX interface gives applications access to the system
[Figure: MosaStore distributed storage architecture]
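The following sketch illustrates the block-map idea; the names and the 1 MB chunk size are assumptions for illustration, not MosaStore's actual values.

```python
# Illustrative sketch of the manager's block-map: each file is split into
# fixed-size chunks, and the manager records which storage node(s) hold
# each chunk.
CHUNK_SIZE = 1 << 20  # assumed 1 MB chunks

class Manager:
    def __init__(self):
        # file path -> list indexed by chunk number -> storage node IDs
        self.block_maps: dict[str, list[list[str]]] = {}

    def register_chunk(self, path: str, index: int, node: str) -> None:
        chunks = self.block_maps.setdefault(path, [])
        while len(chunks) <= index:
            chunks.append([])
        chunks[index].append(node)

    def locate(self, path: str, offset: int) -> list[str]:
        """Return the storage nodes holding the chunk that covers this offset."""
        return self.block_maps[path][offset // CHUNK_SIZE]
```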
Implementation: Workflow-Aware Storage System
[Figure: workflow-aware storage architecture]
Implementation: Workflow-Aware Storage System
• Data placement optimized for the pipeline pattern: priority to local writes and reads
• Data placement optimized for the reduce pattern: collocating files on a single storage node
• Replication mechanism optimized for the broadcast pattern: parallel replication
• File location exposed to the workflow runtime engine
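A minimal sketch of how these pattern-specific placement policies could look, assuming the access pattern arrives as a hint at write time; the function, parameter names, and `send` callback are illustrative assumptions, not the WASS code.

```python
# Pattern-driven chunk placement mirroring the optimizations above:
# pipeline writes stay on the local node, reduce inputs are collocated on
# a single node, and broadcast files are replicated to all nodes in parallel.
from concurrent.futures import ThreadPoolExecutor

def place_chunk(pattern, chunk, local_node, collocation_node, all_nodes, send):
    """send(node, chunk) writes one chunk replica to the given storage node."""
    if pattern == "pipeline":
        send(local_node, chunk)           # consumer task will be scheduled here
    elif pattern == "reduce":
        send(collocation_node, chunk)     # gather all reduce inputs on one node
    elif pattern == "broadcast":
        with ThreadPoolExecutor() as pool:
            # parallel replication to every storage node
            list(pool.map(lambda node: send(node, chunk), all_nodes))
    else:
        send(local_node, chunk)           # default policy: write locally
```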
Outline • Background • IO Patterns • Workflow-aware storage system: Implementation • Evaluation
Evaluation – Baselines
• Workflow-aware storage compared against MosaStore, NFS, and node-local storage
[Figure: compute nodes with local storage; MosaStore and the workflow-aware storage run as shared intermediate storage, with stage-in/stage-out to the NFS backend file system]
Evaluation – Platform
• Cluster of 20 machines: Intel Xeon 4-core 2.33-GHz CPU, 4 GB RAM, 1-Gbps NIC, and RAID-1 on two 300-GB 7200-rpm SATA disks
• Backend storage: NFS server with an Intel Xeon E5345 8-core 2.33-GHz CPU, 8 GB RAM, 1-Gbps NIC, and six SATA disks in a RAID-5 configuration
• The NFS server is better provisioned than the cluster nodes
Evaluation – Benchmarks and Application
• Synthetic benchmarks for the pipeline, reduce, and broadcast patterns
• Application and workflow runtime engine: ModFTDock
Synthetic Benchmark – Pipeline
• Optimization: locality and location-aware scheduling (see the scheduling sketch below)
[Figure: average runtime for the medium workload]
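A minimal sketch of location-aware scheduling for the pipeline pattern, assuming the runtime can look up a file's location (e.g., via the storage hints sketched earlier); the scheduler function is an illustrative assumption.

```python
# Prefer the idle node that already stores the task's input file, so the
# pipeline read stays local; otherwise fall back to any idle node.
def pick_node(input_location: str, idle_nodes: set[str]) -> str:
    if input_location in idle_nodes:
        return input_location        # data-local execution, no network transfer
    return next(iter(idle_nodes))    # otherwise run on any idle node
```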
Synthetic Benchmark – Reduce
• Optimization: collocation and location-aware scheduling
[Figure: average runtime for the medium workload]
Synthetic Benchmark – Broadcast
• Optimization: replication
[Figure: average runtime for the medium workload]
Not everything is perfect!
[Figure: average runtime for the small workload (pipeline, broadcast, and reduce benchmarks)]
Evaluation – ModFTDock
[Figures: total application time on the three systems; the ModFTDock workflow]
Evaluation – Highlights
• WASS shows considerable performance gains across all benchmarks on the medium and large workloads (up to 18x faster than NFS and up to 2x faster than MosaStore)
• ModFTDock is 20% faster on WASS than on MosaStore, and more than 2x faster than on NFS
• WASS performs worse on the small workloads due to metadata overheads and manager latency
Summary
• Problem: how can we improve the storage performance for workflow applications?
• Approach: a workflow-aware storage system (WASS); moving from backend storage to intermediate storage, with bi-directional communication using hints
• Future work: integrating more applications; large-scale evaluation
THANK YOU
MosaStore: netsyslab.ece.ubc.ca/wiki/index.php/MosaStore
Networked Systems Laboratory: netsyslab.ece.ubc.ca