Explore efficient management of parallel computational tasks in a grid environment, using brokers for task allocation and per-cluster scheduling. Includes simulation results and computational experiments, with suggestions for future work.
Managing Parallel Computational Tasks in a Grid Environment Institute for System Programming Russian Academy of Sciences A.I. Avetisyan, S.S. Gaissarian, D.A. Grushin, N.N. Kuzjurin, and A.V. Shokurov
Task Description
1) minimum pj and maximum Pj number of processors for task j
2) execution time t(p) on p processors, pj < p < Pj
3) deadline dj
4) cost Ij
Example: Pj = pj, Ij = pj * t(pj).
Task Allocation by Brokers: task j
• completion time < dj
• cost to be minimized
• The broker chooses the cluster that minimizes cost while satisfying the deadline
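The broker's rule above can be sketched as follows (a minimal sketch with hypothetical names; it assumes each cluster reports an estimated finish time and cost for the task, which the slides do not specify):

```python
# Broker rule: among clusters that can finish task j before its
# deadline d_j, pick the one with minimal cost I_j.
def choose_cluster(estimates, deadline):
    """estimates: list of (cluster_name, finish_time, cost) tuples.

    Returns the name of the cheapest cluster meeting the deadline,
    or None if no cluster can finish in time.
    """
    feasible = [e for e in estimates if e[1] <= deadline]
    if not feasible:
        return None  # task cannot be allocated ("Not allocated" case)
    return min(feasible, key=lambda e: e[2])[0]
```

With several brokers, each broker applies this rule independently to its own stream of tasks.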
Scheduling Tasks by Clusters
Scheduling parallel tasks is an NP-hard optimization problem.
Approaches: approximation algorithms and heuristics, e.g. the Bottom-Left algorithm.
Geometric Representation of Tasks
With Pj = pj, task j is a rectangle of width pj (number of processors) and height t(pj) (execution time).
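Under this geometric view, scheduling on one cluster is strip packing: the strip width is the cluster's processor count, and the Bottom-Left heuristic places each rectangle at the lowest, then leftmost, feasible position. A minimal sketch (rigid tasks, integer processor columns; a simplified variant of Bottom-Left, not the authors' exact implementation):

```python
# Simplified Bottom-Left strip packing for rigid tasks (P_j = p_j).
# A task is a rectangle: width = processors, height = execution time.
def bottom_left(tasks, num_procs):
    """tasks: list of (width, height); returns list of (x, start_time)."""
    top = [0] * num_procs          # current height of each processor column
    placements = []
    for w, h in tasks:
        # start time at offset x is the tallest column under the task
        best_x, best_t = 0, max(top[0:w])
        for x in range(1, num_procs - w + 1):
            t = max(top[x:x + w])
            if t < best_t:         # strictly lower wins; ties keep leftmost
                best_x, best_t = x, t
        for x in range(best_x, best_x + w):
            top[x] = best_t + h    # raise the skyline under the task
        placements.append((best_x, best_t))
    return placements
```

For example, packing tasks (2,3), (2,2), (4,1) on 4 processors places the second task beside the first and the third on top, giving a makespan of 4.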
Scheduling Parallel Tasks by a Cluster: Notations
• RT — relative throughput; throughput is the number of tasks completed before the deadline
• D — density; I — intensity; T — deadline
• ti — execution time of task i
• T_total — total throughput of all clusters
Simulation
• Six clusters: two with 512 processors, two with 256 processors, two with 128 processors
• Random set of tasks:
• L% large: 256 < pj < 512
• M% medium: 128 < pj < 256
• S% small: 10 < pj < 128
Simulation: Notations
• RT — relative throughput
• Tasks may be sorted by their widths (width = number of processors)
• RT(unsorted) — RT without sorting; RT(sorted) — RT after sorting
Computational Experiments: One Broker

Density   RT (unsorted)   RT (sorted)
0.9696    0.8314          0.8783
0.9232    0.8759          0.9184
0.8542    0.9310          0.9963
0.7857    0.9861          1.0

Table 1. Relative throughput RT (L = 10%)
Computational Experiments: One Broker

Density   RT (unsorted)   RT (sorted)
0.9739    0.7703          0.8769
0.9240    0.7995          0.9308
0.9008    0.8160          0.9471
0.8008    0.9163          1.0
0.7207    1.0             1.0

Table 2. Relative throughput RT (L = 20%)
Computational Experiments: 5 Brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 4500 • Density: 0.9915 • Allocation: 0.7258 • Cluster{512}: 43 • Cluster{512}: 42 • Cluster{256}: 42 • Cluster{256}: 39 • Cluster{128}: 52 • Cluster{128}: 47 • Not allocated: 23
Computational Experiments: 5 Brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 4800 • Density: 0.9296 • Allocation: 0.7712 • Cluster{512}: 43 • Cluster{512}: 38 • Cluster{256}: 39 • Cluster{256}: 44 • Cluster{128}: 56
Computational Experiments: 5 Brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 5000 • Density: 0.8924 • Allocation: 0.8094 • Cluster{512}: 39 • Cluster{512}: 41 • Cluster{256}: 44 • Cluster{256}: 42 • Cluster{128}: 56 • Cluster{128}: 50 • Not allocated: 16
Computational Experiments: 5 Brokers • Small tasks: 134 (46%) • Medium tasks: 94 (32%) • Large tasks: 60 (20%) • Deadline: 5980 • Density: 0.7461 • Allocation: 1.0 • Cluster{512}: 41 • Cluster{512}: 40 • Cluster{256}: 46 • Cluster{256}: 41 • Cluster{128}: 61 • Cluster{128}: 59 • Not allocated: 0
Future Work
• 1) better scheduling (strip-packing) algorithms
• 2) task migration
• 3) variable execution times