Riga Technical University
Efficiency of small-size task calculation in grid clusters using parallel processing
Olgerts Belmanis, Jānis Kūliņš (RTU ETF)
Krakow, CGW'07, 15-16 October
RTU Cluster
• Initially the RTU cluster started with five AMD Opteron 146 servers and 1 TB of storage.
• Eight dual-core AMD Opteron 2210 (Socket M2) servers were added later.
• There are therefore now 9 working nodes with 21 CPU units.
• The total amount of storage is now 1.8 TB.
• The RTU cluster has successfully completed many calculation tasks, including orders from the LHCb virtual organization.
Computing Algorithms
• Serial algorithm:
  • one task – one WN (working node);
  • parts of the task are performed serially;
  • task execution time depends on WN performance only!
• Parallel algorithm:
  • one task – several WNs;
  • parts of the task are performed:
    • consecutively on separate WNs,
    • in parallel on a number of WNs, with the results then summarized (see the MPI sketch below);
  • task execution time depends on:
    • WN performance;
    • network performance;
    • bandwidth of the shared data storage;
    • type of coding.
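As an illustration of the parallel pattern above (independent parts computed on several WNs, the results then summarized), a minimal MPI sketch in C is given below. The partial-sum workload and all names are hypothetical stand-ins, not the code used in the RTU experiments.

/* Minimal sketch of the parallel pattern described above:
 * each rank computes its own part, MPI_Reduce summarizes the results.
 * Hypothetical workload; compile with: mpicc sketch.c -o sketch */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each WN processes its own slice of the task in parallel. */
    long n = 1000000;               /* total work items (assumed) */
    double partial = 0.0;
    for (long i = rank; i < n; i += size)
        partial += 1.0 / (double)(i + 1);   /* stand-in computation */

    /* Results summarizing: collect all partial results on rank 0. */
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f (computed on %d CPUs)\n", total, size);

    MPI_Finalize();
    return 0;
}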
Bottlenecks in a distributed computing system (diagram)
Interconnections between CPU nodes

************************************************************
task 0 is on wn03.grid.etf.rtu.lv  partner = 2
task 1 is on wn10.grid.etf.rtu.lv  partner = 3
task 2 is on wn10.grid.etf.rtu.lv  partner = 0
task 3 is on wn10.grid.etf.rtu.lv  partner = 1
************************************************************
*** Message size: 1000000 ***  best / avg / worst (MB/sec)
task pair: 0 - 2: 103.31 / 102.29 /  53.64
task pair: 1 - 3: 371.33 / 197.63 / 134.05
OVERALL AVERAGES: 237.32 / 149.96 /  93.84

...the use of multicore servers helps to achieve a higher data transmission rate in MPI applications! (Task pair 1-3 runs entirely inside the multicore node wn10 and reaches a far higher rate than pair 0-2, which crosses the network between wn03 and wn10.)
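The measurement above appears to come from a pairwise MPI bandwidth test. A minimal sketch of such a test in C follows; the message size matches the slide, but the repetition count, pairing scheme, and the best/average bookkeeping are assumptions, not the actual benchmark used at RTU.

/* Pairwise ping-pong bandwidth sketch (assumes an even number of ranks). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE 1000000   /* bytes per message, as in the slide */
#define REPS     100       /* repetitions per pair (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int partner = (rank < size / 2) ? rank + size / 2 : rank - size / 2;
    char *buf = calloc(MSG_SIZE, 1);

    double best = 0.0, sum = 0.0;
    for (int i = 0; i < REPS; i++) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        if (rank < size / 2) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, partner, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, partner, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, partner, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, partner, 0, MPI_COMM_WORLD);
        }
        /* one round trip carries 2 * MSG_SIZE bytes */
        double mbs = 2.0 * MSG_SIZE / (MPI_Wtime() - t0) / 1e6;
        if (mbs > best) best = mbs;
        sum += mbs;
    }
    if (rank < size / 2)
        printf("task pair %d - %d: best %.2f / avg %.2f MB/sec\n",
               rank, partner, best, sum / REPS);

    free(buf);
    MPI_Finalize();
    return 0;
}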
Local interconnection rate
Transmission rate as a function of the number of CPUs (chart).
...the number of CPUs used by MPI influences the interconnection rate!
Parallel application execution time (chart)
Parallel speedup determination
• In the experiment, a multiplication of large matrices was performed.
• The test creates traffic of more than some 10 Mb between the WNs and loads the processors.
• The main task of the experiment is to find the beginning of the horizontal part of the speed-up curve (see the timing sketch below).
• The experiment on 1 CPU in the RTU cluster takes 420 seconds.
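A minimal sketch of such a speed-up measurement in C is given below: a row-distributed matrix multiplication timed with MPI_Wtime. The matrix size and the broadcast/gather layout are illustrative assumptions, not the exact test code from the RTU experiment.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024   /* matrix dimension (assumed); must divide by the CPU count */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;   /* block of rows of A handled by this WN */
    double *a = malloc((size_t)rows * N * sizeof *a);
    double *b = malloc((size_t)N * N * sizeof *b);
    double *c = malloc((size_t)rows * N * sizeof *c);
    for (long i = 0; i < (long)rows * N; i++) a[i] = 1.0;
    if (rank == 0)
        for (long i = 0; i < (long)N * N; i++) b[i] = 1.0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    /* distributing B generates the inter-WN traffic the test relies on */
    MPI_Bcast(b, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* each WN computes its own block of rows of C = A x B */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += a[i * N + k] * b[k * N + j];
            c[i * N + j] = s;
        }

    /* collect the result blocks on rank 0 */
    double *full = (rank == 0) ? malloc((size_t)N * N * sizeof *full) : NULL;
    MPI_Gather(c, rows * N, MPI_DOUBLE, full, rows * N, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d CPUs: %.1f s\n", size, MPI_Wtime() - t0);

    free(a); free(b); free(c); free(full);
    MPI_Finalize();
    return 0;
}

Running it with mpirun -np 1, 2, 4, ... and dividing the 1-CPU time (420 s in the RTU experiment) by each measured time traces the speed-up curve whose flattening point the experiment looks for.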
2x WN ≠ H/2: doubling the number of WNs does not halve the execution time.
...according to Amdahl's law, this speed-up corresponds to roughly 20% serial code in the algorithm!
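For reference, Amdahl's law with serial fraction f gives the speed-up on n WNs; the quick check below shows what the 20% figure from the slide implies for two nodes (the 1.67 value is derived from f = 0.2, not read off the measurement):

\[
S(n) = \frac{1}{f + \dfrac{1-f}{n}}, \qquad
S(2)\Big|_{f=0.2} = \frac{1}{0.2 + 0.8/2} = \frac{1}{0.6} \approx 1.67, \qquad
\lim_{n \to \infty} S(n) = \frac{1}{f} = 5.
\]

So with a 20% serial part, doubling the WNs yields only about a 1.67x speed-up instead of 2x, and no number of added WNs can push the speed-up past 5.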
Possible solutions:
• Internal connection improvement:
  • InfiniBand, Myrinet, ... connections between WNs;
  • multicore WN implementation (RTU ETF);
  • abandonment of the NFS network file system.
• Data transfer process optimization:
  • use of a number of parallel flows;
  • replacing the standard TCP protocol with Scalable TCP.
• Parallel algorithm processing optimization (see the non-blocking sketch below):
  • minimize transactions between WNs;
  • reduce the sequential part of the MPI code;
  • optimization of the MPI thread number.
• Optimization of requested resource management.
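One common way to reduce the cost of transactions between WNs, sketched below, is to overlap communication with computation using non-blocking MPI calls. The ring-exchange shape and the buffer sizes are illustrative assumptions, not a fix taken from the talk.

/* Overlap of communication and computation with non-blocking MPI. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 100000   /* doubles exchanged per neighbor (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size, left = (rank + size - 1) % size;
    double *sendbuf = calloc(CHUNK, sizeof *sendbuf);
    double *recvbuf = calloc(CHUNK, sizeof *recvbuf);
    MPI_Request req[2];

    /* start the exchange, then compute while the data is in flight */
    MPI_Irecv(recvbuf, CHUNK, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(sendbuf, CHUNK, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[1]);

    double local = 0.0;
    for (int i = 0; i < CHUNK; i++)      /* work that needs no neighbor data */
        local += sendbuf[i] * sendbuf[i];

    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);  /* neighbor data is now ready */
    for (int i = 0; i < CHUNK; i++)
        local += recvbuf[i];

    printf("rank %d done (local = %f)\n", rank, local);
    free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}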
Thank you for your attention!