
Venkatram Ramanathan

Parallelizing a Co-Clustering Application with a Reduction Based Framework on Multi-Core Clusters


Presentation Transcript


  1. Venkatram Ramanathan. Parallelizing a Co-Clustering Application with a Reduction Based Framework on Multi-Core Clusters.

  2. Outline. Motivation: evolution of multi-core machines and the challenges; Background: MapReduce and FREERIDE; Co-clustering on FREERIDE; Experimental Evaluation; Conclusion.

  3. Motivation - Evolution of Multi-Core Machines. Performance increases now come from a larger number of cores running at lower clock frequencies, which gives cost-effective scalability of performance. HPC environments are therefore clusters of multi-cores.

  4. Challenges. Multi-level parallelism: within the cores of a node, shared memory parallelism (Pthreads, OpenMP); across nodes, distributed memory parallelism (MPI). Achieving both programmability and performance is the major challenge (a hybrid sketch follows below).
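To make the two levels concrete, here is a minimal, self-contained sketch of hybrid parallelization (not FREERIDE itself): OpenMP threads reduce within a node, and MPI combines the per-node results. The data and variable names are purely illustrative.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        std::vector<double> local(1 << 20, 1.0);   // this node's share of the data (illustrative)

        double local_sum = 0.0;
        // Shared-memory level: OpenMP threads reduce within the node.
        #pragma omp parallel for reduction(+ : local_sum)
        for (long i = 0; i < static_cast<long>(local.size()); ++i)
            local_sum += local[i];

        // Distributed-memory level: MPI combines the per-node partial sums.
        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        (void)global_sum;   // the globally reduced value would be used here

        MPI_Finalize();
        return 0;
    }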

  5. Challenges. A possible solution is to use higher-level, restricted APIs, in particular reduction-based APIs such as MapReduce: a single higher-level API can program a whole cluster of multi-cores. Their expressive power is considered limited, however, so the challenge is expressing computations using reduction-based APIs.

  6. Background. MapReduce: Map(in_key, in_value) -> list(out_key, intermediate_value); Reduce(out_key, list(intermediate_value)) -> list(out_value). FREERIDE: users explicitly declare a reduction object and update it; the Map and Reduce steps are combined, so each data element is processed and reduced before the next element is processed.
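A minimal sketch of the generalized-reduction pattern just described. The ReductionObject type and the method names are hypothetical, not the actual FREERIDE API; they only illustrate that each element is processed and folded into the reduction object before the next one, with no list of intermediate (key, value) pairs as in classic MapReduce.

    #include <cstddef>
    #include <vector>

    // Hypothetical reduction object: a buffer of accumulators that the
    // framework would merge across threads and nodes.
    struct ReductionObject {
        std::vector<double> partial;
        explicit ReductionObject(std::size_t n) : partial(n, 0.0) {}
        void accumulate(std::size_t key, double value) { partial[key] += value; }
    };

    // Each element is processed and reduced immediately.
    void local_reduce(const std::vector<double>& chunk, ReductionObject& robj) {
        for (std::size_t i = 0; i < chunk.size(); ++i) {
            std::size_t key = i % robj.partial.size();   // toy key derivation
            robj.accumulate(key, chunk[i]);
        }
    }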

  7. MapReduce and FREERIDE: Comparison

  8. Co-clustering. Co-clustering simultaneously clusters rows into row clusters and columns into column clusters. It maximizes mutual information and uses the Kullback-Leibler divergence as its distance measure.
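For reference, a small sketch of the Kullback-Leibler divergence, D(p || q) = sum_i p_i * log(p_i / q_i); the function and variable names are illustrative.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Terms with p_i == 0 contribute nothing to the divergence.
    double kl_divergence(const std::vector<double>& p, const std::vector<double>& q) {
        double d = 0.0;
        for (std::size_t i = 0; i < p.size(); ++i)
            if (p[i] > 0.0 && q[i] > 0.0)
                d += p[i] * std::log(p[i] / q[i]);
        return d;
    }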

  9. Overview of Co-clustering Algorithm – Preprocessing

  10. Overview of Co-clustering Algorithm – Iterative Procedure

  11. Parallelizing Co-clustering on FREERIDE. The input matrix and its transpose are pre-computed, divided into files, and distributed among the nodes so that each node holds the same amount of row and column data. The cluster-assignment arrays rowCL and colCL are replicated on all nodes, and the initial clustering is assigned in round-robin fashion so that it is consistent across nodes.
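A sketch of the round-robin initial assignment: because the assignment depends only on the global row or column index and the number of clusters, every node computes identical rowCL/colCL arrays without any communication. The function name is hypothetical.

    #include <vector>

    // Item i goes to cluster i mod num_clusters: deterministic and identical on all nodes.
    std::vector<int> round_robin_assign(int num_items, int num_clusters) {
        std::vector<int> cl(num_items);
        for (int i = 0; i < num_items; ++i)
            cl[i] = i % num_clusters;
        return cl;
    }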

  12. Parallelizing the Preprocess Step. In preprocessing, pX and pY are normalized by the total sum, so normalization must wait until all nodes have processed their data. Each node calculates pX and pY from its local data and updates the reduction object with its partial sum and partial pX and pY values. The accumulated partial sums give the total sum, after which pX and pY are normalized; xnorm and ynorm are calculated in a second iteration because they need the total sum.
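A sketch of this preprocessing reduction under assumed data layouts: each node folds its local contributions to pX, pY, and the grand total into a reduction-object-like structure, and normalization by the total sum happens only after accumulation across nodes. The structure and function names are illustrative, not the FREERIDE API.

    #include <cstddef>
    #include <vector>

    // Hypothetical layout: partial row marginals (pX), column marginals (pY),
    // and the running grand total.
    struct PreprocRObj {
        std::vector<double> pX, pY;
        double total = 0.0;
        PreprocRObj(std::size_t n_rows, std::size_t n_cols)
            : pX(n_rows, 0.0), pY(n_cols, 0.0) {}
    };

    // Each node folds its local rows into the reduction object.
    void preprocess_local(const std::vector<std::vector<double>>& local_rows,
                          const std::vector<std::size_t>& global_row_ids,
                          PreprocRObj& robj) {
        for (std::size_t r = 0; r < local_rows.size(); ++r)
            for (std::size_t c = 0; c < local_rows[r].size(); ++c) {
                double v = local_rows[r][c];
                robj.pX[global_row_ids[r]] += v;   // partial row marginal
                robj.pY[c] += v;                   // partial column marginal
                robj.total += v;                   // partial grand sum
            }
    }

    // Only after the framework has accumulated the object across all nodes:
    void normalize(PreprocRObj& robj) {
        for (double& x : robj.pX) x /= robj.total;
        for (double& y : robj.pY) y /= robj.total;
    }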

  13. Parallelizing the Preprocess Step. A compressed matrix of size #rowclusters x #colclusters is calculated from local data: each entry sums the values of a row cluster across a column cluster. The final compressed matrix is the sum of the local compressed matrices; each node updates its local compressed matrix in the reduction object, and accumulation produces the final compressed matrix, from which the cluster centroids are calculated.
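A sketch of the local compressed-matrix computation: entry (i, j) sums the values of this node's data that fall in row cluster i and column cluster j, and the reduction object accumulates these local matrices elementwise into the final one. Names are illustrative.

    #include <cstddef>
    #include <vector>

    std::vector<std::vector<double>> compress_local(
            const std::vector<std::vector<double>>& local_rows,
            const std::vector<int>& row_cl,          // cluster of each local row
            const std::vector<int>& col_cl,          // cluster of each column
            int num_row_clusters, int num_col_clusters) {
        std::vector<std::vector<double>> comp(
            num_row_clusters, std::vector<double>(num_col_clusters, 0.0));
        for (std::size_t r = 0; r < local_rows.size(); ++r)
            for (std::size_t c = 0; c < local_rows[r].size(); ++c)
                comp[row_cl[r]][col_cl[c]] += local_rows[r][c];
        return comp;
    }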

  14. Parallelizing the Iterative Procedure. Row clustering is reassigned, with the new assignment determined by the Kullback-Leibler divergence, and the reduction object is updated. The compressed matrix is then recomputed and the reduction object updated again. Column clustering is handled similarly. Finally, the objective function is evaluated to finalize the iteration before moving on to the next one.
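A sketch of the reassignment rule: each row moves to the row cluster whose centroid distribution minimizes the KL divergence to the row's own distribution (columns are handled symmetrically). kl_divergence refers to the earlier sketch; the other names are illustrative.

    #include <cstddef>
    #include <vector>

    double kl_divergence(const std::vector<double>& p,
                         const std::vector<double>& q);   // defined in the earlier sketch

    // Pick the row cluster whose centroid is closest in KL divergence.
    int reassign_row(const std::vector<double>& row_dist,
                     const std::vector<std::vector<double>>& centroids) {
        int best = 0;
        double best_d = kl_divergence(row_dist, centroids[0]);
        for (std::size_t k = 1; k < centroids.size(); ++k) {
            double d = kl_divergence(row_dist, centroids[k]);
            if (d < best_d) { best_d = d; best = static_cast<int>(k); }
        }
        return best;
    }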

  15. Parallelizing Co-clustering on FREERIDE

  16. Parallelizing Iterative Procedure

  17. Experimental Results. The algorithm is the same for shared memory, distributed memory, and hybrid parallelization. Experiments were conducted on two clusters: env1 - Intel Xeon E5345 quad-core CPUs, 2.33 GHz clock frequency, 6 GB main memory, 8 nodes; env2 - AMD Opteron 8350 CPUs with 8 cores, 16 GB main memory, 4 nodes.

  18. Experimental Results. Two datasets were used: a 1 GB dataset with matrix dimensions 16k x 16k and a 4 GB dataset with matrix dimensions 32k x 32k. Each dataset and its transpose were split into 32 files (row partitioning) and distributed among the nodes. The number of row and column clusters was 4.

  19. Experimental Results

  20. Experimental Results

  21. Experimental Results

  22. Experimental Results. The preprocessing stage is the bottleneck for the smaller dataset because it is not compute intensive: speedup with preprocessing is 12.17, and without preprocessing it is 18.75. For the larger dataset the preprocessing stage scales well, since there is more computation, and the speedup is the same with and without preprocessing: 20.7.

  23. Conclusion. FREERIDE offers the following advantages: no need to load data into custom file systems, a C/C++ based framework, and much better performance (in comparisons on other algorithms). Co-clustering can be viewed as a generalized reduction, and implementing it on FREERIDE achieves a speedup of 21 on 32 cores.
