This presentation discusses HJ-Hadoop, an optimized MapReduce runtime system for multi-core systems. It reviews Hadoop's scalability, reliability, and popularity as a platform for big data analytics, and introduces Habanero Java (HJ), a programming language and runtime developed at Rice University and optimized for multi-core systems.
HJ-Hadoop: An Optimized MapReduce Runtime for Multi-core Systems
Yunming Zhang
Advised by: Prof. Alan Cox and Vivek Sarkar
Rice University
[Figure omitted; slide borrowed from Prof. Geoffrey Fox's "Social Informatics" presentation.]
Big Data Era
• 20 PB/day (2008)
• 100 PB media (2012)
• 120 PB cluster (2011)
• Bing: ~300 PB (2012)
[Slide borrowed from Florin Dinu's presentation.]
MapReduce Runtime
[Figure 1. The MapReduce programming model: a job starts, a wave of map tasks feeds the reduce tasks, and the job ends.]
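To make the model concrete, here is a minimal word-count-style mapper and reducer against Hadoop's standard Java API. This classic example is not from the presentation, just an illustration of the map and reduce phases in Figure 1.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: emit <word, 1> for every word in the input split.
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
        }
    }
}

// Reduce phase: sum the counts emitted for each word.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```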
Hadoop MapReduce
• Open-source implementation of the MapReduce runtime system
• Scalable
• Reliable
• Available
• A popular platform for big data analytics
Habanero Java (HJ)
• Programming language and runtime developed at Rice University
• Optimized for multi-core systems
• Lightweight async tasks (see the sketch below)
• Work-sharing runtime
• Dynamic task parallelism
• http://habanero.rice.edu
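As an illustration of these constructs, here is a minimal async/finish sketch in the style of the HJ library (HJlib). The `edu.rice.hj.Module1` import and the `launchHabaneroApp`/`finish`/`async` names follow HJlib's published examples, but the exact module and signatures may differ between HJ releases, so treat the details as assumptions.

```java
import static edu.rice.hj.Module1.*;

public class HjSketch {
    public static void main(String[] args) {
        launchHabaneroApp(() -> {
            // finish blocks until all transitively spawned async tasks complete.
            finish(() -> {
                for (int i = 0; i < 8; i++) {
                    final int id = i;
                    // async spawns a lightweight task that HJ's
                    // work-sharing runtime schedules onto worker threads.
                    async(() -> System.out.println("task " + id));
                }
            });
            System.out.println("all tasks done");
        });
    }
}
```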
Kmeans
K-means is an application that takes a large number of documents as input and tries to classify them into different topics (cluster centroids).
[Figure: each machine runs several map-task JVMs, each with its own computation and memory. Every map task looks up keys in a full copy of the lookup table (the cluster centroids), so the table is duplicated once per task ("Full Lookup Table 1x" in each JVM). Input slices (slice1 … slice n) are distributed across machines, and <key, val> pairs flow to the reducers.]
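To make the map step concrete, here is a hypothetical sketch of the K-means lookup: each map call assigns one document vector to its nearest cluster centroid. The class and method names are illustrative, not HJ-Hadoop source code.

```java
// Hypothetical K-means assignment step: find the nearest centroid
// for a single document vector (the "look up a key" in the figure).
public class KmeansAssign {
    static int nearestCentroid(double[] doc, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double dist = 0.0;
            for (int d = 0; d < doc.length; d++) {
                double diff = doc[d] - centroids[c][d];
                dist += diff * diff; // squared Euclidean distance
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = c;
            }
        }
        return best; // emitted as the map output key (the topic)
    }
}
```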
Kmeans using Hadoop
[Figure: on each machine, the to-be-classified documents are split into slices (slice1 … slice8, across many machines). Each map task runs in its own JVM with its own computation and memory, and every JVM holds a duplicated in-memory copy of the cluster centroids ("Cluster Centroids 1x" per task), which it uses to assign documents to topics.]
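The duplication in the figure arises because each map-task JVM loads its own private copy of the centroids, typically in the mapper's setup() method. A hypothetical sketch follows; loadCentroids, parse, and nearestCentroid are illustrative stubs, not code from the presentation.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

class KmeansMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
    // One full copy of the centroids lives in EVERY map-task JVM:
    // with 4 map tasks per machine, the table is held in memory 4 times.
    private double[][] centroids;

    @Override
    protected void setup(Context context) throws IOException {
        centroids = loadCentroids(context.getConfiguration());
    }

    @Override
    protected void map(LongWritable key, Text doc, Context context)
            throws IOException, InterruptedException {
        int cluster = nearestCentroid(parse(doc), centroids);
        context.write(new IntWritable(cluster), doc);
    }

    // Stubs standing in for application code.
    private static double[][] loadCentroids(Configuration conf) { return new double[0][0]; }
    private static double[] parse(Text t) { return new double[0]; }
    private static int nearestCentroid(double[] d, double[][] c) { return 0; }
}
```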
Memory Wall
[Plot of throughput vs. in-memory table size. Setup: 8 mappers for 30–80 MB, 4 mappers for 100–150 MB, and 2 mappers for 180–380 MB for sequential Hadoop.]
Memory Wall
• Hadoop's approach to the problem: increase the memory available to each map-task JVM by reducing the number of map tasks assigned to each machine (see the configuration sketch below).
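In Hadoop 1.x this trade is expressed through configuration: raise the per-task heap and lower the per-node map slot count. A sketch using the classic property names (the values are illustrative, and the slot count is normally a cluster-wide setting in mapred-site.xml rather than a per-job one):

```java
import org.apache.hadoop.conf.Configuration;

public class MemoryWallConfig {
    public static Configuration configure() {
        Configuration conf = new Configuration();
        // Give each map-task JVM a larger heap...
        conf.set("mapred.child.java.opts", "-Xmx2048m");
        // ...but run fewer map tasks concurrently on each node.
        conf.setInt("mapred.tasktracker.map.tasks.maximum", 2);
        return conf;
    }
}
```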
Kmeans using Hadoop
[Figure: with Hadoop's remedy, each machine runs fewer map-task JVMs. Each remaining JVM can now hold a 2x larger table ("Topics 2x"), but the in-memory cluster centroids are still duplicated across the JVMs that remain.]
Memory Wall
Decreased throughput due to the reduced number of map tasks per machine.
[Same plot; setup: 8 mappers for 30–80 MB, 4 mappers for 100–150 MB, and 2 mappers for 180–380 MB for sequential Hadoop.]
HJ-Hadoop Approach 1
[Figure: one map-task JVM per machine with dynamic chunking. Worker tasks share a single in-memory copy of the cluster centroids (no duplication), so the JVM can hold a 4x larger table ("Cluster Centroids 4x" / "Topics 4x") while all cores stay busy.]
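A rough plain-Java sketch of the dynamic-chunking idea, as one way to picture it: one JVM holds a single shared centroids table, and worker threads repeatedly pull small input chunks from a shared queue. This only illustrates the concept; HJ-Hadoop itself builds on HJ's lightweight async tasks and work-sharing runtime.

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DynamicChunkingSketch {
    public static void main(String[] args) throws InterruptedException {
        // ONE copy of the table per JVM, shared read-only by all workers.
        final double[][] sharedCentroids = new double[1000][64];
        final Queue<List<double[]>> chunks = new ConcurrentLinkedQueue<>();
        // ... in a real map task, chunks would be filled from the input split ...

        int nWorkers = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            workers[i] = new Thread(() -> {
                List<double[]> chunk;
                // Dynamic chunking: an idle worker grabs the next chunk,
                // balancing load at fine granularity across cores.
                while ((chunk = chunks.poll()) != null) {
                    for (double[] doc : chunk) {
                        process(doc, sharedCentroids);
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
    }

    private static void process(double[] doc, double[][] centroids) {
        // assign doc to its nearest centroid (omitted)
    }
}
```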
Results
• Processes 5x more topics efficiently
• 4x throughput improvement
[Plots comparing HJ-Hadoop with Hadoop; we used 2 mappers for HJ-Hadoop.]
HJ-Hadoop Approach 1: Limitation
[Figure: same dynamic-chunking design as above, but only a single thread reads input per map-task JVM, which limits I/O performance.]
Kmeans using Hadoop
[Figure: in stock Hadoop, four threads read input per machine (one per map-task JVM), but each JVM duplicates the in-memory cluster centroids ("Cluster Centroids 1x" per task).]
HJ-Hadoop Approach 2
[Figure: four threads read input per machine, as in stock Hadoop, but they share a single in-memory copy of the cluster centroids (no duplication).]
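Stock Hadoop ships an analogous facility, org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper, which runs several map threads over one task's input inside a single JVM; a table held in static (JVM-wide) state can then be shared across those threads rather than duplicated across JVMs. A sketch of wiring it up (KmeansMapper is the earlier illustrative mapper; this shows the general idea, not HJ-Hadoop's actual mechanism):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class Approach2Sketch {
    public static Job configure() throws Exception {
        Job job = Job.getInstance(new Configuration(), "kmeans-multithreaded");
        // Run the real mapper on several threads within one map-task JVM,
        // so JVM-wide state can be shared instead of duplicated.
        job.setMapperClass(MultithreadedMapper.class);
        MultithreadedMapper.setMapperClass(job, KmeansMapper.class);
        MultithreadedMapper.setNumberOfThreads(job, 4);
        return job;
    }
}
```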
Trade-offs Between the Two Approaches
• Approach 1
  • Minimum memory overhead
  • Improved CPU utilization with small task granularity
• Approach 2
  • Improved I/O performance
  • Overlap between I/O and computation
• Hybrid approach
  • Improved I/O with small memory overhead
  • Improved CPU utilization
Conclusions
• Our goal is to tackle the memory inefficiency in the execution of MapReduce applications on multi-core systems by integrating a shared-memory parallel model into the Hadoop MapReduce runtime.
• HJ-Hadoop can solve larger problems more efficiently than Hadoop, processing 5x more data at the full throughput of the system.
• HJ-Hadoop delivers 4x the throughput of Hadoop when processing very large in-memory data sets.