Parallel Machine Learning for Large-Scale Natural Graphs
Carlos Guestrin
The GraphLab Team: Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Joe Hellerstein, Jay Gu, Alex Smola
Parallelism is Difficult
• Wide array of different parallel architectures: GPUs, multicore, clusters, clouds, supercomputers
• Different challenges for each architecture
High-level abstractions make things easier.
...a popular answer: Map-Reduce / Hadoop. Build learning algorithms on top of high-level parallel abstractions.
Map-Reduce for Data-Parallel ML
• Excellent for large data-parallel tasks!
Data-Parallel (Map-Reduce): Feature Extraction, Cross Validation, Computing Sufficient Statistics
Graph-Parallel: Label Propagation, Lasso, Belief Propagation, Kernel Methods, Tensor Factorization, PageRank, Neural Networks, Deep Belief Networks
PageRank Example
• Iterate: R[i] = α + (1 − α) Σ_{j links to i} R[j] / L[j]
• Where:
    • α is the random reset probability
    • L[j] is the number of links on page j
[Figure: example link graph with pages 1–6]
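To make the iteration concrete, here is a minimal sketch of one PageRank sweep in plain C++ (the adjacency layout and names such as in_nbrs and out_degree are illustrative, not part of GraphLab):

    // One synchronous sweep of R[i] = alpha + (1 - alpha) * sum_j R[j] / L[j].
    // in_nbrs[i] lists the pages j that link to page i; out_degree[j] is L[j].
    #include <cstddef>
    #include <vector>

    void pagerank_sweep(const std::vector<std::vector<int>>& in_nbrs,
                        const std::vector<int>& out_degree,
                        std::vector<double>& R, double alpha) {
        std::vector<double> R_next(R.size());
        for (std::size_t i = 0; i < R.size(); ++i) {
            double sum = 0.0;
            for (int j : in_nbrs[i])
                sum += R[j] / out_degree[j];          // contribution R[j] / L[j]
            R_next[i] = alpha + (1.0 - alpha) * sum;  // random reset + link mass
        }
        R.swap(R_next);                               // publish the new ranks
    }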
Properties of Graph-Parallel Algorithms
• Dependency graph
• Local updates
• Iterative computation
(e.g., my rank depends on my friends' ranks)
Addressing Graph-Parallel ML
• We need alternatives to Map-Reduce
Data-Parallel (Map-Reduce): Feature Extraction, Cross Validation, Computing Sufficient Statistics
Graph-Parallel (Pregel (Giraph)? Map-Reduce?): SVM, Lasso, Belief Propagation, Kernel Methods, Tensor Factorization, PageRank, Neural Networks, Deep Belief Networks
Pregel (Giraph)
• Bulk Synchronous Parallel model: compute, communicate, barrier
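For reference, a toy sketch of the bulk synchronous execution pattern (not Pregel's actual API): each worker computes over its partition, then everyone waits at a barrier, so every superstep runs at the pace of the slowest worker.

    // Toy BSP runner (C++20): compute, communicate, barrier, repeat.
    // "compute" stands in for a user-supplied per-worker vertex program.
    #include <barrier>
    #include <functional>
    #include <thread>
    #include <vector>

    void run_bsp(int num_workers, int num_supersteps,
                 const std::function<void(int worker, int step)>& compute) {
        std::barrier sync(num_workers);
        std::vector<std::thread> workers;
        for (int w = 0; w < num_workers; ++w)
            workers.emplace_back([&, w] {
                for (int step = 0; step < num_supersteps; ++step) {
                    compute(w, step);          // compute + send messages for this partition
                    sync.arrive_and_wait();    // barrier: no one advances before the slowest job
                }
            });
        for (auto& t : workers) t.join();
    }

The barrier is exactly where the "curse of the slow job" on the next slide comes from: every superstep is gated on the slowest partition.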
Problem: Bulk synchronous computation can be highly inefficient
BSP Systems Problem: Curse of the Slow Job
[Figure: across iterations, every CPU must wait at the barrier for the slowest job before proceeding]
Bulk synchronous computation model provably inefficient for some ML tasks
BSP ML Problem: Data-Parallel Algorithms Can Be Inefficient
[Chart: Bulk Synchronous (Pregel) vs. Asynchronous Splash BP]
Limitations of the bulk synchronous model can lead to provably inefficient parallel algorithms.
But distributed Splash BP was built from scratch… an efficient, parallel implementation was painful, painful, painful to achieve.
The Need for a New Abstraction
• If not Pregel, then what?
Data-Parallel (Map-Reduce): Feature Extraction, Cross Validation, Computing Sufficient Statistics
Graph-Parallel (Pregel (Giraph)?): Belief Propagation, Kernel Methods, SVM, Tensor Factorization, PageRank, Lasso, Neural Networks, Deep Belief Networks
The GraphLab Solution
• Designed specifically for ML needs
    • Express data dependencies
    • Iterative
• Simplifies the design of parallel programs:
    • Abstract away hardware issues
    • Automatic data synchronization
• Addresses multiple hardware architectures
    • Multicore
    • Distributed
    • Cloud computing
    • GPU implementation in progress
The GraphLab Framework
• Graph-based data representation
• Update functions (user computation)
• Scheduler
• Consistency model
Data Graph
A graph with arbitrary data (C++ objects) associated with each vertex and edge.
• Graph: social network
• Vertex data: user profile text, current interest estimates
• Edge data: similarity weights
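For instance, the vertex and edge data for this social-network example might be plain C++ structs along these lines (a sketch of the kind of data attached to the graph, not GraphLab's actual declarations):

    #include <string>
    #include <vector>

    struct VertexData {
        std::string profile_text;          // user profile text
        std::vector<double> interests;     // current interest estimates
    };

    struct EdgeData {
        double similarity;                 // similarity weight between the two users
    };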
Update Functions
An update function is a user-defined program which, when applied to a vertex, transforms the data in the scope of the vertex.

    pagerank(i, scope) {
        // Get neighborhood data (R[i], w_ij, R[j]) from scope
        // Update the vertex data
        R[i] ← α + (1 − α) Σ_{j ∈ Nbrs(i)} w_ij R[j]
        // Reschedule neighbors if needed
        if R[i] changes then reschedule_neighbors_of(i);
    }

Dynamic computation
The Scheduler
The scheduler determines the order that vertices are updated.
[Figure: CPU 1 and CPU 2 pull vertices from a shared scheduler queue; updates may push new vertices back onto it]
The process repeats until the scheduler is empty.
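A minimal sketch of that loop, assuming a FIFO scheduler and an update function that returns the vertices it wants rescheduled (illustrative names, not the GraphLab API; a real scheduler would also deduplicate entries and serve many CPUs in parallel):

    #include <deque>
    #include <functional>
    #include <vector>

    // The update function runs on one vertex and returns the neighbors it
    // wants rescheduled (e.g. because the vertex's value changed).
    using UpdateFn = std::function<std::vector<int>(int vertex)>;

    void run_scheduler(std::deque<int> schedule, const UpdateFn& update) {
        while (!schedule.empty()) {            // repeat until the scheduler is empty
            int v = schedule.front();
            schedule.pop_front();
            for (int u : update(v))            // dynamic computation: reschedule as needed
                schedule.push_back(u);
        }
    }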
The GraphLab Framework
• Graph-based data representation
• Update functions (user computation)
• Scheduler
• Consistency model
Ensuring Race-Free Code • How much can computation overlap?
[Chart: ALS on Netflix data, 8 cores; inconsistent vs. consistent execution]
Even Simple PageRank can be Dangerous

    GraphLab_pagerank(scope) {
        ref sum = scope.center_value   // sum is a reference to the vertex's stored value
        sum = 0
        forall (neighbor in scope.in_neighbors)
            sum = sum + neighbor.value / neighbor.num_out_edges
        sum = ALPHA + (1 - ALPHA) * sum
        ...
Even Simple PageRank can be Dangerous

    GraphLab_pagerank(scope) {
        ref sum = scope.center_value
        sum = 0
        forall (neighbor in scope.in_neighbors)
            sum = sum + neighbor.value / neighbor.num_out_edges
        sum = ALPHA + (1 - ALPHA) * sum
        ...

Read-write race: CPU 1 reads a bad PageRank estimate while CPU 2 is still computing the value.
Race Condition Can Be Very Subtle

Unstable: sum aliases the stored vertex value, so partial sums are visible to concurrent readers.

    GraphLab_pagerank(scope) {
        ref sum = scope.center_value
        sum = 0
        forall (neighbor in scope.in_neighbors)
            sum = sum + neighbor.value / neighbor.num_out_edges
        sum = ALPHA + (1 - ALPHA) * sum
        ...

Stable: the sum is accumulated locally and written back once.

    GraphLab_pagerank(scope) {
        sum = 0
        forall (neighbor in scope.in_neighbors)
            sum = sum + neighbor.value / neighbor.num_out_edges
        sum = ALPHA + (1 - ALPHA) * sum
        scope.center_value = sum
        ...

This was actually encountered in user code.
GraphLab Ensures Sequential Consistency
For each parallel execution, there exists a sequential execution of update functions which produces the same result.
[Figure: a parallel schedule on CPU 1 and CPU 2 is equivalent to some sequential schedule on a single CPU over time]
Consistency Rules
Full consistency: guaranteed sequential consistency for all update functions
Full Consistency
[Figure: update scopes under full consistency]
Obtaining More Parallelism
Full consistency → edge consistency
Edge Consistency
[Figure: CPU 1 and CPU 2 update under edge consistency; the overlapping read is safe]
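One simple way to get edge-consistency-style protection is to lock the center vertex and its neighbors in a canonical order before running the update; a minimal sketch under that assumption, not GraphLab's actual locking protocol:

    #include <algorithm>
    #include <functional>
    #include <mutex>
    #include <vector>

    // Lock the center vertex and its neighbors in ascending ID order (so two
    // overlapping scopes cannot deadlock), run the update, then release.
    // Assumes "nbrs" has no duplicates and does not already contain v.
    void locked_update(int v, std::vector<int> nbrs,
                       std::vector<std::mutex>& vertex_locks,
                       const std::function<void(int)>& update) {
        nbrs.push_back(v);
        std::sort(nbrs.begin(), nbrs.end());           // canonical lock order
        for (int u : nbrs) vertex_locks[u].lock();
        update(v);                                      // reads/writes in scope are race-free
        for (int u : nbrs) vertex_locks[u].unlock();
    }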
The GraphLab Framework
• Graph-based data representation
• Update functions (user computation)
• Scheduler
• Consistency model
Algorithms implemented on GraphLab: Alternating Least Squares, SVD, Splash Sampler, CoEM, Bayesian Tensor Factorization, Lasso, Belief Propagation, PageRank, LDA, SVM, Gibbs Sampling, Dynamic Block Gibbs Sampling, K-Means, Matrix Factorization, Linear Solvers, …many others…
GraphLab vs. Pregel (BSP)
• Multicore PageRank (25M vertices, 355M edges)
[Chart: Pregel (via GraphLab) vs. GraphLab; 51% of vertices updated only once]
CoEM (Rosie Jones, 2005)
Named Entity Recognition task: Is "Dog" an animal? Is "Catalina" a place?
Vertices: 2 million; Edges: 200 million
[Figure: bipartite graph linking noun phrases ("the dog", "Australia", "Catalina Island") to contexts ("<X> ran quickly", "travelled to <X>", "<X> is pleasant")]
CoEM (Rosie Jones, 2005)
[Chart: GraphLab CoEM speedup vs. optimal: 6x fewer CPUs, 15x faster]
CoEM (Rosie Jones, 2005)
[Chart: speedup on small and large problems vs. optimal; 0.3% of Hadoop time]
Video Cosegmentation
Segments that mean the same: Gaussian EM clustering + BP on a 3D grid
Model: 10.5 million nodes, 31 million edges
Video Coseg. Speedups
[Charts: GraphLab speedup vs. ideal scaling]
Cost-Time Tradeoff (video co-segmentation results)
[Chart: a few machines helps a lot; then diminishing returns (more machines, higher cost)]
Netflix Collaborative Filtering
• Alternating Least Squares matrix factorization
• Model: 0.5 million nodes, 99 million edges
[Chart: runtime of MPI, Hadoop, and GraphLab vs. ideal on the bipartite users/movies graph, for D = 20 to D = 100]
Multicore Abstraction Comparison • Netflix Matrix Factorization Dynamic Computation, Faster Convergence
Fault-Tolerance
• Larger problems: increased chance of machine failure
• GraphLab2 introduces two fault-tolerance (checkpointing) mechanisms:
    • Synchronous snapshots
    • Chandy-Lamport asynchronous snapshots
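As a rough illustration of the simpler mechanism, a synchronous snapshot pauses execution at a barrier, writes the current vertex state to stable storage, then resumes; a toy sketch assuming all workers have already stopped (the Chandy-Lamport variant instead snapshots asynchronously, without halting the computation):

    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    // Toy synchronous checkpoint: serialize the vertex values to disk.
    // Assumes every worker has reached the snapshot barrier, so no vertex
    // is being updated while we write. Recovery reloads the last snapshot.
    void synchronous_snapshot(const std::vector<double>& vertex_values,
                              const std::string& path) {
        std::ofstream out(path, std::ios::binary);
        const std::size_t n = vertex_values.size();
        out.write(reinterpret_cast<const char*>(&n), sizeof(n));
        out.write(reinterpret_cast<const char*>(vertex_values.data()),
                  static_cast<std::streamsize>(n * sizeof(double)));
    }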