GraphLab Tutorial
Yucheng Low
GraphLab Team: Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, Jay Gu
Development History
• GraphLab 0.5 (2010): internal experimental code. Insanely templatized.
• First open source release (LGPL before June 2011, APL from June 2011 on)
• GraphLab 1 (2011): nearly everything is templatized. Shared memory: Jan 2012. Distributed: May 2012.
• GraphLab 2 (2012): many things are templatized.
GraphLab 2 Technical Design Goals
• Improved usability
• Decreased compile time
• As good or better performance than GraphLab 1
• Improved distributed scalability
• … other abstraction changes … (come to the talk!)
Development History
• Ever since GraphLab 1.0, all active development has been open source (APL): code.google.com/p/graphlabapi/ (even current experimental code, activated with an --experimental flag on ./configure)
Guaranteed Target Platforms
• Any x86 Linux system with gcc >= 4.2
• Any x86 Mac system with gcc 4.2.1 (OS X 10.5?)
• Other platforms? … We welcome contributors.
Tutorial Outline
• GraphLab in a few slides + PageRank
• Checking out GraphLab v2
• Implementing PageRank in GraphLab v2
• Overview of different GraphLab schedulers
• Preview of distributed GraphLab v2 (may not work in your checkout!)
• Ongoing work… (as time allows)
Warning
• A preview of code still in intensive development!
• Things may or may not work for you!
• The interface may still change!
• GraphLab 2 still has a number of performance regressions relative to GraphLab 1 that we are ironing out.
PageRank Example
• Iterate:
  R[i] = α + (1 - α) · Σ_{j→i} R[j] / L[j]
• Where:
  • α is the random reset probability
  • L[j] is the number of links on page j
[Figure: example link graph with pages 1-6]
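Before the GraphLab versions later in the tutorial, a minimal sequential C++ sketch of one iteration may help pin down the formula; the page struct and in-memory representation are illustrative assumptions, not GraphLab code:

#include <cstddef>
#include <vector>

// Each page stores its current rank and its out-links.
// Assumes every page has at least one out-link.
struct page { double rank; std::vector<int> out_links; };

void pagerank_sweep(std::vector<page>& pages, double alpha) {
  std::vector<double> next(pages.size(), alpha);  // the α reset term
  for (std::size_t j = 0; j < pages.size(); ++j) {
    double share = pages[j].rank / pages[j].out_links.size();  // R[j] / L[j]
    for (int i : pages[j].out_links)
      next[i] += (1 - alpha) * share;             // j's contribution to each i
  }
  for (std::size_t i = 0; i < pages.size(); ++i)
    pages[i].rank = next[i];
}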
The GraphLab Framework: Graph-Based Data Representation | Update Functions (User Computation) | Scheduler | Consistency Model
Data Graph
A graph with arbitrary data (C++ objects) associated with each vertex and edge.
• Graph: link graph
• Vertex data: webpage, webpage features
• Edge data: link weight
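A minimal sketch of declaring and populating such a data graph; the vertex/edge types are illustrative, and while add_vertex/add_edge follow the GraphLab graph interface, treat the exact signatures as assumptions:

#include <graphlab.hpp>

// Arbitrary C++ objects attached to vertices and edges.
struct vertex_data { double rank; };
struct edge_data   { double weight; };

typedef graphlab::graph<vertex_data, edge_data> graph_type;

void build_graph(graph_type& graph) {
  vertex_data v; v.rank = 0.0;
  graph.add_vertex(v);        // page 0
  graph.add_vertex(v);        // page 1
  edge_data e; e.weight = 1.0;
  graph.add_edge(0, 1, e);    // directed link: page 0 -> page 1
}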
The GraphLab Framework: Graph-Based Data Representation | Update Functions (User Computation) | Scheduler | Consistency Model
Update Functions
An update function is a user-defined program which, when applied to a vertex, transforms the data in the scope of the vertex.

pagerank(i, scope) {
  // Get neighborhood data (R[i], W_ji, R[j]) from scope
  // Update the vertex data
  R[i] = α + (1 - α) · Σ_{j→i} W_ji · R[j];
  // Reschedule neighbors if needed
  if R[i] changes then reschedule_neighbors_of(i);
}
Dynamic Schedule: the scheduler holds a queue of vertices to update; each CPU repeatedly pulls the next vertex, applies the update function, and any rescheduled neighbors re-enter the queue. The process repeats until the scheduler is empty.
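In code, the dynamic schedule is driven through the core object: seed the scheduler, then let the engine pop and apply update functors until the queue drains. A minimal sketch, assuming the pagerank functor defined later in this tutorial and treating set_ncpus as the thread-count option:

graphlab::core<graph_type, pagerank> core;
core.set_ncpus(8);              // worker threads pulling from the scheduler (assumed option)
build_graph(core.graph());      // populate the data graph (see earlier sketch)
core.schedule_all(pagerank());  // put every vertex on the schedule
core.start();                   // runs until the scheduler is empty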
Source Code Interjection 1 Graph, update functions, and schedulers
Scope (consistency) selection at the command line: --scope=edge, --scope=vertex
Consistency vs. "throughput" (# "iterations" per second): a false trade-off. The goal of an ML algorithm is to converge, not to maximize raw iteration throughput.
Ensuring Race-Free Code • How much can computation overlap?
The GraphLab Framework: Graph-Based Data Representation | Update Functions (User Computation) | Scheduler | Consistency Model
Importance of Consistency
Fast ML algorithm development cycle: Build → Test → Debug → Tweak Model
Consistency is necessary for the framework to behave predictably and to avoid problems caused by non-determinism; otherwise: is the execution wrong, or is the model wrong?
Full Consistency: guaranteed safety for all update functions.
Full Consistency: parallel updates are only allowed on vertices at least two vertices apart, which reduces opportunities for parallelism.
Obtaining More Parallelism
Full Consistency → Edge Consistency
Not all update functions will modify the entire scope!
• Belief propagation: only uses edge data
• Gibbs sampling: only needs to read adjacent vertices
Edge Consistency: the update function gets exclusive access to its vertex and adjacent edges, so updates on non-adjacent vertices can run in parallel.
Obtaining More Parallelism
Full Consistency → Edge Consistency → Vertex Consistency
• Vertex consistency suits "map" operations, e.g. feature extraction on vertex data
Vertex Consistency: the update function gets exclusive access only to its own vertex data, so all vertices can be updated in parallel.
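The model is chosen at runtime rather than baked into the update function; the --scope flags shown earlier select it from the command line, and a setter on the core does the same programmatically. A sketch; the exact setter name set_scope_type is an assumption:

// Pick the weakest scope that keeps the update function race-free:
core.set_scope_type("full");    // safe for any update function
core.set_scope_type("edge");    // safe if only the vertex + adjacent edges are written
core.set_scope_type("vertex");  // safe for map-style, vertex-local updates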
The GraphLab Framework: Graph-Based Data Representation | Update Functions (User Computation) | Scheduler | Consistency Model
Shared Variables
• Global aggregation through the Sync operation
• A global parallel reduction over the graph data
• Synced variables are recomputed at defined intervals while update functions are running
• Examples: Sync: log-likelihood; Sync: highest PageRank
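To make the mechanism concrete, here is a purely illustrative sketch of a sync as a fold/apply pair over the graph; every name below (fold_rank, apply_total, register_sync, the interval) is hypothetical rather than the actual GraphLab signature:

// Hypothetical sketch of the sync mechanism, not the real API.
double fold_rank(const vertex_data& vdata, double acc) {
  return acc + vdata.rank;   // one step of the parallel reduction
}
void apply_total(double& shared_total, double folded) {
  shared_total = folded;     // commit the reduced value to the shared variable
}
// Recompute the shared variable every 1000 updates while the engine runs:
//   register_sync(TOTAL_RANK, fold_rank, apply_total, /*interval=*/1000);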
Source Code Interjection 2 Shared variables
What can we do with these primitives? …many many things…
Matrix Factorization
• Netflix collaborative filtering
• Alternating least squares matrix factorization
• Model: 0.5 million nodes, 99 million edges (bipartite users-movies graph, rank-d factors)
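For reference, the objective behind alternating least squares here is the standard regularized factorization (a standard formulation, not taken from the slides): users and movies get rank-d factor vectors U_u, V_m fit to the observed ratings R_um:

\min_{U,V} \; \sum_{(u,m)\ \mathrm{observed}} \left( R_{um} - U_u^{\top} V_m \right)^2 \;+\; \lambda \left( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \right)

Fixing V, each U_u has a closed-form least-squares solution (and symmetrically for V_m), so each vertex update solves a small d x d system using only its neighbors' factors, which is exactly the kind of vertex-scope computation GraphLab expresses.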
Netflix Speedup [plot]: speedup for increasing size of the matrix factorization.
Video Co-Segmentation
• Discover "coherent" segment types across a video (extends Batra et al. '10)
• 1. Form super-voxels from the video
• 2. EM & inference in a Markov random field
• Large model: 23 million nodes, 390 million edges
[Speedup plot: GraphLab vs. ideal]
Many More
• Tensor factorization
• Bayesian matrix factorization
• Graphical model inference/learning
• Linear SVM
• EM clustering
• Linear solvers using GaBP
• SVD
• Etc.
GraphLab 2 Abstraction Changes (an overview of a couple of them) (Come to the talk for the rest!)
Exploiting Update Functors (for the greater good)
Exploiting Update Functors (for the greater good)
• Update functors store state
• The scheduler schedules update functor instances
• We can use update functors as a controlled asynchronous message-passing mechanism to communicate between vertices!
Delta-Based Update Functors

struct pagerank : public iupdate_functor<graph, pagerank> {
  double delta;
  pagerank(double d) : delta(d) { }
  void operator+=(const pagerank& other) { delta += other.delta; }
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    vdata.rank += delta;
    if (fabs(delta) > EPSILON) {
      // Spread this vertex's change evenly over its out-neighbors.
      double out_delta = delta * (1 - RESET_PROB) /
                         context.num_out_edges();
      context.schedule_out_neighbors(pagerank(out_delta));
    }
  }
};
// Initial rank:     R[i] = 0
// Initial schedule: pagerank(RESET_PROB) on every vertex
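A hedged driver sketch matching those initial conditions; core setup mirrors the earlier sketch, and RESET_PROB / EPSILON are the constants assumed throughout:

graphlab::core<graph_type, pagerank> core;
build_graph(core.graph());                // ranks start at 0
core.schedule_all(pagerank(RESET_PROB));  // seed every vertex with delta = reset mass
core.start();                             // deltas propagate until all fall below EPSILON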
Asynchronous Message Passing
• Obviously not all computation can be written this way. But when it can, it can be extremely fast.
PageRank in GraphLab

struct pagerank : public iupdate_functor<graph, pagerank> {
  void operator()(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();

    // Parallel "Sum" Gather
    double sum = 0;
    foreach(edge_type edge, context.in_edges())
      sum += context.const_edge_data(edge).weight *
             context.const_vertex_data(edge.source()).rank;

    // Atomic Single-Vertex Apply
    double old_rank = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * sum;

    // Parallel Scatter [Reschedule]
    double residual = fabs(vdata.rank - old_rank) / context.num_out_edges();
    if (residual > EPSILON)
      context.reschedule_out_neighbors(pagerank());
  }
};

Note the three phases marked in the comments: a parallel "sum" gather, an atomic single-vertex apply, and a parallel scatter [reschedule].
Decomposable Update Functors
Decompose update functions into 3 phases:
• Gather (user-defined): a parallel sum over adjacent edges, accumulating Δ = Δ1 + Δ2 + …
• Apply (user-defined): apply the accumulated value Δ to the center vertex
• Scatter (user-defined): update adjacent edges and vertices
Factorized PageRank

struct pagerank : public iupdate_functor<graph, pagerank> {
  double accum = 0, residual = 0;

  void gather(icontext_type& context, const edge_type& edge) {
    accum += context.const_edge_data(edge).weight *
             context.const_vertex_data(edge.source()).rank;
  }
  void merge(const pagerank& other) { accum += other.accum; }

  void apply(icontext_type& context) {
    vertex_data& vdata = context.vertex_data();
    double old_value = vdata.rank;
    vdata.rank = RESET_PROB + (1 - RESET_PROB) * accum;
    residual = fabs(vdata.rank - old_value) / context.num_out_edges();
  }

  void scatter(icontext_type& context, const edge_type& edge) {
    if (residual > EPSILON)
      context.schedule(edge.target(), pagerank());
  }
};
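Splitting the functor this way lets the engine run the per-edge gather calls in parallel and combine partial accumulators with merge; only the single-vertex apply must be atomic, and scatter again runs per edge. The behavior matches the monolithic PageRank functor above, but exposes much more fine-grained parallelism on high-degree vertices.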
Ongoing Work
• Extensions to improve performance on large graphs (see the GraphLab talk later!)
• Better distributed graph representation methods
• Possibly better graph partitioning
• Out-of-core graph storage
• Continually changing graphs
• All-new rewrite of distributed GraphLab (come back in May!)