Compilation of Socialite, Dedalus & WebDAMLog: A Spontaneous Talk
Christoph Koch, EPFL DATA Lab
christoph.koch@epfl.ch, http://data.epfl.ch
Warning: I decided to prepare this talk today at 1:30pm !!!111!!

This talk
• Some of you know the DBToaster project: aggressive incremental view maintenance + compilation.
• Recently, we have made our compiled code much faster.
• New goal: compile languages with recursion:
• Socialite (Lam/Stanford): graph analytics
• dynamic languages: WebDAMLog, Bloom, Dedalus
• But: this work is not ready. I will only sketch the challenges and tell you about our status.
• Goal: a talk to friends on work in progress is probably better than utterly boring them with something unrelated to this workshop.
Talk Overview
• Classic DBToaster
• The new backend: Scala & LMS
• Towards a compiler for WebDAMLog
Use Cases of DBToaster
• Online/real-time analytics
• Real-time data warehousing, network/financial policy monitoring, clickstream analysis, spyware, order-book trading, …
• Stream/CEP abstractions (window semantics, "finite" automata) are often not appropriate.
• Combine streams with historical data: e.g., order books are not windows.
Incremental View Maintenance
• Given a DB update, do not recompute the view from scratch; perform only the work needed to update the view.
• Compute a delta query: delta queries work on less data and are also slightly simpler.
• But: we still need a classical query engine for processing the delta queries.

Example (in Oracle):

  CREATE MATERIALIZED VIEW empdep REFRESH FAST ON COMMIT AS
  SELECT empno, ename, dname
  FROM emp e, dept d
  WHERE e.deptno = d.deptno;

[Roussopoulos 1991; Yan and Larson 1995; Colby et al. 1996; Kotidis and Roussopoulos 2001; Zhou et al. 2007]
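A minimal Scala sketch of what this looks like at the trigger level (names hypothetical; insertions only, deletions omitted): each base-table insertion joins just the new tuple against the other table instead of recomputing the view.

  import scala.collection.mutable

  // IVM sketch for the empdep view above (insertions only; names hypothetical).
  case class Emp(empno: Int, ename: String, deptno: Int)
  case class Dept(deptno: Int, dname: String)

  object EmpDepView {
    val emps  = mutable.Map.empty[Int, Emp]                 // keyed by empno
    val depts = mutable.Map.empty[Int, Dept]                // keyed by deptno
    val view  = mutable.Buffer.empty[(Int, String, String)] // (empno, ename, dname)

    // Delta for +emp: join only the inserted tuple against dept.
    def onInsertEmp(e: Emp): Unit = {
      emps(e.empno) = e
      depts.get(e.deptno).foreach(d => view += ((e.empno, e.ename, d.dname)))
    }

    // Delta for +dept: join only the inserted tuple against emp.
    def onInsertDept(d: Dept): Unit = {
      depts(d.deptno) = d
      for (e <- emps.values if e.deptno == d.deptno)
        view += ((e.empno, e.ename, d.dname))
    }
  }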
Deltas of AGCA queries; closure
• AGCA is closed under taking deltas!
Degrees of deltas
• Higher-order deltas are independent of the database.
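Concretely (as in the DBToaster papers), deltas distribute over sums and obey a product rule, which is why AGCA is closed under them:

  \Delta(Q_1 + Q_2) = \Delta Q_1 + \Delta Q_2
  \Delta(Q_1 \cdot Q_2) = (\Delta Q_1) \cdot Q_2 + Q_1 \cdot (\Delta Q_2) + (\Delta Q_1) \cdot (\Delta Q_2)

Each delta strictly reduces the degree of the query as a polynomial in its relation symbols, so for a query of degree k the k-th delta has degree 0: a constant, independent of the database.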
Compilation Example

  q[] = select sum(LI.P * O.XCH) from Order O, LineItem LI where O.OK = LI.OK;

Delta for an Order insertion +O(xOK, xCK, xD, xXCH): replace Order by the singleton {<xOK, xCK, xD, xXCH>}:

  q[] += select sum(LI.P * O.XCH)
         from {<xOK, xCK, xD, xXCH>} O, LineItem LI
         where O.OK = LI.OK;

Substituting the singleton tuple and pulling the constant xXCH out of the sum:

  q[] += xXCH * (select sum(LI.P) from LineItem LI where xOK = LI.OK);

Materializing the remaining aggregate as a map qO yields a constant-time trigger:

  +O(xOK, xCK, xD, xXCH):  q[] += xXCH * qO[xOK];
  where foreach xOK: qO[xOK] = select sum(LI.P) from LineItem LI where xOK = LI.OK;

qO is in turn maintained incrementally. Its delta for +LI(yOK, yPK, yP) is

  foreach xOK: qO[xOK] += select sum(LI.P) from {<yOK, yPK, yP>} LI where xOK = LI.OK;

which simplifies to (select yP where xOK = yOK), i.e.

  +LI(yOK, yPK, yP):  qO[yOK] += yP;

Symmetrically, the delta of q for +LI(yOK, yPK, yP) is

  q[] += select sum(LI.P * O.XCH) from Order O, {<yOK, yPK, yP>} LI where O.OK = LI.OK;
       = yP * (select sum(O.XCH) from Order O where O.OK = yOK);

Materializing that aggregate as qLI[yOK] gives

  +LI(yOK, yPK, yP):  q[] += yP * qLI[yOK];

and qLI is itself maintained under Order insertions:

  foreach yOK: qLI[yOK] += select sum(O.XCH) from {<xOK, xCK, xD, xXCH>} O where O.OK = yOK;

which simplifies to (select xXCH where xOK = yOK), i.e.

  +O(xOK, xCK, xD, xXCH):  qLI[xOK] += xXCH;

The complete trigger program:

  +O(xOK, xCK, xD, xXCH):  q[] += xXCH * qO[xOK];  qLI[xOK] += xXCH;
  +LI(yOK, yPK, yP):       q[] += yP * qLI[yOK];   qO[yOK] += yP;

The triggers for incrementally maintaining all the maps run in constant time! No non-incremental algorithm can do that!
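For concreteness, a hedged Scala sketch of the resulting trigger program (names and layout are hypothetical, not DBToaster's actual generated code). Every trigger performs a constant number of hashmap operations:

  import scala.collection.mutable

  object ToastedQuery {
    // q[] = select sum(LI.P * O.XCH) from Order O, LineItem LI where O.OK = LI.OK
    var q = 0.0
    val qO  = mutable.Map.empty[Int, Double].withDefaultValue(0.0) // sum(LI.P) per OK
    val qLI = mutable.Map.empty[Int, Double].withDefaultValue(0.0) // sum(O.XCH) per OK

    // +O(xOK, xCK, xD, xXCH)
    def onInsertOrder(xOK: Int, xXCH: Double): Unit = {
      q += xXCH * qO(xOK)
      qLI(xOK) += xXCH
    }

    // +LI(yOK, yPK, yP)
    def onInsertLineItem(yOK: Int, yP: Double): Unit = {
      q += yP * qLI(yOK)
      qO(yOK) += yP
    }
  }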
The DBToaster compiler: status
• We are able to maintain queries with nested aggregates!
• The query optimization problem is interesting: nesting, correlated variables, maintaining their domains. See our VLDB 2012 paper.
• Scala and C code generators (Scala is faster!). Released at www.dbtoaster.org.
• Three parallel runtime systems: using Spark (bad), Storm/Squall (data-flow parallelism), Cumulus (message passing). To be released soon.
DBToaster Experiments
• Both classical view maintenance and compilation of main-memory operators are 1-4 orders of magnitude slower than DBToaster.
Talk Overview
• Classic DBToaster
• The new backend: Scala & LMS
• Towards a compiler for WebDAMLog
DSL Compiler Construction
• We will live in a fast-growing ecosystem of DSLs, quickly developed and revised.
• We need tools for quickly creating programming environments: compilers, VMs, …
• Lightweight Modular Staging (LMS)
Lightweight Modular Staging (LMS)
• A compiler as a library; easy to extend.
• Leverages Scala virtualization.
• Staging: build (JIT) compilers, interpreters, static analysis systems, etc.
[Rompf, Odersky. Lightweight Modular Staging, CACM]
In the LMS Library
• Duplicate expression elimination (see the toy sketch below)
• Dead code elimination
• Loop fusion
• Loop and recursion unrolling
• Code generators
• All this for general Scala (!)
• Example: D/FFT: turn a recursive def into a circuit automatically.

Delite (Stanford PPL/EPFL), built on top of LMS:
• further code generators (C/CUDA, …)
• example DSLs (OptiML, OptiQL, OptiGraph, Green-Marl, …)
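The real LMS API does not fit on a slide; the following self-contained toy (NOT the actual LMS interface) illustrates the staging idea behind it: user code composes IR nodes instead of values, and hash-consing the nodes yields duplicate-expression elimination for free.

  // Toy staging sketch (not the real LMS API): operations build an IR,
  // and hash-consing identical nodes gives duplicate-expression elimination.
  object ToyStaging {
    sealed trait Exp
    case class Const(v: Int) extends Exp
    case class Sym(id: Int) extends Exp
    case class Plus(a: Exp, b: Exp) extends Exp
    case class Times(a: Exp, b: Exp) extends Exp

    private val defs  = scala.collection.mutable.LinkedHashMap.empty[Exp, Sym]
    private var fresh = 0

    // Hash-cons a node: structurally identical subexpressions share one symbol.
    def emit(e: Exp): Exp = defs.getOrElseUpdate(e, { fresh += 1; Sym(fresh) })

    implicit class Ops(a: Exp) {
      def +(b: Exp): Exp = emit(Plus(a, b))
      def *(b: Exp): Exp = emit(Times(a, b))
    }

    def main(args: Array[String]): Unit = {
      val x: Exp = Const(3)
      val sq = x * x      // staged: builds IR, evaluates nothing
      val y  = sq + sq    // both occurrences of sq share one IR node
      defs.foreach { case (rhs, sym) => println(s"$sym = $rhs") }
    }
  }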
Current work on LMS
• Automatic lifting of collection types, functions, classes
• Fusion in the presence of side effects
• Global, cost-based optimization
• Lightweight embeddings (YinYang)
• Joint work of Oracle Labs, Odersky's lab (Scala), and us: PhD students V. Jovanovich, A. Shaikhha, and M. Dashti; staff researchers A. Noetzli and T. Coppey.
javac, Graal, and LMS
• Oracle is working on a completely new compiler to replace the current javac. Codename: Graal.
• Graal is based on LMS: it will be able to stage compilation in various ways.
• Typically it will JIT to native code and won't need a VM.
• Adaptivity as in the HotSpot VM.
DBToaster backend in LMS (preliminary)
• Matches or beats our handwritten backend.
• One order of magnitude less code (2 kLOC vs. 15 kLOC for the handwritten backend).
We observed that Scala is faster than C++. Why?

Hashmap costs:

  Operation   JVM/Oracle Java 7   Boost/C++
  Add         0.215               0.309
  Clear       0.001               0.400
  Aggr        0.021               0.090
  Update      0.072               0.107
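These numbers come from microbenchmarks. As a sketch of the kind of harness involved (workload size and operations are hypothetical, and JIT warm-up makes any single run unreliable):

  import scala.collection.mutable

  // Rough hashmap microbenchmark sketch (hypothetical workload).
  object HashMapBench {
    def time[A](label: String)(body: => A): A = {
      val t0 = System.nanoTime()
      val r  = body
      println(f"$label%-7s ${(System.nanoTime() - t0) / 1e9}%.3f s")
      r
    }

    def main(args: Array[String]): Unit = {
      val n = 1000000
      val m = mutable.HashMap.empty[Int, Double]
      time("add")    { var i = 0; while (i < n) { m(i) = i.toDouble; i += 1 } }
      time("aggr")   { m.values.sum }
      time("update") { var i = 0; while (i < n) { m(i) = m(i) + 1.0; i += 1 } }
      time("clear")  { m.clear() }
    }
  }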
Talk Overview
• Classic DBToaster
• The new backend: Scala & LMS
• Towards a compiler for WebDAMLog
Domain Maintenance in DBToaster: a Datalog problem
• Domain maintenance: deciding what to have and keep in the domains of the hashmaps.
• Why it matters: window queries, where the window of relevant values moves, e.g. the sliding k-day median price of a stock.
• IVM is about memoization, but sometimes it is important to remove data because it is costly to keep maintaining it.
• "View caches"
Domain Maintenance in DBToaster: a Datalog problem
• What are the domains of these maps? In general there are binding patterns on the domains.
• Example: Q[x,y] <- R(x), (S(y), x<y). Map m[x^b, y^f] = S(y), x<y.
• The programs have been recursively decomposed by the DBToaster rewriting algorithm.
• Determining the domains of the maps is essentially magic sets/QSQ, where we look for the supplementary relations! (A sketch follows.)
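A hedged Scala sketch of the binding-pattern situation (all names hypothetical): m is only materialized for x-values actually supplied by R, so maintaining m's domain amounts to maintaining the supplementary relation.

  import scala.collection.mutable

  // Domain maintenance sketch for m[x^b, y^f] = S(y), x < y (names hypothetical).
  object DomainMaintenance {
    val R = mutable.Set.empty[Int]    // supplies the bindings for x
    val S = mutable.Set.empty[Int]
    // m's domain = the x-values coming from R: the supplementary relation.
    val m = mutable.Map.empty[Int, mutable.Set[Int]]

    def onInsertR(x: Int): Unit = {   // new binding: extend m's domain
      R += x
      m.getOrElseUpdate(x, S.filter(_ > x))
    }
    def onDeleteR(x: Int): Unit = {   // binding gone: shrink m's domain
      R -= x; m.remove(x)
    }
    def onInsertS(y: Int): Unit = {   // maintain every materialized slice
      S += y
      for ((x, ys) <- m if x < y) ys += y
    }
  }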
IVM and datalog
• We can rewrite datalog programs using our techniques rule by rule. No new ideas needed.
• Seminaive evaluation is classical IVM on datalog rules (sketch below). Our approach does everything seminaive evaluation does, and (much) more.
• Can we do better? How?
• If we want to support deletion, we get to bag semantics. But we don't know how to deal with bags and recursion.
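To make the "seminaive = IVM" point concrete, a minimal Scala sketch of seminaive transitive closure (names hypothetical): each round joins only the delta of newly derived facts with the edge relation, which is precisely incremental maintenance of the rule's view.

  // Seminaive evaluation sketch: T(x,z) <- T(x,y), E(y,z).
  object Seminaive {
    def tc(edges: Set[(Int, Int)]): Set[(Int, Int)] = {
      var total = edges
      var delta = edges
      while (delta.nonEmpty) {
        // Join only the newly derived facts (the delta) with E.
        val derived = for ((x, y) <- delta; (y2, z) <- edges if y == y2) yield (x, z)
        delta = derived -- total   // keep only genuinely new facts
        total = total ++ delta
      }
      total
    }
    def main(args: Array[String]): Unit =
      println(tc(Set(1 -> 2, 2 -> 3, 3 -> 4)))
  }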
IVM and datalog
• Joint work with V. Kuncak: synthesis of incremental programs.
• Search for programs achieving our goal, rather than syntactically rewriting them.
• Already works for insertions into various (balanced) tree data structures (from invariants!).
• Efficient runtime verification (in Java, for instance); incrementally maintaining assertions.
• This is getting hot in PL.
Notions of recursion
• Datalog recursion: additional querying power, e.g. computing graph queries (Graphlog, Socialite).
• Modeling dynamics: the database in each iteration corresponds to a different time point/state. Bloom and WebDAMLog capture distributed computation and network protocols this way.
• Triggers that call triggers? Cf. active databases.
Challenges
• Triggers that call triggers: which notion of consistency do we need if we want to capture both classical recursion (for its expressiveness) and distributed dynamic computations?
• Aggregates (bag semantics) and fixpoints: you need bag semantics and aggregates for interesting analytics, but then we generally don't have fixpoints (see e.g. Val's work), and it is too hard to tell when we are not in trouble.
Challenges
• We can think of it as a relational programming language and leave all worries about consistency and termination to the programmer.
• It would be important to make guarantees, though.
• Argh, I am running out of time making these slides!
Use case: Algorithmic trading
• High-frequency algorithmic trading is getting huge: in Q1/2009, 73% of all US equity trades; ~15% annual profit margin across the industry!
• Algo execution practices: extremely low latencies (stock exchanges make 3 ms guarantees); algos run in data centers close to the exchanges.
• Algo development practices: algos and the underlying models are constantly monitored, improved, and experimented with in large simulations. Analytics have to be performed at very high data rates, beyond DBMSs or stream engines (expressiveness!). The fastest algo makes the money.
• Currently, algos are written at a low level (C). Constant backtesting and improvement means a coding bottleneck: many programmers needed, a software crisis.
Parameterized GMRs
• A parameterized GMR is a function that takes an environment to a GMR.
• Can be used to model binding passing in query calculi: conditions etc. do not have finite support / are not finite relations, but valuations of variables can be passed from the left.
• Still a ring!
• + and * strictly generalize those of GMRs (generalized multiset relations), and thus union and join, respectively.
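A toy Scala encoding of the (unparameterized) GMR ring, under the assumption that GMRs map tuples to integer multiplicities as in the DBToaster papers (all names here are hypothetical): + is pointwise addition, and * multiplies the multiplicities of tuples that agree on their shared columns, generalizing union and natural join. A parameterized GMR would then simply be a function Env => Rel, with + and * lifted pointwise, which is again a ring.

  // Toy GMR sketch: tuples are maps from column names to values;
  // a GMR maps tuples to Int multiplicities.
  object GMR {
    type Tuple = Map[String, Any]
    type Rel   = Map[Tuple, Int]

    // '+': pointwise sum of multiplicities (generalizes union).
    def plus(a: Rel, b: Rel): Rel =
      (a.keySet ++ b.keySet)
        .map(t => t -> (a.getOrElse(t, 0) + b.getOrElse(t, 0)))
        .filter(_._2 != 0).toMap

    // '*': combine tuples that agree on shared columns and multiply
    // their multiplicities (generalizes natural join).
    def times(a: Rel, b: Rel): Rel =
      (for {
        (t1, m1) <- a.toSeq
        (t2, m2) <- b.toSeq
        shared = t1.keySet intersect t2.keySet
        if shared.forall(k => t1(k) == t2(k))
      } yield (t1 ++ t2, m1 * m2))
        .groupMapReduce(_._1)(_._2)(_ + _).filter(_._2 != 0)

    def main(args: Array[String]): Unit = {
      val r: Rel = Map(Map("x" -> 1) -> 1, Map("x" -> 2) -> 1)
      val s: Rel = Map(Map("x" -> 1, "y" -> "a") -> 2)
      println(times(r, s)) // Map(Map(x -> 1, y -> a) -> 2): a natural join
    }
  }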