Christopher Ré Joint work with the Hazy Team http://www.cs.wisc.edu/hazy
Two Trends that Drive Hazy
1. Data in an unprecedented number of formats.
2. An arms race for deeper understanding of data.
Needed: automated statistical analysis AND data management (RDBMS), together. Hazy integrates statistical techniques into an RDBMS.
Hazy Hypothesis: A handful of statistical operators captures a diverse set of applications.
Outline
• Three Application Areas for Hazy
• Drill Down: One Text Application
• Maintaining the Output of Classification
• Hazy Heads to the South Pole
Data is constantly generated on the Web, Twitter, blogs, and Facebook. Extract and classify sentiment about products, ad campaigns, and customer-facing entities. Build tools to lower the cost of analysis: statistical tools for extraction (e.g., CRFs) and classification (e.g., SVMs). Performance and maintenance are data management challenges (DMCs).
A physicist interpolates sensor readings and uses regression to understand their data more deeply. DMC: Transform and maintain large volumes of sensor data and derived analysis. The models that map sequences of words to entities are similar to the models that map sensor readings to meaning.
OCR and Speech
A social scientist wants to extract the frequency of synonyms of English words in 18th-century texts. Getting the text is challenging! (There is a statistical model of transcription errors.) The output of speech and OCR models is similar to the output of text-labeling models. DMC: Process large volumes of statistical data.
Takeaway and Implications
Statistical processing on large data enables a wide variety of new applications.
Hazy Hypothesis: A handful of statistical operators captures a diverse set of applications.
Key challenges are maintenance and performance (data management challenges).
Outline
• Three Application Areas for Hazy
• Drill Down: One Text Application
• Maintaining the Output of Classification
• Hazy Heads to the South Pole
The workflow requires several steps. Simplified workflow:
1. Paper references are crawled from the Web.
2. Entities (Papers, Authors, …) are extracted and deduplicated.
3. Each paper is classified by subject area.
4. The DB is queried to render a Web page.
We still use the RDBMS for rendering, reports, etc.
Hazy Evidence: We know names for these operators.
Statistical Computations Specified Declaratively
Tuples in. Tuples out. Hazy handles the statistical and traditional details.
A declarative SQL-like program, handed to Hazy/RDBMS:
CREATE CLASSIFICATION VIEW V(id, label)
ENTITIES FROM Papers
EXAMPLES FROM Example
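A minimal sketch of those semantics in Python (illustrative, not Hazy's implementation): the EXAMPLES relation supplies training tuples, the ENTITIES relation supplies the tuples to label, and the view is just the predicted (id, label) pairs. scikit-learn's LinearSVC is a stand-in for Hazy's internal solver, and the function name is hypothetical.

import numpy as np
from sklearn.svm import LinearSVC

def classification_view(entities, examples):
    # entities: list of (id, vec); examples: list of (id, vec, label).
    # Train on the EXAMPLES relation...
    X = np.array([vec for _, vec, _ in examples])
    y = [label for _, _, label in examples]
    model = LinearSVC().fit(X, y)
    # ...then emit the view V(id, label): tuples in, tuples out.
    ids = [eid for eid, _ in entities]
    preds = model.predict(np.array([vec for _, vec in entities]))
    return list(zip(ids, preds))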
Hazy Helps with Corrections
Paper 10 is not about query optimization; it is about Information Extraction.
The declarative SQL-like program, as before:
CREATE CLASSIFICATION VIEW V(id, label)
ENTITIES FROM Papers
EXAMPLES FROM Example
Easy as an INSERT: the update fixes that entry, and perhaps more, automatically.
Design Goals: Hazy should…
• … look like SQL as much as possible.
  • Ideal: the application is unaware of statistical techniques.
  • Build on solutions to classical data management problems.
• … automate routine tasks.
  • E.g., updates propagate through the system.
  • Eventually, order operators for performance.
Building Like Mad (Cows)
In PostgreSQL, we've built:
• Classification: SVMs, least squares
• Deduplication: synonym detection and coreference
• Factor Analysis: low-rank factorization for Netflix
• Transducers for Sequences: text, audio, & OCR
• Sophisticated Reasoning: Markov Logic Networks
The developer declares the task to Hazy using SQL-like views (model-based views, Deshpande et al.):
CREATE CLASSIFICATION VIEW V(id, label)
ENTITIES FROM Paper(id, vec)
EXAMPLES FROM EX_Paper(id, vec, label)
USING SVM_L2
Reasoning by Analogy…
Hazy Hypothesis: A handful of statistical operators captures a diverse set of applications.
Outline
• Three Application Areas for Hazy
• Drill Down: One Text Application
• Maintaining the Output of Classification
• Hazy Heads to the South Pole
Maintenance: What about corrections?
Paper 10 is not about query optimization; it is about Information Extraction.
The declarative SQL-like program (CREATE CLASSIFICATION VIEW … ENTITIES FROM Papers … EXAMPLES FROM Ex…) stands between the application and Hazy/RDBMS.
Easy as an INSERT: the update fixes that entry and others automatically! How does Hazy do this?
Background: Linear Models
Label papers as DB Papers or Non-DB Papers:
1. Map each paper to a point in R^d.
2. Classify via a separating plane (with normal vector w).
[Figure: papers 1-5 plotted in the plane, DB Papers and Non-DB Papers separated by the hyperplane defined by w.]
Experts: logistic regression and SVMs, with or without kernels. We leverage the fact that they all perform inference the same way.
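As a concrete sketch (illustrative, not Hazy code; the bias term b and the function name are assumptions): after the mapping to R^d, all of these models answer queries with the sign of a single dot product.

import numpy as np

def classify(w, b, x):
    # Linear-model inference shared by SVMs and logistic regression:
    # the label depends only on which side of the hyperplane w.x = b
    # the feature vector x falls.
    return "DB Paper" if np.dot(w, x) - b >= 0 else "Non-DB Paper"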
What happens on an update?
Paper 3 is not a Database Paper!
[Figure: the same plot; papers 1-5 with the separating vector w.]
Oh no! The model (w) changes in wild and crazy ways! … Well, not really.
Intuition: The Model Changes Only Slightly
Paper 3 is not a Database Paper!
[Figure: the plot again, with the old model w and the updated model w' nearly aligned.]
That is, ||w - w'|| is small. It would be a waste of effort to relabel 1, 4, and 5. Can we just focus on 2 and 3?
Hazy-Classify
Cluster the data by how likely each entity is to change classes.
[Figure: the plot with water marks hw and lw around the hyperplane; only the region between them must be relabeled.]
Prop: There exist functions hw and lw of ||w - w'|| such that an entity pid can change labels only if pid.eps lies in [lw, hw].
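A minimal sketch of the proposition, under the simplifying assumption that feature vectors are normalized to ||x|| <= 1 (then Cauchy-Schwarz gives |w.x - w'.x| <= ||w - w'||, so the water marks collapse to lw = -||w - w'|| and hw = +||w - w'||; Hazy's actual water marks are computed per the proposition above):

import numpy as np

def suspects(w_cluster, w_new, eps):
    # eps caches each entity's margin w_cluster . x from the last
    # reclustering. Only entities whose cached margin lies inside
    # [lw, hw] can be on the other side of the new hyperplane;
    # every other entity keeps its label with no work at all.
    delta = float(np.linalg.norm(np.asarray(w_new) - np.asarray(w_cluster)))
    lw, hw = -delta, delta
    return [eid for eid, e in eps.items() if lw <= e <= hw]

Only the returned suspects need their labels recomputed against the new model.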
But the clustering may get out of date! We need to recluster periodically; how do we decide when?
Setup: Measure the time to recluster; call it C. Set a timer T = 0 (intuition: T is the wasted time).
On each update: run the algorithm from the previous slide and add its time to T. If T > C, then recluster and set T = 0.
Two claims that can be made precise (theorems):
1. The algorithm is within a factor of 2 of the optimal running time on any instance.
2. It is essentially the optimal deterministic strategy.
On DBLife, Citeseer, and ML datasets, Hazy is 10x+ faster than a scan.
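A minimal sketch of that timer strategy (a ski-rental-style argument gives the factor-of-2 claim; the callback names are illustrative):

import time

def process_updates(updates, incremental_step, recluster, C):
    # Run the incremental algorithm until its accumulated cost T
    # exceeds the measured reclustering cost C, then pay C once
    # to recluster and reset the timer.
    T = 0.0  # "wasted" incremental time since the last recluster
    for u in updates:
        start = time.perf_counter()
        incremental_step(u)          # algorithm from the previous slide
        T += time.perf_counter() - start
        if T > C:
            recluster()
            T = 0.0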
Other Features of Hazy-Classify
• Hazy has a main-memory (MM) engine.
• Hazy-Classify supports eager and lazy materialization strategies, and improves either by an order of magnitude.
• An index keeps in memory only the elements likely to change classes. This allows 1% of the data in memory with MM performance and enables active learning on a 100GB+ corpus.
IceCube Digital Optical Module (DOM)
Workflow of IceCube
In Ice: Detection occurs.
At Pole: An algorithm says "Interesting!"
Via satellite: Interesting DOM readings are sent.
In Madison: Lots of data analysis.
A Key Phase: Detecting Direction
Here, speed ≈ quality. The mathematical structure used to help track neutrinos is similar to that used for labeling text, tracking, and OCR!
Framework: Regression Problems
Examples:
1. Neutrino tracking: y_i is a sensor reading.
2. CRFs: y_i is a (token, label) pair.
3. Netflix: y_i is a (user, movie, rating) triple.
Other tools also fit this model, e.g., SVMs.
Claim: This is a general data analysis technique that is amenable to RDBMS processing.
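The slide's objective did not survive extraction; reconstructing it from the gradient discussion on the next slides (a hedged reconstruction, not verbatim from the deck), all three examples minimize a sum of per-item losses:

\min_{x} F(x), \qquad F(x) = \sum_{i=1}^{N} f(x;\, y_i)

where each term f(x; y_i) depends on a single data item y_i.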
Background: Gradient Methods
Gradient methods are iterative:
1. Take the current x.
2. Differentiate F with respect to x.
3. Move in the opposite direction.
[Figure: the curve F(x).]
Incremental Gradient Methods
Incremental gradient methods are iterative:
1. Take the current x.
2. Approximate the derivative of F with respect to x.
3. Move in the opposite direction.
We can use a single data item to approximate the gradient.
Incremental Gradient Methods (iGMs)
Why use iGMs? Provably, iGMs converge to an optimum for many problems, but the real reason is: iGMs are fast.
Technical connection: one iGM step processes ≈ a single tuple, so RDBMS processing techniques apply. It is no more complicated than a COUNT.
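A minimal sketch of one epoch of an iGM for least squares (illustrative; the data layout and step size alpha are assumptions, and Hazy generates the analogous GRAD logic automatically, as the SQL on the next slide shows). Note that each step reads a single (a_i, b_i) tuple:

import numpy as np

def igm_epoch(x, data, alpha):
    # One pass of incremental gradient for F(x) = sum_i (a_i . x - b_i)^2 / 2.
    np.random.shuffle(data)                  # the ORDER BY RANDOM() step
    for a_i, b_i in data:
        grad = (np.dot(a_i, x) - b_i) * a_i  # one tuple approximates dF/dx
        x = x - alpha * grad                 # move in the opposite direction
    return x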
Hazy’s SQL version of Incremental Gradient -- (1) Curry (cache) the model, x SELECT cache_model($mid, $x); -- (2) Shuffle SELECT * INTO Shuffled FROM Data ORDER BY RANDOM(); -- (3) Execute the Gradient Steps SELECT GRAD($mid, y) FROM Shuffled -- (4) Write the model back to the model instance table UPDATE model_instance SET model=retrieve_model($mid) WHERE mid=$mid; Input: Data(id,y), GRAD Code generated automatically. Hazy Params: $mid and $model. Hazy does more optimization. This is a basic block.
More Applications Than a Cube of Ice!
Recommending movies on Netflix (with Prof. Ben Recht):
• Experts: low-rank factorization.
• Old state of the art: 4+ hours. In the RDBMS: 40 minutes. Hazy-MM: 2 minutes. Same quality.
Hazy-MM: we compile plans using g++ with a main-memory engine (useful in IceCube).
Buzzwords: a novel parallel execution strategy for incremental gradient methods to optimize convex relaxations with constraints or proximal point operators.
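A small sketch of the Netflix-style step (the textbook low-rank update, not Hazy's parallel execution strategy; the names, step size alpha, and regularizer lam are assumptions): each (user, movie, rating) tuple touches only one row of each factor.

import numpy as np

def mf_step(L, R, u, m, r, alpha, lam):
    # Model rating(u, m) by L[u] . R[m]; one rating tuple updates
    # just the two factor rows it reads.
    err = np.dot(L[u], R[m]) - r
    L[u], R[m] = (L[u] - alpha * (err * R[m] + lam * L[u]),
                  R[m] - alpha * (err * L[u] + lam * R[m]))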
A Common Backbone
All of Hazy's operators can have a weight-learning or regression phase.
Futuring (I learned this term from my wife)
• A main-memory engine for use in IceCube.
• We are releasing our algorithms to Mahout.
• We have corporate partners who have given us access to their data.
Incomplete Related Work
• Numeric methods on Hadoop: Ricardo [Das et al. 2010], Mahout [Ng et al.]
• Deduplication: coreference systems (UIUC), Dedupalog [ICDE09]
• Incremental gradients: Bottou; Vowpal Wabbit (Y!); Pegasos
• Rules + probability: MLNs [Richardson 05], PRMs [Koller 99]
• Declarative IE: SystemT from IBM, DBLife [Doan et al.], [Wang et al. 2010]
• Model-based views: MauveDB [Deshpande et al. 05]
Conclusion
The future of data management lies in managing these less precise sources.
Hazy Hypothesis: A handful of statistical operators captures a diverse set of applications.
Key challenges: performance and maintenance. Hazy attacks these.