Using Probabilistic Models for Data Management in Acquisitional Environments Sam Madden MIT CSAIL With Amol Deshpande (UMD), Carlos Guestrin (CMU)
Overview • Querying to monitor distributed systems • Sensor-actuator networks (e.g., Berkeley motes) • Distributed databases (e.g., distributed P2P systems) • Issues • Missing, uncertain data • High acquisition, querying costs • I’m not proposing a complete system! Probabilistic models provide a framework for dealing with all of these issues
Outline • Motivation • Probabilistic Models • New Queries and UI • Applications • Challenges and Concluding Remarks
Not your mother’s DBMS • Data doesn’t exist a priori • Acquisition must be managed by the DBMS • Insufficient bandwidth • Selective observation • Sometimes, desired data is unavailable • Must be robust to loss Critical issue: given a limited amount of noisy, lossy data, how can users interpret answers?
Data is correlated • Temperature and voltage • Temperature and light • Temperature and humidity • Temperature and time of day • etc. (Images: Google.com)
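As a quick illustration (not from the talk), one could eyeball these correlations directly from a historical log; the CSV file and column names below are assumptions for the example:

```python
# A tiny, illustrative check of pairwise correlations against a historical log.
# The file name and columns (temp, voltage, light, humidity, hour) are
# assumptions for the example, not part of the original system.
import pandas as pd

log = pd.read_csv("sensor_log.csv")
print(log[["temp", "voltage", "light", "humidity", "hour"]].corr()["temp"])
```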
Outline • Motivation • Probabilistic Models • New Queries and UI • Applications • Challenges and Concluding Remarks
Solution: Probabilistic Models • Probability density function (PDF) estimates the current state • Model captures correlations between attributes • Answer queries directly from the PDF • Incorporate new observations via probabilistic inference on the model • Model the passage of time, from t0 to t1, via a transition model (e.g., Kalman filters) • Models are learned from historical data
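A minimal sketch of this idea, assuming the belief is a multivariate Gaussian over sensor attributes: new observations are folded in by exact Gaussian conditioning, and the passage of time is modeled with a linear, Kalman-filter-style transition. The class and matrix names are illustrative, not the authors' implementation:

```python
# Minimal sketch (illustrative, not the authors' implementation) of a Gaussian
# belief over sensor attributes: condition on new readings via exact Gaussian
# inference, and advance the belief with a linear transition model.
import numpy as np

class GaussianBelief:
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)
        self.cov = np.asarray(cov, dtype=float)

    def condition(self, obs_idx, obs_vals):
        """Posterior belief after observing exact readings obs_vals at obs_idx."""
        obs_idx = np.asarray(obs_idx)
        obs_vals = np.asarray(obs_vals, dtype=float)
        all_idx = np.arange(len(self.mean))
        hid = np.setdiff1d(all_idx, obs_idx)
        S_oo = self.cov[np.ix_(obs_idx, obs_idx)]
        S_ho = self.cov[np.ix_(hid, obs_idx)]
        gain = S_ho @ np.linalg.inv(S_oo)
        mean = self.mean.copy()
        mean[hid] += gain @ (obs_vals - self.mean[obs_idx])
        mean[obs_idx] = obs_vals
        cov = self.cov.copy()
        cov[np.ix_(hid, hid)] -= gain @ S_ho.T
        cov[np.ix_(obs_idx, all_idx)] = 0.0   # observed attributes are now certain
        cov[np.ix_(all_idx, obs_idx)] = 0.0
        return GaussianBelief(mean, cov)

    def advance(self, A, Q):
        """Transition from t0 to t1: x_t1 = A x_t0 + noise ~ N(0, Q)."""
        return GaussianBelief(A @ self.mean, A @ self.cov @ A.T + Q)
```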
Architecture: Model-Driven Sensornet DBMS • A query (e.g., “SELECT nodeid, temp FROM sensors CONF .95 TO ± .5°”) is posed against a probabilistic model • The model produces a data-gathering plan; new observations Dt are conditioned into the model to form the posterior belief that answers the query • Advantages vs. “best-effort, query-everything”: observe fewer attributes; exploit correlations; reuse information between queries; directly deal with missing data; answer more complex (probabilistic) queries
Outline • Motivation • Probabilistic Models • New Queries and UI • Applications • Challenges and Concluding Remarks
New Types of Queries • Architecture enables efficient execution of many new queries • Approximate queries: “Tell me the temperature to within ±0.5 degrees with 95% confidence.” • Example flow: the query “SELECT nodeId, temp ± 0.5°C, conf(.95) FROM sensors WHERE nodeId in {1..8}” → system selects and observes a subset of available nodes (e.g., observed nodes {3, 6, 8}) → query result
Probabilistic Query Optimization Problem • What observations will satisfy confidence bounds at minimum cost? • Must define cost metric and model • Sensornets: metric = power, cost = sensing + comm • Decide if a set of observations satisfies bounds • Choose a search strategy
Optimization problem: choosing an observation plan • Query predicate: P(Xi ∈ [a, b]) > 1 − δ • Is a subset S of attributes sufficient to observe? • Reward if we observe S = s: Ri(s) = max{ P(Xi ∈ [a, b] | s), 1 − P(Xi ∈ [a, b] | s) } • If the value of S is unknown: Ri(S) = ∫ p(s) Ri(s) ds • Pick your favorite search strategy
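A rough sketch of scoring a single candidate plan under a multivariate Gaussian belief, following the reward definition above; the Monte Carlo approximation and the names (mu, Sigma, plan, delta) are assumptions for illustration:

```python
# Rough sketch of scoring one candidate observation plan under a multivariate
# Gaussian belief (mean mu, covariance Sigma): compute R_i(s) for sampled
# observations s and average them to approximate R_i(S). Assumes attribute i
# is not itself in the plan; all names are illustrative.
import numpy as np
from scipy.stats import norm

def interval_prob(mu_i, var_i, a, b):
    """P(X_i in [a, b]) for a scalar Gaussian marginal."""
    sd = np.sqrt(var_i)
    return norm.cdf(b, mu_i, sd) - norm.cdf(a, mu_i, sd)

def expected_reward(mu, Sigma, plan, i, a, b, n_samples=2000, rng=None):
    """Approximate R_i(S) = integral of p(s) R_i(s) ds by sampling s ~ p(s)."""
    rng = rng or np.random.default_rng(0)
    plan = np.asarray(plan)
    S_oo = Sigma[np.ix_(plan, plan)]
    S_io = Sigma[i, plan]
    gain = S_io @ np.linalg.inv(S_oo)
    var_i = Sigma[i, i] - gain @ S_io        # same for every observed value s
    total = 0.0
    for s in rng.multivariate_normal(mu[plan], S_oo, size=n_samples):
        mu_i = mu[i] + gain @ (s - mu[plan])
        p = interval_prob(mu_i, var_i, a, b)
        total += max(p, 1.0 - p)             # R_i(s)
    return total / n_samples

def plan_sufficient(mu, Sigma, plan, i, a, b, delta=0.05):
    """Does observing `plan` meet the confidence bound 1 - delta in expectation?"""
    return expected_reward(mu, Sigma, plan, i, a, b) >= 1.0 - delta
```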
More New Queries • Outlier queries: “Report temperature readings that have a 1% or less chance of occurring.” • Extend the architecture with local filters: each node maintains a local model, periodically updates the user’s central model, and transmits only outlier readings • Issues: bias, inefficiency
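A toy sketch of such a local filter, assuming each node keeps a simple running Gaussian model of its own readings and transmits only low-probability values; it also hints at where the bias issue comes from:

```python
# Toy sketch of a node-local filter: keep a running Gaussian model of this
# node's readings and transmit only values whose two-sided tail probability is
# below the outlier threshold. The running-statistics model is an illustrative
# assumption; skipping model updates for outliers would introduce the bias
# issue mentioned above.
from math import erf, sqrt

class LocalOutlierFilter:
    def __init__(self, threshold=0.01):
        self.threshold = threshold
        self.n, self.mean, self.m2 = 0, 0.0, 0.0   # Welford running statistics

    def _tail_prob(self, x):
        var = max(self.m2 / (self.n - 1), 1e-12)
        z = abs(x - self.mean) / sqrt(var)
        return 1.0 - erf(z / sqrt(2.0))            # P(|Z| > z), standard normal

    def observe(self, x):
        """Return True if reading x should be transmitted to the central model."""
        is_outlier = self.n > 10 and self._tail_prob(x) < self.threshold
        self.n += 1                                # update the local model
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        return is_outlier
```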
Even More New Queries • Prediction queries • “What is the expected temperature at 5PM today, given that it is very humid?” • Influence queries • “What percentage of network traffic at site A is explained by traffic at sites B and C?” Such queries could not be answered without a model!
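One hedged way to read the influence query is as a linear-Gaussian question: the fraction of site A's traffic explained by B and C is the R² of regressing A on B and C. The function below is illustrative, not the talk's definition:

```python
# One hedged reading of the influence query: under a linear-Gaussian view, the
# fraction of site A's traffic explained by sites B and C is the R^2 of
# regressing A's traffic on theirs. Function and array names are illustrative.
import numpy as np

def explained_fraction(traffic_a, traffic_bc):
    """R^2 of least-squares regression of traffic_a on the columns of traffic_bc."""
    X = np.column_stack([np.ones(len(traffic_a)), traffic_bc])
    coef, *_ = np.linalg.lstsq(X, traffic_a, rcond=None)
    residual = traffic_a - X @ coef
    return 1.0 - residual.var() / traffic_a.var()
```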
UI Issues • How to make probability “intuitive”? • How to allow users to express queries? • Issues: query language, UI [Figure: example visualization of load vs. time]
Outline • Motivation • Probabilistic Models • New Queries and UI • Applications • Challenges and Concluding Remarks
Applications • Sensor-based Building Monitoring • Often battery powered • 100s-1000s of nodes • Example: HVAC Control • Tolerant of approximate answers • Significant energy reduction possible
App: Distributed System Monitoring • Goal: detect/predict overload, reprovision • Many metrics may indicate overload • Disk usage, CPU load, network load, network latency, active queries, etc. • Each metric has a cost to observe • Problem: which metrics foreshadow overload? • Solution: • Train on data labeled with overload status • Choose an observation plan that predicts the label
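A minimal sketch of the training and metric-selection step, assuming a table of past metric snapshots labeled with overload status; the use of scikit-learn and all names are illustrative assumptions:

```python
# Minimal sketch of the training/selection step: rank metrics by how
# informative each is about the overload label per unit observation cost.
# The use of scikit-learn's mutual_info_classif and all names are assumptions,
# not the system described in the talk.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_metrics(metrics, labels, costs, metric_names):
    """Return (metric name, score) pairs sorted by predictive value per cost."""
    mi = mutual_info_classif(metrics, labels, random_state=0)
    score = mi / np.asarray(costs, dtype=float)
    order = np.argsort(score)[::-1]
    return [(metric_names[i], float(score[i])) for i in order]
```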
Other Apps • Stream load shedding • Sensor network intrusion detection • Database statistics • See paper!
Outline • Motivation • Probabilistic Models • New Queries and UI • Applications • Challenges and Concluding Remarks
Extension, Not Restriction • Possible to have many views of the same data: different models (e.g., discrete histograms, Gaussians) or the base data itself • Queries are answered through an integration layer over multiple models, built on an acquisition layer holding tabular data and system state • A number of architectural challenges
Every rose… • Models can fail to capture details • Models can be wrong • Models can be expensive to build • Models can be expensive to maintain The paper suggests a number of known techniques from the ML community.
Whither hence? • See the paper for technical details • See other work • Probabilistic data models • Outlier and change detection • Generalize these ideas to: • New models • Non-numeric types • New environments, queries • Make some AI and stats friends
Conclusions • Emerging data management opportunities: • Ad-hoc networks of tiny devices • Large scale distributed system monitoring • These environments are: • Acquisitional • Loss-prone • Probabilistic models are an essential tool • Tolerate missing data • Answer sophisticated new queries • Framework for efficient acquisitional execution
App: Value-Based Load Shedding • User prioritizes some output values over others • May have to shed load • Issue: what inputs correspond to desired outputs? • Esp. hard for aggregates, UDFs • Can learn a probabilistic model that gives P(output value | input tuple) • Requires source tuple references on result tuples • Use this model to decide which tuples to drop
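A small sketch of the resulting drop policy, assuming a learned model object that estimates P(prioritized output | input tuple); the model interface and all names are placeholders:

```python
# Small sketch of the drop policy, assuming a learned `model` object exposing
# an estimate of P(prioritized output | input tuple). The model interface and
# names here are placeholders, not part of the talk.
def shed_load(input_tuples, model, target_rate):
    """Keep roughly target_rate of the tuples, preferring likely contributors."""
    scored = sorted(input_tuples, key=model.prob_priority_output, reverse=True)
    keep = int(len(input_tuples) * target_rate)
    return scored[:keep]
```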