Balancing Interactive Performance and Budgeted Resources in Mobile Applications Brett Higgins
The Most Precious Computational Resource “The most precious resource in a computer system is no longer its processor, memory, disk, or network, but rather human attention.” – Garlan et al.
User attention vs. energy/data
Balancing these tradeoffs is difficult! • Mobile applications must: • Select the right network for the right task • Understand complex interactions between: • Type of network technology • Bandwidth/latency • Timing/scheduling of network activity • Performance • Energy/data resource usage • Cope with uncertainty in predictions Opportunity: system support
Spending Principles • Spend resources effectively. • Live within your means. • Use it or lose it.
Thesis Statement By: • Providing abstractions to simplify multinetworking, • Tailoring network use to application needs, and • Spending resources to purchase reductions in delay, Mobile systems can help applications significantly improve user-visible performance without exhausting limited energy & data resources.
Roadmap • Introduction • Application-aware multinetworking • Intentional Networking • Purchasing performance with limited resources • Informed Mobile Prefetching • Coping with cloudy predictions • Meatballs • Conclusion
The Challenge • Diversity of networks • Diversity of behavior • Email: fetch messages • YouTube: upload video → Match traffic to available networks
Current approaches: two extremes • All details hidden → result: mismatched traffic • All details exposed → result: hard for applications
Solution: Intentional Networking • System measures available networks • Applications describe their traffic • System matches traffic to networks
Abstraction: Multi-socket • Multi-socket: virtual connection between client and server • Analogue: task scheduler • Measures performance of each alternative • Encapsulates transient network failure
Abstraction: Label • Qualitative description of network traffic • Interactivity: foreground vs. background • Analogue: task priority
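The two abstractions above suggest a simple matching policy: labels describe traffic qualitatively, and the system picks a network for each labeled send. The sketch below is a toy illustration of that idea, not the actual Intentional Networking API; the function and field names are hypothetical.

```python
# Toy sketch of label-based traffic matching (hypothetical names, not the
# real Intentional Networking implementation).

def match_traffic(label, networks):
    """Pick a network for a labeled send: foreground (interactive) traffic
    goes to the lowest-latency network; background traffic goes to the
    highest-bandwidth one."""
    if label == "foreground":
        return min(networks, key=lambda n: n["latency_ms"])
    return max(networks, key=lambda n: n["bandwidth_mbps"])

# Measured by the multi-socket at runtime (illustrative values):
networks = [
    {"name": "wifi", "bandwidth_mbps": 15.0, "latency_ms": 120.0},
    {"name": "lte", "bandwidth_mbps": 5.0, "latency_ms": 40.0},
]

assert match_traffic("foreground", networks)["name"] == "lte"
assert match_traffic("background", networks)["name"] == "wifi"
```

The point of the abstraction is that the application only supplies the label; measuring the networks and making this choice is the system's job.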
Evaluation Results: Email • Vehicular network trace – Ypsilanti, MI • 7x reduction in user-visible delay, with only a 3% background throughput overhead
Intentional Networking: Summary • Multinetworking state of the art • All details hidden: far from optimal • All details exposed: far too complex • Solution: application-aware system support • Small amount of information from application • Significant reduction in user-visible delay • Only minimal background throughput overhead • Can build additional services atop this
Roadmap • Introduction • Application-aware multinetworking • Intentional Networking • Purchasing performance with limited resources • Informed Mobile Prefetching • Coping with cloudy predictions • Meatballs • Conclusion
Mobile networks can be slow • Without prefetching: data fetching is slow → user: angry • With prefetching: fetch time hidden → user: happy
Mobile prefetching is complex • Lots of challenges to overcome • How do I balance performance, energy, and cellular data? • Should I prefetch now or later? • Am I prefetching data that the user actually wants? • Does my prefetching interfere with interactive traffic? • How do I use cellular networks efficiently? • But the potential benefits are large!
Who should deal with the complexity? • Users? • Developers?
What apps end up doing • The Data Hog • The Data Miser
Informed Mobile Prefetching • Prefetching as a system service • Handles complexity on behalf of users/apps • Apps specify what and how to prefetch • System decides when to prefetch (Application → prefetch() → IMP)
Informed Mobile Prefetching • Tackles the challenges of mobile prefetching • Balances multiple resources via cost-benefit analysis • Estimates future cost, decides whether to defer • Tracks accuracy of prefetch hints • Keeps prefetching from interfering with interactive traffic • Considers batching prefetches on cellular networks
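The core decision above can be sketched as a small cost-benefit check. This is an illustrative sketch only: the function name and the rule shapes are assumptions, and costs are taken as already converted into seconds of user time; the real IMP makes this decision with learned estimates.

```python
# Sketch of an IMP-style prefetch decision (illustrative; names and
# thresholds are hypothetical, not the actual IMP implementation).

def should_prefetch(hint_accuracy, demand_fetch_time_s, cost_now_s, cost_later_s):
    """Prefetch now if the expected user time saved exceeds the resource
    cost (expressed in seconds), and fetching now is cheaper than deferring."""
    benefit = hint_accuracy * demand_fetch_time_s  # expected time saved
    if benefit <= min(cost_now_s, cost_later_s):
        return "dont_prefetch"
    return "prefetch_now" if cost_now_s <= cost_later_s else "defer"

# On WiFi, fetching now is cheap relative to the delay it would hide:
assert should_prefetch(0.8, 5.0, 0.5, 2.0) == "prefetch_now"
# On cellular, with a cheaper network predicted soon, deferring wins:
assert should_prefetch(0.8, 5.0, 3.0, 0.5) == "defer"
# A low-accuracy hint is not worth fetching at all:
assert should_prefetch(0.05, 5.0, 0.5, 0.5) == "dont_prefetch"
```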
Balancing multiple resources • Benefit: performance • Costs: energy, cellular data
Balancing multiple resources • Performance (user time saved) • Future demand fetch time • Network bandwidth/latency • Battery energy (spend or save) • Energy spent sending/receiving data • Network bandwidth/latency • Wireless radio power models (powertutor.org) • Cellular data (spend or save) • Monthly allotment • Straightforward to track
Weighing benefit and cost • IMP maintains exchange rates (Joules → seconds, Bytes → seconds) • One value for each resource • Expresses importance of resource • Combine costs in common currency • Meaningful comparison to benefit • Adjust over time via feedback • Goal-directed adaptation
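The exchange-rate idea above can be written out concretely: convert joules and bytes into seconds so all costs are comparable to the benefit (user time saved), and nudge each rate up or down based on whether spending is tracking its budget. The numbers and the multiplicative feedback rule below are illustrative assumptions, not IMP's actual values.

```python
# Sketch of exchange-rate cost accounting (hypothetical values; IMP
# derives and adapts these rates at runtime).

def total_cost_seconds(energy_j, data_bytes, rate_j, rate_b):
    """Convert joules and bytes into a common currency (seconds) so the
    combined cost can be compared directly against time saved."""
    return energy_j * rate_j + data_bytes * rate_b

def adjust_rate(rate, spent, budget, step=0.1):
    """Goal-directed feedback: overspending makes the resource more
    'expensive'; underspending makes it cheaper."""
    return rate * (1 + step) if spent > budget else rate * (1 - step)

cost = total_cost_seconds(energy_j=2.0, data_bytes=1e6,
                          rate_j=0.05, rate_b=1e-7)  # 0.1 s + 0.1 s
assert abs(cost - 0.2) < 1e-9

# Over budget -> resource becomes more expensive to spend:
assert adjust_rate(0.05, spent=120, budget=100) > 0.05
assert adjust_rate(0.05, spent=80, budget=100) < 0.05
```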
Users don’t always want what apps ask for • Some messages may not be read • Low-priority • Spam • Should consider the accuracy of hints • Don’t require the app to specify it • Just learn it through the API • App tells IMP when it uses data • (or decides not to use the data) • IMP tracks accuracy over time
Evaluation Results: Email (average email fetch time, energy usage, 3G data usage) • Fetch time: 2-8x faster; within ~300ms of optimal (100% hits) • Energy: less energy than all others (including WiFi-only!) • 3G data: only WiFi-only used less 3G data (but…) • IMP meets all resource goals (stays under the budget marker)
IMP: Summary • Mobile prefetching is a complex decision • Applications choose simple, suboptimal strategies • Powerful mechanism: purchase performance w/ resources • Prefetching as a system service • Application provides needed information • System manages tradeoffs (spending vs. performance) • Significant reduction in user-visible delay • Meets specified resource budgets • What about other performance/resource tradeoffs?
Roadmap • Introduction • Application-aware multinetworking • Purchasing performance with limited resources • Coping with cloudy predictions • Motivation • Uncertainty-aware decision-making (Meatballs) • Decision methods • Reevaluation from new information • Evaluation • Conclusion
Mobile Apps & Prediction • Mobile apps rely on predictions of: • Networks (bandwidth, latency, availability) • Computation time • These predictions are often wrong • Networks are highly variable • Load changes quickly • Wrong prediction causes user-visible delay • But mobile apps treat them as oracles!
Example: code offload • Likelihood of 100 sec response time: 0.1% (single server) vs. 0.0001% (redundant)
Example: code offload

Elapsed time | Expected response time | Expected response time (redundant)
0 sec | 10.09 sec | 10.00009 sec
9 sec | 1.09 sec | 1.00909 sec
11 sec | 89 sec | 10.09 sec

Conditional expectation
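The jump at 11 seconds is just conditional expectation at work: once more time has elapsed than the common-case response time, only the slow outcome remains possible. The single-server numbers in the code-offload example above can be recomputed from a two-point distribution (10 s with probability 99.9%, 100 s with probability 0.1%):

```python
# Recomputing the single-server column of the code-offload example:
# response time is 10 s w.p. 0.999 and 100 s w.p. 0.001.

def expected_remaining(elapsed, outcomes):
    """E[T - elapsed | T > elapsed]: condition the response-time
    distribution on the task not having finished yet."""
    alive = [(t, p) for t, p in outcomes if t > elapsed]
    total_p = sum(p for _, p in alive)
    return sum((t - elapsed) * p for t, p in alive) / total_p

outcomes = [(10.0, 0.999), (100.0, 0.001)]

assert abs(expected_remaining(0, outcomes) - 10.09) < 1e-6   # 0 sec elapsed
assert abs(expected_remaining(9, outcomes) - 1.09) < 1e-6    # 9 sec elapsed
assert abs(expected_remaining(11, outcomes) - 89.0) < 1e-6   # 11 sec elapsed
```

At 11 seconds the 10 s outcome has been ruled out, so the conditional expectation collapses to 100 − 11 = 89 seconds; that is exactly when a redundant retry starts to look attractive.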
Spending for a good cause • It’s okay to spend extra resources! • …if we think it will benefit the user. • Spend resources to cope with uncertainty • …via redundant operation • Quantify uncertainty, use it to make decisions • Benefit (time saved) • Cost (energy + data)
Meatballs • A library for uncertainty-aware decision-making • Application specifies its strategies • Different means of accomplishing a single task • Functions to estimate time, energy, cellular data usage • Application provides predictions, measurements • Allows library to capture error & quantify uncertainty • Library helps application choose the best strategy • Hides complexity of decision mechanism • Balances cost & benefit of redundancy
Roadmap • Introduction • Application-aware multinetworking • Purchasing performance with limited resources • Coping with cloudy predictions • Motivation • Uncertainty-aware decision-making (Meatballs) • Decision methods • Reevaluation from new information • Evaluation • Conclusion
Deciding to operate redundantly • Benefit of redundancy: time savings • Cost of redundancy: additional resources • Benefit and cost are expectations • Consider predictions as distributions, not spot values • Approaches: • Empirical error tracking • Error bounds • Bayesian estimation
Empirical error tracking • Compute error upon new measurement: error = observed / predicted • Weighted sum over joint error distribution (ε(BP1), ε(BP2)) • For multiple networks: • Time: min across all networks • Cost: sum across all networks
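A minimal sketch of this approach, under simplifying assumptions: each predictor keeps samples of its observed/predicted error ratio, every (error1, error2) pair is weighted equally, and the expected redundant transfer time is the average of the min over the joint distribution. The real library uses weighted distributions; the values below are made up.

```python
# Sketch of empirical error tracking (illustrative; the real library
# maintains weighted error distributions per predictor).

def expected_redundant_time(pred1, pred2, err1, err2):
    """Expected transfer time when sending on both networks: enumerate the
    joint error distribution and take the min of the adjusted times."""
    times = [min(pred1 * e1, pred2 * e2) for e1 in err1 for e2 in err2]
    return sum(times) / len(times)

# Observed/predicted error ratios for two transfer-time predictors:
err1 = [0.5, 1.0, 1.5]   # network 1: noisy predictor
err2 = [0.9, 1.0, 1.1]   # network 2: accurate predictor

t_both = expected_redundant_time(pred1=4.0, pred2=5.0, err1=err1, err2=err2)
t_one = 4.0 * sum(err1) / len(err1)  # network 1 alone

assert abs(t_both - 33 / 9) < 1e-9   # about 3.67 s
assert t_both < t_one                # redundancy saves expected time here
```

Whether that expected saving justifies the extra cost is then exactly the cost-benefit comparison the slides describe.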
Error bounds • Range + p(next value is in range) • Student’s-t prediction interval • Calculate bound on time savings of redundancy (Figure: bandwidth bounds BP1, BP2 for two networks → bounds T1, T2 on time to send 10 Mb; predictor says: use network 2; max time savings from redundancy)
Error bounds • Calculate bound on net gain of redundancy: max(benefit) – min(cost) = max(net gain) • Use redundancy if max(net gain) > 0 (Figure: energy to send 10 Mb – E1, E2, Eboth; min energy w/ redundancy)
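A sketch of the prediction-interval step, with made-up bandwidth samples. The Student's-t critical value is hard-coded from standard tables (t ≈ 2.262 for 9 degrees of freedom, 95% two-sided) rather than computed, to stay within the standard library:

```python
import math
import statistics

# Sketch: Student's-t prediction interval for the next bandwidth sample
# (hypothetical data, in Mbps).
samples = [5.1, 4.8, 5.6, 4.9, 5.3, 5.0, 4.7, 5.4, 5.2, 5.0]
n = len(samples)
mean = statistics.mean(samples)
s = statistics.stdev(samples)
t_crit = 2.262  # t_{0.975} with df = n - 1 = 9, from tables

# Prediction interval for a single future observation:
half_width = t_crit * s * math.sqrt(1 + 1 / n)
lo_bw, hi_bw = mean - half_width, mean + half_width

# Bandwidth bounds translate into bounds on the time to send 10 Mb,
# and hence a bound on the time savings redundancy could buy:
worst_time = 10 / lo_bw
best_time = 10 / hi_bw

assert lo_bw < mean < hi_bw
assert worst_time > best_time > 0
```

As the slides note, these bounds are cheap to compute but conservative, which is why this method tends to overestimate uncertainty and lean towards redundancy.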
Bayesian Estimation • Basic idea: • Given a prior belief about the world, • and some new evidence, • update our beliefs to account for the evidence, • AKA obtaining the posterior distribution • using the likelihood of the evidence • Via Bayes’ Theorem: posterior = likelihood × prior / p(evidence) (p(evidence) is a normalization factor; ensures the posterior sums to 1)
Bayesian Estimation • Example: “will it rain tomorrow?” • Prior: historical rain frequency • Evidence: weather forecast (simple yes/no) • Posterior: believed rain probability given forecast • Likelihood: • When it will rain, how often has the forecast agreed? • When it won’t rain, how often has the forecast agreed?
Bayesian Estimation • Applied to Intentional Networking: • Prior: bandwidth measurements • Evidence: bandwidth prediction + implied decision • Posterior: new belief about bandwidths • Likelihood: • When network 1 wins, how often has the prediction agreed? • When network 2 wins, how often has the prediction agreed?
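The rain-forecast example works out numerically as follows; the probabilities are made up for illustration, but the update is exactly posterior = likelihood × prior / p(evidence):

```python
# Bayes' rule applied to the rain-forecast example (illustrative numbers).

def posterior_rain(prior_rain, p_yes_given_rain, p_yes_given_dry):
    """P(rain | forecast says 'yes')."""
    # p(evidence): total probability of a 'yes' forecast.
    p_evidence = (p_yes_given_rain * prior_rain
                  + p_yes_given_dry * (1 - prior_rain))
    return p_yes_given_rain * prior_rain / p_evidence

# Prior: it rains 20% of days. Likelihood: the forecast says 'yes' on
# 90% of rainy days and 10% of dry days.
p = posterior_rain(0.2, 0.9, 0.1)
assert 0.692 < p < 0.693  # 0.18 / 0.26 ≈ 69.2%
```

In the Intentional Networking application, the same update runs over bandwidth beliefs: the prior comes from measurements, and the likelihood records how often each network's "win" agreed with the prediction.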
Decision methods: summary • Empirical error tracking • Captures error distribution accurately • Computationally intensive • Error bounds • Computationally cheap • Prone to overestimating uncertainty • Bayesian • Computationally cheap(er than brute-force) • Appears more reliant on history
Reevaluation: conditional distributions (Figure: decision – one server vs. two servers – against elapsed time, 0 to 10 …)
Evaluation: methodology • Network trace replay, energy & data measurement • Same as IMP • Metric: weighted cost function • time + c_energy × energy + c_data × data
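Written out, the metric is a single scalar per run; the coefficient values below are arbitrary examples, not those used in the evaluation:

```python
# The weighted cost function from the evaluation methodology
# (c_energy and c_data values here are arbitrary examples).

def weighted_cost(time_s, energy_j, data_mb, c_energy, c_data):
    return time_s + c_energy * energy_j + c_data * data_mb

# 10 s + 0.1 * 50 J + 0.5 * 2 MB = 16.0
assert weighted_cost(10.0, 50.0, 2.0, c_energy=0.1, c_data=0.5) == 16.0
```

The coefficients play the same role as IMP's exchange rates: they express how much a joule or a megabyte is worth relative to a second of user time.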
Evaluation: IntNW, walking trace (weighted cost, normalized; simple vs. Meatballs strategies) • Low-resource strategies improve (24%) • Error bounds leans towards redundancy (2x) • Meatballs matches the best strategy
Evaluation: PocketSphinx, server load (weighted cost, normalized; simple vs. Meatballs strategies) • Error bounds leans towards redundancy (23%) • Meatballs matches the best strategy