This paper discusses approaches to achieve scalability in web services by varying resource consumption. It presents the Approach Selector prototype, experiments, and ongoing and future work in this area.
Varying Resource Consumption to achieve Scalable Web Services • Lindsay Bradford • Centre for Information Technology Innovation
Overview • Motivation • Approaches to Scalability • The Approach Selector Prototype • Experiments and Results • Ongoing and Future Work
Scalability Matters • Users expect “service on demand” from the Internet – Bhatti et al. • Dynamic web content: • On the increase – Barford et al. • Much harder to scale than static content – Stading et al. • Flash crowds a more common occurrence? • Consider: a fully Internet-enabled mainland China. • SOAP, WSDL, etc. make programmatic access and automation easier, allowing greater client request traffic.
Scalability – Dynamic Content: • Static content: bottleneck = bandwidth • Dynamic content: bottleneck = CPU • Dynamic content caching techniques: • Active Query Caching – Remote • Proxy applets (mobile code) cache partial content at proxy server(s). • Data Update Propagation (DUP) – Local and/or Remote • Cached dynamic content fragments are re-evaluated once the base source data changes (see the sketch below). • HTML Macro Processing / WEAVE – Remote • Protocol extension to tag the static and dynamic parts of a response; the static part can then be cached, and a remote cache server constructs the complete response.
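The DUP idea above can be illustrated with a minimal fragment-cache sketch: generated fragments are kept until the base data they depend on changes. This is an illustrative Java sketch only; the class and method names are assumptions, not taken from the cited work.

```java
// Illustrative DUP-style fragment cache (assumed names, not from the cited work).
import java.util.HashMap;
import java.util.Map;

public class FragmentCache {
    // Generated HTML fragments keyed by the base data they were built from.
    private final Map<String, String> fragmentsByDataKey = new HashMap<String, String>();

    /** Return a previously generated fragment, or null if it must be regenerated. */
    public synchronized String lookup(String dataKey) {
        return fragmentsByDataKey.get(dataKey);
    }

    /** Store a freshly generated fragment for later reuse. */
    public synchronized void store(String dataKey, String fragmentHtml) {
        fragmentsByDataKey.put(dataKey, fragmentHtml);
    }

    /** Called when the underlying source data changes: dependent fragments are dropped. */
    public synchronized void invalidate(String dataKey) {
        fragmentsByDataKey.remove(dataKey);
    }
}
```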
The Approach Selector (1): • Inspired by “multimedia” quality degradation (dropping to “user-acceptable” frames/second under load). • An alternative to dynamic content caching. • Guiding heuristics: • Pick the approach that will respond within a human-acceptable time frame (< 1 second). • Prefer a more costly approach to a less costly one where possible. • The selector must balance approach generation time against the target response time (see the sketch below). • Scope is limited to the “Application Programmer” perspective: no modification of supporting technologies (app servers, etc.). What could developers do right now? What limits exist?
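A minimal sketch of the guiding heuristics above, assuming a hypothetical Approach type that reports its recent generation cost: the selector keeps upgrading to more costly approaches while they still fit inside the response-time budget, and falls back to the cheapest otherwise. The names here are illustrative, not the prototype's actual classes.

```java
// Illustrative selection heuristic (assumed names, not the prototype's classes).
import java.util.List;

public class ApproachSelector {
    private final long timeLimitMillis;        // target budget, e.g. 800 ms
    private final List<Approach> approaches;   // assumed ordered cheapest -> most costly

    public ApproachSelector(long timeLimitMillis, List<Approach> approaches) {
        this.timeLimitMillis = timeLimitMillis;
        this.approaches = approaches;
    }

    /** Prefer the most costly approach still expected to respond within the budget. */
    public Approach select() {
        Approach chosen = approaches.get(0);   // fall back to the cheapest approach
        for (Approach candidate : approaches) {
            if (candidate.recentAverageMillis() <= timeLimitMillis) {
                chosen = candidate;            // keep upgrading while we fit the budget
            }
        }
        return chosen;
    }

    /** Each approach reports the recent average time it took to generate a response. */
    public interface Approach {
        long recentAverageMillis();
    }
}
```

In this sketch the reactivation_threshold introduced on a later slide would govern when a previously deactivated, more costly approach becomes eligible for selection again.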
Why One Second? Why Degrading Approaches? • HCI lessons ignored on the Web: • Interest in, and perceived quality of, a site falls as response time grows; content makeup (text/graphics mix) has little effect. – Bhatti et al.
The Approach Selector (3): • Unmodified Apache Tomcat 4.1.18 (75 threads). • Approach Selector implemented as a “Servlet Filter” (see the sketch below). • Approach Selector parameters: • time_limit = 800 ms • reactivation_threshold = 400 ms • Approaches: • 4 instances of a floating-point division servlet, configured to 100, 500, 1000 and 3000 loops.
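A minimal Servlet 2.3 filter sketch (matching the Tomcat 4.1 generation of the API) showing where the prototype's timing and parameters could sit; the init-param names follow the slide, but the wiring and class names are assumptions rather than the actual prototype code.

```java
// Servlet 2.3 filter sketch; wiring and names are assumptions, not the prototype.
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ApproachSelectorFilter implements Filter {
    private long timeLimitMillis;              // time_limit, 800 ms in the experiments
    private long reactivationThresholdMillis;  // reactivation_threshold, 400 ms

    public void init(FilterConfig config) throws ServletException {
        // Assumed web.xml <init-param> names matching the slide's parameters.
        timeLimitMillis = Long.parseLong(config.getInitParameter("time_limit"));
        reactivationThresholdMillis =
                Long.parseLong(config.getInitParameter("reactivation_threshold"));
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        chain.doFilter(request, response);     // delegate to the currently selected approach
        long elapsed = System.currentTimeMillis() - start;
        // Record the observed generation time here: exceeding time_limit would push
        // selection toward a cheaper approach; dropping below reactivation_threshold
        // would make a more costly approach eligible again.
    }

    public void destroy() {
        // Nothing to release in this sketch.
    }
}
```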
The Test Environment and Traffic Patterns: • A response is ‘adequate’ if a round-trip time of at most one second is recorded at the client (see the probe sketch below). • Steady – responsiveness under constant load. • Bursty – responsiveness under variable load.
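A small client-side probe, as a sketch of how the one-second adequacy test could be measured at the client; the target URL and class name are hypothetical.

```java
// Client-side adequacy probe sketch; the target URL is hypothetical.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class AdequacyProbe {
    private static final long ADEQUATE_MILLIS = 1000;   // the one-second target

    /** Time a full round trip and report whether it met the one-second target. */
    public static boolean requestIsAdequate(String target) throws Exception {
        long start = System.currentTimeMillis();
        HttpURLConnection connection = (HttpURLConnection) new URL(target).openConnection();
        InputStream body = connection.getInputStream();
        byte[] buffer = new byte[4096];
        while (body.read(buffer) != -1) {
            // Drain the whole response so the complete round trip is timed.
        }
        body.close();
        long elapsed = System.currentTimeMillis() - start;
        return elapsed <= ADEQUATE_MILLIS;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical servlet URL; steady and bursty runs would vary how often this fires.
        boolean adequate = requestIsAdequate("http://localhost:8080/selector/divide");
        System.out.println(adequate ? "adequate" : "inadequate");
    }
}
```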
Bursty Pattern Results • Baseline is the 3000-loop approach. • Unexpectedly high number of “heavy” approach attempts.
Steady Pattern Results • Again, an unexpectedly high number of “heavier” approach attempts.
Conclusions: • The benefit of Approach Selection outweighs its overhead. • In both traffic patterns it returns more responses overall, and significantly more within our one-second target. • An unexpectedly high number of attempts at more costly approaches, resulting in lower adequacy.
Ongoing Work: • Memory-intensive servlet added (see the sketch below). • Similar results to the CPU-intensive servlet. • Varied thread numbers: • Traffic pattern and approach matter. • Varied Approach Selector parameters: • reactivation_threshold matters; time_limit nowhere near as much.
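A sketch of what a memory-intensive servlet workload could look like, in the spirit of the one mentioned above; the request parameter and sizes are illustrative assumptions, not the values used in the experiments.

```java
// Memory-intensive servlet sketch; parameter name and sizes are illustrative.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MemoryIntensiveServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Assumed request parameter controlling the workload size (analogous to the
        // loop counts used by the floating-point division servlet).
        String kbParameter = request.getParameter("kb");
        int kilobytes = (kbParameter == null) ? 1024 : Integer.parseInt(kbParameter);

        byte[] scratch = new byte[kilobytes * 1024];
        for (int i = 0; i < scratch.length; i++) {
            scratch[i] = (byte) i;             // touch every byte so memory is really used
        }

        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("Touched " + kilobytes + " KB of heap");
    }
}
```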
Future Work: • New I/O (database simulation) servlet. • Servlet engine modification – the servlet specification is too limiting. • Changing the approach selection heuristic. • Automated approach generation from a baseline. • Guidelines for automated service adaptation to request traffic.
Finish. • Questions? • Suggestions?