Providing Differentiated Services from an Internet Server. Xiangping Chen and Prasant Mohapatra, Dept. of Computer Science and Engineering, Michigan State University. IEEE International Conference on Computer Communications and Networks, 1999. Computer Architecture Lab., Yoon Hye Young
Contents • Introduction • Distributed Server Model • Goal of experimental study • Simulation • Results • Conclusion
Introduction • Performance challenges of an Internet server • Continuous increase in traffic volume • Tens of millions of requests per day • Increased data processing • Workload bursts: higher request intensity during peak periods
Introduction • Approaches to improving server response time • High-performance servers and broad network bandwidth • Load sharing and balancing • Distributed server systems • Differentiated services • Prioritized processing
Distributed Server Model • Four logical components • SI: initiator • Q: scheduler • Si (i = 1..N): task servers • NS: communication channel • QoS support • Admission control, scheduling, and efficient task assignment (see the scheduler sketch below)
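The paper does not give code for the scheduler component, so the following is only a minimal Python sketch of how the scheduler (Q) could combine priority queuing with a simple admission-control threshold. The class name, the queue-length limit, and the sample requests are illustrative assumptions, not the authors' implementation.

```python
import heapq
import itertools

class PriorityScheduler:
    """Toy scheduler (Q) that admits requests and dispatches them to task
    servers by priority; names and thresholds are illustrative only."""

    def __init__(self, max_queue_len=1000):
        self.max_queue_len = max_queue_len   # simple admission-control knob (assumed)
        self.queue = []                      # min-heap ordered by (priority, seq)
        self.seq = itertools.count()         # tie-breaker keeps FIFO order within a class

    def admit(self, request, priority):
        """Admission control: discard when the queue is full, else enqueue.
        Lower numeric priority means more important (0 = high, 1 = low)."""
        if len(self.queue) >= self.max_queue_len:
            return False                     # selectively discard under overload
        heapq.heappush(self.queue, (priority, next(self.seq), request))
        return True

    def dispatch(self):
        """Hand the most important pending request to a task server (Si)."""
        if not self.queue:
            return None
        _, _, request = heapq.heappop(self.queue)
        return request

# Example: a high-priority request is served before a low-priority one.
q = PriorityScheduler(max_queue_len=4)
q.admit("GET /checkout", priority=0)   # e-commerce transaction: high priority
q.admit("GET /logo.png", priority=1)   # static object: low priority
print(q.dispatch())                    # -> 'GET /checkout'
```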
Goal of the experimental study • Need for service differentiation • E-commerce • Continuous media data delivery • The server needs to complement the QoS support of the NGI (Next Generation Internet) architecture. • Implementation could be at the application layer or at any lower layer. • Our goal here is to analyze the feasibility of the concept through a simplified model.
Simulation • An event-driven simulator implementation • Workload generated from a real trace file (ClarkNet), as sketched below
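The ClarkNet trace is distributed in Common Log Format; the sketch below shows one way such a trace could drive a simple event-driven, single-server simulation. The regular expression, the fixed interarrival spacing, and the use of bytes served as a proxy for service demand are assumptions for illustration; the authors' simulator is not described at this level of detail.

```python
import heapq
import re

# Common Log Format line, e.g.:
# host - - [28/Aug/1995:00:00:34 -0400] "GET /index.html HTTP/1.0" 200 1839
CLF = re.compile(r'\S+ \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)')

def parse_clf_line(line):
    """Extract the request and transferred bytes from one trace line;
    bytes served is used here as a crude proxy for service demand."""
    m = CLF.match(line)
    if not m or m.group("bytes") == "-":
        return None
    return m.group("req"), int(m.group("bytes"))

def simulate(requests, service_rate_bytes_per_sec=1e6, interarrival=0.01):
    """Minimal single-server event-driven loop: events are (time, kind, payload)."""
    events = []                                   # min-heap of pending events
    for i, (req, size) in enumerate(requests):
        heapq.heappush(events, (i * interarrival, "arrive", size))
    clock, server_free_at, response_times = 0.0, 0.0, []
    while events:
        clock, kind, size = heapq.heappop(events)
        if kind == "arrive":
            start = max(clock, server_free_at)    # wait if the server is busy
            service = size / service_rate_bytes_per_sec
            server_free_at = start + service
            response_times.append(server_free_at - clock)
    return response_times

line = 'host - - [28/Aug/1995:00:00:34 -0400] "GET /index.html HTTP/1.0" 200 1839'
print(simulate([parse_clf_line(line)]))
```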
Simulation • Terms • Mean response time: the time between the acceptance of a request and the completion of its service • Slowdown: response time / service time
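As a concrete reading of these definitions, the snippet below computes both metrics from per-request timestamps; the record layout and the numbers are hypothetical.

```python
def mean_response_time(records):
    """records: list of (arrival_time, completion_time, service_time) tuples.
    Response time = completion - arrival (acceptance to completion)."""
    return sum(done - arrive for arrive, done, _ in records) / len(records)

def mean_slowdown(records):
    """Slowdown of a request = response time / service time."""
    return sum((done - arrive) / svc for arrive, done, svc in records) / len(records)

# Hypothetical numbers: a request accepted at t=0, finished at t=3, needing 1 s
# of service has response time 3 s and slowdown 3.
records = [(0.0, 3.0, 1.0), (1.0, 2.0, 0.5)]
print(mean_response_time(records), mean_slowdown(records))   # 2.0 2.5
```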
Effectiveness of Prioritized Scheduling • Results • As server utilization increases, response time grows much faster at high utilization • High priority requests incur low delay even when the system approaches full utilization
High Priority Task Response Time • Results • As the high priority ratio increases, the curve approaches that of the original non-prioritized system • The margin of benefit obtained from differentiating service therefore diminishes • A proper high priority ratio is needed.
Low Priority Task Response Time • Results • As the high priority ratio increases, the utilization available to low priority tasks decreases • That is, low priority task performance gets worse.
Task Assignment Schemes • Types of task assignment schemes (sketched below) • RR (Round Robin) • SQF (Shortest Queue First) • E_SQF (Enhanced SQF) • Result • E_SQF performs best, but there is no significant difference from SQF under high load
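For reference, the sketch below illustrates RR and SQF assignment; E_SQF's exact enhancement is not detailed in this summary, so it is omitted. Server names and request strings are placeholders.

```python
import itertools

class Server:
    def __init__(self, name):
        self.name = name
        self.queue = []                 # pending requests on this task server

def round_robin(servers):
    """RR: cycle through the task servers regardless of their load."""
    cycle = itertools.cycle(servers)
    def assign(request):
        s = next(cycle)
        s.queue.append(request)
        return s
    return assign

def shortest_queue_first(servers):
    """SQF: send each request to the server with the fewest queued requests."""
    def assign(request):
        s = min(servers, key=lambda srv: len(srv.queue))
        s.queue.append(request)
        return s
    return assign

servers = [Server("S1"), Server("S2")]
assign_sqf = shortest_queue_first(servers)
for req in ["r1", "r2", "r3"]:
    print("SQF:", req, "->", assign_sqf(req).name)   # r1->S1, r2->S2, r3->S1
```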
Analysis • Objective • To derive a performance guideline for high priority requests • By calculating a high priority task's mean waiting time
Analysis • Notations used in the study • Wh: mean waiting time of a high priority task • W1: delay due to the request currently in service • W2: delay due to queued high priority requests • Ph, Pl: fractions of high and low priority requests • Xh, Xl, X: mean service times of high priority, low priority, and all requests • Nqh: mean number of queued high priority requests • Ah: arrival rate of high priority requests
Analysis • Waiting time of a high priority task • Wh = W1 + W2 • W1 = Ph·Xh + Pl·Xl (delay due to the request currently in service) • W2 = X·Nqh, with Nqh = Ah·Wh (Little's law) • Hence Wh = W1 + X·Ah·Wh, i.e., Wh = W1 / (1 − X·Ah) = (Ph·Xh + Pl·Xl) / (1 − X·Ah) • Since the upper bound of W1 is X, Wh ≤ X / (1 − X·Ah) • The mean waiting time of a high priority task depends on the high-priority system utilization X·Ah: a proper high priority ratio is needed.
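To make the bound concrete, here is a small hypothetical calculation (all numbers assumed, not taken from the paper) showing how the upper bound X / (1 − X·Ah) grows as the high-priority utilization approaches 1.

```python
# Hypothetical illustration of the bound Wh <= X / (1 - X * Ah):
# with a mean service time X of 10 ms, the bound grows quickly as the
# high-priority utilization X * Ah approaches 1.
X = 0.010                                   # mean service time in seconds (assumed)
for Ah in (10, 50, 90):                     # high-priority arrival rate in req/s (assumed)
    rho_h = X * Ah                          # high-priority utilization
    bound = X / (1 - rho_h)
    print(f"Ah={Ah:3d}/s  rho_h={rho_h:.2f}  Wh <= {bound * 1000:.1f} ms")
# Output: 11.1 ms at rho_h=0.10, 20.0 ms at 0.50, 100.0 ms at 0.90
```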
Conclusions • Service differentiation does improve the response time of high priority tasks significantly, with a comparatively low penalty to low priority tasks. • The upper bound of the waiting time depends on the arrival rate of tasks with equal or higher priority and on the service time. • The combination of selective discard and priority queuing is necessary and sufficient to provide predictable services in an Internet server.