Evaluating System Performance in Gigabit Networks King Fahd University of Petroleum and Minerals (KFUPM) INFORMATION AND COMPUTER SCIENCE DEPARTMENT Dr. K. Salah IEEE LCN 2003 Bonn, Germany
Agenda • Introduction • Receive-livelock Phenomenon • Contribution • Modeling and Analysis • Analysis Verification and Validation • Numerical Examples • Conclusions and Future Work • Q&A
Introduction • High-speed network devices are widely deployed • Gigabit Ethernet technology supports raw bandwidths of 1 Gb/s and 10 Gb/s • The network performance bottleneck has shifted to servers and end hosts • The large increase in bandwidth can degrade OS performance because of the interrupt overhead generated by incoming gigabit traffic • Because interrupt handling has higher priority than other processing, this can lead to the receive-livelock phenomenon
Packet Arrival Rate - Slow [Figure: incoming network traffic passes through the protocol stack to the applications on the host system]
Packet Arrival Rate - Fast [Figure: incoming traffic overwhelms the host; the protocol stack and applications are starved (marked X) and packets never reach the applications]
Receive-livelock Phenomenon [Figure: throughput vs. offered load; an ideal system tracks the offered load, an acceptable system peaks at the maximum loss-free receive rate (MLFRR) and degrades gracefully, and a livelocked system's throughput collapses toward zero as load grows] (Source: K. K. Ramakrishnan, 1993)
Our Contribution • Many solutions exist for resolving receive livelock and minimizing interrupt overhead • No analytical study exists of the impact of interrupt overhead on system performance, nor an analytical model of receive livelock • As opposed to simulation and prototyping, an analytical study provides a quick and easy way to predict system behavior and to select proper design parameters for the CPU, OS, and NIC: • Buffer size in the NIC • Processing power • Application and kernel scheduling • Interrupt handling • Use of DMA or PIO
Modeling and Analysis • The analytical models are based on queueing theory and Markov processes • The arrival rate may exceed the service rate, so the system can be overloaded • Priority queues with preemption cannot be used, because interrupt overhead is not incurred for every packet arrival • Instead we use a mean effective service rate: the rate at which the kernel's protocol stack processes packets with no interrupt disruption, scaled by the fraction of CPU time left after interrupt handling (service rate × % CPU availability) • Three analytical models (see the sketch below): • An ideal system in which interrupt overhead is ignored • PIO – NICs with no DMA engines: arrived packets are copied from the NIC buffer to host kernel memory by the CPU, so ISR handling is long • DMA – NICs with DMA engines: copying of arrived packets is performed by the DMA engines, so ISR handling is very short
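To make the effective-service-rate idea concrete, here is a minimal Python sketch. It is an illustration of the scaling argument only, not the paper's exact analytical model; the parameter names (lam, mu, r) and all numeric values are assumptions chosen for the example.

```python
# Minimal sketch of the "mean effective service rate" idea; an illustration,
# NOT the paper's exact analytical model.  Assumed parameters:
#   lam : Poisson packet arrival rate (packets/s)
#   mu  : protocol-stack service rate with no interrupt disruption (packets/s)
#   r   : ISR service rate, so 1/r is the CPU cost of one interrupt
# PIO is approximated with a long ISR (small r), DMA with a short ISR (large r).

def effective_service_rate(lam, mu, r):
    """Service rate left for protocol processing after ISR handling.

    ISR handling preempts protocol processing, so roughly min(1, lam/r)
    of the CPU is spent in interrupt context; the remainder is available
    to the kernel's protocol stack.
    """
    cpu_availability = max(0.0, 1.0 - lam / r)
    return mu * cpu_availability

def throughput(lam, mu, r=None):
    """System throughput: ideal when r is None, interrupt-limited otherwise."""
    mu_eff = mu if r is None else effective_service_rate(lam, mu, r)
    return min(lam, mu_eff)

if __name__ == "__main__":
    mu = 100_000                        # packets/s the stack can handle undisturbed
    r_pio, r_dma = 120_000, 1_000_000   # long vs. very short ISR (assumed values)
    print("   lam    ideal      PIO      DMA")
    for lam in (20_000, 60_000, 100_000, 115_000):
        print(f"{lam:>7} {throughput(lam, mu):>8.0f} "
              f"{throughput(lam, mu, r_pio):>8.0f} "
              f"{throughput(lam, mu, r_dma):>8.0f}")
```

Running the sketch reproduces the qualitative behavior of the earlier figure: at light load the ideal, PIO, and DMA throughputs coincide, while near saturation the PIO throughput collapses (receive livelock) and the DMA throughput stays close to ideal.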
Analysis Verification and Validation • Special cases were verified analytically • The models were verified by simulation (see the sketch below) • The reported experimental findings show that our analytical models are valid and give a good approximation
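The slides do not give the simulation details; the following toy time-slotted simulation is included only to illustrate the kind of cross-check described, with ISR work strictly preempting protocol processing and packets beyond the NIC buffer dropped. All parameters (buffer size, slot width, ISR cost) are assumptions.

```python
import random

# Toy time-slotted simulation, purely illustrative; NOT the paper's simulator.

def simulate(lam, mu, isr_cost, buf_size=64, sim_time=1.0, dt=1e-6, seed=1):
    """Measured throughput (delivered packets/s) of a single-CPU host.

    lam      : Poisson arrival rate (packets/s), approximated per slot
    mu       : protocol-stack service rate without interruption (packets/s)
    isr_cost : CPU seconds consumed by the ISR for each arriving packet
    """
    rng = random.Random(seed)
    pending_isr = 0.0   # outstanding ISR work, in seconds of CPU time
    backlog = 0         # packets in kernel memory (including one in service)
    work_left = 0.0     # remaining service time of the packet in service
    delivered = 0
    for _ in range(int(sim_time / dt)):
        # Poisson arrivals approximated by one Bernoulli trial per tiny slot.
        if rng.random() < lam * dt:
            pending_isr += isr_cost     # the ISR always runs (it preempts)
            if backlog < buf_size:
                backlog += 1            # otherwise the packet is dropped
        budget = dt
        # ISR work has strict priority over protocol processing.
        spent = min(budget, pending_isr)
        pending_isr -= spent
        budget -= spent
        # Whatever CPU time remains in this slot goes to the protocol stack.
        while budget > 0.0 and backlog > 0:
            if work_left == 0.0:
                work_left = 1.0 / mu    # begin serving the next packet
            spent = min(budget, work_left)
            work_left -= spent
            budget -= spent
            if work_left == 0.0:
                backlog -= 1
                delivered += 1
    return delivered / sim_time

if __name__ == "__main__":
    mu, isr_cost = 100_000, 1 / 120_000   # assumed rates, matching the sketch above
    for lam in (20_000, 60_000, 100_000):
        print(lam, round(simulate(lam, mu, isr_cost)))
```

Comparing such simulated throughputs against the closed-form effective-service-rate values gives the kind of agreement check the slide refers to.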
Conclusions and Future Work • The analytical models give a good approximation • Some of the difference is due to the arrival-traffic distribution • At light load, DMA and PIO yield similar system throughput • With DMA, receive livelock occurs very late, i.e., only at extremely high arrival rates • The analysis can be used to study CPU availability, system delay, queue size, etc. • As further work: • Effect of bursty traffic instead of Poisson • Performance of proposed solutions that minimize interrupt overhead: • Interrupt coalescing • Polling • Disabling and enabling interrupts • Jumbo frames