CS 501: Software Engineering Lecture 22 Performance of Computer Systems
Administration Final presentations May 5, 6 and 7. Sign up now. If these times are not possible, you can ask for an earlier time. Final reports Due 5:00 p.m. May 13.
Performance of Computer Systems In most computer systems, the cost of people is much greater than the cost of hardware. Yet performance is important: • Future loads may be much greater than predicted. • A single bottleneck can slow down an entire system.
Predicting Performance Change: Moore's Law Original version: The density of transistors in an integrated circuit will double every year. (Gordon Moore, Intel, 1965) Current version: Cost/performance of silicon chips doubles every 18 months.
Moore's Law: Rules of Thumb Planning assumptions: • Every year, cost/performance of silicon chips improves 25%. • Every year, cost/performance of magnetic media improves 30%. • 10 years = 100:1; 20 years = 10,000:1 (the ratios implied by the 18-month doubling rule).
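A quick calculation (a sketch in Python, not part of the original slides) shows how the planning assumptions compound, and that the 100:1 and 10,000:1 figures correspond to the 18-month doubling rule:

```python
# Compound the planning assumptions over 10 and 20 years.
for years in (10, 20):
    chips = 1.25 ** years            # silicon: 25% improvement per year
    media = 1.30 ** years            # magnetic media: 30% per year
    doubling = 2 ** (years / 1.5)    # "doubles every 18 months"
    print(f"{years} years: chips {chips:.0f}:1, media {media:.0f}:1, "
          f"18-month doubling {doubling:.0f}:1")
# 10 years -> chips 9:1, media 14:1, doubling ~101:1
# 20 years -> chips 87:1, media 190:1, doubling ~10,000:1
```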
Moore's Law and System Design Design system: 2004. Production use: 2007. Withdrawn from production: 2017. Relative values (2004 = 1):
                    2004    2007    2017
Processor speeds      1      1.9     28
Memory sizes          1      1.9     28
Disk capacity         1      2.2     51
System cost           1      0.4     0.01
Moore's Law Example Will this be a typical personal computer?
              2004         2017
Processor     1 GHz        25 GHz
Memory        512 MB       14 GB
Disk          50 GB        2 TB
Network       100 Mb/s     1 Gb/s
Surely there will be some fundamental changes in how this power is packaged and used.
Parkinson's Law Original: Work expands to fill the time available. (C. Northcote Parkinson) Planning assumptions: (a) Demand will expand to use all the hardware available. (b) Low prices will create new demands. (c) Your software will be used on equipment that you have not envisioned.
False Assumptions from the Past The Unix file system will never exceed 2 Gbytes (2^31 bytes). AppleTalk networks will never have more than 256 hosts (8-bit node addresses, 2^8). GPS software will not last 1024 weeks (a 10-bit week counter). Nobody at Dartmouth will ever earn more than $10,000 per month. etc., etc., .....
Moore's Law and the Long Term [Graph: exponential growth from 1965 through 2004 and beyond. At what level will the trend stop, and when? Within your working life?]
Predicting System Performance • Mathematical models • Simulation • Direct measurement • Rules of thumb All require detailed understanding of the interaction between software and hardware systems.
Understand the Interactions between Hardware and Software Example: fetching http://www.cs.cornell.edu/ involves the domain name service, a TCP connection, and an HTTP get, exchanged between the client and several servers.
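The same steps can be traced from code (a minimal sketch using only the Python standard library; the exact responses depend on the server's current configuration):

```python
import socket
import http.client

host = "www.cs.cornell.edu"

# 1. Domain name service: resolve the host name to an IP address.
address = socket.gethostbyname(host)
print("DNS:", host, "->", address)

# 2. TCP connection and 3. HTTP get: connect and request the home page.
connection = http.client.HTTPConnection(host, 80, timeout=10)
connection.request("GET", "/")
response = connection.getresponse()
print("HTTP:", response.status, response.reason)
connection.close()
```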
Understand the Interactions between Hardware and Software :Thread :Toolkit :ComponentPeer target:HelloWorld run run callbackLoop handleExpose paint
Understand the Interactions between Hardware and Software [UML activity diagram: from the start state, a fork into parallel activities (Decompress, Stream audio, Stream video), then a join and the stop state.]
Look for Bottlenecks Possible areas of congestion: • Network load • Database access (how many joins to build a record?) • Locks and sequential processing CPU performance is rarely a factor, except in mathematical algorithms. More likely bottlenecks are: • Reading data from disk (including paging) • Moving data from memory to CPU
Look for Bottlenecks: Utilization utilization = mean service time / mean inter-arrival time Utilization is the proportion of the capacity of a service that is used on average. When the utilization of any hardware component exceeds 30%, be prepared for congestion. Peak loads and temporary increases in demand can be much greater than the average.
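As a small worked example (the numbers are illustrative): a service with a 20 ms mean service time that receives a request every 50 ms on average is 40% utilized, already past the 30% rule of thumb.

```python
def utilization(mean_service_time, mean_interarrival_time):
    """Proportion of a service's capacity that is in use on average."""
    return mean_service_time / mean_interarrival_time

# Illustrative numbers: 20 ms per request, one request every 50 ms.
print(f"utilization = {utilization(0.020, 0.050):.0%}")   # 40%
```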
Mathematical Models: Queues [Diagram of a single-server queue: customers arrive, wait in line, receive service, and depart.]
Queues [Diagram of a multi-server queue: customers arrive, wait in a single line, are served by one of several servers, and depart.]
Mathematical Models: Queueing Theory Good estimates of congestion can be made for single-server queues with: • arrivals that are independent, random events (Poisson process) • service times that follow families of distributions (e.g., negative exponential, gamma) Many of the results can be extended to multi-server queues.
Behavior of Queues: Utilization [Graph: mean delay plotted against utilization from 0 to 1; delay grows slowly at low utilization and rises sharply as utilization approaches 1.]
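The shape of this curve follows from the standard M/M/1 result, mean time in system W = (1/μ) / (1 - ρ). A short sketch (the 10 ms mean service time is purely illustrative):

```python
# Mean time in an M/M/1 system: W = service_time / (1 - utilization).
service_time = 0.010   # 10 ms mean service time (illustrative)

for rho in (0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.99):
    delay = service_time / (1 - rho)
    print(f"utilization {rho:4.0%}: mean delay {delay * 1000:6.1f} ms")
```

At 30% utilization the mean delay is about 14 ms; at 99% it is a full second, which is why congestion appears long before a component is fully loaded.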
Fixing Bad Performance If a system performs badly, begin by identifying the cause: • Instrumentation. Add timers to the code. Often this will reveal that the delays are centered in one specific part of the system. • Test loads. Run the system with varying loads, e.g., high transaction rates, large input files, many users, etc. This may reveal the characteristics of when the system runs badly. • Design and code reviews. Have a team review the system design and suspect sections of code for performance problems. This may reveal an algorithm that is running very slowly, e.g., a sort, locking procedure, etc. Fix the underlying cause or the problem will return!
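For the instrumentation step, a timer can be as simple as a context manager wrapped around the suspect code (a minimal sketch; the wrapped function names are hypothetical):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print the wall-clock time spent inside the enclosed block."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed * 1000:.1f} ms")

# Hypothetical usage: wrap the parts of the system you suspect.
# with timed("database query"):
#     run_query()
# with timed("report generation"):
#     build_report()
```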
Techniques for Eliminating Bottlenecks Serial and Parallel Processing • Single thread v. multi-thread, e.g., Unix fork • Granularity of locks on data, e.g., record locking • Network congestion, e.g., back-off algorithms (see the sketch below)
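As an illustration of the last item, a sketch of exponential back-off with jitter (the retry counts and delays are illustrative, not prescriptive):

```python
import random
import time

def send_with_backoff(send, max_attempts=5, base_delay=0.05):
    """Retry `send` with exponentially growing, randomized delays."""
    for attempt in range(max_attempts):
        try:
            return send()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            # Double the delay on each attempt, with jitter so that many
            # clients do not retry in lockstep and re-congest the network.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```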
Measurements on Operational Systems • Benchmarks: Run the system on standard problem sets, sample inputs, or a simulated load. • Instrumentation: Clock specific events. If you have any doubt about the performance of part of a system, experiment with a simulated load, as in the sketch below.
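A simulated load can be as simple as many concurrent workers repeating one operation while the timings are recorded (a sketch; `do_transaction` is a placeholder for the operation under test):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def do_transaction():
    """Placeholder for the operation under test."""
    time.sleep(0.01)

def timed_transaction(_):
    start = time.perf_counter()
    do_transaction()
    return time.perf_counter() - start

# Run 200 transactions across 20 concurrent workers and summarize.
with ThreadPoolExecutor(max_workers=20) as pool:
    times = sorted(pool.map(timed_transaction, range(200)))

print(f"mean {statistics.mean(times) * 1000:.1f} ms, "
      f"95th percentile {times[int(0.95 * len(times))] * 1000:.1f} ms")
```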
Techniques: Simulation Model the system as a set of states and events: advance simulated time, determine which events occurred, update the state and the event list, repeat. Discrete time simulation: time is advanced in fixed steps (e.g., 1 millisecond). Next event simulation: time is advanced to the next event. Events can be simulated by random variables (e.g., arrival of the next customer, completion of disk latency).
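A next-event simulation of the single-server queue is only a few lines: simulated time jumps from one arrival to the next, and the inter-arrival and service times are drawn from exponential distributions (a sketch; the rates are illustrative):

```python
import random

def simulate_queue(arrival_rate, service_rate, num_customers=100_000):
    """Next-event simulation of a single-server queue with Poisson
    arrivals and exponential service times; returns mean time in system."""
    clock = 0.0            # simulated time of the current arrival
    server_free_at = 0.0   # time at which the server finishes its work
    total_time = 0.0
    for _ in range(num_customers):
        clock += random.expovariate(arrival_rate)    # advance to next arrival
        start = max(clock, server_free_at)           # wait if the server is busy
        server_free_at = start + random.expovariate(service_rate)
        total_time += server_free_at - clock         # waiting plus service
    return total_time / num_customers

# Utilization 0.9: queueing theory predicts (1/100) / (1 - 0.9) = 0.1 seconds.
print(simulate_queue(arrival_rate=90.0, service_rate=100.0))
```

Running this gives a mean delay close to the 0.1 seconds predicted by queueing theory, the same kind of agreement reported in the disk array case study below.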
Timescale (per second):
CPU instructions:    1,000,000,000
Disk latency:        60
Disk read:           25,000,000 bytes
Network LAN:         10,000,000 bytes
Dial-up modem:       6,000 bytes
Case Study: Performance of a Disk Array When many transactions use a disk array, each transaction must: • wait for a specific disk platter • wait for the I/O channel • signal to move the heads on the disk platter • wait for the I/O channel • pause for disk rotation • read the data The results from queueing theory, simulation, and direct measurement were in close agreement (within 15%).