This lecture examines the impact of Moore's Law on computer system performance and design strategy, and shows how system behavior can be predicted with mathematical models, simulation, and direct measurement. It introduces queueing theory and simulation modeling for performance analysis, and considers how Moore's Law and Parkinson's Law influence planning in software development and system administration.
CS 501: Software Engineering, Fall 2000. Lecture 19: Performance of Computer Systems
Moore's Law

Original version: The density of transistors in an integrated circuit will double every year. (Gordon Moore, Intel, 1965)

Current version: Cost/performance of silicon chips doubles every 18 months.
Moore's Law and System Design

Growth factors over the life of a system designed in 2000, entering production use in 2003, and withdrawn from production in 2013 (relative to 1 at design time):

                      2000    2003    2013
  Processor speeds:      1     1.9      28
  Memory sizes:          1     1.9      28
  Disk capacity:         1     2.2      51
  System cost:           1     0.4     0.01
Moore's Law: Rules of Thumb

Planning assumptions, every year:
• cost/performance of silicon chips improves 25%
• cost/performance of magnetic media improves 30%

10 years = 100:1
20 years = 10,000:1
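The decade figures follow from the 18-month doubling rule rather than from the single-year percentages: 2^(10/1.5) is roughly 100. A minimal sketch checking the arithmetic, using only the rates quoted above:

    # Compound-growth check of the planning assumptions above.
    for years in (10, 20):
        silicon = 1.25 ** years        # chips improve 25% per year
        magnetic = 1.30 ** years       # magnetic media improve 30% per year
        doubling = 2 ** (years / 1.5)  # doubling every 18 months
        print(f"{years} years: silicon {silicon:,.0f}:1, "
              f"magnetic {magnetic:,.0f}:1, 18-month doubling {doubling:,.0f}:1")

    # Output:
    # 10 years: silicon 9:1, magnetic 14:1, 18-month doubling 102:1
    # 20 years: silicon 87:1, magnetic 190:1, 18-month doubling 10,321:1

So the 100:1 and 10,000:1 rules of thumb track the 18-month doubling; compounding the single-year percentages gives more conservative factors.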
Parkinson's Law

Original: Work expands to fill the time available. (C. Northcote Parkinson)

Planning assumptions:
(a) Demand will expand to use all the hardware available.
(b) Low prices will create new demands.
(c) Your software will be used on equipment that you have not envisioned.
False Assumptions

• A Unix file system will never exceed 2 Gbytes (2^31 bytes).
• AppleTalk networks will never have more than 256 hosts (2^8 addresses).
• GPS software will not last 1,024 weeks (2^10 weeks).
• Nobody at Dartmouth will ever earn more than $10,000 per month.
• etc., etc., ...
Moore's Law and the Long Term

[Chart: exponential growth curve starting in 1965 and continuing past 2000, with annotations asking "What level?" will be reached, "When?", and whether that will be "Within your working life?"]
Predicting System Performance

• Mathematical models
• Simulation
• Direct measurement

All require a detailed understanding of the interaction between software and systems.
Queues

[Diagram: single-server queue. Customers arrive, wait in line, receive service, and depart.]

[Diagram: multi-server queue. Customers arrive, wait in a single line, are served by any one of several servers, and depart.]
Mathematical Models: Queueing Theory

Good estimates of congestion can be made for single-server queues with:
• arrivals that are independent, random events (a Poisson process)
• service times that follow standard families of distributions (e.g., negative exponential, gamma)

Many of these results can be extended to multi-server queues.
Utilization: Rule of Thumb

utilization = mean service time / mean inter-arrival time

When the utilization of any system component exceeds 30%, be prepared for congestion.
Behavior of Queues: Utilization

[Plot: mean delay against utilization (x-axis from 0 to 1). Mean delay stays small at low utilization and rises steeply as utilization approaches 1.]
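A minimal sketch of the shape of that curve, using the standard closed-form result for the simplest case covered by the assumptions above (an M/M/1 queue: Poisson arrivals, exponential service times, one server):

    # Mean time in an M/M/1 system, measured in mean service times:
    # W = 1 / (1 - rho), where rho is the utilization.
    for rho in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
        print(f"utilization {rho:4.2f}: mean delay {1 / (1 - rho):6.1f} x service time")

    # utilization 0.30: mean delay    1.4 x service time  (rule-of-thumb threshold)
    # utilization 0.90: mean delay   10.0 x service time
    # utilization 0.99: mean delay  100.0 x service time

The 1/(1 - rho) factor is why delay grows without bound as utilization approaches 1, and why the 30% rule of thumb leaves a wide safety margin.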
Simulation

Model the system as a set of states and events:
• advance simulated time
• determine which events occurred
• update the state and the event list
• repeat

Discrete time simulation: time is advanced in fixed steps (e.g., 1 millisecond).
Next event simulation: time is advanced to the next event.

Events can be simulated by random variables (e.g., arrival of the next customer, completion of disk latency).
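A minimal next-event simulation of a single-server queue, as a sketch of the loop above; the arrival rate, service rate, and customer count are illustrative choices, not figures from the lecture:

    import heapq
    import random

    def mm1_next_event(arrival_rate=1.0, service_rate=2.0, n_customers=100_000):
        """Next-event simulation of a single-server queue: keep an event
        list, advance time to the next event, update the state, repeat."""
        rng = random.Random(1)
        events = [(rng.expovariate(arrival_rate), "arrival")]   # the event list
        waiting = []               # arrival times of customers in line
        server_busy = False
        arrivals = departures = 0
        total_wait = 0.0
        while departures < n_customers:
            now, kind = heapq.heappop(events)          # advance simulated time
            if kind == "arrival":
                arrivals += 1
                if arrivals < n_customers:             # schedule the next arrival
                    heapq.heappush(events,
                                   (now + rng.expovariate(arrival_rate), "arrival"))
                if server_busy:
                    waiting.append(now)                # join the line
                else:                                  # server free: start service
                    server_busy = True
                    heapq.heappush(events,
                                   (now + rng.expovariate(service_rate), "departure"))
            else:                                      # a service completion
                departures += 1
                if waiting:                            # serve the next in line
                    total_wait += now - waiting.pop(0)
                    heapq.heappush(events,
                                   (now + rng.expovariate(service_rate), "departure"))
                else:
                    server_busy = False
        return total_wait / n_customers

    # Mean wait in line; M/M/1 theory predicts rho / (mu - lambda) = 0.5 here.
    print(mm1_next_event())

Running the simulation and comparing against the closed-form queueing result is exactly the kind of cross-check described later in this lecture for the disk array example.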
Timescale: Operations per Second

  CPU        instructions:     400,000,000
  Disk       latency:                   60
             read:              25,000,000 bytes
  Network    LAN:               10,000,000 bytes
             dial-up modem:          6,000 bytes
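A small worked example with the rates above: the time to move 1 Mbyte (a size chosen purely for illustration) through each component:

    # Time to move 1 Mbyte (1,000,000 bytes) at the rates above.
    for name, bytes_per_sec in [("disk read", 25_000_000),
                                ("LAN", 10_000_000),
                                ("dial-up modem", 6_000)]:
        print(f"{name}: {1_000_000 / bytes_per_sec:.2f} seconds")

    # disk read: 0.04 seconds
    # LAN: 0.10 seconds
    # dial-up modem: 166.67 seconds

Note also that a single disk access (1/60 of a second) costs the equivalent of several million CPU instructions, which is why waits for disks and networks dominate system performance.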
Measurements on Operational Systems

• Benchmarks: run the system on standard problem sets, sample inputs, or a simulated load.
• Instrumentation: clock specific events.
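A minimal sketch of instrumentation in the sense above: clock a specific event by reading a timer before and after it (the workload here is a placeholder):

    import time

    def clocked(fn, *args):
        """Clock a specific event: measure elapsed wall-clock time around it."""
        start = time.perf_counter()
        result = fn(*args)
        elapsed = time.perf_counter() - start
        print(f"{fn.__name__}: {elapsed * 1000:.3f} ms")
        return result

    clocked(sorted, list(range(1_000_000, 0, -1)))   # placeholder workload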
Serial and Parallel Processing

• Single thread v. multi-thread (e.g., Unix fork)
• Granularity of locks on data (e.g., record locking)
• Network congestion (e.g., back-off algorithms; see the sketch below)
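A minimal sketch of one back-off algorithm of the kind mentioned in the last item: binary exponential back-off, where a sender that keeps colliding waits a random, exponentially growing number of time slots (the retry cap is illustrative):

    import random

    def backoff_slots(attempt, cap=10):
        """Binary exponential back-off: after the n-th collision, wait a
        random number of slots in the range 0 .. 2**min(n, cap) - 1."""
        return random.randint(0, 2 ** min(attempt, cap) - 1)

    for attempt in range(1, 9):
        print(f"collision {attempt}: wait {backoff_slots(attempt)} slots")

Randomizing the wait spreads retries out in time, so competing senders are unlikely to collide again on their next attempts.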
Example: Performance of a Disk Array

Each transaction must:
• wait for the specific disk platter
• wait for an I/O channel
• signal to move the heads on the disk platter
• wait for an I/O channel
• pause for disk rotation
• read the data

Close agreement (within 15%) between the results from queueing theory, simulation, and direct measurement.
The Software Process

• Requirements Definition
• System and Software Design
• Programming and Unit Testing
• Integration and System Testing
• Operation and Maintenance