
L7: Performance


Presentation Transcript


  1. L7: Performance Sam Madden madden@csail.mit.edu 6.033 Spring 2014

  2. Overview • Technology fixes some performance problems • Ride the technology curves if you can • Some performance requirements require thinking • An increase in load may need redesign: • Batching • Caching • Scheduling • Concurrency • Important numbers

  3. Moore’s law [figure: cost per transistor and transistors per die] “Cramming More Components Onto Integrated Circuits”, Electronics, April 1965

  4. Transistors/die doubles every ~18 months

  5. Technology Begets Technology • Tremendous investment in technology • Technology improvement is proportional to technology • Example: processors • Better processors → better layout tools → better processors • Mathematically: d(technology)/dt ≈ technology • technology ≈ e^t
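The growth law on the slide, d(technology)/dt ≈ technology, solves to technology ≈ e^t. A minimal sketch of what that exponent means in practice, assuming Moore's 18-month doubling period (the 10-year horizon is an arbitrary choice for illustration):

```python
import math

# If d(tech)/dt = k * tech, then tech(t) = tech(0) * e^(k*t).
# Moore's law: transistor count doubles every ~18 months (1.5 years),
# so k = ln(2) / 1.5 per year.
k = math.log(2) / 1.5

def transistors(t_years, initial=1.0):
    """Transistor count after t_years, relative to the starting count."""
    return initial * math.exp(k * t_years)

print(round(transistors(10)))  # doubling every 18 months -> ~100x per decade
```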

  6. CPU performance The end of scaling?

  7. Other Technology Follows Moore’s Law [figure: DRAM density]

  8. Disk: Price per GByte drops at ~30-35% per year

  9. Incommensurate Scaling: Latency improves slowly [figure: improvement w.r.t. year #1 vs. year # — Moore’s law (~70% per year), DRAM access latency (~7% per year), speed of light (0% per year)]
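The gap between these curves compounds quickly. A small sketch of the cumulative effect (the annual rates are the slide's; the 10-year horizon is an arbitrary example):

```python
def improvement(rate_per_year, years):
    """Cumulative improvement factor for a compounding annual rate."""
    return (1 + rate_per_year) ** years

years = 10
print(f"Moore's law (~70%/yr):       {improvement(0.70, years):6.0f}x")
print(f"DRAM access latency (7%/yr): {improvement(0.07, years):6.1f}x")
print(f"Speed of light (0%/yr):      {improvement(0.00, years):6.1f}x")
```

After a decade, a 70%-per-year curve has improved ~200x while a 7%-per-year curve has not even doubled, which is why latency becomes the bottleneck.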

  10. Approach to performance problems • Users complain the system is too slow • Measure the system to find the bottleneck • Improve performance of the bottleneck • Add hardware, change system design, … [diagram: clients → Internet → server → disk]

  11. Heavily-loaded systems • Once the system is busy, requests queue up at the bottleneck [figures: latency vs. users, throughput vs. users]
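The slide's latency curve bends sharply upward as the system saturates. The standard M/M/1 queueing formula (not derived in the lecture; used here only to illustrate the shape) captures this:

```python
def avg_latency(service_time_ms, utilization):
    """M/M/1 queue: average time in system = service_time / (1 - utilization)."""
    assert 0 <= utilization < 1
    return service_time_ms / (1 - utilization)

# With a hypothetical 10 ms bottleneck, latency explodes near saturation:
for u in (0.5, 0.9, 0.99):
    print(f"utilization {u:.2f}: {avg_latency(10, u):7.1f} ms")
```

Doubling the load from 50% to near-100% utilization multiplies latency by ~50x, even though throughput barely changes.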

  12. Case study: I/O bottleneck [figure: disk capacity, # transistors/CPU, memory density, disk seek rate]

  13. Hitachi 7K400

  14. Inside a disk 7200 rpm 8.3 ms per rotation

  15. Top view: 88,283 tracks per platter, 576 to 1170 sectors per track
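These geometry numbers translate directly into latency and capacity figures. A quick sketch (the rpm and sector counts are from the slides; the 512-byte sector size is an assumption):

```python
# Rotational latency from spindle speed (7200 rpm, from the slide):
rpm = 7200
full_rotation_ms = 60_000 / rpm             # 8.33 ms per full rotation
avg_rot_latency_ms = full_rotation_ms / 2   # on average, wait half a turn

# Track capacity, assuming conventional 512-byte sectors:
inner_track_kb = 576 * 512 / 1024           # ~288 KB on the shortest tracks
outer_track_kb = 1170 * 512 / 1024          # ~585 KB on the longest tracks

print(f"{full_rotation_ms:.2f} ms/rotation, "
      f"{avg_rot_latency_ms:.2f} ms average rotational latency")
print(f"track capacity: {inner_track_kb:.0f} KB to {outer_track_kb:.0f} KB")
```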

  16. Simple server
      while (true) {
        wait for request
        data = read(name)
        response = compute(data)
        send response
      }
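The slide's pseudocode can be sketched as a runnable single-threaded server in Python; compute(), the port, and the request format here are placeholders for illustration, not part of the lecture:

```python
import socket

def compute(data: bytes) -> bytes:
    """Placeholder for the real per-request work."""
    return data.upper()

def serve_forever(port=8080):
    """Single-threaded loop: the server sits idle during each disk read."""
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen()
        while True:                                   # wait for request
            conn, _ = srv.accept()
            with conn:
                name = conn.recv(1024).decode().strip()
                with open(name, "rb") as f:           # data = read(name)
                    data = f.read()
                conn.sendall(compute(data))           # send response
```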

  17. I/O concurrency motivation • Simple server alternates between waiting for disk and computing:
      CPU:  --- A ---           --- B ---
      Disk:           --- A ---           --- B ---

  18. Use several threads to overlap I/O • Thread 1 works on A and Thread 2 works on B, keeping both CPU and disk busy:
      CPU:  --- A --- --- B --- --- C --- …
      Disk:       --- A --- --- B --- …
      • Other benefit: fast requests can get ahead of slow requests • Downside: need locks, etc.!
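The overlap in the slide's timeline can be demonstrated with one thread per request; here time.sleep stands in for a blocking disk read (an assumption for illustration), during which the thread releases the CPU:

```python
import threading
import time

def handle(request):
    """One request: a blocking 'disk read' followed by CPU work."""
    time.sleep(0.05)        # stand-in for a 50 ms disk read
    return request * 2      # stand-in for the compute step

results = {}

def worker(req):
    results[req] = handle(req)

start = time.time()
threads = [threading.Thread(target=worker, args=(r,)) for r in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# The three 50 ms waits overlap: total time is ~50 ms, not ~150 ms.
print(f"{elapsed * 1000:.0f} ms, results = {results}")
```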

  19. SSD (Flash) vs Disk

  20. Important numbers • Latency: • 0.000001 ms: instruction time (1 ns) • 0.0001 ms: DRAM load (100 ns) • 0.1 ms: LAN network • 10 ms: random disk I/O • 25 ms: Internet east -> west coast • Throughput: • 10,000 MB/s: DRAM • 1,000 MB/s: LAN (or 100 MB/s) • 100 MB/s: sequential disk (or 500 MB/s) • 1 MB/s: random disk I/O
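These numbers support quick back-of-envelope estimates, e.g. reading 1 GB from disk sequentially versus as random 4 KB I/Os (the 4 KB request size is an assumed example, not from the slides):

```python
# Reading 1 GB (1000 MB) from disk, using the slide's numbers:
seq_seconds = 1000 / 100               # sequential: 100 MB/s -> 10 s

# The same gigabyte as random 4 KB reads at 10 ms each:
num_ios = 1000 * 1000 / 4              # 250,000 four-KB I/Os
rand_seconds = num_ios * 0.010         # 2500 s, i.e. ~42 minutes

print(f"sequential: {seq_seconds:.0f} s, random 4 KB I/O: {rand_seconds:.0f} s")
```

A 250x gap from access pattern alone, which is why batching and scheduling I/O matter.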

  21. Summary • Technology fixes some performance problems • If performance problem is intrinsic: • Batching • Caching • Concurrency • Scheduling • Important numbers
