Lecture 4.1 Memory Hierarchy: Introduction
Learning Objectives
• Outline the memory hierarchy
• Explain the principle of locality
  • Spatial locality
  • Temporal locality
• Understand the abstract view of cache in computer organization
  • Cache is transparent to the processor
• Calculate hit rate and miss rate
Coverage
• Textbook Chapter 5.1
[Figure: the processor-memory performance gap. Processor performance ("Moore's Law") improves ~55%/year (2× every 1.5 years), while DRAM improves ~7%/year (2× every 10 years), so the gap grows ~50%/year.]
The “Memory Wall”
• Processor vs. DRAM speed disparity continues to grow: the number of clocks per DRAM access keeps rising relative to clocks per instruction
• Good memory hierarchy (cache) design is increasingly important to overall performance
Memory Technology
• Static RAM (SRAM): 0.5 ns – 2.5 ns, $2000 – $5000 per GB
• Dynamic RAM (DRAM): 50 ns – 70 ns, $20 – $75 per GB
• Magnetic disk: 5 ms – 20 ms, $0.20 – $2 per GB
• Ideal memory: the access time of SRAM with the capacity and cost/GB of disk
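To see why no single technology suffices, the slide's per-GB prices can be plugged into a quick calculation; a minimal sketch, assuming the low end of each quoted price range:

```python
# Cost of provisioning ~1 TB entirely from each technology, using the
# low end of the per-GB price ranges quoted on the slide (an assumption
# for illustration; actual prices vary).
COST_PER_GB = {"SRAM": 2000, "DRAM": 20, "disk": 0.20}

capacity_gb = 1000  # roughly 1 TB

for tech, cost_per_gb in COST_PER_GB.items():
    # Total cost scales linearly with capacity at a fixed $/GB.
    print(f"{tech}: ${cost_per_gb * capacity_gb:,.2f} for {capacity_gb} GB")
```

A terabyte of pure SRAM would cost millions of dollars, which is why hierarchies combine a little fast memory with a lot of cheap memory.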
Memory Hierarchy
• [Figure: the memory-hierarchy pyramid]
A Typical Memory Hierarchy
• Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, while operating at the speed offered by the fastest technology
[Figure: on-chip components (register file, datapath, control, instruction and data caches, ITLB and DTLB), a second-level cache (SRAM), main memory (DRAM), and secondary memory (disk)]
Speed (cycles): ½ → 1s → 10s → 100s → 10,000s
Size (bytes): 100s → 10Ks → Ms → Gs → Ts
Cost per byte: highest → lowest
Inside the Processor • AMD Barcelona: 4 processor cores
Principle of Locality
• Programs access a small proportion of their address space at any time
• Temporal locality (locality in time)
  • Items accessed recently are likely to be accessed again soon
  • e.g., instructions in a loop, induction variables
  • Keep most recently accessed items in the cache
• Spatial locality (locality in space)
  • Items near those accessed recently are likely to be accessed soon
  • e.g., sequential instruction access, array data
  • Move blocks consisting of contiguous words closer to the processor
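The two access patterns above can be contrasted with a small sketch (not from the slides). Summing a 2-D array in row-major order touches consecutive addresses (good spatial locality), while column-major order strides a whole row between accesses (poor spatial locality); in a language like C, where rows are stored contiguously, the first pattern uses every word of each fetched cache block:

```python
# Illustrative sketch: the same matrix summed with two access orders.
# In row-major storage, sum_row_major walks adjacent addresses;
# sum_col_major jumps a full row between accesses.
N = 4
matrix = [[r * N + c for c in range(N)] for r in range(N)]

def sum_row_major(m):
    total = 0
    for row in m:            # consecutive elements of a row are adjacent
        for x in row:        # -> each fetched block is fully used
            total += x
    return total

def sum_col_major(m):
    total = 0
    for c in range(len(m[0])):    # large stride between accesses
        for r in range(len(m)):   # -> little of each fetched block is reused
            total += m[r][c]
    return total
```

Both functions return the same sum; only the memory access pattern, and hence the cache behavior, differs. (Python lists do not guarantee contiguous element storage, so this is a pattern illustration rather than a measurement.)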
Taking Advantage of Locality
• Memory hierarchy
• Store everything on disk
• Copy recently accessed (and nearby) items from disk to smaller DRAM memory
  • Main memory
• Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory
  • Cache memory attached to CPU
Chapter 5 — Large and Fast: Exploiting Memory Hierarchy
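The copy-up behavior described above can be sketched as a tiny two-level model (a hypothetical illustration, not the textbook's design): on a miss, the block is fetched from the slow lower level into the small fast upper level, evicting the least recently used block when full.

```python
from collections import OrderedDict

class TwoLevel:
    """Sketch of a small fast upper level backed by a large slow lower level."""

    def __init__(self, upper_capacity):
        self.upper = OrderedDict()   # small, fast level (e.g., cache)
        self.capacity = upper_capacity
        self.lower = {}              # large, slow level (e.g., DRAM or disk)
        self.hits = 0
        self.misses = 0

    def read(self, addr):
        if addr in self.upper:               # hit: satisfied by the upper level
            self.hits += 1
            self.upper.move_to_end(addr)     # mark as most recently used
        else:                                # miss: fetch from lower, copy up
            self.misses += 1
            if len(self.upper) >= self.capacity:
                self.upper.popitem(last=False)       # evict least recently used
            # For this sketch, the backing store returns the address as data.
            self.upper[addr] = self.lower.get(addr, addr)
        return self.upper[addr]
```

For example, with a two-entry upper level, the access stream 1, 2, 1, 3, 1 produces two hits (the repeated accesses to 1) and three misses, showing temporal locality being captured.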
Memory Hierarchy Levels
• Block (aka cache line)
  • Unit of copying; may be multiple words
• If accessed data is present in the upper level
  • Hit: access satisfied by the upper level
  • Hit ratio: hits / accesses
  • Hit time: time to access the block in the upper level + time to determine hit/miss
• If accessed data is absent
  • Miss: data not in the upper level
  • Miss ratio: misses / accesses = 1 − hit ratio
  • Miss penalty: time to access the block in the lower level + time to transmit that block to the level that experienced the miss + time to insert the block in that level + time to pass the block to the requestor
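The learning objectives call for calculating hit rate and miss rate; a minimal sketch of those definitions, together with the common average-memory-access-time (AMAT) combination of hit time and miss penalty, using assumed example numbers:

```python
# Sketch of the definitions above; the access counts and cycle
# figures below are illustrative assumptions, not slide data.

def hit_ratio(hits, accesses):
    # Hit ratio = hits / accesses.
    return hits / accesses

def amat(hit_time, miss_ratio, miss_penalty):
    # Average memory access time = hit time + miss ratio * miss penalty.
    return hit_time + miss_ratio * miss_penalty

accesses, hits = 1000, 950
hr = hit_ratio(hits, accesses)   # 0.95
mr = 1 - hr                      # miss ratio = 1 - hit ratio = 0.05

# With a 1-cycle hit time and a 100-cycle miss penalty (assumed values):
print(amat(hit_time=1, miss_ratio=mr, miss_penalty=100))  # 6.0 cycles on average
```

Even a 5% miss ratio dominates the average here, which is why reducing misses matters so much.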