
Chapter 5A: Exploiting the Memory Hierarchy, Part 1



Presentation Transcript


  1. Chapter 5A: Exploiting the Memory Hierarchy, Part 1 Read Section 5.1: Introduction. Adapted from slides by Prof. Mary Jane Irwin, Penn State University, and slides supplied by the textbook publisher

  2. Review: Major Components of a Computer [Figure: processor (control and datapath) connected to memory and to input/output devices; the memory side comprises cache, main memory, and secondary memory (disk).]

  3. The “Memory Wall” • Processor vs. DRAM speed disparity continues to grow (plotted as clocks per DRAM access vs. clocks per instruction) • Good memory hierarchy (cache) design is increasingly important to overall performance

  4. The Memory Hierarchy Goal • Fact: Large memories are slow and fast memories are small • How do we create a memory that gives the illusion of being large, cheap and fast (most of the time)? • With hierarchy • With parallelism

  5. A Typical Memory Hierarchy • The memory system of a modern computer consists of a series of “black boxes” ranging from the fastest to the slowest. • Besides variation in speed, these boxes also vary in size (smallest to biggest) and cost. [Figure: on-chip components (control, datapath, register file, instruction and data caches, ITLB and DTLB) backed by a second-level cache (SRAM), main memory (DRAM), and secondary memory (disk).] Speed (# cycles): ½’s → 1’s → 10’s → 100’s → 10,000’s. Size (bytes): 100’s → 10K’s → M’s → G’s → T’s. Cost: highest → lowest.

  6. Characteristics of the Memory Hierarchy • Inclusive: what is in L1$ is a subset of what is in L2$, which is a subset of what is in MM, which in turn is a subset of what is in SM • Unit of transfer grows with distance from the processor: 4-8 bytes (word) between processor and L1$, 8-32 bytes (block) between L1$ and L2$, 1 to 4 blocks between L2$ and main memory, and 1,024+ bytes (disk sector = page) between main memory and secondary memory • Both access time and (relative) size of the memory at each level increase with distance from the processor
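The inclusion property above can be sketched with Python sets, one per level; the block addresses below are made up for illustration.

```python
# Hypothetical sketch: each level of an inclusive hierarchy modeled as a
# set of block addresses (the addresses are made up for illustration).
l1_cache = {0x10, 0x20}
l2_cache = {0x10, 0x20, 0x30, 0x40}
main_memory = {0x10, 0x20, 0x30, 0x40, 0x50, 0x60}

# Inclusive: every block held close to the processor must also be
# present at every level farther away.
assert l1_cache <= l2_cache <= main_memory
```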

  7. Why Does the Concept of a Memory Hierarchy Work? • What makes this kind of hierarchical memory organization work is the principle of locality of memory references generated by programs. • The principle of locality states that programs access a relatively small portion of the address space at any instant of time. • A memory hierarchy takes advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology

  8. Memory Hierarchy Technologies: the Cache • Caches use SRAM for speed and technology compatibility • Fast (typical access times of 0.5 to 2.5 nsec) • Low density (6-transistor cells), higher power, expensive ($2000 to $5000 per GB in 2008) • Static: content will last “forever” (as long as power is left on)

  9. Memory Hierarchy Technologies: Main Memory • Main memory uses DRAM for size (density) • Slower (typical access times of 50 to 70 nsec) • High density (1-transistor cells), lower power, cheaper ($20 to $75 per GB in 2008) • Dynamic: needs to be “refreshed” regularly (~ every 8 ms); refresh consumes 1% to 2% of the active cycles of the DRAM • Addresses divided into 2 halves (row and column) • RAS, or Row Access Strobe, triggering the row decoder • CAS, or Column Access Strobe, triggering the column selector
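The row/column address split above can be illustrated with a short sketch; the 4096-row by 1024-column geometry is an assumption for the example, not taken from the slide.

```python
# Hypothetical sketch: splitting a DRAM address into the half sent with
# RAS (row) and the half sent with CAS (column). The 4096-row x
# 1024-column geometry is assumed for illustration.
ROW_BITS = 12
COL_BITS = 10

def split_address(addr):
    """Return the (row, column) halves of a DRAM address."""
    col = addr & ((1 << COL_BITS) - 1)                # low bits -> column selector
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)  # high bits -> row decoder
    return row, col

row, col = split_address(0x2A5F3)   # (169, 499)
```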

  10. The Memory Hierarchy: Why Does it Work? • Temporal Locality (locality in time) • If a memory location is referenced, then it will tend to be referenced again soon  Keep most recently accessed data items closer to the processor • Spatial Locality (locality in space) • If a memory location is referenced, the locations with nearby addresses will tend to be referenced soon  Move blocks consisting of contiguous words closer to the processor
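Both kinds of locality show up in ordinary code; this small Python sketch (variable names are illustrative, not from the slides) exhibits each pattern.

```python
# Illustrative sketch of the two access patterns named above.
data = list(range(1024))

# Spatial locality: a sequential scan touches neighboring addresses, so
# a fetched multi-word block supplies the next several iterations.
total = 0
for x in data:      # data[0], data[1], data[2], ... in address order
    total += x

# Temporal locality: 'total' is referenced on every iteration, so
# keeping it close to the processor (e.g. in a register) pays off.
```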

  11. The Memory Hierarchy: Terminology • Block (or line): the minimum unit of information that is present (or not present) in a cache • Hit Rate: the fraction of memory accesses found in a level of the memory hierarchy • Hit Time: time to access that level, which consists of: time to access the block + time to determine hit/miss

  12. The Memory Hierarchy: Terminology II • Miss Rate: the fraction of memory accesses not found in a level of the memory hierarchy = 1 - (Hit Rate) • Miss Penalty: time to replace a block in that level with the corresponding block from a lower level, which consists of: time to access the block in the lower level + time to transmit that block to the level that experienced the miss + time to insert the block in that level + time to pass the block to the requestor • Hit Time <<< Miss Penalty
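These terms combine into the standard average memory access time (AMAT) formula, AMAT = hit time + miss rate × miss penalty; the formula is textbook-standard rather than quoted from these slides, and the numbers below are made up.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Made-up example: 1-cycle hit time, 5% miss rate, 100-cycle miss penalty.
average = amat(1, 0.05, 100)   # 6.0 cycles on average
```

Because Hit Time <<< Miss Penalty, even a small miss rate dominates the average, which is why lowering the miss rate matters so much.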

  13. How is the Hierarchy Managed? • registers ↔ memory • by compiler (programmer?) • cache ↔ main memory • by the cache controller hardware • main memory ↔ disks • by the operating system (virtual memory) • virtual to physical address mapping assisted by the hardware (TLB) • by the programmer (files)
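The hit/miss decision made by the cache controller hardware can be sketched as follows; the tiny direct-mapped organization (4 lines of 4-byte blocks) is assumed for illustration, not taken from the slides.

```python
# Hypothetical sketch of a cache controller's hit/miss check for a tiny
# direct-mapped cache: 4 lines of 4-byte blocks (geometry assumed).
NUM_LINES = 4
BLOCK_BITS = 2          # 4-byte blocks

cache = [(False, 0)] * NUM_LINES   # each line: (valid bit, tag)

def access(addr):
    """Return True on a hit; on a miss, fill the line and return False."""
    block = addr >> BLOCK_BITS     # strip the byte-within-block offset
    index = block % NUM_LINES      # which line the block maps to
    tag = block // NUM_LINES       # identifies the block held in that line
    valid, stored_tag = cache[index]
    if valid and stored_tag == tag:
        return True
    cache[index] = (True, tag)     # miss: fetch from the next level down
    return False

assert access(0x40) is False   # cold miss
assert access(0x42) is True    # same 4-byte block: spatial-locality hit
```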
