Memory: Putting it all together CS/COE 1541 (term 2174) Jarrett Billingsley
Class Announcements • HW4 out tonight • I had an exam to write by 1:00 today • But that means I have no lecture to prepare for tomorrow, so I can make the homework • But I’m really sleepy because I only got 6 hours of sleep • Exam study guide + practice problems by Friday • Also, today is mostly a review • After the exam, you’ll have a week of break – no HW, no project
<wistful montage> <emotional piano music plays>
Memory/storage technologies I’m using Durability to mean “how well it holds data after repeated use.” I’m using Reliability to mean “how likely it is to break.”
The memory hierarchy • Since the technologies vary so much in characteristics, we use them in conjunction with one another to exploit their strengths and minimize their weaknesses. • SRAM in the CPU • DRAM as main memory • HDDs and Flash as long-term storage • We use different sizes at each level to reduce cost. • We use caching to exploit each technology’s performance. • Multi-level CPU caches… • Paging to use DRAM as a cache for long-term storage… • Write buffers in the hardware of long-term storage… CS/COE 1541 term 2174
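The payoff of layering fast-but-small over slow-but-large storage can be quantified with the standard average memory access time (AMAT) formula. Here's a minimal sketch; the latency and miss-rate numbers are assumed rough orders of magnitude for SRAM and DRAM, not figures from this course.

```python
# Average memory access time across one level of caching:
#   AMAT = hit time + miss rate * miss penalty
# Numbers below are illustrative assumptions, not measured values.
def amat(hit_time, miss_rate, miss_penalty):
    """All times in nanoseconds; miss_rate is a fraction in [0, 1]."""
    return hit_time + miss_rate * miss_penalty

# An SRAM L1 cache (~1 ns hit) in front of DRAM (~100 ns), with a 5%
# miss rate, makes the average access look almost as fast as SRAM:
l1 = amat(hit_time=1.0, miss_rate=0.05, miss_penalty=100.0)
print(f"AMAT with L1 cache: {l1:.1f} ns")  # 6.0 ns, vs. 100 ns without it
```

The same formula nests: the L1 miss penalty is itself the AMAT of the L2 cache, which is how multi-level caches are analyzed.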
The fundamental conceit of caching • Caching is a form of prediction. It takes advantage of two things: • Temporal locality: data accesses repeat over time. • Spatial locality: data accesses are clustered by location. • But caching introduces a lot of complications: • The cache is limited in size compared to the next storage level. • We have to choose what data to cache. • The cached data is temporary. • We have to make sure it’s written out when it changes. • The cached data is a copy. • We have to make sure it’s consistent with the next level.
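Both kinds of locality can be seen in a toy simulation. The sketch below counts hits in a small fully associative cache with made-up parameters (4 blocks of 16 bytes): a sequential scan hits because neighboring addresses share a block (spatial locality), and a repeated access pattern hits because the same blocks stay resident (temporal locality).

```python
# Toy hit counter for a fully associative LRU cache.
# Parameters (4 blocks, 16-byte blocks) are illustrative assumptions.
from collections import OrderedDict

def count_hits(addresses, num_blocks=4, block_size=16):
    cache = OrderedDict()                  # block number -> present, in LRU order
    hits = 0
    for addr in addresses:
        block = addr // block_size         # neighbors map to the same block
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) == num_blocks:
                cache.popitem(last=False)  # evict the least recently used block
            cache[block] = True
    return hits

seq = list(range(0, 64, 4))   # sequential word accesses: spatial locality
rep = [0, 64, 128] * 8        # three addresses revisited: temporal locality
print(count_hits(seq), count_hits(rep))   # 12 of 16 hit, 21 of 24 hit
```

With no locality at all (e.g. random addresses spread over a huge range), the same cache would miss almost every time, which is why caching is a bet on program behavior rather than a guarantee.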
Choosing cache parameters • As you saw in your project, varying the cache parameters can have complex and unexpected effects on the same workloads. • Bigger caches reduce capacity misses, but increase hit time. • Bigger blocks reduce compulsory misses, but increase bandwidth. • Associativity reduces conflict misses, but increases hit time. • LRU reduces miss rate, but increases hit time and complexity. • Write buffers amortize write bandwidth, but increase hit time and complexity. • Write-back reduces bandwidth, but increases hit time and complexity and is less dependable (more chance of data loss). • The particular needs of the problem at hand and the memory technologies being used will guide your choices of parameters.
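One way to see how these parameters interact is to look at how they carve an address into tag, index, and offset fields. A sketch, assuming a byte-addressed set-associative cache; the sizes are illustrative, not from the project spec.

```python
# Split an address into (tag, set index, block offset) for a
# set-associative cache. Sizes below are illustrative assumptions.
def split_address(addr, cache_bytes=1024, block_bytes=16, ways=2):
    num_sets = cache_bytes // (block_bytes * ways)
    offset = addr % block_bytes              # byte within the block
    index = (addr // block_bytes) % num_sets # which set to look in
    tag = addr // (block_bytes * num_sets)   # disambiguates blocks in a set
    return tag, index, offset

# 1 KB, 2-way, 16 B blocks -> 32 sets:
print(split_address(0x1234))                 # (9, 3, 4)
# Doubling block size or associativity halves the sets, shifting an
# index bit into the tag -- which is why changing one parameter can
# redistribute conflict misses in unexpected ways.
print(split_address(0x1234, ways=4))         # (18, 3, 4)
```

Running the same trace through different splits like this is essentially what a cache simulator does under the hood.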
Virtual memory • To add to this complex layer cake… now we want to be able to support arbitrary mappings from VAs to PAs. • This requires support from both hardware and the OS. • Now we have to manage a cache for page table entries: the TLB. • And since we’re doing this virtual memory stuff to support multiprocessing… • We have to ensure the TLB/cache entries aren’t used by the wrong processes. • We can tag TLB/cache entries with process identifiers to avoid having to flush (invalidate) them.
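The VA-to-PA path with a TLB in front of the page table can be sketched in a few lines. This is a minimal model, assuming 4 KB pages and a dictionary standing in for both structures; the mappings are made up.

```python
# Translate a virtual address to a physical address, checking a tiny
# TLB before walking the page table. Mappings are made-up examples.
PAGE_SIZE = 4096

tlb = {}                              # virtual page number -> physical frame
page_table = {0: 7, 1: 3, 2: 9}       # the OS-managed "real" mapping

def translate(va):
    vpn, offset = divmod(va, PAGE_SIZE)
    if vpn in tlb:                    # TLB hit: no memory access needed
        frame = tlb[vpn]
    elif vpn in page_table:           # TLB miss: walk the page table
        frame = page_table[vpn]
        tlb[vpn] = frame              # fill the TLB for next time
    else:                             # not mapped: the OS must intervene
        raise KeyError(f"page fault on VPN {vpn}")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))         # VPN 1 -> frame 3 -> 0x3abc
```

A real TLB would also store permission bits and, as the slide notes, a process identifier (ASID) so entries from different address spaces can coexist without flushing.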
Paging • A useful extension of virtual memory. • Now the main memory is a cache, too. • As if we didn’t have enough of those. • Paging is largely constrained by the performance of the nonvolatile storage being used. • Most paging techniques in use today were designed for use with magnetic spinning disks, but Flash is becoming widespread.
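Treating main memory as a cache for disk means it needs a replacement policy too. Here's a sketch of demand paging with exact LRU replacement on the classic textbook reference string; real OSes approximate LRU with cheaper schemes (e.g. the clock algorithm), and the frame count here is a made-up example.

```python
# Count page faults for demand paging with LRU replacement.
# A sketch: real kernels approximate LRU rather than track it exactly.
from collections import OrderedDict

def count_faults(reference_string, num_frames):
    frames = OrderedDict()                 # resident pages, in LRU order
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)       # touched: now most recently used
        else:
            faults += 1                    # page fault: fetch from disk
            if len(frames) == num_frames:
                frames.popitem(last=False) # evict the LRU victim page
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, num_frames=3))    # 10 faults in 12 references
```

Because each fault costs a disk access (milliseconds on a spinning disk versus nanoseconds for DRAM), even small changes in the fault count dominate performance, which is the sense in which paging is constrained by the nonvolatile storage underneath it.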
My god, it’s full of caches
Caches Rule Everything Around Me [Figure: the full translation path. A virtual address (from a lw/sw instruction or the program counter) is split into a virtual page number and page offset within the virtual address space. The VA’s page number is looked up in the TLB; on a TLB miss, the page table walker fetches the PTE from the page table into the TLB, and if the page is absent, a page fault invokes the OS page fault handler (“Oh no!”). The resulting physical address selects a block of a page in physical memory via the cache, with pages backed by the HDD/SSD. Illustration courtesy Dr. Melhem.]