CS147 Lecture 14: Virtual Memory. Prof. Sin-Min Lee, Department of Computer Science
Given the following implementation using a 3×8 decoder with negated outputs, what is the function K(A,B,C)? a. K(A,B,C) = A.B' + A'.B.C b. K(A,B,C) = A.B + A'.B' + A'.C' c. K(A,B,C) = (A + B' + C').(A' + B + C).(A' + B + C') d. K(A,B,C) = 1 e. K(A,B,C) = 0
[Figure: sequential circuit built from a 2x4 decoder and J-K flip-flops, with input X, clock CLK, and outputs Y and Z; the numeric labels are the state-transition input combinations]
Find the Boolean expressions for JA, KA, DB, and Z • JA = X'QBQA + QBQA' • KA = X' + QB • DB = QB'QA + QB = QA + QB • Z = X'QBQA
Find the Boolean expressions for the next states QA+ and QB+. • QB+ = DB = QA + QB • For a J-K flip-flop, we have: QA+ = JAQA' + KA'QA • Hence, QA+ = (X'QBQA + QBQA')QA' + (X' + QB)'QA = 0 + QBQA' + (X.QB')QA = QBQA' + XQB'QA
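The algebra above can be checked exhaustively. A minimal Python sketch that enumerates the truth table and confirms the simplified next-state expression matches the J-K characteristic equation:

```python
from itertools import product

# Check that the derived expression QA+ = QB.QA' + X.QB'.QA matches
# the J-K characteristic equation QA+ = JA.QA' + KA'.QA, using
# JA = X'.QB.QA + QB.QA' and KA = X' + QB from the slide above.
for X, QA, QB in product([0, 1], repeat=3):
    JA = (not X and QB and QA) or (QB and not QA)
    KA = (not X) or QB
    characteristic = (JA and not QA) or (not KA and QA)
    simplified = (QB and not QA) or (X and not QB and QA)
    assert bool(characteristic) == bool(simplified)

print("QA+ = QB.QA' + X.QB'.QA verified for all 8 input combinations")
```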
Where can a block be placed in the cache? (2) • Direct-mapped cache • Each block has only one place where it can appear in the cache • (Block address) MOD (Number of blocks in cache) • Fully associative cache • A block can be placed anywhere in the cache • Set-associative cache • A block can be placed in a restricted set of places in the cache • A set is a group of blocks in the cache • (Block address) MOD (Number of sets in cache) • If there are n blocks in a set, the placement is said to be n-way set associative
Which Block Should Be Replaced on a Cache Miss? • When a miss occurs, the cache controller must select a block to be replaced with the desired data • A benefit of direct mapping is that this hardware decision is much simplified • Two primary strategies for fully and set-associative caches • Random – candidate blocks are randomly selected • Some systems generate pseudo-random block numbers to get reproducible behavior, which is useful for debugging • LRU (Least Recently Used) – to reduce the chance of throwing out information that will be needed again soon, the block replaced is the least recently used one • Accesses to blocks are recorded so that LRU can be implemented
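The LRU bookkeeping described above can be sketched with an ordered dictionary that keeps blocks in recency order, so the victim on a miss is simply the oldest entry. The capacity here is an illustrative assumption, not a value from the lecture:

```python
from collections import OrderedDict

# Minimal LRU replacement sketch for an associative cache.
# Capacity of 2 blocks is an illustrative assumption.
class LRUCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block address -> data, in recency order

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # record the access: now most recent
            return True                      # hit
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[block] = None
        return False                         # miss

cache = LRUCache(capacity=2)
hits = [cache.access(b) for b in ["A", "B", "A", "C", "B"]]
# A miss, B miss, A hit, C miss (evicts B, the LRU block), B miss
```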
The connection between the CPU and cache is very fast; the connection between the CPU and memory is slower
There are three methods of block placement: Direct mapped: if each block has only one place it can appear in the cache, the cache is said to be direct mapped. The mapping is usually (Block address) MOD (Number of blocks in cache). Fully associative: if a block can be placed anywhere in the cache, the cache is said to be fully associative. Set associative: if a block can be placed in a restricted set of places in the cache, the cache is said to be set associative. A set is a group of blocks in the cache. A block is first mapped onto a set, and then the block can be placed anywhere within that set. The set is usually chosen by bit selection; that is, (Block address) MOD (Number of sets in cache).
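The two MOD mappings above are one-liners in code. A small sketch, assuming illustrative sizes of 8 cache blocks (direct mapped) and 4 sets (set associative):

```python
# Bit-selection mapping for block placement. The cache sizes
# (8 blocks, 4 sets) are illustrative assumptions.
def direct_mapped_index(block_addr, num_blocks=8):
    # (Block address) MOD (Number of blocks in cache)
    return block_addr % num_blocks

def set_index(block_addr, num_sets=4):
    # (Block address) MOD (Number of sets in cache);
    # within the chosen set the block may go anywhere
    return block_addr % num_sets

assert direct_mapped_index(26) == 2  # memory block 26 -> cache block 2
assert set_index(26) == 2            # memory block 26 -> set 2
```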
Direct-mapped cache: a block from main memory can go in exactly one place in the cache. This is called direct mapped because there is a direct mapping from any block address in memory to a single location in the cache. [Figure: cache and main memory]
Fully associative cache: a block from main memory can be placed in any location in the cache. This is called fully associative because a block in main memory may be associated with any entry in the cache. [Figure: cache and main memory]
Set-associative cache: the middle range of designs between direct-mapped and fully associative caches is called set associative. In an n-way set-associative cache, a block from main memory can go into any of n (n at least 2) locations in the cache. [Figure: 2-way set-associative cache and main memory]
Replacing Data • Initially all valid bits are set to 0 • As instructions and data are fetched from memory, the cache is filling and some data need to be replaced. • Which ones? • Direct mapping – obvious
Replacement Policies for Associative Cache • FIFO – fills from top to bottom and wraps back to the top (replaced data may need to be written back to physical memory first) • LRU – replaces the least recently used data; requires a counter per block • Random
Replacement in Set-Associative Cache • Which of the n ways within the location should be replaced? • FIFO • Random • LRU Accessed locations are D, E, A
Cache Performance • Cache hits and cache misses • Hit ratio h is the percentage of memory accesses that are served from the cache • Average memory access time TM = h TC + (1 - h) TP • In the examples below, TC = 10 ns and TP = 60 ns
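The average-access-time formula can be applied directly; a short sketch using the lecture's values TC = 10 ns and TP = 60 ns:

```python
# Average memory access time: TM = h*TC + (1 - h)*TP,
# with TC = 10 ns (cache) and TP = 60 ns (primary memory).
def avg_access_time(h, tc=10.0, tp=60.0):
    return h * tc + (1 - h) * tp

# Hit ratio 7/18, as in the associative-cache example that follows:
tm = avg_access_time(7 / 18)
print(round(tm, 2))  # 40.56 ns
```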
Associative Cache, FIFO: h = 0.389, TM = 40.56 ns • Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 • TC = 10 ns, TP = 60 ns
Direct-Mapped Cache: h = 0.167, TM = 51.67 ns • Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 • TC = 10 ns, TP = 60 ns
2-Way Set-Associative Cache, LRU: h = 0.389, TM = 40.56 ns • Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 • TC = 10 ns, TP = 60 ns
Associative Cache (FIFO Replacement Policy) A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 Hit ratio = 7/18
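This trace can be replayed in a few lines. A sketch of the fully associative FIFO example, assuming a cache of 8 lines (a size chosen here because it reproduces the quoted hit ratio; the lecture does not state it explicitly):

```python
from collections import deque

# Replay the slide's access order through a fully associative
# FIFO cache of 8 lines (the size is an assumption).
accesses = ["A", "B", "C", "A", "D", "B", "E", "F",
            "A", "C", "D", "B", "G", "C", "H", "I", "A", "B"]
cache, order, hits = set(), deque(), 0
for block in accesses:
    if block in cache:
        hits += 1
    else:
        if len(cache) == 8:
            cache.discard(order.popleft())  # evict the oldest block (FIFO)
        cache.add(block)
        order.append(block)

print(hits, "/", len(accesses))  # 7 / 18
```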
Two-Way Set-Associative Cache (LRU Replacement Policy) A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 Hit ratio = 7/18
Associative Cache with 2-byte line size (FIFO Replacement Policy) A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 Pairs sharing a line: A and J; B and D; C and G; E and F; I and H Hit ratio = 11/18
Direct-Mapped Cache with 2-byte line size A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 Pairs sharing a line: A and J; B and D; C and G; E and F; I and H Hit ratio = 7/18
Two-Way Set-Associative Cache with 2-byte line size A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0 Pairs sharing a line: A and J; B and D; C and G; E and F; I and H Hit ratio = 12/18
Page Replacement - FIFO • FIFO is simple to implement • When a page is brought in, place its page id at the end of the list • Evict the page at the head of the list • Might be good? The page to be evicted has been in memory the longest time • But? • Maybe it is still being used • We just don’t know • FIFO suffers from Belady’s Anomaly – the fault rate may increase when there is more physical memory!
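Belady's Anomaly can be demonstrated with the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5, which incurs more faults with 4 frames than with 3 under FIFO:

```python
# FIFO page replacement, following the slide: place the page id at
# the end of the list on page-in, evict the page at the head.
def fifo_faults(refs, frames):
    memory, queue, faults = set(), [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.pop(0))  # evict page at head of list
            memory.add(page)
            queue.append(page)                # page id goes on end of list
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults -- more physical memory, more faults
```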
Parkinson's law: "Programs expand to fill the memory available to hold them" • Idea: manage the available storage efficiently among the available programs.
Before VM… • Programmers tried to shrink programs to fit tiny memories • Result: small programs with inefficient algorithms
Solution to Memory Constraints • Use a secondary memory such as disk • Divide disk into pieces that fit in memory (RAM) • This is called virtual memory
Implementations of VM • Paging • Disk broken up into regular sized pages • Segmentation • Disk broken up into variable sized segments
Memory Issues • Idea: separate the concepts of • address space (disk) • memory locations (RAM) • Example: • Address field = 2^16 = 65,536 memory cells • Memory size = 4096 memory cells How can we fit the address space into main memory?
Paging • Break memory into pages • NOTE: normally main memory holds thousands of pages • 1 page = 4096 bytes New issue: how to manage addressing?
Address Mapping • Mapping secondary-memory addresses to main-memory addresses • 1 page = 4096 bytes • virtual address → physical address
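With 4096-byte pages, a virtual address splits into a page number (high bits) and an offset within the page (low 12 bits, since 2^12 = 4096). A minimal translation sketch; the page-table contents here are made-up illustrative values:

```python
# Virtual-to-physical address translation with 4096-byte pages.
PAGE_SIZE = 4096
page_table = {0: 2, 1: 0, 2: 3}  # virtual page -> physical frame (illustrative)

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)  # split into page number and offset
    frame = page_table[page]                 # a real MMU faults if the page is absent
    return frame * PAGE_SIZE + offset        # offset is unchanged by translation

print(translate(5000))  # virtual page 1, offset 904 -> frame 0 -> 904
print(translate(100))   # virtual page 0, offset 100 -> frame 2 -> 8292
```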