FSA, Hierarchical Memory Systems. Prof. Sin-Min Lee, Department of Computer Science. CS147 Lecture 12
Implementing FSM with No Inputs Using D, T, and JK Flip Flops: Convert the diagram into a chart
Implementing FSM with No Inputs Using D, T, and JK Flip Flops (cont.): For D and T Flip Flops
Implementing FSM with No Inputs Using D, T, and JK Flip Flops (cont.): For the JK Flip Flop
Implementing FSM with No Inputs Using D, T, and JK Flip Flops (cont.): Final Implementation
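The diagrams these slides refer to are not reproduced in the text, so here is a minimal Python sketch of the general procedure they walk through: take a next-state chart for a no-input FSM and read off the required D, T, and JK excitation inputs for each state bit. The four-state cycle used below is an assumed example, not the machine from the lecture.

```python
# Hypothetical no-input FSM: two state bits cycling 00 -> 01 -> 10 -> 11 -> 00
# on every clock pulse (this chart is an assumed example, not the lecture's).
next_state = {
    (0, 0): (0, 1),
    (0, 1): (1, 0),
    (1, 0): (1, 1),
    (1, 1): (0, 0),
}

def d_input(q, q_next):
    # D flip-flop: D is simply the desired next state.
    return q_next

def t_input(q, q_next):
    # T flip-flop: toggle exactly when the bit has to change.
    return q ^ q_next

def jk_inputs(q, q_next):
    # JK excitation table ('x' marks a don't-care).
    table = {(0, 0): (0, 'x'), (0, 1): (1, 'x'),
             (1, 0): ('x', 1), (1, 1): ('x', 0)}
    return table[(q, q_next)]

for (q1, q0), (n1, n0) in sorted(next_state.items()):
    print(f"Q1Q0={q1}{q0} -> {n1}{n0}  "
          f"D1={d_input(q1, n1)} D0={d_input(q0, n0)}  "
          f"T1={t_input(q1, n1)} T0={t_input(q0, n0)}  "
          f"J1K1={jk_inputs(q1, n1)} J0K0={jk_inputs(q0, n0)}")
```

The chart rows become the truth table from which each flip-flop's input equations would then be minimized, as the following slides do for the BG/T example.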
How can we create a flip-flop using another flip-flop? • Say we have a flip-flop BG whose behavior is defined by the table of properties on the slide • Let's try to implement this flip-flop using a T flip-flop
Step 1: Create the Table • The first step is to draw a table with the flip-flop being created first (in this case BG), then Q and Q+, and finally the flip-flop used to build it (in this case T) • Look at Q and Q+ to determine the value of T
Step 2: Karnaugh Map • Draw a Karnaugh map of the cells where T is 1, with rows BG = 00, 01, 10, 11 and columns Q = 0, 1. The map reduces to T = B'G' + BGQ + G'Q'
Step 3: Draw the Diagram • Implement T = B'G' + BGQ + G'Q' with gates driving the T input; the signals available to the circuit are B, G, Q, and Q'
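Since the BG flip-flop's defining table lives in a slide figure that is not reproduced here, the sketch below uses a hypothetical (B, G, Q) → Q+ list, chosen so that the derived T column reduces to the Step 2 expression T = B'G' + BGQ + G'Q'. It automates Step 1 (every row where Q must change needs T = 1); Steps 2 and 3 would then be the Karnaugh-map minimization and the gate diagram.

```python
# Hypothetical (B, G, Q) -> Q+ behaviour for the BG flip-flop; the real table is
# in the slide figure.  These values were picked so that the derived T column
# minimizes to T = B'G' + BGQ + G'Q'.
bg_rows = [
    (0, 0, 0, 1), (0, 0, 1, 0),
    (0, 1, 0, 0), (0, 1, 1, 1),
    (1, 0, 0, 1), (1, 0, 1, 1),
    (1, 1, 0, 0), (1, 1, 1, 0),
]

# Step 1: the T input must be 1 exactly on the rows where Q has to change.
for b, g, q, q_next in bg_rows:
    t = q ^ q_next
    print(f"B={b} G={g} Q={q} Q+={q_next}  ->  T={t}")
# Step 2 groups the T = 1 cells on a Karnaugh map; Step 3 turns the minimized
# expression into gates driving the T input.
```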
The Root of the Problem: Economics • Fast memory is possible, but to run at full speed, it needs to be located on the same chip as the CPU • Very expensive • Limits the size of the memory • Do we choose: • A small amount of fast memory? • A large amount of slow memory?
Memory Hierarchy Design (2) • It is a tradeoff between size, speed and cost, and it exploits the principle of locality. • Register – fastest memory element, but small storage; very expensive • Cache – fast and small compared to main memory; acts as a buffer between the CPU and main memory: it holds the most recently used memory locations (both addresses and contents are recorded here) • Main memory – the RAM of the system • Disk storage – HDD
Memory Hierarchy Design (3) • Comparison between different types of memory: • Register – size: 32–256 B; speed: 2 ns • Cache – size: 32 KB–4 MB; speed: 4 ns; cost: $100/MB • Main memory – size: 128 MB; speed: 60 ns; cost: $1.50/MB • HDD – size: 20 GB; speed: 8 ms; cost: $0.05/MB • Moving down the hierarchy: larger, slower, cheaper
Memory Hierarchy • Can only do useful work at the top • 90-10 rule: 90% of execution time is spent in 10% of the program • Take advantage of locality: • temporal locality – keep recently accessed memory locations in cache • spatial locality – keep memory locations near the accessed locations in cache
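To make the locality argument concrete, here is a small simulation sketch of a direct-mapped cache comparing a byte-by-byte scan (strong spatial locality) with a 1 KB-stride scan. The block size, line count, and access patterns are made-up values for illustration, not figures from the lecture.

```python
# Toy direct-mapped cache used to illustrate spatial locality; block size,
# line count and access patterns are made-up values for the demonstration.
BLOCK_SIZE = 16      # bytes per cache block
NUM_LINES = 64       # number of cache lines

def hit_rate(addresses):
    lines = {}                       # line index -> tag currently stored there
    hits = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        line = block % NUM_LINES
        tag = block // NUM_LINES
        if lines.get(line) == tag:
            hits += 1                # reuse of a block already in the cache
        else:
            lines[line] = tag        # miss: fetch the block into the line
    return hits / len(addresses)

sequential = list(range(4096))                       # walk memory byte by byte
strided = [(i * 1024) % 65536 for i in range(4096)]  # jump 1 KB on every access

print("sequential scan hit rate:", hit_rate(sequential))  # high: spatial locality
print("1 KB-stride scan hit rate:", hit_rate(strided))    # low: stride defeats it
```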
The connection between the CPU and cache is very fast; the connection between the CPU and memory is slower
The Cache Hit Ratio • How often is a word found in the cache? • Suppose a word is accessed k times in a short interval: • 1 reference to main memory • (k-1) references to the cache • The cache hit ratio h is then h = (k - 1) / k
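A quick numeric check of the formula as a small Python snippet; the values of k are arbitrary examples.

```python
# h = (k - 1) / k: of k accesses to a word, only the first goes to main memory.
def hit_ratio(k):
    return (k - 1) / k

for k in (2, 5, 10, 100):
    print(f"k = {k:3d}  ->  h = {hit_ratio(k):.2f}")
# 0.50, 0.80, 0.90, 0.99: the more a word is reused, the closer h gets to 1.
```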
Reasons why we use cache • Cache memory is made of STATIC RAM – a transistor-based RAM that has very low access times (fast) • STATIC RAM is, however, bulky and very expensive • Main memory is made of DYNAMIC RAM – a capacitor-based RAM that has high access times because it has to be constantly refreshed (slow) • DYNAMIC RAM is much smaller and cheaper
Performance (Speed) • Access time – time between presenting the address and getting the valid data (memory or other storage) • Memory cycle time – some time may be required for the memory to "recover" before the next access; cycle time = access time + recovery time • Transfer rate – rate at which data can be moved; for random access memory, transfer rate = 1 / cycle time
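A worked example of these definitions, using made-up timing figures rather than numbers from the slides:

```python
# Worked example of the definitions above, using made-up timing figures.
access_time = 60e-9      # 60 ns from presenting the address to valid data
recovery_time = 40e-9    # 40 ns the memory needs before the next access

cycle_time = access_time + recovery_time   # cycle time = access + recovery
transfer_rate = 1 / cycle_time             # random-access transfers per second

print(f"cycle time   : {cycle_time * 1e9:.0f} ns")
print(f"transfer rate: {transfer_rate / 1e6:.0f} million accesses per second")
```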
Memory Hierarchy • size? speed? cost? • Registers – in the CPU; smallest, fastest, most expensive, most frequently accessed • Internal memory – may include one or more levels of cache; medium size, quick, price varies • External memory – backing store; largest, slowest, cheapest, least frequently accessed
Replacing Data • Initially all valid bits are set to 0 • As instructions and data are fetched from memory, the cache fills up and some data need to be replaced. • Which ones? • Direct mapping – the choice is obvious: each memory block maps to exactly one cache line, so the incoming block replaces whatever that line holds
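The sketch below shows why the direct-mapped case needs no policy: the line index is fully determined by the address. The block size and line count are assumed values.

```python
# Direct mapping needs no replacement policy: each block has exactly one line,
# so a new block simply overwrites whatever that line holds.  The cache
# geometry below is an assumed example.
BLOCK_SIZE = 64    # bytes per block
NUM_LINES = 128    # lines in the cache

def cache_line(address):
    block_number = address // BLOCK_SIZE
    return block_number % NUM_LINES        # the only line this block may use

for addr in (0x0000, 0x2000, 0x4000):
    print(f"address {addr:#06x} -> line {cache_line(addr)}")
# 0x0000, 0x2000 and 0x4000 all map to line 0, so each new one replaces the last.
```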
Replacement Policies for Associative Cache • FIFO – fills from top to bottom and, once full, wraps back to the top (modified data may have to be written back to main memory before being replaced) • LRU – replaces the least recently used data; requires a counter • Random
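As a concrete illustration of LRU (the policy that "requires a counter"), here is a minimal sketch that uses Python's OrderedDict as the recency bookkeeping; the capacity and tag sequence are arbitrary.

```python
from collections import OrderedDict

# Minimal LRU cache for a fully associative organization; the OrderedDict acts
# as the recency "counter" mentioned above (capacity and tags are arbitrary).
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # tag -> data, least recent first

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)     # hit: mark as most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used tag
        self.blocks[tag] = None              # placeholder for the fetched block
        return "miss"

cache = LRUCache(capacity=3)
for tag in ["A", "B", "C", "A", "D", "B"]:
    print(tag, cache.access(tag))
# After A, B, C, A the recency order is B < C < A, so D evicts B and the final
# access to B misses again.
```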
Replacement in Set-Associative Cache • Which of the n ways within the set should be replaced? • FIFO • Random • LRU • Example: the accessed locations are D, E, A
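The slide's example only names the accessed locations D, E, A, so the sketch below assumes a 4-way set whose fourth, never-accessed way holds a hypothetical block "F", and shows which way each of the three policies would evict; the fill order and access times are assumptions for the demo.

```python
import random
from collections import deque

# One set of a 4-way set-associative cache: which of the n ways to evict?
# The slide only names the accessed locations D, E, A; the fourth way "F",
# the fill order and the access times below are assumptions for the demo.
ways = ["D", "E", "A", "F"]                      # current contents of the set
fill_order = deque(ways)                         # order the ways were filled in
last_access = {"D": 1, "E": 2, "A": 3, "F": 0}   # higher = more recently used

victim_fifo = fill_order[0]                        # oldest fill: D
victim_lru = min(last_access, key=last_access.get) # never re-accessed: F
victim_random = random.choice(ways)                # any way, picked at random

print("FIFO evicts:  ", victim_fifo)
print("LRU evicts:   ", victim_lru)
print("Random evicts:", victim_random)
```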