
CS 3501 - Chapter 4 (Sec 5.1 &5.2)



Presentation Transcript


  1. CS 3501 - Chapter 4 (Sec 5.1 &5.2) Dr. Clincy Professor of CS Dr. Clincy Lecture Slide 1

  2. Chapter 4 Objectives • Learn the components common to every modern computer system. • Be able to explain how each component contributes to program execution. • Understand a simple architecture invented to illuminate these basic concepts, and how it relates to some real architectures. • Know how the program assembly process works. Lecture

  3. Introduction • In Chapter 2, we discussed how binary-coded data is stored and manipulated by various computer system components. • In Chapter 3, we described how fundamental components are designed and built from digital circuits. • Also from Chapter 3, we know that memory is used to store both data and program instructions in binary. • Having this background, we can now understand how computer components are fundamentally built. • The next question is, how do the various components fit together to create useful computer systems? Lecture

  4. Basic Structure of Computers • Coded information is stored in memory for later use. • The program is stored in memory and determines the processing steps. • The input unit accepts coded information from human operators, from electromechanical devices (e.g., keyboards), and from other computers via networks. • The ALU uses the coded information to perform the desired operations. • All actions are coordinated by the control unit. • The output unit sends the results back out externally. • The input and output units are collectively called the I/O unit; the ALU and control unit are collectively called the processor. Lecture

  5. CPU Basics • The next question is, how is the program EXECUTED and how is the data PROCESSED properly? • The computer’s CPU or Processor • Fetches the program instructions, • Decodes each instruction that is fetched, and • Performs the indicated sequence of operations on the data (execute) • The two principal parts of the CPU are the Datapath and the Control unit. • Datapath - consists of an arithmetic-logic unit (ALU) and a network of storage units (registers) that are interconnected by a data bus that is also connected to main memory. • Control Unit - responsible for sequencing the operations and making sure the correct data is in the correct place at the correct time. Lecture

  6. CPU Basics • Registers hold data that can be readily accessed by the CPU – values such as addresses, the program counter, data, and control information • Registers can be implemented using D flip-flops. • A 32-bit register requires 32 D flip-flops. • There are many different registers – • registers to store values, • registers to shift values, • registers to compare values, • registers that count, • registers that temporarily store values, • index registers to control program looping, • stack pointer registers to manage stacks of info for processes, • status or flag registers to hold status or mode of operation, • and general purpose registers Lecture

  7. CPU Basics • The arithmetic-logic unit (ALU) carries out • logical operations (e.g., comparisons) and • arithmetic operations (e.g., adding or multiplying) • The ALU knows which operations to perform because it is controlled by signals from the control unit. • The control unit determines which actions to carry out according to the values in a program counter register and a status register. • The control unit tells the ALU which registers to use and turns on the correct circuitry in the ALU for execution of the operation. • The control unit uses a program counter register to find the next instruction for execution and uses a status register to keep track of overflows, carries, and borrows. Lecture

  8. The Bus • The CPU shares data with other system components by way of a data bus. • A bus is a set of wires that simultaneously convey a single bit along each line. • One or more devices can share the bus. • The “sharing” often results in communication bottlenecks. • The speed of the bus is affected by its length and the number of devices sharing it. Lecture

  9. The Bus • Two types of buses are commonly found in computer systems: point-to-point and multipoint buses. • A point-to-point bus connects two specific devices. • A multipoint bus connects a number of devices; because of the sharing, a bus protocol is used. Lecture

  10. The Bus • Buses consist of data lines, control lines, and address lines. Address lines determine the location of the source or destination of the data. Data lines convey bits from one device to another – they move the actual information from one location to another. Control lines determine the direction of data flow, and when each device can access the bus. • When sharing the bus, concurrent bus requests must be arbitrated. • Four categories of bus arbitration are: • Daisy chain: Permissions are passed from the highest-priority device to the lowest. • Centralized parallel: Each device is directly connected to an arbitration circuit. • Distributed using self-selection: Devices decide among themselves which gets the bus. • Distributed using collision detection: Any device can try to use the bus. If its data collides with the data of another device, it tries again. Lecture
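The daisy-chain scheme above can be sketched in a few lines of Python. This is a hypothetical model (the function name and list encoding are illustrative, not from the slides): devices are ordered highest priority first, and the grant signal stops at the first device that is requesting the bus.

```python
def daisy_chain_grant(requests):
    """requests: booleans ordered highest priority first.
    Returns the index of the device granted the bus, or None."""
    for index, wants_bus in enumerate(requests):
        if wants_bus:        # first requester in the chain wins
            return index     # the grant stops propagating here
    return None              # no device requested the bus

# Device 0 (highest priority) idle; devices 1 and 3 requesting:
print(daisy_chain_grant([False, True, False, True]))  # -> 1
```

Note the built-in unfairness of daisy chaining: a high-priority device that requests constantly can starve everything below it, which is one reason the other three arbitration schemes exist.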

  11. Types of Buses • Processor-memory bus – a short, high-speed bus used to transfer data to and from memory • I/O buses – longer buses that interface with many I/O devices • Backplane bus (or system bus) – connects the processor, I/O devices, and memory • Expansion bus – connects external devices • Local bus – a data bus that connects a peripheral device directly to the CPU • Buses from a timing perspective: • Synchronous buses – work off clock ticks; all devices using this bus type are synchronized by the clock rate • Asynchronous buses – control lines coordinate the operations, and a “handshaking protocol” is used for the timing. These buses can scale better and work with more devices Lecture

  12. Clocks • Every computer contains at least one clock that: • Regulates how quickly instructions can be executed and • Synchronizes the activities of its components. • A fixed number of clock cycles is required to carry out each data movement or computational operation. • As a result, instruction performance is measured in clock cycles. • The clock frequency, measured in megahertz or gigahertz, determines the speed with which all operations are carried out. • Clock cycle time is the reciprocal (or inverse) of the clock frequency. • An 800 MHz clock has a cycle time of 1.25 ns. • Clock speed should not be confused with CPU performance. • The CPU time required to run a program is given by the general performance equation: CPU time = (instructions / program) × (average cycles / instruction) × (seconds / cycle). • We see that we can improve CPU throughput when we reduce the number of instructions in a program, reduce the number of cycles per instruction, or reduce the number of nanoseconds per clock cycle. Lecture
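The performance equation and the 800 MHz example can be checked directly. A minimal sketch (the function name is illustrative):

```python
def cpu_time(instruction_count, cycles_per_instruction, clock_hz):
    """CPU time = instructions x (cycles/instruction) x (seconds/cycle),
    where seconds/cycle = 1 / clock_hz."""
    return instruction_count * cycles_per_instruction / clock_hz

# Cycle time of an 800 MHz clock, in nanoseconds (as in the slide):
cycle_time_ns = (1 / 800e6) * 1e9
print(cycle_time_ns)                      # -> 1.25

# Hypothetical program: 1 million instructions averaging 2 cycles each:
print(cpu_time(1_000_000, 2, 800e6))      # -> 0.0025 (seconds)
```

Halving any one of the three factors halves the CPU time, which is exactly the throughput observation the slide closes with.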

  13. The Input/Output Subsystem • A computer communicates with the outside world through its input/output (I/O) subsystem. • Input device examples: keyboards, mice, card readers, scanners, voice recognition systems, touch screens • Output device examples: monitors, printers, plotters, speakers, headphones • I/O devices connect to the CPU through various interfaces. • I/O can be memory-mapped, where the I/O device behaves like main memory from the CPU’s point of view. • Or I/O can be instruction-based, where the CPU has a specialized I/O instruction set. Lecture

  14. Memory Organization • We discussed a simple example of how memory is configured in Ch 3 – we now cover in more detail: • How memory is laid out • How memory is addressed • Envision memory as a matrix of bits – each row implemented as a register or “storage cell” – with each row being the size of an addressable word. • Each register or storage cell (typically called a memory location) has a unique address. • Memory addresses typically start at zero and progress upward Lecture

  15. Memory Organization • Computer memory consists of a linear array of addressable storage cells that are similar to registers. • Memory can be byte-addressable or word-addressable, where a word typically consists of two or more bytes. • Byte-addressable case: although the word could be multiple bytes, each individual byte has an address – with the lowest address being the “address” of the word • Memory is constructed of RAM chips, often referred to in terms of Length × Width. • If the memory word size of the machine is 16 bits, then a 4M × 16 RAM chip gives us 4M memory locations of 16 bits each. Lecture

  16. Memory Organization • For alignment reasons, in reading 16-bit words on a byte-addressable machine, the address should be a multiple of 2 (i.e., 2 bytes) • For alignment reasons, in reading 32-bit words on a byte-addressable machine, the address should be a multiple of 4 (i.e., 4 bytes) • For alignment reasons, in reading 64-bit words on a byte-addressable machine, the address should be a multiple of 8 (i.e., 8 bytes). Lecture
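The alignment rule above reduces to a single modulus test: a word is aligned when its byte address is a multiple of the word size in bytes. A minimal sketch (the helper name is illustrative):

```python
def is_aligned(address, word_bytes):
    """True when a word_bytes-byte word at this byte address is aligned,
    i.e. the address is a multiple of the word size."""
    return address % word_bytes == 0

assert is_aligned(4, 2)       # 16-bit word at address 4: multiple of 2
assert is_aligned(8, 8)       # 64-bit word at address 8: multiple of 8
assert not is_aligned(6, 4)   # 32-bit word at address 6: misaligned
```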

  17. Memory Organization • How does the computer access the memory location corresponding to a particular address? • Memory is referred to using the notation Length × Width (L × W). • We observe that 4M can be expressed as 2^2 × 2^20 = 2^22 words – meaning the memory is 4M long with each item 8 bits wide. • Provided this is byte-addressable, the memory locations will be numbered 0 through 2^22 − 1. • Thus, the memory bus of this system requires at least 22 address lines. Lecture
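The address-line count follows from a base-2 logarithm: addressing N locations takes ceil(log2(N)) lines. A quick check of the 4M example (the function name is illustrative):

```python
import math

def address_lines(num_locations):
    """Minimum number of address lines needed to give each of
    num_locations cells a unique binary address."""
    return math.ceil(math.log2(num_locations))

print(address_lines(4 * 2**20))    # 4M locations -> 22 address lines
print(address_lines(32 * 2**10))   # 32K locations -> 15 address lines
```

The second call anticipates the 16-chip, 32K example on a later slide.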

  18. Memory Organization • Physical memory usually consists of more than one RAM chip. • A single memory module causes all accesses to memory to be sequential - only one memory access can be performed at a time • By splitting or spreading memory across multiple memory modules (or banks), access can be performed in parallel – this is called Memory interleaving • With low-order interleaving, the low order bits of the address specify which memory bank contains the address of interest. • In high-order interleaving, the high order address bits specify the memory bank. Lecture

  19. Memory Organization • Example: Suppose we have a memory consisting of 16 2K × 8-bit chips. • Memory is 32K = 2^5 × 2^10 = 2^15 • 15 bits are needed for each address. • We need 4 bits to select the chip, and 11 bits for the offset into the chip that selects the byte. Lecture

  20. Memory Organization • In high-order interleaving the high-order 4 bits select the chip. • In low-order interleaving the low-order 4 bits select the chip. Lecture
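The two interleaving schemes differ only in which end of the 15-bit address holds the 4 chip-select bits. A sketch for the 16-chip, 2K-per-chip example above (function names and the sample address are illustrative):

```python
CHIP_BITS, OFFSET_BITS = 4, 11    # 16 chips, 2K bytes per chip

def high_order_split(addr):
    """High-order interleaving: the top 4 bits select the chip;
    the remaining 11 bits are the offset within the chip."""
    return addr >> OFFSET_BITS, addr & ((1 << OFFSET_BITS) - 1)

def low_order_split(addr):
    """Low-order interleaving: the bottom 4 bits select the chip;
    the remaining 11 bits are the offset within the chip."""
    return addr & ((1 << CHIP_BITS) - 1), addr >> CHIP_BITS

addr = 0b110_0000_0000_0101       # a 15-bit address (decimal 24581)
print(high_order_split(addr))     # -> (12, 5): chip 12, offset 5
print(low_order_split(addr))      # -> (5, 1536): chip 5, offset 1536
```

Low-order interleaving places consecutive addresses in different chips, which is what allows the parallel access described on the previous slide.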

  21. CS 3501 - Chapter 4 (Sec 5.1 &5.2) Dr. Clincy Professor of CS Dr. Clincy Lecture Slide 21

  22. A Discussion on Assemblers • Mnemonic instructions, such as LOAD 104, are easy for humans to write and understand, and labels can be used to identify particular memory locations. • They cannot, however, be understood by computers directly. • Assemblers translate instructions that are comprehensible to humans into the machine language that is comprehensible to computers. • We note the distinction between an assembler and a compiler: in assembly language, there is a one-to-one correspondence between a mnemonic instruction and its machine code. With compilers, this is not usually the case. • Assemblers create an object program file from mnemonic source code (an assembly program) in two passes. • During the first pass, the assembler assembles as much of the program as it can, while it builds a symbol table that contains memory references for all symbols in the program. • During the second pass, the instructions are completed using the values from the symbol table. Lecture

  23. A Discussion on Assemblers • Consider our example program at the right. • Note that we have included two directives, HEX and DEC, that specify the radix of the constants. • The first pass creates a symbol table and the partially-assembled instructions as shown (e.g., the assembler doesn’t yet know that X is located at address 104). • After the first pass, the translated instructions are still incomplete. • (Figure callouts: the mnemonic instructions are alphanumeric names; a label names a memory location.) Lecture

  24. A Discussion on Assemblers • After the second pass, the assembler uses the symbol table to fill in the addresses and create the corresponding machine language instructions • After the second pass, it knows that X is located at address 104, and the program is fully translated to machine code Lecture
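The two-pass process described above can be sketched as a toy assembler. This uses a hypothetical mini-syntax ("LABEL, OP ARG" marks a labeled line), not MARIE's real encoding: pass 1 walks the program recording each label's address in a symbol table, and pass 2 substitutes those addresses for the symbolic operands.

```python
def assemble(lines, start=0x100):
    """Toy two-pass assembler: returns (symbol_table, resolved_code)."""
    symbols, addr = {}, start
    for line in lines:                     # pass 1: build the symbol table
        label, _, rest = line.partition(",")
        if rest:                           # "X, DEC 35" style labeled line
            symbols[label.strip()] = addr
        addr += 1                          # one memory word per line
    out = []
    for line in lines:                     # pass 2: resolve the operands
        _, _, rest = line.partition(",")
        text = (rest or line).strip()
        op, _, operand = text.partition(" ")
        out.append((op, symbols.get(operand, operand)))
    return symbols, out

symbols, code = assemble(["LOAD X", "ADD Y", "HALT",
                          "X, DEC 35", "Y, DEC 61"])
print(hex(symbols["X"]))   # -> 0x103: X's address, filled in on pass 2
```

After pass 1, `("LOAD", "X")` is the "partially-assembled" form; only pass 2 can replace `"X"` with `0x103`, which mirrors how the slide's assembler learns that X lives at address 104.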

  25. Extending Our Instruction Set • So far, all of the MARIE instructions that we have discussed use a direct addressing mode. • This means that the address of the operand is explicitly stated in the instruction. • It is often useful to employ indirect addressing, where the instruction gives the address of the address of the operand • If you have ever used pointers in a program, you are already familiar with indirect addressing. Lecture

  26. Extending Our Instruction Set • We have included three indirect addressing mode instructions in the MARIE instruction set. • The first two are LOADI X and STOREI X, where X specifies the address of the operand to be loaded or stored. • In RTL:

LOADI X:  MAR ← X; MBR ← M[MAR]; MAR ← MBR; MBR ← M[MAR]; AC ← MBR
STOREI X: MAR ← X; MBR ← M[MAR]; MAR ← MBR; MBR ← AC; M[MAR] ← MBR

• It would be the same conceptually for AddI, SubI, JumpI, and JnS. Lecture
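The RTL above can be mirrored line for line in Python, which makes the double memory access of indirect addressing explicit. A minimal sketch, with memory modeled as a dict (the function names are illustrative):

```python
def loadi(x, memory):
    mar = x                 # MAR <- X
    mbr = memory[mar]       # MBR <- M[MAR]  (fetch the pointer)
    mar = mbr               # MAR <- MBR
    mbr = memory[mar]       # MBR <- M[MAR]  (fetch the operand itself)
    return mbr              # AC  <- MBR

def storei(x, ac, memory):
    mar = x                 # MAR <- X
    mbr = memory[mar]       # MBR <- M[MAR]  (fetch the pointer)
    mar = mbr               # MAR <- MBR
    mbr = ac                # MBR <- AC
    memory[mar] = mbr       # M[MAR] <- MBR

# Location 0x104 (X) holds a pointer to 0x200, where the operand lives:
memory = {0x104: 0x200, 0x200: 42}
print(loadi(0x104, memory))   # -> 42
```

Note that each instruction touches memory twice: once to read the pointer at X, and once for the operand it points to.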

  27. Extending Our Instruction Set • Our first new instruction is the CLEAR instruction. • All it does is set the contents of the accumulator to all zeroes. • This is the RTL for CLEAR: AC ← 0 Lecture

  28. A Discussion on Decoding • As mentioned earlier, the control unit causes the CPU to execute a sequence of steps correctly. • Control signals are asserted on various components to make those components active. • A computer’s control unit keeps things synchronized, making sure that bits flow to the correct components as the components are needed. • There are two general ways in which a control unit can be implemented: hardwired control and microprogrammed control. • With microprogrammed control, a small program is placed into read-only memory in the microcontroller. • Hardwired controllers implement the control logic using digital logic components; there is a direct connection between the control lines and the machine instructions. Lecture

  29. A Discussion on Decoding • Your text provides a complete list of the register transfer language (RTL, also called register transfer notation, or RTN) for each of MARIE’s instructions. • The RTL (or RTN) actually defines the microoperations of the control unit. • Each microoperation consists of a distinctive signal pattern that is interpreted by the control unit and results in the execution of an instruction. • The signals are fed to combinational circuits within the control unit that carry out the logical operations for the instruction. • Recall, the RTL for the Add instruction is: MAR ← X; MBR ← M[MAR]; AC ← AC + MBR Lecture

  30. A Discussion on Decoding • Each of MARIE’s registers and main memory has a unique address along the datapath (0 through 7). • The addresses take the form of signals issued by the control unit. • Let us define two sets of three signals. • One set, P2, P1, P0, controls reading from memory or a register, • and the other set, consisting of P5, P4, P3, controls writing to memory or a register. • Let’s examine MARIE’s MBR (with address 3) • Keep in mind from Ch 3 how registers are configured using flip-flops Lecture

  31. A Discussion on Decoding - MBR • The MBR register is enabled for reading when P0 and P1 are high. • The MBR register is enabled for writing when P3 and P4 are high. Lecture
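The enable conditions above are just an address comparison: MBR sits at datapath address 3 (binary 011), so P2 P1 P0 = 011 selects it for reading and P5 P4 P3 = 011 selects it for writing. A sketch of that decode (the function names are illustrative):

```python
MBR_ADDRESS = 3   # MBR's datapath address, binary 011

def read_select(p2, p1, p0):
    """True when the read-select signals P2 P1 P0 address the MBR."""
    return (p2 << 2 | p1 << 1 | p0) == MBR_ADDRESS

def write_select(p5, p4, p3):
    """True when the write-select signals P5 P4 P3 address the MBR."""
    return (p5 << 2 | p4 << 1 | p3) == MBR_ADDRESS

print(read_select(0, 1, 1))    # P1 and P0 high, P2 low -> True
print(write_select(1, 1, 1))   # addresses register 7, not MBR -> False
```

In hardware this comparison is the AND of P1, P0, and the complement of P2 (and likewise for the write set), which is why the slide describes the MBR as enabled "when P0 and P1 are high."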

  32. A Discussion on Decoding • We note that the signal pattern just described is the same whether our machine uses hardwired or microprogrammed control. • In hardwired control, the bit pattern of the machine instruction in the IR is decoded by combinational logic. • The decoder output works with the control signals of the current system state to produce a new set of control signals. • (Figure callouts: the decoder produces a unique output signal corresponding to the opcode in the IR; combinational logic produces the series of signals that result in the execution of the microoperations; sequential logic produces the timing signal for each tick of the clock – sequential logic is used because the series of timing signals repeats, and on each tick a different group of logic can be activated.) Lecture

  33. A Discussion on Decoding - ADD • (Figure callouts: the bit pattern for the Add instruction, 0011, is in the IR; the timing signals ANDed with the instruction bits produce the required behavior; the result appears on the control lines and bits controlling the register functions and the ALU.) Lecture

  34. A Discussion on Decoding • The hardwired approach is FAST; however, the control logic is tied together via circuits and is complex to modify. • In microprogrammed control, the control can be modified more easily. • In microprogrammed control, instruction microcode produces the control signal changes. • Machine instructions are the input for a microprogram that converts the 1s and 0s of an instruction into control signals. • The microprogram is stored in firmware, which is also called the control store. • A microcode instruction is retrieved during each clock cycle. Lecture

  35. A Discussion on Decoding • All machine instructions are input into the microprogram. The microprogram’s job is to convert the machine instructions into control signals. • In the hardwired approach, timing signals from the clock are ANDed using combinational logic circuits to invoke the control signals. • In the microprogram approach, the instruction microcode produces changes in the datapath signals. Lecture
