Computer Architecture, Lecture 3: Basic Fundamentals and Instruction Sets
The Task of a Computer Designer

Chapter outline:
1.1 Introduction
1.2 The Task of a Computer Designer
1.3 Technology and Computer Usage Trends
1.4 Cost and Trends in Cost
1.5 Measuring and Reporting Performance
1.6 Quantitative Principles of Computer Design
1.7 Putting It All Together: The Concept of Memory Hierarchy

The design cycle: evaluate existing systems for bottlenecks (driven by benchmarks and workloads), simulate new designs and organizations, and implement the next-generation system, all while weighing technology trends and implementation complexity.
Technology and Computer Usage Trends (section 1.3)

• When building a cathedral, numerous very practical considerations have to be taken into account:
  • the available materials
  • worker skills
  • the client's willingness to pay the price
• Similarly, computer architecture is about working within constraints:
  • What will the market buy?
  • Cost/performance
  • Tradeoffs in materials and processes
Trends

Gordon Moore (a founder of Intel) observed in 1965 that the number of transistors that could be crammed onto a chip doubled every year; he later revised the estimate to roughly every two years. This exponential trend ("Moore's Law") has held, at least approximately, for decades since.
Measuring and Reporting Performance (section 1.5)

• This section talks about:
  • Metrics - how do we describe the performance of a computer numerically?
  • What tools do we use to obtain those metrics?
Metrics

• Time to run the task (ExTime): execution time, response time, latency
• Tasks per day, hour, week, second, ns, ... (Performance): throughput, bandwidth

Plane              DC to Paris   Speed      Passengers   Throughput (pmph)
Boeing 747         6.5 hours     610 mph    470          286,700
BAC/Sud Concorde   3 hours       1350 mph   132          178,200
Metrics - Comparisons

"X is n times faster than Y" means:

    ExTime(Y)     Performance(X)
    ---------  =  --------------  =  n
    ExTime(X)     Performance(Y)

• Speed of Concorde vs. Boeing 747
• Throughput of Boeing 747 vs. Concorde
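As a quick check of this definition against the 747/Concorde numbers in the table above, here is a small C sketch (arithmetic only, no data beyond the table):

    #include <stdio.h>

    int main(void) {
        /* Latency (trip time): Concorde 3 h vs. Boeing 747 6.5 h */
        double speedup_time = 6.5 / 3.0;              /* ~2.17: Concorde is ~2.2 times faster */

        /* Throughput in passenger-miles per hour */
        double tput_747      = 610.0 * 470.0;         /* 286,700 pmph */
        double tput_concorde = 1350.0 * 132.0;        /* 178,200 pmph */
        double speedup_tput  = tput_747 / tput_concorde;  /* ~1.6: the 747 wins on throughput */

        printf("Concorde is %.2f times faster (latency)\n", speedup_time);
        printf("The 747 has %.2f times the throughput\n", speedup_tput);
        return 0;
    }

The two metrics give opposite winners, which is exactly the point of the comparison.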
Metrics - Comparisons

Pat has developed a new product, "rabbit", whose performance she wishes to determine. There is special interest in comparing the new product, rabbit, to the old product, turtle, since the product was rewritten for performance reasons. (Pat had used performance-engineering techniques and thus knew that rabbit was "about twice as fast" as turtle.) The measurements showed:

Product   Transactions/second   Seconds/transaction   Seconds to process transaction
Turtle    30                    0.0333                3
Rabbit    60                    0.0166                1

Which of the following statements reflect the performance comparison of rabbit and turtle?
o Rabbit is 100% faster than turtle.
o Rabbit is twice as fast as turtle.
o Rabbit takes 1/2 as long as turtle.
o Rabbit takes 1/3 as long as turtle.
o Rabbit takes 100% less time than turtle.
o Rabbit takes 200% less time than turtle.
o Turtle is 50% as fast as rabbit.
o Turtle is 50% slower than rabbit.
o Turtle takes 200% longer than rabbit.
o Turtle takes 300% longer than rabbit.
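One way to check the statements is to compute the ratios directly from the transactions-per-second column; a small C sketch (pure arithmetic, no data beyond the table):

    #include <stdio.h>

    int main(void) {
        double turtle_tps = 30.0, rabbit_tps = 60.0;
        double turtle_time = 1.0 / turtle_tps;   /* seconds per transaction */
        double rabbit_time = 1.0 / rabbit_tps;

        printf("Rabbit is %.0f%% faster than turtle\n",
               (rabbit_tps / turtle_tps - 1.0) * 100.0);          /* 100% */
        printf("Rabbit takes %.2f of turtle's time\n",
               rabbit_time / turtle_time);                        /* 0.50 */
        printf("Turtle takes %.0f%% longer than rabbit\n",
               (turtle_time / rabbit_time - 1.0) * 100.0);        /* 100% */
        return 0;
    }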
Metrics - Throughput

Different throughput metrics apply at different levels of the system:
• Application: answers per month, operations per second
• Programming language / compiler: (millions of) instructions per second - MIPS; (millions of) floating-point operations per second - MFLOP/s
• ISA / datapath / control: megabytes per second
• Function units: cycles per second (clock rate)
• Transistors, wires, pins
Methods for Predicting Performance
• Benchmarks, traces, mixes
• Hardware: cost, delay, area, power estimation
• Simulation (many levels): ISA, RT, gate, circuit
• Queuing theory
• Rules of thumb
• Fundamental "laws"/principles
Benchmarks

SPEC: System Performance Evaluation Cooperative
• First round, 1989: 10 programs yielding a single number ("SPECmarks")
• Second round, 1992: SPECint92 (6 integer programs) and SPECfp92 (14 floating-point programs)
  • Compiler flags unlimited. Example flags from a March '93 report for the DEC 4000 Model 610:
    spice: unix.c:/def=(sysv,has_bcopy,"bcopy(a,b,c)= memcpy(b,a,c)"
    wave5: /ali=(all,dcom=nat)/ag=a/ur=4/ur=200
    nasa7: /norecu/ag=a/ur=4/ur2=200/lc=blas
• Third round, 1995: new set of programs, SPECint95 (8 integer programs) and SPECfp95 (10 floating-point)
  • "Benchmarks useful for 3 years"
  • Single flag setting for all programs: SPECint_base95, SPECfp_base95
Benchmarks

CINT2000 (Integer Component of SPEC CPU2000):

Program        Language   What Is It
164.gzip       C          Compression
175.vpr        C          FPGA Circuit Placement and Routing
176.gcc        C          C Programming Language Compiler
181.mcf        C          Combinatorial Optimization
186.crafty     C          Game Playing: Chess
197.parser     C          Word Processing
252.eon        C++        Computer Visualization
253.perlbmk    C          PERL Programming Language
254.gap        C          Group Theory, Interpreter
255.vortex     C          Object-oriented Database
256.bzip2      C          Compression
300.twolf      C          Place and Route Simulator

http://www.spec.org/osg/cpu2000/CINT2000/
Benchmarks

CFP2000 (Floating Point Component of SPEC CPU2000):

Program         Language     What Is It
168.wupwise     Fortran 77   Physics / Quantum Chromodynamics
171.swim        Fortran 77   Shallow Water Modeling
172.mgrid       Fortran 77   Multi-grid Solver: 3D Potential Field
173.applu       Fortran 77   Parabolic / Elliptic Differential Equations
177.mesa        C            3-D Graphics Library
178.galgel      Fortran 90   Computational Fluid Dynamics
179.art         C            Image Recognition / Neural Networks
183.equake      C            Seismic Wave Propagation Simulation
187.facerec     Fortran 90   Image Processing: Face Recognition
188.ammp        C            Computational Chemistry
189.lucas       Fortran 90   Number Theory / Primality Testing
191.fma3d       Fortran 90   Finite-element Crash Simulation
200.sixtrack    Fortran 77   High Energy Physics Accelerator Design
301.apsi        Fortran 77   Meteorology: Pollutant Distribution

http://www.spec.org/osg/cpu2000/CFP2000/
Sample Results for SPECint2000 Benchmarks

http://www.spec.org/osg/cpu2000/results/res2000q3/cpu2000-20000718-00168.asc
Intel OR840 (1 GHz Pentium III processor)

                   Base       Base       Base    Peak       Peak       Peak
Benchmarks         Ref Time   Run Time   Ratio   Ref Time   Run Time   Ratio
164.gzip           1400       277        505*    1400       270        518*
175.vpr            1400       419        334*    1400       417        336*
176.gcc            1100       275        399*    1100       272        405*
181.mcf            1800       621        290*    1800       619        291*
186.crafty         1000       191        522*    1000       191        523*
197.parser         1800       500        360*    1800       499        361*
252.eon            1300       267        486*    1300       267        486*
253.perlbmk        1800       302        596*    1800       302        596*
254.gap            1100       249        442*    1100       248        443*
255.vortex         1900       268        710*    1900       264        719*
256.bzip2          1500       389        386*    1500       375        400*
300.twolf          3000       784        382*    3000       776        387*
SPECint_base2000                         438
SPECint2000                                                            442
Benchmarks - Performance Evaluation

• "For better or worse, benchmarks shape a field."
• Good products get created when you have:
  • good benchmarks
  • good ways to summarize performance
• Since sales are, in part, a function of performance relative to the competition, vendors invest in improving the product as reported by the performance summary.
• If the benchmarks or the summary are inadequate, a vendor must choose between improving the product for real programs and improving it only to get better reported numbers and more sales. Sales almost always win!
• Execution time is the measure of computer performance!
Benchmarks - How to Summarize Performance

Management would like to have one number. Technical people want more:
• They want evidence of reproducibility - there should be enough information that you or someone else can repeat the experiment.
• The measurements should be consistent when they are repeated multiple times.

How would you report these results?
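SPEC's own answer is to report the geometric mean of the per-benchmark ratios. A small C sketch of that summary, fed with the SPECint_base2000 ratios from the sample report above (the result lands near the reported 438):

    #include <stdio.h>
    #include <math.h>   /* link with -lm */

    int main(void) {
        /* Base ratios from the sample SPECint2000 report above */
        double ratio[] = {505, 334, 399, 290, 522, 360,
                          486, 596, 442, 710, 386, 382};
        int n = sizeof ratio / sizeof ratio[0];

        double log_sum = 0.0;
        for (int i = 0; i < n; i++)
            log_sum += log(ratio[i]);          /* geometric mean via logs */

        printf("geometric mean = %.0f\n", exp(log_sum / n));   /* ~438 */
        return 0;
    }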
Quantitative Principles of Computer Design (section 1.6)

• Make the common case fast.
• Amdahl's Law: relates the total speedup of a system to the speedup of some portion of that system.
Quantitative Design - Amdahl's Law

Suppose that enhancement E accelerates a fraction F of the task by a factor S, and the remainder of the task is unaffected. The speedup due to enhancement E is:

    ExTime_new = ExTime_old x [ (1 - F) + F / S ]

    Speedup_overall = ExTime_old / ExTime_new = 1 / [ (1 - F) + F / S ]

where F is the fraction of execution time that is enhanced and S is the speedup of that enhanced fraction.
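A minimal C sketch of the formula (the F and S values below are made-up examples, not measurements from anywhere in the lecture):

    #include <stdio.h>

    /* Amdahl's Law: overall speedup when a fraction F of execution time
       is sped up by a factor S and the rest is unaffected. */
    double amdahl(double F, double S) {
        return 1.0 / ((1.0 - F) + F / S);
    }

    int main(void) {
        printf("%.2f\n", amdahl(0.40, 10.0));  /* 40% of time sped up 10x -> ~1.56 overall */
        printf("%.2f\n", amdahl(0.40, 1e9));   /* even "infinite" speedup is bounded by 1/(1-F) ~ 1.67 */
        return 0;
    }

The second call is the "make the common case fast" lesson in one line: the unenhanced 60% caps the overall gain.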
Quantitative Design - Cycles Per Instruction

    CPI = (CPU time x clock rate) / instruction count = clock cycles / instruction count

    CPU time = cycle time x SUM_i ( CPI_i x I_i )

    CPI = SUM_i ( CPI_i x F_i ),   where F_i = I_i / instruction count   ("instruction frequency")

and I_i is the number of instructions of type i. Invest resources where time is spent!
Quantitative Design - Cycles Per Instruction

Suppose we have a machine where we can count the frequency with which instructions are executed, and we also know how many cycles each instruction type takes.

Base machine (Reg / Reg):
Op       Freq   Cycles   CPI(i)   (% Time)
ALU      50%    1        0.5      (33%)
Load     20%    2        0.4      (27%)
Store    10%    2        0.2      (13%)
Branch   20%    2        0.4      (27%)
Total CPI                1.5
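The total CPI in the table is just the frequency-weighted sum of the per-class cycle counts; a small C sketch reproducing the table's numbers:

    #include <stdio.h>

    int main(void) {
        /* Instruction mix and cycle counts from the table above */
        const char *name[] = {"ALU", "Load", "Store", "Branch"};
        double freq[]      = {0.50, 0.20, 0.10, 0.20};
        double cycles[]    = {1, 2, 2, 2};

        double cpi = 0.0;
        for (int i = 0; i < 4; i++)
            cpi += freq[i] * cycles[i];            /* CPI = sum of CPI_i * F_i */

        for (int i = 0; i < 4; i++)
            printf("%-6s %4.1f%% of total time\n",
                   name[i], 100.0 * freq[i] * cycles[i] / cpi);
        printf("Total CPI = %.1f\n", cpi);         /* 1.5 */
        return 0;
    }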
Quantitative Design - Locality of Reference

Programs access a relatively small portion of the address space at any instant of time. There are two different types of locality:
• Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (loops, reuse, etc.)
• Spatial locality (locality in space/location): if an item is referenced, items whose addresses are close by tend to be referenced soon (straight-line code, array access, etc.)
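A minimal C sketch showing both kinds of locality: the loop body and the variable sum are reused on every iteration (temporal locality), while a[i] walks through consecutive addresses (spatial locality):

    #include <stdio.h>

    int main(void) {
        int a[1024];
        long sum = 0;

        for (int i = 0; i < 1024; i++)   /* sequential writes: spatial locality */
            a[i] = i;

        for (int i = 0; i < 1024; i++)   /* sequential reads: spatial locality  */
            sum += a[i];                 /* 'sum' and the loop code itself are
                                            reused every iteration: temporal locality */
        printf("%ld\n", sum);
        return 0;
    }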
The Concept of Memory Hierarchy (section 1.7)

Fast memory is expensive. Slow memory is cheap. The goal of a memory hierarchy is to come as close as possible to the speed of the fast memory at something near the cost of the cheap memory - the best performance per dollar at a given price point.
Memory Hierarchy

Registers -> Level 1 cache -> Level 2 cache -> Memory -> Disk
(fastest and smallest at the top; slowest and largest at the bottom)
Memory Hierarchy

• Hit: the data appears in some block in the upper level (example: Block X)
  • Hit rate: the fraction of memory accesses found in the upper level
  • Hit time: time to access the upper level, which consists of the RAM access time + the time to determine hit/miss
• Miss: the data must be retrieved from a block in the lower level (Block Y)
  • Miss rate = 1 - (hit rate)
  • Miss penalty: time to replace a block in the upper level + time to deliver the block to the processor
• Hit time << miss penalty (500 instructions on the 21264!)
Memory Hierarchy

Registers -> Level 1 cache -> Level 2 cache -> Memory -> Disk

• What is the cost of executing a program if:
  • stores are free (there's a write pipe)
  • loads are 20% of all instructions
  • 80% of loads hit (are found) in the Level 1 cache
  • 97% of loads hit in the Level 2 cache?
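One way to answer is to price each load in cycles. The sketch below assumes the 97% figure applies to loads that miss in L1, and the hit times and memory latency are purely hypothetical values chosen for illustration (the slide does not give any):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical latencies in cycles -- not given on the slide */
        double l1_hit = 1, l2_hit = 10, mem_access = 100;

        double load_frac   = 0.20;   /* loads are 20% of instructions */
        double l1_hit_rate = 0.80;   /* 80% of loads hit in L1 */
        double l2_hit_rate = 0.97;   /* assumed: 97% of L1 misses hit in L2 */

        /* Stores are free per the slide, so only loads contribute. */
        double avg_load_time = l1_hit
            + (1 - l1_hit_rate) * (l2_hit + (1 - l2_hit_rate) * mem_access);
        double load_cpi = load_frac * avg_load_time;

        printf("average load time  = %.2f cycles\n", avg_load_time);   /* 3.60 with these numbers */
        printf("memory CPI (loads) = %.2f cycles/instr\n", load_cpi);  /* 0.72 with these numbers */
        return 0;
    }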
The Instruction Set

Chapter outline:
2.1 Introduction
2.2 Classifying Instruction Set Architectures
2.3 Memory Addressing
2.4 Operations in the Instruction Set
2.5 Type and Size of Operands
2.6 Encoding an Instruction Set
2.7 The Role of Compilers
2.8 The DLX Architecture (bonus)
Introduction

The Instruction Set Architecture (ISA) is the portion of the machine visible to the assembly-level programmer or to the compiler writer - the interface between the software and the hardware.

• What are the advantages and disadvantages of various instruction set alternatives?
• How do languages and compilers affect the ISA?
• Use the DLX architecture as an example of a RISC architecture.
Classifying Instruction Set Architectures (section 2.2)

Classifications can be by:
• Stack / accumulator / register
• Number of memory operands
• Number of total operands
Instruction Set Architectures - Basic ISA Classes

Accumulator:
  1 address:     add A        acc <- acc + mem[A]
  1+x address:   addx A       acc <- acc + mem[A + x]
Stack:
  0 address:     add          tos <- tos + next
General-purpose register:
  2 address:     add A B      EA(A) <- EA(A) + EA(B)
  3 address:     add A B C    EA(A) <- EA(B) + EA(C)
Load/Store:
  0 memory operands:   load R1, Mem1
                       load R2, Mem2
                       add  R1, R2
  1 memory operand:    add  R1, Mem2

ALU instructions can have two or three operands, and 0, 1, 2, or 3 of them can be memory operands; shown here for load/store are the cases of 0 and 1.
Instruction Set Architectures - Basic ISA Classes

The consequences of the different address classes are easiest to see with examples, all of which implement the sequence C = A + B (see the sketch below). Registers are the class that won out; in general, the more registers on the CPU, the better.
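For concreteness, here is how C = A + B might look in each class, written in the same illustrative notation as the previous slide (not the syntax of any particular real machine):

    Stack:                     push A ; push B ; add ; pop C
    Accumulator:               load A ; add B ; store C
    Register-Memory:           load R1, A ; add R1, B ; store C, R1
    Load/Store (Reg-Reg):      load R1, A ; load R2, B ; add R3, R1, R2 ; store C, R3

Note how the load/store version uses the most instructions but touches memory only for the explicit loads and stores, which is what makes it easy to pipeline.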
Instruction Set Architectures - Intel 80x86 Integer Registers (figure: the x86 integer register set)
Memory Addressing (section 2.3)

Sections include:
• Interpreting memory addresses
• Addressing modes
• Displacement addressing mode
• Immediate addressing mode
Memory Addressing - Interpreting Memory Addresses

What object is accessed as a function of the address and the length? Objects have byte addresses - an address refers to the number of bytes counted from the beginning of memory.
• Little endian - puts the byte whose address is xx00 at the least-significant position in the word.
• Big endian - puts the byte whose address is xx00 at the most-significant position in the word.
• Alignment - data must be aligned on a boundary equal to its size. A misaligned access typically results in an alignment fault that must be handled by the operating system.
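A small C sketch that inspects the byte order of whatever machine it runs on (the output depends on the host CPU):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t word = 0x11223344;
        uint8_t *bytes = (uint8_t *)&word;   /* view the word one byte at a time */

        /* Little endian: the lowest address holds 0x44 (least significant byte).
           Big endian:    the lowest address holds 0x11 (most significant byte). */
        printf("byte at lowest address: 0x%02x -> %s endian\n",
               (unsigned)bytes[0], bytes[0] == 0x44 ? "little" : "big");
        return 0;
    }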
Memory Addressing - Addressing Modes

(table showing the most common addressing modes)
Memory Addressing - Displacement Addressing Mode

How big should the displacement field be?

For addresses that do fit in the displacement field:
    Add R4, 10000(R0)
For addresses that don't fit in the displacement field, the compiler must do something like:
    Load R1, address
    Add R4, 0(R1)

How big the field should be depends on the displacements seen in typical code. On both IA-32 and DLX, the space allocated is 16 bits.
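Displacement addressing is what compilers typically generate for struct fields and stack locals: a base register holds the object's address and the field offset becomes the small constant displacement. A hypothetical C sketch (the struct, offsets, and register names in the comment are illustrative; the actual layout depends on the target):

    #include <stdio.h>

    struct point { int x; int y; };   /* with 4-byte ints, y sits at offset 4 */

    int get_y(struct point *p) {
        return p->y;                  /* roughly: load Rresult, 4(Rp)  -- base + displacement */
    }

    int main(void) {
        struct point pt = {10, 20};
        printf("%d\n", get_y(&pt));   /* prints 20 */
        return 0;
    }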
Memory Addressing - Immediate Addressing Mode

Used where we want to get a numerical value into an instruction.

At the high level:
    a = b + 3;
    if ( a > 17 ) goto Addr

At the assembler level:
    Load   R2, 3
    Add    R0, R1, R2

    Load   R2, 17
    CMPBGT R1, R2
    Load   R1, Address
    Jump   (R1)
Operations in the Instruction Set (section 2.4)

Sections include:
• Detailed information about the types of instructions
• Instructions for control flow (conditional branches, jumps)
Operations in the Instruction Set - Operator Types

Arithmetic and logical    and, add
Data transfer             move, load
Control                   branch, jump, call
System                    system call, traps
Floating point            add, mul, div, sqrt
Decimal                   add, convert
String                    move, compare
Multimedia (2D, 3D?)      e.g., Intel MMX and Sun VIS
Operations in the Instruction Set - Control Instructions

Conditional branches are about 20% of all instructions!!

Instructions that change the PC:
• (conditional) branches, (unconditional) jumps
• function calls, function returns
• system calls, system returns

Control instruction issues:
• taken or not?
• where is the target?
• linking the return address
• saving or restoring state
Operations in the Instruction Set - Control Instructions

How is the branch condition evaluated? There are numerous tradeoffs:
• Compare and branch in one instruction
  + no extra compare, no state passed between instructions
  -- requires an ALU op in the branch, restricts code-scheduling opportunities
• Implicitly set condition codes (Z, N, V, C)
  + can be set "for free" by ordinary instructions
  -- constrains code reordering, extra state to save/restore
• Explicitly set condition codes
  + can be set "for free", decouples branch/fetch from the pipeline
  -- extra state to save/restore
• Condition in a general-purpose register
  + no special state, but uses up a register
  -- branch condition is separate from the branch logic in the pipeline

Some data for MIPS:
• > 80% of branches use immediate data, and > 80% of those use zero
• 50% of branches test == 0 or <> 0
• The compromise in MIPS: branch==0 and branch<>0 instructions, plus compare instructions for all other comparisons
Operations in the Instruction Set - Control Instructions

Save or restore state - what state?
• Function calls: registers
• System calls: registers, flags, PC, PSW, etc.

Who saves the registers?
• Hardware need not save registers:
  • the caller can save the registers it has in use
  • the callee can save the registers it will use
• Hardware register save (IBM STM, VAX CALLS) - faster?
• Many recent architectures do no register saving, or do implicit register saving with register windows (SPARC)

Where does the link (return address) go?
• Implicit register - used by many recent architectures
  + fast, simple
  -- software must save the register before the next call; surprise traps?
• Explicit register
  + may avoid saving the register
  -- the register must be specified
• Processor stack
  + recursion handled directly
  -- complex instructions
Type and Size of Operands (section 2.5)

The type of the operand is usually encoded in the opcode - e.g., a LDW implies loading of a word. Common sizes are:
• Character (1 byte)
• Half word (16 bits)
• Word (32 bits)
• Single-precision floating point (1 word)
• Double-precision floating point (2 words)

Integers are two's-complement binary. Floating point is IEEE 754. Some languages (like COBOL) use packed decimal.
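These sizes map directly onto C's basic types; a quick sizeof check (the exact sizes are compiler- and target-dependent; the comments give the usual conventions on 32/64-bit machines):

    #include <stdio.h>

    int main(void) {
        printf("char   : %zu byte(s)\n", sizeof(char));    /* character: 1 byte */
        printf("short  : %zu byte(s)\n", sizeof(short));   /* half word: 16 bits on most targets */
        printf("int    : %zu byte(s)\n", sizeof(int));     /* word: 32 bits on most targets */
        printf("float  : %zu byte(s)\n", sizeof(float));   /* single-precision IEEE 754 */
        printf("double : %zu byte(s)\n", sizeof(double));  /* double-precision IEEE 754 */
        return 0;
    }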
Encoding an Instruction Set (section 2.6)

This section has to do with how an assembly-level instruction is encoded into binary. Ultimately, it's the binary that is read and interpreted by the machine.
80x86 Instruction Encoding Encoding And Instruction Set Here’s some sample code that’s been disassembled. It was compiled with the debugger option so is not optimized. for ( index = 0; index < iterations; index++ ) 0040D3AF C7 45 F0 00 00 00 00 mov dword ptr [ebp-10h],0 0040D3B6 EB 09 jmp main+0D1h (0040d3c1) 0040D3B8 8B 4D F0 mov ecx,dword ptr [ebp-10h] 0040D3BB 83 C1 01 add ecx,1 0040D3BE 89 4D F0 mov dword ptr [ebp-10h],ecx 0040D3C1 8B 55 F0 mov edx,dword ptr [ebp-10h] 0040D3C4 3B 55 F8 cmp edx,dword ptr [ebp-8] 0040D3C7 7D 15 jge main+0EEh (0040d3de) long_temp = (*alignment + long_temp) % 47; 0040D3C9 8B 45 F4 mov eax,dword ptr [ebp-0Ch] 0040D3CC 8B 00 mov eax,dword ptr [eax] 0040D3CE 03 45 EC add eax,dword ptr [ebp-14h] 0040D3D1 99 cdq 0040D3D2 B9 2F 00 00 00 mov ecx,2Fh 0040D3D7 F7 F9 idiv eax,ecx 0040D3D9 89 55 EC mov dword ptr [ebp-14h],edx 0040D3DC EB DA jmp main+0C8h (0040d3b8) This code was produced using Visual Studio
80x86 Instruction Encoding Encoding And Instruction Set Here’s some sample code that’s been disassembled. It was compiled with optimization for ( index = 0; index < iterations; index++ ) 00401000 8B 0D 40 54 40 00 mov ecx,dword ptr ds:[405440h] 00401006 33 D2 xor edx,edx 00401008 85 C9 test ecx,ecx 0040100A 7E 14 jle 00401020 0040100C 56 push esi 0040100D 57 push edi 0040100E 8B F1 mov esi,ecx long_temp = (*alignment + long_temp) % 47; 00401010 8D 04 11 lea eax,[ecx+edx] 00401013 BF 2F 00 00 00 mov edi,2Fh 00401018 99 cdq 00401019 F7 FF idiv eax,edi 0040101B 4E dec esi 0040101C 75 F2 jne 00401010 0040101E 5F pop edi 0040101F 5E pop esi 00401020 C3 ret This code was produced using Visual Studio
Encoding an Instruction Set - 80x86 Instruction Encoding

Here's some sample code that has been disassembled. It was compiled with optimization. (This code was produced using gcc and gdb. Note that the representation of the code depends on the compiler/debugger!)

for ( index = 0; index < iterations; index++ )
0x804852f <main+143>:  add   $0x10,%esp
0x8048532 <main+146>:  lea   0xfffffff8(%ebp),%edx
0x8048535 <main+149>:  test  %esi,%esi
0x8048537 <main+151>:  jle   0x8048543 <main+163>
0x8048539 <main+153>:  mov   %esi,%eax
0x804853b <main+155>:  nop
0x804853c <main+156>:  lea   0x0(%esi,1),%esi

long_temp = (*alignment + long_temp) % 47;
0x8048540 <main+160>:  dec   %eax
0x8048541 <main+161>:  jne   0x8048540 <main+160>
0x8048543 <main+163>:  add   $0xfffffff4,%esp
Encoding an Instruction Set - 80x86 Instruction Encoding

A morass of disjoint encodings!! Field widths in bits:

ADD:    4 (opcode) | 3 (Reg)  | 1 (W)        | 8 (Disp.)
SHL:    6 (opcode) | 2 (V/w)  | 8 (postbyte) | 8 (Disp.)
TEST:   7 (opcode) | 1 (W)    | 8 (postbyte) | 8 (Immediate)
Encoding And Instruction Set 4 4 8 80x86 Instruction Encoding JE Cond Disp. 16 16 8 CALLF Offset Segment Number 6 2 8 8 MOV D/w postbyte Disp. 5 3 PUSH Reg
The Role of Compilers (section 2.7)

Compiler goals:
• All correct programs execute correctly
• Most compiled programs execute fast (optimizations)
• Fast compilation
• Debugging support
The Role of Compilers - Steps in Compilation

• Parsing -> intermediate representation
• Optimizations along the way: procedure in-lining, jump optimization, loop optimizations, common subexpression elimination, constant propagation, strength reduction
• Register allocation
• Code generation (including pipeline scheduling) -> assembly code