COMPUTER ARCHITECTURE 1. Introduction by Dr. John Abraham University of Texas-Pan American
Computer Architecture • design of computers • instruction sets • hardware components • system organization • two parts • Instruction set architecture (ISA) • Hardware-system architecture (HSA)
Instruction set architecture (ISA) • includes the specifications that determine how machine-language programmers will interact with the computer • A computer is generally viewed in terms of its ISA • which determines the computational characteristics of the computer
Hardware-System architecture (HSA) • Deals with the computer’s major hardware subsystems • CPU • I/O • HSA includes both the logical design and dataflow organization of these components • HSA determines the efficiency of the machine
Computer family architecture • PCs come with varying HSAs • But they all have the same ISA • A computer family is a set of implementations that share the same or similar ISA.
IBM System/360 family architecture • introduced in the 1960s • Models 20, 30, 40, 44, 50, 65 and 91 • Different amounts of memory, speed, storage, etc. • All ran the same software
Other families • DEC PDP-8 family 1965, PDP-11 1970, VAX-11 family 1978 • CDC 6000 family 1960s, CYBER 170 series in the 70s • IBM System /370 family 1970s • IBM Enterprise System Architecture/370, 1988
Compatibility • Ability of different computers to run the same programs • Upward compatibility • High end computers of the same family can run programs written for low end family members • Downward compatibility • Not always possible, since software written for higher end machines may not run on low end machines
History • First Generation • Second Generation • Third Generation • Fourth Generation
First Generation • One-of-a-kind laboratory machines • ENIAC • built by Eckert and Mauchly, consultant: John von Neumann • Not a stored program computer • EDSAC, EDVAC • MARK I through MARK IV • Howard Aiken
First Generation cont. • Used vacuum tubes and electromechanical relays • First commercial product - UNIVAC • tens of thousands of vacuum tubes consumed much power • Produced a lot of heat
Second Generation • Transistor invented in 1948 • John Bardeen, Walter Brattain and William Shockley of Bell Labs • Consumed much less power than vacuum tubes • smaller and more reliable than vacuum tubes • Magnetic-core memory • donut-shaped magnetic elements • provided reliability and speed • 1 megabyte of core memory cost 1 million dollars
Second Generation cont. • 1950s and early 60s • Batch processing to maximize CPU utilization • Multiprogramming operating systems were introduced • Operating system loads different programs into non-overlapping parts of memory • Burroughs introduced execution-stack architecture • uses a stack as an integral part of the CPU • provided hardware support for high level languages
Third Generation • 1963-1975 • Small-scale integration (solid-state) • Medium-scale integration • Core memory • Minicomputers
Fourth generation • Intel’s first microprocessor (the 4004) in 1971 • VLSI • Microcomputers • solid-state memory • Inexpensive storage
More speed is needed • weather forecasting • molecular modeling • electronic design • seismic prospecting
How to achieve more speed • Processor arrays • Useful for array manipulations • CPU intensive repetitive operations • Pipelining • Assembly line fashion • Several instructions are worked on simultaneously - all at different stages
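The overlap described above can be sketched in a few lines. This is a toy schedule for a hypothetical 3-stage pipeline (fetch, decode, execute) with no stalls; the stage names and counts are illustrative, not from any particular processor:

```python
# Toy pipelining illustration: with a 3-stage pipeline, several
# instructions are in flight at once, each at a different stage.
def pipeline_schedule(num_instructions, num_stages=3):
    """Return, per clock cycle, which instruction occupies each stage.

    In an ideal pipeline, stage s of instruction i runs in cycle i + s,
    so n instructions finish in n + num_stages - 1 cycles instead of
    n * num_stages cycles when run one at a time.
    """
    total_cycles = num_instructions + num_stages - 1
    schedule = []
    for cycle in range(total_cycles):
        # Stage s holds instruction (cycle - s), if it exists.
        row = [cycle - s if 0 <= cycle - s < num_instructions else None
               for s in range(num_stages)]
        schedule.append(row)
    return schedule

# 4 instructions, 3 stages: done in 6 cycles rather than 12.
for cycle, row in enumerate(pipeline_schedule(4)):
    print(cycle, row)
```

Each printed row shows the assembly-line effect: while one instruction executes, the next is being decoded and a third is being fetched.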
Achieving higher speed contd. • RISCs (reduced instruction set computers) • as opposed to complex-instruction-set computers (CISCs) • Multiprocessor computers • Many separate processors. • Alternative architectures • neural networks, dataflow, demand-driven, etc.
Classification of Computer Architectures • Von Neumann Machines • Non-von Neumann Machines
Von Neumann Machines • Hardware has CPU, main memory and I/O system • Stored program computer • sequential instruction execution • Single path between CPU and main memory (bottleneck)
Modifications of von Neumann machines • Harvard architectures • provide independent pathways for data addresses, data, instruction addresses, and instructions • Allow the CPU to access instructions and data simultaneously
CPU • Control unit • ALU • registers • program counter
Instructions • instructions are stored as values in memory • These values tell the computer what operation to perform • Every instruction has a set of fields • these fields provide specific details to the control unit • Instruction format - the way the fields are laid out
Instructions contd. • Instruction size - how many bytes are needed • Operands - data for the operation • Opcode - numeric code representing the instruction • Instruction set - each CPU has a specific set of instructions it is able to execute • A program is a sequence of instructions
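As a sketch of opcode and operand fields, the snippet below decodes a hypothetical 16-bit instruction word; the field widths and opcode values are invented for illustration and do not belong to any real ISA:

```python
# Hypothetical 16-bit instruction format:
#   bits 15-12: opcode, bits 11-8: destination register,
#   bits  7-4:  source register, bits 3-0: immediate operand.
OPCODES = {0x1: "LOAD", 0x2: "ADD", 0x3: "STORE"}

def decode(word):
    """Split a 16-bit instruction word into its named fields."""
    return {
        "opcode": OPCODES[(word >> 12) & 0xF],  # which operation
        "dest":   (word >> 8) & 0xF,            # destination register
        "src":    (word >> 4) & 0xF,            # source register
        "imm":    word & 0xF,                   # immediate operand
    }

# 0x2135 decodes as: ADD r1, r3, 5
print(decode(0x2135))
```

The instruction format is exactly this layout rule: it tells the control unit which bits to interpret as the opcode and which as operands.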
Instructions contd. • Each instruction in a program has a logical address. • Each instruction has a physical address depending on where in memory it is stored. • The sequence of instructions to execute is called an instruction stream • To keep track of instructions in memory, the PC (program counter) is used
von Neumann machine cycle • instruction fetch • instruction execution • After each fetch the PC points to the next physical address
Flynn classification • SISD - single instruction stream, single data stream (von Neumann computer) • The rest are non-von Neumann classifications • SIMD - single instruction stream, multiple data stream. One CU (control unit) broadcasts each instruction to many processing elements • Processor arrays fall into this category
Flynn classification contd. • MISD - multiple instruction stream, single data stream. No practical use for this type. • MIMD - multiple instruction stream, multiple data stream. • Multiprocessors. More than one independent processor.
Parallel processors • Both SIMD and MIMD machines are called parallel processors • They operate in parallel on more than one datum at a time.
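A loose software analogy for the two styles (not real parallel hardware): the SIMD half applies one operation across many data elements, while the MIMD half runs independent instruction streams, modeled here with threads. The worker names and functions are made up for the example:

```python
import threading

# SIMD flavor: the *same* operation applied to every data element.
data = [1, 2, 3, 4]
simd_result = [x * 2 for x in data]   # one instruction stream, many data

# MIMD flavor: independent "processors", each running its own program.
results = {}
def worker(name, func, arg):
    results[name] = func(arg)

threads = [
    threading.Thread(target=worker, args=("square", lambda x: x * x, 5)),
    threading.Thread(target=worker, args=("negate", lambda x: -x, 7)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(simd_result, results)
```

Both halves operate on more than one datum "at a time"; the difference is whether every element sees the same instruction (SIMD) or each stream follows its own program (MIMD).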
Classification of parallel processors • based on memory organization • Global memory (GM) - one global memory is shared by all processors • current high-performance computers have this type of memory • Local memory (LM) - each processor has its own memory • They share data through explicit communication (e.g., message passing)
SIMD machine characteristics • They distribute processing over a large amount of hardware • They operate concurrently on many different data elements • They perform the same computation on all data elements • One CU(control unit) and many PEs (processing elements)
MIMD machine characteristics • Distribute processing over a number of independent processors • share resources including memory • Each processor operates independently and concurrently • Each processor runs its own program • tightly or loosely coupled
Category examples • SISD (RISC) Uniprocessor MIPS R2000, Sun SPARC, IBM RS/6000 • SISD (CISC) Uniprocessor IBM PC, DEC PDP-11, VAX-11 • GM-SIMD Processor array Burroughs BSP • LM-SIMD Processor array ILLIAC IV, MPP, CM-1 • GM-MIMD Multiprocessor DEC and IBM tightly coupled • LM-MIMD Multiple processors Tandem/16, iPSC/2
Measuring Quality of a computer architecture • Generality • Applicability • Efficiency • Ease of Use • Malleability • Expandability
Generality • Range of applications that can be run on a particular architecture • Generality tends to increase the complexity of application implementations • The more complex a design, the fewer clones will be made of it. (Good/Bad?)
Applicability • Utility of architecture for what it was intended for • Scientific and Engineering applications • computation intensive • General commercial applications
Efficiency • Measure of the average amount of hardware that remains busy during normal computer use. • Because of the low cost of hardware now, efficiency is considered very important.
Ease of use • Ease with which system programs can be developed
Malleability • Ease with which computers in the same family can be implemented using this architecture • Example- machines that differ in size and performance
Expandability • How easy it is to increase the capabilities of an architecture. • Increase the number of devices? Make larger devices?
Factors influencing the success of an architecture • Architectural merit • Open/closed architecture • System performance • System Cost
Architectural merit • Measured by: • applicability • Malleability • Expandability • Compatibility
Open/closed architecture • Example of Open: IBM PC • Example of Closed: Apple
System Performance • Speed of the computer • Benchmark tests • Linpack, Livermore loops, Whetstone, SPEC • Metrics • MIPS, MFLOPS, GFLOPS • Clock ticks per instruction • I/O speed • bandwidth in megabits per second