CS 320, Operating Systems Chapter 1
The god Janus, beardless, Roman coin; in the Bibliothèque Nationale, Paris (credit: Larousse). Found at answers.com
Found at www.pantheon.org • Janus is the Roman god of gates and doors (ianua), beginnings and endings, and hence is represented with a double-faced head, the two faces looking in opposite directions. • He was worshipped at the beginning of harvest time, planting, marriage, birth, and other types of beginnings, especially the beginnings of important events in a person's life. • Janus also represents the transition between primitive life and civilization, between the countryside and the city, peace and war, and the growing-up of young people.
… • Janus was represented with two faces, originally one face was bearded while the other was not (probably a symbol of the sun and the moon). • Later both faces were bearded. • In his right hand he holds a key. • The double-faced head appears on many Roman coins, and around the 2nd century BCE even with four faces.
1.1 What Operating Systems Do • System components • Users • Application programs • O/S • Hardware • The O/S provides an environment where other programs can do useful work
Purposes of the O/S from the user point of view • Provide for ease of use (PC) • Provide for resource utilization (mainframe) • Some combination of the two (networked workstations, handheld devices, etc.)
Purpose of the O/S from the system point of view • Resource allocator • Control program
The “Janus” summary: • The O/S can be regarded as a control program—a means by which the user can control the machine. • The O/S can also be regarded as a service program—it makes available to the user all of the capabilities of the machine. • (A perfect antithesis would be, “a means by which the machine can control the user…”)
Defining an O/S • There is no single definition • The book’s preference: The one program that’s running on a system at all times (the kernel) • Commercial aspect: What vendors choose to include • Legal aspect: What the DOJ said MS could include without violating anti-trust laws (no browser, for example)
Inside the machine: • The CPU and controllers run concurrently • They share and compete for the bus and main memory
Start-up sequence—where does the O/S come from and how is it loaded onto the machine? • Hardware switched on • Bootstrap program loaded (from ROM) • Bootstrap program initializes hardware (registers, etc.) • Bootstrap program loads O/S from disk • Bootstrap program starts O/S running • O/S then sits idle, waiting for an event (an interrupt) to occur
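The start-up steps above can be sketched as an ordered list in Python so the sequence can be checked; the step names are paraphrases of the slide, not output from any real firmware.

```python
# Hypothetical sketch of the bootstrap sequence; illustrative only.

def boot_sequence():
    """Return the bootstrap steps in the order they occur."""
    return [
        "power on",
        "bootstrap program loaded from ROM",
        "bootstrap initializes hardware (registers, etc.)",
        "bootstrap loads O/S from disk",
        "O/S running, waiting for events",
    ]

for step in boot_sequence():
    print(step)
```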
Interrupts: How the O/S is put into action—how different capabilities of the O/S are triggered: • Hardware interrupts generated by attached devices • Software interrupts generated by application programs
When an interrupt is received the machine instruction pointer is set to the initial address of the (O/S) handler code corresponding to the interrupt • When the interrupt handling code finishes, the instruction pointer is reset to the address of the instruction following the last one executed before interrupt handling began
Some systems use a stack-oriented architecture • More commonly, systems use a register-oriented architecture • For the purposes of understanding interrupt handling and other processing, consider the registers on a chip such as the Intel 8088
AX, BX, CX, DX: General purpose • SI, DI: Source and destination for string ops • SP, BP: Stack pointer (top), base pointer (bottom of current stack frame) • IP: Instruction pointer or program counter • Flag register: Miscellaneous information • CS, DS, SS, ES: Segment registers for forming addresses
A more complete sequence of events for interrupt handling • O/S receives interrupt • Saves state for current process, including the instruction pointer (IP) • The state for a process is saved in a process control block (PCB) • The PCB is a data structure, or object, which contains fields for every important piece of information about a running program
The PCB is added to a waiting queue • The waiting queue is a linked structure (software) maintained by the O/S • In other words, the waiting queue is a part of the O/S internals • The instruction pointer may be specifically saved in a fixed memory location or on the stack for ease of access
After saving the state of the previous process, the O/S: • Looks up interrupt address in the interrupt vector table (IVT) • Loads that address into the instruction pointer register • Starts execution (of interrupt handling code) • When that code is finished, the O/S reloads the previous IP value
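The steps above can be sketched in miniature: save the interrupted process’s IP in its PCB, place the PCB on a waiting queue, look up the handler address in an interrupt vector table, run the handler, then restore the saved IP. This is a minimal Python sketch, not real O/S code; all names and values are illustrative.

```python
# Toy model of interrupt handling with a PCB, a waiting queue, and an IVT.
from collections import deque

class PCB:
    """Process control block: holds the saved state of a process."""
    def __init__(self, name, ip):
        self.name = name
        self.ip = ip                  # saved instruction pointer

def handle_interrupt(current, irq, ivt, wait_queue):
    """Service interrupt `irq` and return the IP to resume at."""
    wait_queue.append(current)        # save state: PCB onto the waiting queue
    handler = ivt[irq]                # look up handler address in the IVT
    handler()                         # run the interrupt-handling code
    resumed = wait_queue.popleft()    # take the previous process back
    return resumed.ip                 # reload its saved IP

log = []
ivt = {0: lambda: log.append("timer tick"),
       1: lambda: log.append("key pressed")}

proc = PCB("editor", ip=0x42)
queue = deque()
resume_ip = handle_interrupt(proc, 1, ivt, queue)
print(hex(resume_ip), log)            # the editor resumes where it left off
```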
The Von Neumann execution cycle—how programs loaded in memory are executed in hardware • Fetch = retrieve the instruction at the address in the IP register • Decode = put the instruction into the instruction register and determine the operation and operands contained in it • Execute, possibly leading to various values in general purpose and other registers • (Repeat)—the machine automatically increments the IP unless there is a jump instruction, an interrupt occurs, the program comes to an end, etc.
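The fetch–decode–execute cycle above can be made concrete with a toy interpreter. The three-field instruction format below is invented for this example; real machines encode instructions in binary.

```python
# A toy fetch-decode-execute loop over a small in-memory program.
memory = [
    ("LOAD",  "AX", 7),       # AX <- 7
    ("ADD",   "AX", 5),       # AX <- AX + 5
    ("STORE", "AX", None),    # write AX to an output cell
    ("HALT",  None, None),
]
registers = {"AX": 0, "IP": 0}
output = []

while True:
    instr = memory[registers["IP"]]   # fetch: instruction at address in IP
    op, reg, operand = instr          # decode: operation and operands
    registers["IP"] += 1              # IP auto-increments...
    if op == "LOAD":                  # execute
        registers[reg] = operand
    elif op == "ADD":
        registers[reg] += operand
    elif op == "STORE":
        output.append(registers[reg])
    elif op == "HALT":
        break                         # ...unless the program comes to an end

print(registers["AX"], output)
```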
How the O/S Accesses Main Memory • Storage structure • Main memory = RAM • The only large storage area the processor can address directly • Structured as an array of words • Access is on the basis of a linear sequence of word addresses
Memory management unit • This is specialized hardware on the CPU • Fast memory access needs hardware support
The MMU works this way • One of the parameters needed for a memory load instruction is the address of the desired word • This address is put into the memory address register. • The other parameter needed is the register where the retrieved memory will be put
With these parameters set, execution of a LOAD instruction retrieves the value at the given memory address and puts it into the specified general purpose register • Execution of a STORE instruction is analogous, taking the value from a general purpose register and storing it at the desired address
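The LOAD/STORE behavior described above can be sketched as follows: the memory address register (MAR) selects the word, and a named general-purpose register is the other operand. The register names and memory size are assumptions for illustration.

```python
# Toy model of LOAD and STORE through a memory address register.
ram = [0] * 16                     # main memory as an array of words
regs = {"AX": 0, "BX": 0, "MAR": 0}

def load(dest_reg, address):
    """LOAD: copy the word at `address` into `dest_reg`."""
    regs["MAR"] = address          # the address goes into the MAR first
    regs[dest_reg] = ram[regs["MAR"]]

def store(src_reg, address):
    """STORE: copy `src_reg` into the word at `address`."""
    regs["MAR"] = address
    ram[regs["MAR"]] = regs[src_reg]

regs["AX"] = 99
store("AX", 3)                     # ram[3] <- AX
load("BX", 3)                      # BX <- ram[3]
print(regs["BX"], ram[3])
```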
More on memory management • Note that the function of the MMU doesn’t depend on “what kind” of memory is being accessed • The MMU simply sees addresses and loads and stores values • A user program may issue instructions that use the (data) memory allocated to it • The Von Neumann cycle also automatically accesses (code) memory allocated to a program
In a perfect world, multiple programs and all of their code and data would reside in main memory. • This is not practical for these reasons: • Main memory is too small • Main memory is volatile • Other kinds of storage are needed in a general purpose machine.
A system storage hierarchy • Registers • Cache • Main memory • Electronic disk (dividing line between semiconductor and non-semiconductor tech—may or may not be volatile) • Magnetic disk • Magnetic tapes, optical disks, etc.
Parameters for comparison: Speed, cost, size, volatility • From top to bottom: Fastest to slowest • From top to bottom: Most expensive to least expensive • From top to bottom: Smallest to largest • From top to bottom: Volatile to persistent • Notice that the top is “best” only in speed.
I/O structure • An example of simple I/O is communication with one-character-at-a-time peripherals, like keyboards • One interrupt per character • Device driver issues instructions to the controller by loading controller registers • Data is exchanged through O/S buffers in main memory and device buffers on the controller
DMA, a more complex form of I/O • Direct memory access = DMA • A more efficient model is needed for peripherals which transfer large blocks of data, like disks • One interrupt per character would impose too much overhead • A block is reserved in main memory • Registers on the controller are loaded with the block addresses • A whole block is transferred from secondary storage to the reserved block in memory, with only one interrupt generated per block
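The overhead argument above can be made concrete with a back-of-the-envelope count of interrupts under the two models: one per character versus one per DMA block. The 512-byte block size is an assumed example value, not a figure from the text.

```python
# Comparing interrupt counts: character-at-a-time I/O vs. DMA.
BLOCK_SIZE = 512                       # bytes per DMA block (assumed)

def interrupts_per_char(nbytes):
    """Character-at-a-time I/O: every byte raises an interrupt."""
    return nbytes

def interrupts_dma(nbytes, block=BLOCK_SIZE):
    """DMA: one interrupt per completed block."""
    return -(-nbytes // block)         # ceiling division

nbytes = 4096
print(interrupts_per_char(nbytes))     # 4096 interrupts
print(interrupts_dma(nbytes))          # 8 interrupts
```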
1.3 Computer System Architecture • Single processor systems • One general purpose CPU • Running a general purpose instruction set • Special purpose processors may exist in device controllers, etc.
Multi-processor systems • >1 processor (general purpose CPU) • Gives increased throughput • Economy of scale through sharing of resources • Increased reliability • Fault tolerance • Graceful degradation
Multi-processor models • Asymmetric multi-processing—master-slave • Symmetric multi-processing (SMP)—peer-to-peer • Dual- and multiple-core systems (Intel and analogous architectures) are examples of multi-processing • Blade servers are another example • The O/S has to be written to run on/handle multiple processors
Clustered systems (Beowulf, for example) • Multiple, independent, general purpose systems connected by LAN • In addition to the local O/S, this requires a layer of system software to manage the collaboration between the individual systems
1.4 Operating-System Structure • A large part of the purpose of the next few overheads is to distinguish between these terms: • Multi-processing • Multi-programming • Multi-tasking • On tests, it will be necessary for you to use the terminology the same way the book does.
Multi-processing refers to a hardware architecture which contains more than one CPU • Do not confuse this with multi-programming or multi-tasking
Multi-programming = >1 job loaded in memory at a time (on a single processor system) • When one job has to wait (typically for I/O) another can be scheduled to execute • This maximizes utilization of system resources • It describes an efficient scheduling scheme for a batch system • Note that this term does not imply interactivity. • Do not confuse it with multi-tasking
Multi-tasking = interactive time-sharing • This is multi-programming where switching between jobs is fast enough that user interactivity can be supported. • Individual response times <1 second are reckoned good enough to satisfy most users • User interactions are usually slow I/O operations (from keyboard for example)
Multi-tasking involves scheduling algorithms • These algorithms may also support switching between programs even if they haven’t reached an I/O wait • The term “process” is used to describe a program that has been given a memory footprint (loaded into memory) that is running or ready to be run • However, do not confuse multi-tasking with multi-processing (or multi-programming)
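A minimal round-robin time-slicing sketch illustrates the kind of scheduling algorithm described above: each ready process gets a fixed quantum on the single CPU and is then preempted even if it has not reached an I/O wait. The process names and times are invented for the example.

```python
# Toy round-robin scheduler: fixed time slices, preemption, ready queue.
from collections import deque

def round_robin(procs, quantum):
    """procs: {name: remaining_time}. Return the order of CPU turns."""
    ready = deque(procs.items())
    turns = []
    while ready:
        name, remaining = ready.popleft()    # dispatch the front process
        turns.append(name)
        remaining -= quantum                 # it runs for one time slice
        if remaining > 0:
            ready.append((name, remaining))  # preempted: back of the queue
    return turns

print(round_robin({"editor": 3, "compiler": 5}, quantum=2))
```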
As noted, the term for a runnable program is a process • The business of deciding which program to run is known as process scheduling • This is the question of which process in memory is in possession of the CPU • You can also refer to this as switching between processes or jobs
The business of moving programs back and forth between secondary storage and main memory (loading and storing them) is known as swapping. • On tests, it will be necessary for you to use the terminology switching and swapping correctly.
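The switching/swapping distinction above can be illustrated with a toy model: switching changes which in-memory process holds the CPU, while swapping moves a process image between main memory and secondary storage. All names here are invented.

```python
# Toy illustration of switching (memory <-> CPU) vs. swapping (disk <-> memory).
memory = ["editor", "shell"]      # processes loaded in main memory
disk = ["backup_job"]             # process images swapped out to disk
running = "editor"                # process currently holding the CPU

def switch(to):
    """Switch: hand the CPU to another process already in memory."""
    assert to in memory, "can only switch to an in-memory process"
    return to

def swap_in(name):
    """Swap: move a process image from disk into main memory."""
    disk.remove(name)
    memory.append(name)

running = switch("shell")         # switching
swap_in("backup_job")             # swapping
print(running, memory, disk)
```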
Scheduling is a large topic • A brief preview of some obvious requirements for successful switching and swapping on a single processor system is given below.
CPU—1 job at a time on a single CPU • No conflicts are possible if only one job at a time is allowed • Changing which job in memory holds the CPU = switching • Managing multiple CPUs is an additional topic
Memory— >1 job in memory at a time is allowed • However, conflicts between jobs in memory can’t be allowed—memory allocation is inviolate • Moving between secondary storage and memory = swapping
Secondary storage— >1 job in storage at a time is allowed • However, no conflicts allowed • Where things are placed in secondary storage may vary over time, but at any given time allocation is inviolate