
COT 5611 Operating Systems Design Principles Spring 2010



Presentation Transcript


  1. COT 5611 Operating Systems Design Principles Spring 2010 Dan C. Marinescu Office: HEC 439 B Office hours: M-Wd 1:00-2:00 PM

  2. Names and the three basic abstractions • Memory  stores named objects • write(name, value) value  READ(name) • file system: /dcm/classes/Fall09/Lectures/Lecture5.ppt • Interpreters  manipulate named objects • machine instructions ADD R1,R2 • modules  Variables call sort(table) • Communication Links connect named objects • HTTP protocol used by the Web and file systems Host: boticelli.cs.ucf.edu put /dcm/classes/Fall09/Lectures/Lecture5.ppt get /dcm/classes/Fall09/Lectures/Lecture5.ppt
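To make the WRITE(name, value) / value ← READ(name) interface concrete, here is a minimal sketch (not from the slides) of a memory abstraction as a store of named cells; the function names mem_write/mem_read and the fixed-size table are illustrative assumptions.

```c
#include <stdio.h>
#include <string.h>

#define CELLS 16                       /* illustrative capacity */

/* Toy memory abstraction: a table of (name, value) cells.
   mem_write(name, value) plays the role of WRITE(name, value);
   mem_read(name, &v) plays the role of value <- READ(name).   */
static struct cell { char name[32]; int value; int used; } memory[CELLS];

static int mem_write(const char *name, int value) {
    for (int i = 0; i < CELLS; i++)            /* overwrite an existing cell */
        if (memory[i].used && strcmp(memory[i].name, name) == 0) {
            memory[i].value = value;
            return 0;
        }
    for (int i = 0; i < CELLS; i++)            /* or allocate a fresh one */
        if (!memory[i].used) {
            strncpy(memory[i].name, name, sizeof memory[i].name - 1);
            memory[i].value = value;
            memory[i].used = 1;
            return 0;
        }
    return -1;                                 /* memory full */
}

static int mem_read(const char *name, int *value) {
    for (int i = 0; i < CELLS; i++)
        if (memory[i].used && strcmp(memory[i].name, name) == 0) {
            *value = memory[i].value;          /* most recent WRITE wins */
            return 0;
        }
    return -1;                                 /* name not bound */
}

int main(void) {
    int v;
    mem_write("counter", 42);
    if (mem_read("counter", &v) == 0)
        printf("counter = %d\n", v);           /* prints 42 */
    return 0;
}
```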

  3. Memory • Hardware memory: • Devices • RAM (Random Access Memory) chip • Flash memory  non-volatile memory that can be erased and reprogrammed • Magnetic tape • Magnetic disk • CD and DVD • Systems • RAID • File systems • DBMS (Database Management Systems)

  4. Attributes of the storage medium/system • Durability  how long it remembers the data • Stability  whether or not the data is changed while it is stored • Persistence  a property of the data storage system: it keeps trying to preserve the data

  5. Critical properties of a storage medium/system • Read/Write Coherence  the result of a READ of a memory cell should be the same as the most recent WRITE to that cell. • Before-or-after atomicity  the result of every READ or WRITE is as if that READ or WRITE occurred either completely before or completely after any other READ or WRITE
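A minimal sketch of before-or-after atomicity for a single cell, assuming POSIX threads; serializing READ and WRITE with one lock per cell is an illustrative choice, not the textbook's mechanism.

```c
#include <pthread.h>

/* Sketch: a single cell whose READ and WRITE are made before-or-after
   atomic with a mutex, so each operation appears to happen entirely
   before or entirely after any concurrent operation. */
static long cell;
static pthread_mutex_t cell_lock = PTHREAD_MUTEX_INITIALIZER;

void cell_write(long value) {
    pthread_mutex_lock(&cell_lock);
    cell = value;                 /* no other READ/WRITE can interleave here */
    pthread_mutex_unlock(&cell_lock);
}

long cell_read(void) {
    long value;
    pthread_mutex_lock(&cell_lock);
    value = cell;                 /* sees the most recent completed WRITE */
    pthread_mutex_unlock(&cell_lock);
    return value;
}
```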

  6. (Figure-only slide.)

  7. Why is it hard to guarantee the critical properties? • Concurrency  multiple threads could READ/WRITE to the same cell • Remote storage  the delay to reach the physical storage may not guarantee FIFO operation • Optimizations  data may be buffered to increase I/O efficiency • Cell size may be different from the data size  data may be written to multiple cells. • Replicated storage  difficult to maintain consistency.

  8. Access type; access time • Sequential access • Tapes • CD/DVD • Random access devices • Disk • Seek • Search time • Read/Write time • RAM

  9. Physical memory organization • RAM  two-dimensional array. To select a flip-flop provide the x and y coordinates. • Tapes  blocks of a given length separated by gaps (a special combination of bits). • Disk: • Multiple platters • Cylinders correspond to a particular position of the moving arm • Track  circular pattern of bits on a given platter and cylinder • Record  multiple records on a track

  10. Names and physical addresses Figure 2.2 from the textbook • Location-addressed memory  the hardware maps the physical coordinates to consecutive integers, the addresses • Associative memory  unrestricted mapping; the hardware does not impose any constraints in mapping the physical coordinates

  11. RAID – Redundant Array of Inexpensive Disks • The abstraction put to work to increase performance and durability. • Raid 0  allows concurrent reading and writing. Increases performance but does not improve reliability. • Raid 1  increases durability by replicating the blocks of data on multiple disks (mirroring) • Raid 2  disks are synchronized and striped in very small stripes, often single bytes/words. • Error correction is calculated across corresponding bits on the disks and stored on multiple parity disks (Hamming codes).

  12. (Figure-only slide.)

  13. RAID (cont’d) • Raid 3  striped set with dedicated parity (bit-interleaved or byte-level parity). • Raid 4  block-level striping with a dedicated parity disk; improves reliability by adding error correction

  14. RAID (cont’d) • Raid 5  striped disks with parity; combines three or more disks to protect data against the loss of any one disk. • The storage capacity of the array is reduced by one disk • Raid 6  striped disks with dual parity; combines four or more disks to protect against the loss of any two disks. • Makes larger RAID groups more practical. • Large-capacity drives lengthen the time needed to recover from the failure of a single drive. Single-parity RAID levels are vulnerable to data loss until the failed drive is rebuilt: the larger the drive, the longer the rebuild will take. Dual parity gives time to rebuild the array without the data being at risk if a (single) additional drive fails before the rebuild is complete.
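A minimal sketch of the single-parity idea behind RAID 5: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors. The block size and the number of data disks below are illustrative assumptions.

```c
#include <stdio.h>
#include <stddef.h>

#define BLOCK 8        /* illustrative block size, in bytes */
#define DATA_DISKS 3   /* illustrative number of data disks */

/* XOR the source block into the destination block. */
static void xor_blocks(unsigned char *dst, const unsigned char *src) {
    for (size_t i = 0; i < BLOCK; i++) dst[i] ^= src[i];
}

int main(void) {
    unsigned char data[DATA_DISKS][BLOCK] = { "stripe0", "stripe1", "stripe2" };
    unsigned char parity[BLOCK] = {0};

    for (int d = 0; d < DATA_DISKS; d++)   /* parity = XOR of all data blocks */
        xor_blocks(parity, data[d]);

    /* Suppose disk 1 fails: rebuild its block from the parity block and
       the surviving data blocks (parity ^ data0 ^ data2 == data1). */
    unsigned char rebuilt[BLOCK] = {0};
    xor_blocks(rebuilt, parity);
    xor_blocks(rebuilt, data[0]);
    xor_blocks(rebuilt, data[2]);

    printf("rebuilt block: %.7s\n", (const char *)rebuilt);  /* prints "stripe1" */
    return 0;
}
```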

  15. Interpreters • The active elements of a computer system • Diverse • Hardware  Processor, Disk Controller, Display Controller • Software  • scripting languages: JavaScript, Perl, Python • text processing systems: LaTeX, TeX, Word • browsers: Safari, Google Chrome, Thunderbird • All share three major abstractions/components: • Instruction reference  tells the system where to find the next instruction • Repertoire  the set of actions (instructions) the interpreter is able to perform • Environment reference  tells the interpreter where to find its environment, the state in which it should be to execute the next instruction

  16. An abstract interpreter The three elements allow us to describe the functioning of an interpreter regardless of its physical realization. Interrupt  mechanism allowing an interpreter to deal with a transfer of control. After an instruction is executed, if an interrupt is pending, control is passed to an interrupt handler, which may change the environment for the next instruction. More than a single interpreter may be present.
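A minimal sketch of such an interpreter loop, assuming a made-up three-instruction repertoire: the instruction reference is pc, the repertoire is the switch, the environment reference is the register array, and the interrupt check between instructions mirrors the transfer-of-control mechanism described above.

```c
#include <stdio.h>

enum op { OP_LOADI, OP_ADD, OP_HALT };
struct insn { enum op op; int a, b, c; };   /* regs[c] <- immediate a, or regs[c] <- regs[a] + regs[b] */

static volatile int interrupt_pending;      /* set asynchronously, e.g. by a device */

static void interrupt_handler(int regs[]) {
    /* The handler may change the environment before the next instruction. */
    (void)regs;
    interrupt_pending = 0;
}

void interpret(const struct insn *prog, int regs[]) {
    int pc = 0;                             /* instruction reference */
    for (;;) {
        if (interrupt_pending)              /* transfer of control */
            interrupt_handler(regs);
        struct insn i = prog[pc++];
        switch (i.op) {                     /* repertoire */
        case OP_LOADI: regs[i.c] = i.a;                     break;
        case OP_ADD:   regs[i.c] = regs[i.a] + regs[i.b];   break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    struct insn prog[] = {
        { OP_LOADI, 2, 0, 0 }, { OP_LOADI, 3, 0, 1 },
        { OP_ADD,   0, 1, 2 }, { OP_HALT,  0, 0, 0 }
    };
    int regs[4] = {0};
    interpret(prog, regs);
    printf("R2 = %d\n", regs[2]);           /* prints 5 */
    return 0;
}
```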

  17. Figure 2.5 from the textbook

  18. Processors • Can execute instructions from a specific instruction set • Architecture • PC, IR, SP, GPR, ALU, FPR, FPU • State is saved on a stack by the interrupt handler to transfer control to a different virtual processor (thread).

  19. Interpreters are organized in layers Each layer issues instructions/requests for the next. A lower layer generally carries out multiple instructions for each request from the upper layer.

  20. Communication Links • Two operations • SEND (link_name, outgoing_message_buffer) • RECEIVE (link_name, incoming_message_buffer) • Message  an array of bits • Physical implementation in hardware • Wires • Networks • Ethernet • Internet • The phone system
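A minimal sketch of the SEND/RECEIVE interface over a toy single-slot, in-memory link; the struct layout and return conventions are illustrative assumptions, not a real network implementation.

```c
#include <string.h>

#define MSG_MAX 64

/* Toy communication link: SEND copies an outgoing message into the link,
   RECEIVE copies the pending message into the caller's buffer. */
struct link {
    char   buffer[MSG_MAX];  /* the message in transit: an array of bytes */
    size_t length;           /* number of valid bytes, 0 when the link is empty */
};

int send_msg(struct link *l, const void *outgoing, size_t n) {
    if (l->length != 0 || n == 0 || n > MSG_MAX)
        return -1;                       /* link busy, or message too large */
    memcpy(l->buffer, outgoing, n);
    l->length = n;
    return 0;
}

/* Returns the message length, or -1 if nothing is pending / buffer too small. */
long receive_msg(struct link *l, void *incoming, size_t capacity) {
    if (l->length == 0 || capacity < l->length)
        return -1;
    memcpy(incoming, l->buffer, l->length);
    long n = (long)l->length;
    l->length = 0;                       /* the link is free again */
    return n;
}
```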

  21. The Internet – an extreme example of what hides behind the communication link abstraction • Internet Core and Edge • The hardware • Router • Network adaptor • Hourglass communication model • Protocol stack • It’s a long way to Tipperary – the way a message squeezes through the protocol layers

  22. Naming • The three abstractions manipulate objects identified by name. • How could object A access object B? • Make a copy of object B and include it in A  use by value • Safe  there is a single copy of B • How to implement sharing of object B? • Pass to A the means to access B using its name  use by reference • Not inherently safe  both A and C may attempt to modify B at the same time. Need some form of concurrency control.
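A minimal sketch contrasting the two styles in C, where a copy of the object stands in for use by value and a pointer stands in for use by reference; the struct and function names are made up.

```c
#include <stdio.h>

struct object { int field; };

void use_by_value(struct object b) {       /* A gets its own copy of B */
    b.field = 99;                          /* the caller's B is unaffected */
}

void use_by_reference(struct object *b) {  /* A gets a name (reference) for B */
    b->field = 99;                         /* the shared B is modified; concurrent
                                              users would need concurrency control */
}

int main(void) {
    struct object b = { 1 };
    use_by_value(b);
    printf("after use_by_value: %d\n", b.field);      /* still 1 */
    use_by_reference(&b);
    printf("after use_by_reference: %d\n", b.field);  /* now 99 */
    return 0;
}
```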

  23. Binding and indirection • Indirection  decoupling objects from their physical realization through names. • Names allow the system designer to: • organize the modules of a system and to define communication patterns among them • defer until a later time • the creation of object B referred to by object A • the selection of the specific object A wishes to use • Binding  linking objects to names. Examples: • A compiler constructs • a table of variables and their relative addresses in the data section of the memory map of the process • a list of unsatisfied external references • A linker binds the external references to modules from libraries

  24. Generic naming model • Naming scheme  strategy for naming. Consists of: • Name space  the set of acceptable names; the alphabet used to select the symbols from and the syntax rules. • Universe of values  set of objects/values to be named • Name mapping algorithm  resolves the names, establishes a correspondence between a name and an object/value • Context  the environment in which the model operates. • Example: searching for John Smith in the White Pages in Orlando (one context) or in Tampa (another context). • Sometimes there is only one context  universal name space; e.g., the SSNs. • Default context

  25. Figure 2.10 from the textbook

  26. Operations on names in the abstract model • Simple models: value  RESOLVE(name, context) • The interpreter: • Determines the version of RESOLVE (which naming scheme is used) • Identifies the context • Locates the object • Example: the processor • Complex models support: • creation of new bindings: status  BIND(name, value, context) • deletion of old bindings: status  UNBIND(name, value) • enumeration of the name space: list  ENUMERATE(context) • comparing names: result  COMPARE(name1, name2)
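A minimal sketch of these operations over a single table-lookup context; the names ctx_bind/ctx_resolve/ctx_unbind mirror BIND, RESOLVE, and UNBIND, and the fixed-size table is an illustrative assumption.

```c
#include <string.h>

#define BINDINGS 32   /* illustrative capacity of one context */

struct context {
    struct { const char *name; void *value; } table[BINDINGS];
    int count;
};

/* RESOLVE: return the value bound to name in this context, or NULL. */
void *ctx_resolve(struct context *ctx, const char *name) {
    for (int i = 0; i < ctx->count; i++)
        if (strcmp(ctx->table[i].name, name) == 0)
            return ctx->table[i].value;
    return NULL;                             /* name not bound */
}

/* BIND: add a (name, value) binding; the caller's name string must outlive it. */
int ctx_bind(struct context *ctx, const char *name, void *value) {
    if (ctx->count == BINDINGS) return -1;   /* context full */
    ctx->table[ctx->count].name  = name;
    ctx->table[ctx->count].value = value;
    ctx->count++;
    return 0;
}

/* UNBIND: delete an existing binding. */
int ctx_unbind(struct context *ctx, const char *name) {
    for (int i = 0; i < ctx->count; i++)
        if (strcmp(ctx->table[i].name, name) == 0) {
            ctx->table[i] = ctx->table[--ctx->count];  /* remove by swapping in the last entry */
            return 0;
        }
    return -1;
}
```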

  27. Name mapping • Name-to-value mapping • One-to-One  the name identifies a single object • Many-to-One  multiple names identify one object (aliasing) • One-to-Many  multiple objects have the same name, even in the same context. • Stable bindings  the mapping never changes. Examples: • Social Security Numbers • CustomerId for customer billing systems

  28. Name-mapping algorithms • Table lookup • Phone book • Port numbers  a port is the end point of a network connection • Recursive lookup: • File systems – path names • Host names – DNS (Domain Name System) • Names for Web objects – URL (Uniform Resource Locator) • Multiple lookup  searching through multiple contexts • Libraries • Example: the classpath is the path that the Java runtime environment searches for classes and other resource files

  29. 1. Table lookup Figure 2.11 from the textbook

  30. How to determine the context • Context references: • Default  supplied by the name resolver • Constant  built into the name resolver • Processor registers (hardwired) • Virtual memory (the page table register of an address space) • Variable  supplied by the current environment • File name (the working directory) • Explicit  supplied by the object requesting the name resolution • Per object • Looking up a name in the phone book • Per name  each name is loaded with its own context reference (qualified name). • URL • Host names used by DNS

  31. Dynamic and multiple contexts • A context reference can be static or dynamic. • Example: the context of the “help” command is dynamic; it depends on where you are at the time of the command. • A message is encapsulated (a new header is added) as it flows down the protocol stack: • Application layer (application header understood only in the application context) • Transport layer (transport header understood only in the transport context) • Network layer (network header understood only in the network context) • Data link layer (data link header understood only in the data link context)

  32. 2. Recursive name resolution • Contexts are structured and recursion is needed for name resolution. • Root  a special context reference – a universal name space • Path name  a name which includes an explicit reference to the context in which the name is to be resolved. • Example: first paragraph of page 3 in part 4 of section 10 of chapter 1 of the book “Alice in Wonderland.” • The path name includes multiple components known to the user of the name and to the name resolver • At least one element of the path name must be an explicit context reference • Absolute path name  the recursion ends at the root context. • Relative path name  a path name that is resolved by looking up its most significant component

  33. Example • AliceInWonderland.Chapter1.Section10.Part4.Page3.FirstParagraph (most significant component on the left, least significant on the right)
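A minimal sketch of recursive resolution for a dotted path name, using two made-up nested contexts that mirror the book example; each component is resolved in the context produced by the previous, more significant, component.

```c
#include <stdio.h>
#include <string.h>

/* Two made-up nested contexts: a book context that binds chapter names,
   and a chapter context that binds section names. */
struct node { const char *name; const struct node *children; };

static const struct node chapter1[] = { { "Section10", NULL }, { NULL, NULL } };
static const struct node book[]     = { { "Chapter1", chapter1 }, { NULL, NULL } };

/* Resolve a dotted path: each component is looked up in the context
   reached by resolving the previous (more significant) component. */
static const struct node *resolve_path(const struct node *ctx, char *path) {
    const struct node *found = NULL;
    for (char *part = strtok(path, "."); part; part = strtok(NULL, ".")) {
        found = NULL;
        for (const struct node *n = ctx; n && n->name; n++)
            if (strcmp(n->name, part) == 0) { found = n; break; }
        if (!found) return NULL;       /* component not bound in this context */
        ctx = found->children;         /* the next component is resolved here */
    }
    return found;
}

int main(void) {
    char path[] = "Chapter1.Section10";           /* relative to the book context */
    const struct node *n = resolve_path(book, path);
    printf("%s\n", n ? n->name : "not found");    /* prints Section10 */
    return 0;
}
```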

  34. 3. Multiple lookup • Search path  a list of contexts to be searched. Example: the classpath is the path that the Java runtime environment searches for classes and other resource files • User-specific search paths  user-specific bindings • The contexts can be in concentric layers. If the resolver fails in an inner layer it moves automatically to the outer layer. • Scope of a name  the range of layers in which a name is bound to the same object.
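A minimal sketch of multiple lookup along a search path, with flat name/value tables standing in for contexts; the two-layer example (a user layer inside a system layer) is made up.

```c
#include <stdio.h>
#include <string.h>

/* A context is a NULL-terminated list of name,value string pairs. */
static const char *lookup(const char *name, const char **ctx) {
    for (int i = 0; ctx[i]; i += 2)
        if (strcmp(ctx[i], name) == 0) return ctx[i + 1];
    return NULL;
}

/* Multiple lookup: try each context on the search path, innermost first. */
static const char *search(const char *name, const char ***path, int layers) {
    for (int l = 0; l < layers; l++) {
        const char *value = lookup(name, path[l]);
        if (value) return value;                    /* first binding found wins */
    }
    return NULL;                                    /* unbound in every layer */
}

int main(void) {
    const char *user_ctx[]   = { "editor", "vim", NULL };
    const char *system_ctx[] = { "editor", "ed", "shell", "sh", NULL };
    const char **path[]      = { user_ctx, system_ctx };
    printf("editor -> %s\n", search("editor", path, 2));   /* vim: bound in the inner layer */
    printf("shell  -> %s\n", search("shell",  path, 2));   /* sh: only bound in the outer layer */
    return 0;
}
```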

  35. Comparing names • Questions • Are two names the same?  easy to answer • Are two names referring to the same object (bound to the same value)?  harder; we need the contexts of the two names. • If the objects are memory cells, are the contents of these cells the same?

  36. Name discovery • Two actors: • The exporter  advertises the existence of the name. • The prospective user  searches for the proper advertisement. Example: the creator of a math library advertises the functions. • Methods • Well-known names • Broadcasting • Directed query • Broadcast query • Introduction • Physical rendezvous

  37. Computer System Organization • Operating Systems (OS)  software used to • Control the allocation of resources (hardware and software) • Support user applications • Sandwiched between the hardware layer and the application layer • OS-bypass: the OS does not completely hide the hardware from applications. It only hides dangerous functions such as • I/O operations • Management functions • Names  modularization

  38. Figure 2.16 from the textbook

  39. The hardware layer • Modules representing each of the three abstractions (memory, interpreter, communication link) are interconnected by a bus. • The bus  a broadcast communication channel, each module hears every transmission. • Control lines • Data lines • Address lines • Each module • is identified by a unique address • has a bus interface • Modules other than processors need a controller.

  40. Figure 2.17 from the textbook

  41. Bus sharing and optimization • Communication  broadcast • Arbitration protocol  decides which module has control of the bus. Supported by hardware: • a bus arbiter circuit • distributed among interfaces – each module has a priority • daisy chaining • Split-transaction  a module uses the arbitration protocol to acquire control of the bus, issues its request, and releases the bus while waiting for the response • Optimization: • hide the latency of I/O devices • Channels  dedicated processors capable of executing a channel program (IBM) • DMA (Direct Memory Access) • Support transparent access to files: • Memory-mapped I/O

  42. Optimization • Direct Memory Access (DMA): • supports direct transfer between an I/O device (e.g., the disk controller) and memory; the processor only provides the disk address and the address of the block in memory where data is to be read into or written from. • hides the disk latency; it allows the processor to execute a process while the data is transferred • Memory-mapped I/O: • LOAD and STORE instructions access the registers and buffers of an I/O module • bus addresses are assigned to the control registers and buffers of the I/O module • the processor maps bus addresses to its own address space (registers) • Supports software functions such as UNIX mmap, which maps an entire file into the address space. • Swap area: the disk image of the virtual memory of a process.
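A minimal sketch of memory-mapped file access using POSIX mmap, one realization of the idea above; the file name data.bin is made up and error handling is kept minimal.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDONLY);          /* "data.bin" is a made-up name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }
    if (st.st_size == 0) { close(fd); return 0; } /* nothing to map */

    /* Map the whole file read-only into the address space: after this,
       ordinary pointer (LOAD) accesses read the file's bytes directly. */
    char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("first byte: 0x%02x\n", (unsigned)(unsigned char)p[0]);

    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}
```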

  43. DMA Transfer

  44. B. The software layer: the file abstraction • File: memory abstraction used by the application and OS layers • linear array of bits/bytes • properties: • durable  the information will not be lost over time • has a name • allows access to individual bits/bytes  has a cursor which defines the current position in the file. • The OS provides an API (Application Programming Interface) supporting a range of file manipulation operations. • A user must first OPEN a file before accessing it and CLOSE it after finishing with it. This strategy: • allows different access rights (READ, WRITE, READ-WRITE) • coordinates concurrent access to the file • Some file systems • use OPEN and CLOSE to enforce before-or-after atomicity • support all-or-nothing atomicity, e.g., ensure that if the system crashes before a CLOSE either all or none of the WRITEs are carried out

  45. Open and Read operations
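A minimal sketch of the OPEN/READ/CLOSE pattern from slide 44 using the POSIX calls open, read, and close; the file name notes.txt is made up and error handling is kept to the bare minimum.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("notes.txt", O_RDONLY);        /* OPEN with READ access rights */
    if (fd < 0) { perror("open"); return 1; }

    char buf[512];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* each READ advances the file cursor */
        fwrite(buf, 1, (size_t)n, stdout);
    if (n < 0) perror("read");

    close(fd);                                   /* CLOSE when finished with the file */
    return 0;
}
```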
