
COT 5611 Operating Systems Design Principles Spring 2012



Presentation Transcript


  1. COT 5611 Operating Systems Design Principles Spring 2012 Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 5:00-6:00 PM

  2. Lecture 7 - Wednesday February 1 • Reading assignment: the class notes “Distributed systems-basic concepts” and “Petri Nets” available online. • Last time • Process coordination • Lost messages • Time, timeouts, and message recovery • Causality • Logical clocks • Message delivery rules • FIFO delivery • Causal delivery • Runs and cuts

  3. Today • Distributed snapshots • Enforced modularity → the client-server paradigm • Consensus protocols • Modeling concurrency – Petri nets

  4. Runs and cuts; causal history A run → a total ordering R of all the events in the global history of a distributed computation, consistent with the local history of each participant process; a run implies a sequence of events as well as a sequence of global states. See the lattice of global states. A cut → a subset of the local history of all processes. Cuts provide the necessary intuition to generate global states based on an exchange of messages between a monitor and a group of processes. The cut represents the instant when requests to report individual state are received by the members of the group. Not all cuts are meaningful. The causal history of event e is the smallest consistent cut including event e.
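The defining property of a consistent cut can be sketched in a few lines: a cut is consistent when, for every receive event it contains, the matching send event is in the cut as well. The event ids and data structure below are illustrative, not taken from the class notes.

```python
# Hypothetical sketch: a cut is a set of event ids, and `matches` maps
# each receive event to the send event that produced its message.
# A cut is consistent iff no receive appears without its send.

def is_consistent(cut, matches):
    """cut: set of event ids; matches: dict receive_id -> send_id."""
    return all(matches[e] in cut for e in cut if e in matches)

# Two processes: p1 sends message m (event s1), p2 receives it (event r1).
matches = {"r1": "s1"}
print(is_consistent({"s1", "r1"}, matches))  # True: send and receive both in the cut
print(is_consistent({"r1"}, matches))        # False: a receive without its send
```

A cut that contains `r1` but not `s1` corresponds to a global state in which a message has been received before being sent, which is why such cuts (like C1 on the next slide) are inconsistent.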

  5. Inconsistent cut → C1 is an inconsistent cut (why?); C2 is a consistent cut.

  6. Causal history → The causal history of event e is the smallest consistent cut including event e.

  7. Chandy-Lamport snapshot protocol The monitor, process p0, sends to itself a “take snapshot” message. Let ps be the process from which pi receives the “take snapshot” message for the first time. Upon receiving the message, the process pi records its local state and relays the “take snapshot” message along all its outgoing channels without executing any events on behalf of its underlying computation; channel state C(s,i) is set to empty and process pi starts recording messages received over each of its other incoming channels. Let ps now be a process from which pi receives the “take snapshot” message beyond the first time; process pi stops recording messages along the incoming channel from ps and declares channel state C(s,i) as those messages that have been recorded.
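The two rules above can be simulated on a fully connected network. The sketch below is a simplified illustration (class and variable names are illustrative, and message reordering is not modeled): the first marker a process receives triggers recording and relaying; later markers close the corresponding channels.

```python
# Minimal simulation of the Chandy-Lamport marker rules on a fully
# connected network of n processes (illustrative sketch, not the full
# protocol with an underlying computation running concurrently).
from collections import defaultdict

class Process:
    def __init__(self, pid, peers):
        self.pid, self.peers = pid, peers
        self.state_recorded = False
        self.channel_state = defaultdict(list)  # sender -> recorded messages
        self.recording = set()                  # incoming channels still recorded
        self.counter = 0                        # stands in for the local state

    def receive(self, sender, msg, network):
        if msg == "take snapshot":
            if not self.state_recorded:
                # First marker: record local state, set the channel from the
                # sender to empty, record on all other incoming channels, and
                # relay the marker along every outgoing channel.
                self.state_recorded = True
                self.local_snapshot = self.counter
                self.recording = set(self.peers) - {sender, self.pid}
                for q in self.peers:
                    if q != self.pid:
                        network.append((self.pid, q, "take snapshot"))
            else:
                # Later marker: stop recording the channel from this sender.
                self.recording.discard(sender)
        elif sender in self.recording:
            # An in-flight message belongs to the recorded channel state.
            self.channel_state[sender].append(msg)

def run_snapshot(n):
    pids = list(range(n))
    procs = {p: Process(p, pids) for p in pids}
    network = [(0, 0, "take snapshot")]  # p0 sends the marker to itself
    markers = 0
    while network:
        s, d, m = network.pop(0)
        if m == "take snapshot" and s != d:
            markers += 1                 # count markers that cross a channel
        procs[d].receive(s, m, network)
    return markers, all(p.state_recorded for p in procs.values())

print(run_snapshot(6))  # (30, True): n(n-1) = 30 markers for n = 6
```

Running the simulation confirms the message count claimed on slide 9: every marker crosses each channel exactly once, so a fully connected network with n processes needs n(n-1) marker messages.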

  8. The flow of messages in the next example • p0 is the monitor. • In step 0, p0 sends to itself the “take snapshot” message. • In step 1, process p0 sends five “take snapshot” messages labelled (1). • In step 2, each of the five processes p1, p2, p3, p4, and p5 sends a “take snapshot” message labelled (2).

  9. Snapshot protocol → Each “take snapshot” message crosses each channel exactly once and every process pi makes its contribution to the global state; a process records its state the first time it receives a “take snapshot” message and then stops executing the underlying computation for some time. In a fully connected network with n processes the protocol requires n(n-1) messages.

  10. Modularity • A complex system is made of components, or modules, with well-defined functions. Modularity has a number of desirable properties: • supports the separation of concerns, • encourages specialization, • improves maintainability, • reduces costs, • decreases the development time of a system. • Soft modularity → divide a program into modules which call each other and communicate using the procedure-call convention. The steps involved in the transfer of the flow of control between the caller and the callee are: • (i) the caller saves its state, including the registers, the arguments, and the return address, on the stack; • (ii) the callee loads the arguments from the stack, carries out the calculations, and then transfers control back to the caller; • (iii) the caller adjusts the stack, restores its registers, and continues its processing. • Communication using shared memory
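Steps (i)-(iii) can be made concrete with a toy model in which a Python list plays the role of the stack. This is purely illustrative (real calling conventions operate on machine registers and stack frames), but it shows the division of responsibility between caller and callee:

```python
# Toy model of the procedure-call convention: a list stands in for the
# stack shared by caller and callee (illustrative names and values).
stack = []

def caller():
    stack.append(("saved-registers", "return-address"))  # (i) save state
    stack.append((3, 4))                                 # (i) push the arguments
    result = callee()                                    # transfer control
    stack.pop()                                          # (iii) adjust the stack
    stack.pop()                                          # and restore registers
    return result

def callee():
    a, b = stack[-1]   # (ii) load the arguments from the stack
    return a + b       # carry out the calculation, return control to the caller

print(caller())  # 7
```

The shared stack is also where soft modularity breaks down, as the next slide discusses: nothing prevents a buggy callee from clobbering the caller's saved state.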

  11. Challenges of soft modularity • It increases the difficulty of debugging; for example, a call to a module with an infinite loop will never return. • There could be naming conflicts and wrong context specifications. • The caller and the callee are in the same address space and may misuse the stack, e.g., the callee may use registers that the caller has not saved on the stack, and so on. • Soft modularity may be affected by errors in • the run-time system, • the compiler, • or by the fact that different modules are written in different programming languages. • Strongly typed languages may enforce soft modularity by ensuring type safety at compile or at run time: they may reject operations or function calls which disregard the data types, or may not allow class instances to have their class altered.

  12. Enforced modularity → the client-server paradigm The modules are forced to interact only by sending and receiving messages. More robust design: the clients and the servers are independent modules and may fail separately; errors cannot propagate from one to the other. The servers are stateless; they do not have to maintain state information; a server may fail and then come back up without the clients being affected, or even noticing the failure of the server. Makes an attack less likely; it is difficult for an intruder to guess the format of the messages or the sequence numbers of segments when messages are transported by TCP. Resources can be managed more efficiently; for example, a server typically consists of an ensemble of systems, a front-end system which dispatches the requests to multiple back-end systems which process the requests. Allows systems with different processor architectures, different operating systems, and other system software to cooperate. Increases flexibility and choice; the same service could be available from multiple providers, a server may use services provided by other servers, a client may use multiple servers, and so on.

  13. The problems It adds to the complexity of the interactions between a client and a server, as it may require conversion from one data format to another, e.g., from little-endian to big-endian or vice versa, or conversion to a canonical data representation. Uncertainty in terms of response time; some servers may be more performant than others or may have a lower workload. The clients and the servers communicate through a network that can be congested. Communication through the network adds additional delay to the response time. Security is a major concern, as the traffic between a client and a server can be intercepted.
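The byte-order conversion mentioned above is easy to see with Python's standard struct module: the same 32-bit integer serialized little-endian and big-endian produces its bytes in opposite orders, and a receiver that knows the sender's byte order can convert on arrival.

```python
# The same 32-bit integer in the two byte orders, using the standard
# struct module ("<" = little-endian, ">" = big-endian, "I" = uint32).
import struct

value = 0x12345678
little = struct.pack("<I", value)   # b'\x78\x56\x34\x12'
big = struct.pack(">I", value)      # b'\x12\x34\x56\x78'

# Unpacking with the sender's byte order recovers the original value
# on either kind of machine.
assert struct.unpack("<I", little)[0] == struct.unpack(">I", big)[0] == value
print(little.hex(), big.hex())  # 78563412 12345678
```

Network protocols conventionally sidestep per-pair negotiation by fixing a canonical representation (big-endian "network byte order"), which is the role XDR plays for RPC on the next slide.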

  14. Remote Procedure Call (RPC) • RPC → used for the implementation of client-server interactions (RFC 1831). RPCs reduce the fate sharing between the caller and the callee but take longer than local calls due to communication delays. • A process may use the special services PORTMAP or RPCBIND at port 111 to register and for service lookup. RPC messages must be well-structured; they identify the RPC and are addressed to an RPC daemon listening at an RPC port. • XDR → machine-independent representation standard for RPC. • RPC semantics: • At least once → a message is resent several times and an answer is expected; the server may end up executing a request more than once, but an answer may never be received. This semantics is suitable for operations free of side effects. • At most once → a message is acted upon at most once. The sender sets up a timeout for receiving the response; when the timeout expires, an error code is delivered to the caller. This semantics requires the sender to keep a history of the time-stamps of all messages, as messages may arrive out of order. This semantics is suitable for operations which have side effects. • Exactly once → implements the at-most-once semantics and requests an acknowledgment from the server.
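The server-side half of at-most-once semantics can be sketched in a few lines: the server remembers the requests it has already executed and, when a retransmitted duplicate arrives, replays the cached reply instead of re-executing an operation with side effects. The class and request-id scheme below are illustrative, not part of RFC 1831.

```python
# Hedged sketch of at-most-once semantics on the server side: duplicate
# retries (same request id) get the cached reply, not a second execution.
class AtMostOnceServer:
    def __init__(self):
        self.seen = {}  # request id -> cached reply

    def handle(self, request_id, operation, *args):
        if request_id in self.seen:
            return self.seen[request_id]   # duplicate: replay, do not re-execute
        reply = operation(*args)           # first delivery: execute the request
        self.seen[request_id] = reply
        return reply

server = AtMostOnceServer()
log = []                      # visible side effects of the operation

def deposit(amount):          # an operation with a side effect
    log.append(amount)
    return sum(log)

print(server.handle(1, deposit, 100))  # 100
print(server.handle(1, deposit, 100))  # 100 (retry: side effect not repeated)
print(log)                             # [100]
```

A client that times out and retransmits the same request id therefore cannot cause a double deposit; under at-least-once semantics, by contrast, the deposit could run twice, which is why that semantics fits only side-effect-free operations.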

