

  1. Agenda • Fail Stop Processors • Problem Definition • Implementation with reliable stable storage • Implementation without reliable stable storage • Failure Detection and Fault Diagnosis • System-Level Fault Diagnosis • Example • Reliable Message Delivery • Problem Definition • Implementation • Example

  2. Definitions • The Fail Stop Processor (FSP) model assumes that any processor, upon failure, does not perform any incorrect actions and simply ceases to function. • The visible effects of a failure of an FSP are: • It stops executing. • The internal state and the contents of the volatile storage connected to the processor are lost; the state of the stable storage is unaffected by the failure. • Any processor can detect the failure of an FSP. • A k-fail-stop processor is a computing system that behaves like an FSP unless k + 1 or more components of the system fail.

  3. Implementation • An FSP reads data from stable storage and writes data to it. • Processors also interact with each other through the stable storage, as the internal state of a processor or its volatile memory is not visible to other processors. • The functioning of an FSP therefore depends on the reliability of the available stable storage. • Stable storage is typically a storage medium that is controlled by some active device (i.e. a controller), which in turn is controlled by the program running on the processor.

  4. [Figure] Fail-stop processors p1, p2, p3, …, pn, each executing its own copy of the code segment (Copy #1 through Copy #n), all connected to the stable storage through the storage processor ps.

  5. Implementation with reliable stable storage: Assumptions • The storage process (s-process) works correctly and does not fail. • The system consists of k + 1 ordinary processes (p-processes) and one s-process. • P-processes can fail in an arbitrary manner; no assumption is made regarding their behavior when they fail. • All processes are connected via a reliable communication network. • The origin of a message can be authenticated by its receiver. • All clocks are non-faulty, and all processes are synchronized and run at the same rate.

  6. Implementation with reliable stable storage • The non-failed processors make the same sequence of requests to the s-process. • A failure is detected by the s-process if the requests from the k + 1 p-processes do not all arrive or do not all agree. • Synchronized clocks are needed so that all the copies of a particular request from the non-faulty p-processes carry the same timestamp and are guaranteed to arrive within some time interval. • The failure is detected by the s-process when the p-processes try to access the stable storage.

  7. Implementation with reliable stable storage
  R ← bag of received requests with the proper timestamp
  if |R| = k + 1 ∧ all requests are identical ∧ all requests are from different processes ∧ ¬failed then
      if the request is write, write to the stable storage
      else if the request is read, send the value to all processes
  else /* k-fail-stop processor has failed */
      failed ← true
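A minimal Python sketch of this s-process check is shown below. It assumes each request arrives as a (pid, op, key, value) tuple and that clock synchronization has already grouped the requests bearing the proper timestamp into one bag; the names and data structures are illustrative, not taken from the slides.

```python
# Hypothetical sketch of the s-process request check (reliable stable storage).
# Assumed structure: each request is (pid, op, key, value); k is the fail-stop
# parameter, so k + 1 p-processes issue each request.

storage = {}      # contents of the stable storage
failed = False    # set once the k-fail-stop processor is considered failed

def handle_requests(requests, k):
    """Process one bag R of requests that arrived with the proper timestamp."""
    global failed
    pids = {pid for (pid, _op, _key, _val) in requests}
    bodies = {(op, key, val) for (_pid, op, key, val) in requests}

    # |R| = k + 1, all requests identical, all from different processes, not failed
    if len(requests) == k + 1 and len(bodies) == 1 and len(pids) == k + 1 and not failed:
        op, key, val = next(iter(bodies))
        if op == "write":
            storage[key] = val
        elif op == "read":
            return {pid: storage.get(key) for pid in pids}   # send value to all
    else:
        failed = True   # k-fail-stop processor has failed
    return None
```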

  8. Implementation without reliable stable storage: Assumptions • k + 1 p-processes. • 2k + 1 s-processes. • A copy of all the variables kept in the stable storage is stored by each s-process. • At each variable update, all non-faulty s-processes update their copies of the variable. • All p-processes are connected to the s-processes through a reliable communication network. • A failure is detected when the p-processes write to the stable storage.

  9. [Figure] Implementation without reliable stable storage: p-processes p1, p2, p3, …, pk+1, each executing its own copy of the code segment (Copy #1 through Copy #(k+1)), connected to s-processes ps1, …, ps2k+1 that together form the stable storage.

  10. Implementation without reliable stable storage: Failure detection • As long as fewer than k + 1 p-processes fail, a failure shows up as a disagreement among the values sent by the different p-processes. • And, as long as fewer than k + 1 s-processes fail, there will still be a majority of s-processes that have not failed. • Hence, as long as only up to k processes fail in all, we can be sure that at least one p-process is working correctly and that the majority of s-processes are working correctly. • Every request to update a variable in the stable storage is sent by a p-process to every s-process. • Each s-process should receive the request from all non-failed p-processes.

  11. Continue… • If Pj is non-faulty, then every non-faulty s-process receives the request of Pj. • If s-processes Sk and Sl are non-faulty, then both agree on every request received from Pj.

  12. Failure detection algorithm
  1- For writing to the stable storage, a p-process Pj initiates a Byzantine agreement with all s-processes.
  2- For reading the stable storage, a p-process Pj:
      (a) Broadcasts the request to the s-processes.
      (b) Uses the majority value obtained from at least k + 1 s-processes.
  3- An s-process Si, on receiving a request from all the p-processes:
      M ← bag of requests received
      if the request is read then: send the requested value to all p-processes whose request is in M

  13. Failure detection algorithm, continue…
      if the request is write then:
          if |M| = k + 1 ∧ all requests are identical ∧ all requests are from different processes ∧ ¬failed then
              write the value
          else {
              failed ← true
              send message “halt” to all p-processes
          }
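A small Python sketch of steps 2 and 3 follows, treating the Byzantine agreement of step 1 as given; the class and function names are illustrative assumptions rather than part of the original algorithm.

```python
from collections import Counter

# Hypothetical sketch of one s-process replica and of the p-process majority
# read (implementation without reliable stable storage).

class SProcess:
    def __init__(self, k):
        self.k = k
        self.store = {}       # this replica's copy of the stable-storage variables
        self.failed = False

    def handle_write(self, requests):
        """requests: bag M of (pid, key, value) delivered by Byzantine agreement."""
        pids = {pid for pid, _key, _val in requests}
        bodies = {(key, val) for _pid, key, val in requests}
        if (len(requests) == self.k + 1 and len(bodies) == 1
                and len(pids) == self.k + 1 and not self.failed):
            key, val = next(iter(bodies))
            self.store[key] = val
            return "ok"
        self.failed = True
        return "halt"          # send “halt” to all p-processes

def majority_read(replies, k):
    """A p-process uses the value reported by at least k + 1 s-processes."""
    if not replies:
        return None
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= k + 1 else None
```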

  14. Failure Detection and Fault Diagnosis: System-Level Fault Diagnosis • Basic goal: identifying all the faulty units in a system. • Example: the PMC model: • A system S is decomposed into n units, not necessarily identical, denoted by the set U = {u1, u2, …, un}. • Each unit is well-defined and cannot be decomposed further for the purpose of diagnosis (i.e. the whole unit either works correctly or is considered faulty). • The status of a unit does not change during diagnosis. • A fault-free unit always correctly reports the status of the units it tests.

  15. PMC Model • Each unit in U is assigned a particular subset of U to test (no unit tests itself). The set of test results is called the “syndrome”. • The complete set of tests is called the connection assignment and is represented as a graph G = (U, E), where nodes are units and links are testing links. • [Figure] Example: five units u1, …, u5 arranged in a testing cycle (u1 tests u2, u2 tests u3, …, u5 tests u1) with test results a12 = x, a23 = 0, a34 = 0, a45 = 0, a51 = 1. The syndrome is (x, 0, 0, 0, 1), where 1: failed, 0: not failed, and x: an arbitrary result.

  16. Continue… • Definition: A system S is t-fault diagnosable (or t-diagnosable) if, given a syndrome, all faulty units in S can be identified, provided that the number of faulty units does not exceed t. • Our previous example is one-step one-fault diagnosable, since the faulty node can always be determined using the following method: • If in the syndrome a string of 0s is followed by a 1, then that 1 correctly identifies the tested unit as faulty.
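The rule can be illustrated with a short sketch, assuming the units form a single testing cycle as in the example above; the 0-based indexing convention is an assumption made for illustration.

```python
# One-fault diagnosis for a testing cycle (u1 tests u2, ..., un tests u1),
# assuming at most one faulty unit. syndrome[i] is the result reported by
# unit i about the unit it tests (unit (i + 1) mod n).

def diagnose_single_fault(syndrome):
    """Return the 0-based index of the faulty unit, or None if all tests are 0."""
    n = len(syndrome)
    for i in range(n):
        # A 1 preceded by a 0 comes from a fault-free tester, so it is trusted.
        if syndrome[i] == 1 and syndrome[(i - 1) % n] == 0:
            return (i + 1) % n     # the unit tested by unit i is faulty
    return None

# Example from the slide: syndrome (x, 0, 0, 0, 1) with x arbitrary.
# Taking x = 1, the only 1 preceded by a 0 is a51, so u1 (index 0) is faulty.
print(diagnose_single_fault([1, 0, 0, 0, 1]))   # -> 0
```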

  17. Continue… • The graph is not one-step 2-diagnosable, because if both u1 and u2 were faulty and u2 returned 0 (since it could return any value), the syndrome of this system would be indistinguishable from the syndrome of the previous system. • Generally, if no two units test each other, then the following two conditions are sufficient for a system S with n units to be t-diagnosable: I) n ≥ 2t + 1, and II) each unit is tested by at least t others.
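As a sketch, the two conditions can be checked mechanically for a given connection assignment; the dictionary encoding of the testing graph below is an assumption made for illustration.

```python
# Check the two sufficient conditions for t-diagnosability, assuming no two
# units test each other. `tests` maps each unit to the set of units it tests.

def is_t_diagnosable(tests, t):
    units = set(tests)
    n = len(units)

    # Condition I: n >= 2t + 1
    if n < 2 * t + 1:
        return False

    # Condition II: every unit is tested by at least t other units.
    testers = {}
    for _tester, tested in tests.items():
        for u in tested:
            testers[u] = testers.get(u, 0) + 1
    return all(testers.get(u, 0) >= t for u in units)

# The 5-unit testing cycle from the slides: each unit is tested by exactly one
# other unit, so the conditions hold for t = 1 but not for t = 2.
cycle = {1: {2}, 2: {3}, 3: {4}, 4: {5}, 5: {1}}
print(is_t_diagnosable(cycle, 1), is_t_diagnosable(cycle, 2))   # True False
```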

  18. Fault Diagnosis in Distributed Systems • Adaptive Distributed System-level Diagnosis (Adaptive DSD) works as follows (executed periodically at node i):
      t ← i
      repeat
          t ← (t + 1) mod n
          request t to forward TESTED_UP_t to i
      until (i tests t as “fault-free”)
      TESTED_UP_i[i] ← t
      for j ← 1 to (n − 1) do
          if (j ≠ i)
              TESTED_UP_i[j] ← TESTED_UP_t[j]
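A hedged Python rendering of this round is sketched below; test(i, t) and fetch_tested_up(t) are assumed primitives standing in for the actual remote test and the transfer of TESTED_UP_t, not names from the slides.

```python
# One Adaptive DSD testing round at node i (sketch under stated assumptions).
# `test(i, t)` returns True if node i finds node t fault-free;
# `fetch_tested_up(t)` returns node t's TESTED_UP array.

def adaptive_dsd_round(i, n, tested_up, test, fetch_tested_up):
    """Update node i's TESTED_UP array (length n) after one round."""
    t = i
    while True:
        t = (t + 1) % n                  # try the next node in the ring
        remote = fetch_tested_up(t)      # request t to forward TESTED_UP_t to i
        if test(i, t):                   # until i tests t as "fault-free"
            break
    tested_up[i] = t                     # record the fault-free node i found
    for j in range(n):
        if j != i:
            tested_up[j] = remote[j]     # adopt t's view for the other entries
    return tested_up
```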

  19. [Figure] Adaptive DSD example: eight nodes numbered 0–7 arranged in a ring, with a legend distinguishing faulty from fault-free nodes; each fault-free node tests successive nodes in the ring until it finds a fault-free one, and the resulting test results and TESTED_UP entries are shown.

  20. Reliable Message Delivery: Problem Definition • In a distributed system, it is frequently assumed that a message sent by one node to another arrives uncorrupted at the receiver, and that the message order is preserved between two nodes. • The following properties should hold for any network: • A message sent from i is received correctly by j. • Messages sent from i are delivered to j in the order in which i sent them.

  21. Reliable Message Delivery: Implementation • Error detection: using an error detection/correction code (e.g. CRC). • Message ordering and guaranteed delivery: using sliding window protocols. • Failures: sliding window protocols guarantee message delivery and message ordering if no failures occur. But what happens when a failure occurs? We assume that the failure does not partition the network.
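As an illustration of the sliding-window idea (not the specific protocol assumed by the slides), here is a minimal go-back-N sender sketch in Python; the frame format, window size, and send callback are assumptions made for illustration.

```python
import zlib

# Minimal go-back-N sender: sequence numbers give ordering, retransmission on
# timeout gives guaranteed delivery, and a CRC trailer gives error detection.

WINDOW = 4

def make_frame(seq, payload: bytes) -> bytes:
    body = seq.to_bytes(4, "big") + payload
    return body + zlib.crc32(body).to_bytes(4, "big")   # CRC trailer

def frame_ok(frame: bytes) -> bool:
    body, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(body) == crc

class GoBackNSender:
    def __init__(self, send):
        self.send = send
        self.base = 0            # oldest unacknowledged sequence number
        self.next_seq = 0
        self.buffer = {}         # unacknowledged frames, kept for retransmission

    def submit(self, payload: bytes):
        if self.next_seq < self.base + WINDOW:       # window not full
            frame = make_frame(self.next_seq, payload)
            self.buffer[self.next_seq] = frame
            self.send(frame)
            self.next_seq += 1

    def on_ack(self, ack_seq):
        for s in range(self.base, ack_seq + 1):      # cumulative acknowledgement
            self.buffer.pop(s, None)
        self.base = max(self.base, ack_seq + 1)

    def on_timeout(self):
        for s in range(self.base, self.next_seq):    # go back N: resend the window
            self.send(self.buffer[s])
```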

  22. Continue… • Each node sends to each neighbor a list of its estimated delays to each destination (not just to the neighbor). • Assume node i gets information from its neighbor j, which says that the estimated delay from j to a node k is d_j^k. • Node i knows the delay to j (assume it is x_j). • This means that i can send a message to k via j, and the estimated delay will be d_j^k + x_j. • If Ni is the set of neighbors of node i, then for destination k the message is routed through a neighbor j such that: (d_j^k + x_j) ≤ (d_l^k + x_l), for all l ∈ Ni.
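The neighbor-selection rule can be sketched as follows; the table names delay_to (the x_j values) and reported (the d_j^k values), and the example numbers, are illustrative assumptions rather than the slides' figure.

```python
# Pick the next hop for destination k using the rule (d_j^k + x_j) minimal
# over all neighbours j of node i.

def best_next_hop(delay_to, reported, k):
    """delay_to[j] = x_j; reported[j][k] = d_j^k as reported by neighbour j."""
    return min(delay_to,
               key=lambda j: reported[j].get(k, float("inf")) + delay_to[j])

# Tiny illustrative example (numbers are made up, not the slides' network):
delay_to = {"B": 5, "C": 7}                       # x_j for node A's neighbours
reported = {"B": {"F": 20}, "C": {"F": 12}}       # d_j^F reported by B and C
print(best_next_hop(delay_to, reported, "F"))     # -> "C"  (12 + 7 < 20 + 5)
```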

  23. Example • A packet from node E to node A will be routed through node C, and the expected delay is 9. • Also, a packet from D to E will be routed through C with an expected delay of 11. • A packet from node F to A will be routed through E (and C) with an expected delay of 17. • [Figure] A six-node example network with nodes A, B, C, D, E, and F and link delays of 10, 5, 7, 3, 4, 15, 8, and 8 on its links.

  24. Continue… • When node C fails, links CD, CE, and CA fail with it. • Nodes A, D, and E will set their cost to C to infinity. • For destination A, the entry will be changed to B. • In the next round, F will detect that the route to A via E is no longer optimal, because the total cost is 25. • The new route will be through D, with an estimated cost of 21.
