
Scalability, Accountability and Instant Information Access for Network Centric Warfare


Presentation Transcript


  1. Scalability, Accountability and Instant Information Access for Network Centric Warfare. Yair Amir, Claudiu Danilov, Jon Kirsch, John Lane, Jonathan Shapiro (Department of Computer Science, Johns Hopkins University). Chi-Bun Chan, Cristina Nita-Rotaru, Josh Olsen, David Zage (Department of Computer Science, Purdue University). http://www.cnds.jhu.edu

  2. Dealing with Insider Threats Project goals: • Scaling survivable replication to wide area networks. • Overcome 5 malicious replicas. • SRS goal: improve latency by a factor of 3. • Self-imposed goal: improve throughput by a factor of 3. • Dealing with malicious clients. • Compromised clients can inject authenticated but incorrect data, which is hard to detect on the fly. • Malicious, or just an honest error? Accountability is useful in both cases. • Exploiting application update semantics for replication speedup in malicious environments. • Weaker update semantics allow for immediate response. Today we focus on scaling survivable replication to wide area networks. Introducing Steward: Survivable Technology for Wide Area Replication.

  3. A Distributed Systems Service Model • Message-passing system. • Clients issue requests to servers, then wait for answers. • Replicated servers process the request, then provide answers to the clients. [Diagram: a site containing server replicas 1..N, with clients issuing requests to them.]

  4. State Machine Replication • Main Challenge: Ensuring coordination between servers. • Requires agreement on the request to be processed and consistent order of requests. • Benign faults: Paxos [Lam98,Lam01]: must contact f+1 out of 2f+1 servers and uses 2 rounds to allow consistent progress. • Byzantine faults: BFT [CL99]: must contact 2f+1 out of 3f+1 servers and uses 3 rounds to allow consistent progress.
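A minimal Python sketch of this quorum arithmetic (the helper names are illustrative, not from the deck):

    # Quorum sizes for tolerating f faults, per the slide above.

    def paxos_quorum(f: int) -> tuple[int, int]:
        """Benign faults (Paxos): n = 2f+1 servers, quorum of f+1."""
        return 2 * f + 1, f + 1

    def bft_quorum(f: int) -> tuple[int, int]:
        """Byzantine faults (BFT): n = 3f+1 servers, quorum of 2f+1."""
        return 3 * f + 1, 2 * f + 1

    for f in (1, 3, 5):  # the project goal is overcoming 5 malicious replicas
        n_p, q_p = paxos_quorum(f)
        n_b, q_b = bft_quorum(f)
        print(f"f={f}: Paxos needs {q_p} of {n_p}; BFT needs {q_b} of {n_b}")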

  5. A Replicated Server System • Maintaining consistent servers [Sch90]: • To tolerate f benign faults, 2f+1 servers are needed. • To tolerate f malicious faults, 3f+1 servers are needed. • Responding to read-only clients' requests [Sch90]: • If the servers support only benign faults, 1 answer is enough. • If the servers can be malicious, the client must wait for f+1 identical answers, f being the maximum number of malicious servers.
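A small illustrative sketch of that read rule (ours, not the deck's): a value is trusted only once f+1 replicas report it, since at most f of them can lie.

    from collections import Counter
    from typing import Optional

    def accept_answer(replies: list[bytes], f: int) -> Optional[bytes]:
        """Trust a value once f+1 replicas report it: with at most f
        malicious servers, at least one of the f+1 must be correct."""
        for value, count in Counter(replies).items():
            if count >= f + 1:
                return value
        return None  # not enough agreement yet; keep collecting replies

    # With f = 1, two matching answers suffice even if one server lies.
    print(accept_answer([b"balance=100", b"balance=999", b"balance=100"], f=1))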

  6. Peer Byzantine Replication Limitations • Constructs a consistent total order. • Limited scalability due to a 3-round all-peer exchange. • Strong connectivity is required: • 2f+1 (out of 3f+1) to allow progress and f+1 to get an answer. • Partitions are a real issue. • Clients depend on remote information. • Bad news: these bounds are provably optimal. • We need to pay something to get something else.

  7. State of the Art in Byzantine Replication: BFT [CL99] Baseline technology.

  8. Evaluation Network 1: Symmetric Wide Area Network • Synthetic network used for analysis and understanding. • 5 sites, each of which is connected to all other sites with equal-latency links. • Each site has 4 replicas (except one site with 3 replicas, due to the current BFT setup). • Total: 19 replicas in the system. • Each wide area link has a 10 Mbits/sec capacity. • Wide area latencies varied between 10 ms and 400 ms.

  9. BFT Wide Area Performance • Symmetric network. • 5 sites. • Total of 19 replicas. • Almost out-of-the-box BFT, which is a very good prototype. • Results were also validated by our new implementation. • Update-only performance (no disk writes).

  10. Evaluation Network 2: Practical Wide-Area Network • Based on a real experimental network (CAIRN). • Modeled in the Emulab facility. • Capacity of wide area links was modified to 10 Mbits/sec to better reflect current realities. • Results will not be shown today. [Diagram: CAIRN topology spanning Boston (MITPC), Delaware (UDELPC), Virginia (ISIPC, ISIPC4), San Jose (TISWPC), and Los Angeles (ISEPC, ISEPC3), with wide area link latencies from 1.4 ms to 38.8 ms at roughly 1.4-1.9 Mbits/sec, plus local 100 Mb/s links under 1 ms.]

  11. Outline • Project goals. • Byzantine replication – current state of the art. • Steward – a new hierarchical approach. • Confining the malicious attack effects to the local site. • BFT-inspired protocol for the local area site. • Threshold Cryptography for trusted sites. • Fault tolerant replication for the wide area. • Initial thinking and snags. • A Paxos-based approach. • Putting it all together. • Evaluation. • Summary.

  12. Steward: Survivable Technology for Wide Area Replication • Each site acts as a trusted logical unit that can crash or partition. • Effects of malicious faults are confined to the local site. • Between sites: • Fault-tolerant protocol between sites. • Alternatively, Byzantine protocols also between sites. • There is no free lunch: we pay with more hardware… [Diagram: a site containing server replicas 1..3f+1, with clients connecting to it.]

  13. Steward Architecture [Diagram: a local site on a local area network, with clients and server replicas 1..3f+1. Each replica runs a Local Area Byzantine Replication layer, a Wide Area Fault Tolerant Replication layer, and a Monitor. One replica acts as the wide area representative, connected to the wide area network; the others are wide area standbys.]

  14. Outline • Project goals. • Byzantine replication – current state of the art. • Steward – a new hierarchical approach. • Confining the malicious attack effects to the local site. • BFT-inspired protocol for the local area site. • Threshold Cryptography for trusted sites. • Fault tolerant replication for the wide area. • Initial thinking and snags. • A Paxos-based approach. • Putting it all together. • Evaluation. • Summary.

  15. Constructing a Trusted Entity in the Local Site No trust between participants in a site: • A site acts as one unit that, as long as the fault assumptions hold, can only fail by crashing. Main ideas: • Use a BFT-like [CL99, YMVAD03] protocol to mask local Byzantine replicas. • Every update or acknowledgement from a site will need to go through some sort of agreement. • Use threshold cryptography to make sure local Byzantine replicas cannot misrepresent the site. • Every valid message going out of the site must first be signed using at least {f+1 out of n} threshold cryptography.

  16. Lessons Learned (1) • Vector HMACs vs. signatures: • BFT's good performance on a LAN is also attributed to the use of vector HMACs, facilitated by establishing pair-wise secret keys between local replicas. • Key decision: use signatures, not HMACs (illustrated in the sketch below). • The computing power trend works against the HMAC advantage. • Signatures provide non-repudiation, while HMACs do not. • Signatures simplify the protocol during view changes. • Vector HMACs are less scalable (mainly in terms of space). • Steward is designed for 5-10 years from now. • Not every outgoing message requires a complete BFT invocation: • Acknowledgements require a much lighter protocol step.
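A sketch of the non-repudiation point, using Python's hmac module and the pyca/cryptography package (an illustration of ours, not Steward's actual C/OpenSSL code):

    import hashlib
    import hmac
    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    msg = b"ordered update #42"

    # HMAC: sender and receiver share `key`. The receiver can check the
    # tag, but cannot prove to a third party who produced it, since any
    # key holder could have. No non-repudiation.
    key = os.urandom(32)
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

    # Signature: only the private key can sign, while anyone with the
    # public key can verify, so a message can later be pinned on its signer.
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sig = priv.sign(msg, padding.PKCS1v15(), hashes.SHA256())
    priv.public_key().verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())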

  17. Lessons Learned (2) • {f+1 out of n} or {2f+1 out of n} threshold cryptography: • Performance tradeoff: • Need f+1 contributing replicas to mask the effects of malicious behavior; need 2f+1 to pass a Byzantine agreement. • Either use the last round of BFT and create a {2f+1 out of n} signature, or add another round after BFT and create an {f+1 out of n} signature. • A complete system requires a complete protocol: • Past research focused on the correctness of ordering, but not on issues such as generic reconciliation after network partitions and merges, flow control, etc. • The devil is in the details.

  18. Useful By-Product:Threshold Cryptography Library • We implemented a library providing support for generating Threshold RSA signatures. • Critical component of the Steward architecture. • Implementation is based on OpenSSL. • Can be used by any application requiring threshold digital signatures. • We plan to release it as open source. • Let us know if you are interested in such a library.
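The deck does not show the library's API. As a loose illustration of the {k out of n} idea behind it, here is a Shamir secret-sharing sketch of ours; actual threshold RSA instead shares the private exponent and combines partial signatures, so treat this only as the reconstruction intuition:

    import random

    P = 2**127 - 1  # a Mersenne prime; the field for this toy example

    def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
        """Split `secret` into n shares so that any k reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def poly(x: int) -> int:
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at x = 0 over exactly k shares."""
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    # f = 5: any f+1 = 6 of a site's 16 replicas can speak for the site.
    shares = make_shares(secret=123456789, k=6, n=16)
    assert reconstruct(random.sample(shares, 6)) == 123456789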

  19. Outline • Project goals. • Byzantine replication – current state of the art. • Steward – a new hierarchical approach. • Confining the malicious attack effects to the local site. • BFT-inspired protocol for the local area site. • Threshold Cryptography for trusted sites. • Fault tolerant replication for the wide area. • Initial thinking and snags. • A Paxos-based approach. • Putting it all together. • Evaluation. • Summary.

  20. Fault Tolerant Replication Engine [AT02] [Diagram: the engine's state machine, including Regular Primary, Transitional Primary, Non Primary, Exchange States, Exchange Messages, Construct, and Recover states, with transitions driven by regular and transitional membership events, updates marked Green, Yellow, or Red, and Last State / Last CPC messages.]

  21. Fault Tolerant Replication: Throughput Comparison (WAN) [ADMST02]. Note: not Byzantine!

  22. The Paxos Protocol: Normal Case, After Leader Election [Lam98] Key: a simple end-to-end algorithm.
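A toy model of that normal case (ours; no view changes, retransmissions, or failures): the elected leader proposes a sequence number and value, and the update is ordered once a majority of acceptors acknowledge it.

    class Acceptor:
        """Crash-fault acceptor: always acknowledges the leader's proposal."""
        def __init__(self) -> None:
            self.accepted: dict[int, str] = {}  # sequence number -> value

        def on_accept(self, seq: int, value: str) -> tuple[str, int, str]:
            self.accepted[seq] = value
            return ("accepted", seq, value)

    def leader_orders(acceptors: list[Acceptor], seq: int, value: str) -> bool:
        """Round 1: leader -> acceptors; round 2: acks -> leader. The value
        is chosen once a majority (f+1 of 2f+1) has accepted it."""
        acks = [a.on_accept(seq, value) for a in acceptors]
        majority = len(acceptors) // 2 + 1
        return sum(ack == ("accepted", seq, value) for ack in acks) >= majority

    acceptors = [Acceptor() for _ in range(5)]  # n = 5 tolerates f = 2 crashes
    print(leader_orders(acceptors, seq=1, value="update A"))  # True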

  23. Lessons Learned (1) • The hierarchical architecture vastly reduces the number of messages sent on the wide area network: • Helps both throughput and latency. • Using a fault tolerant protocol on the wide area network reduces the number of mandatory wide area crossings compared with a Byzantine protocol: • BFT-inspired protocols require 3 wide area crossings for updates generated at the leader site, and 4 otherwise. • Paxos-based protocols require 2 wide area crossings for updates generated at the leader site, and 3 otherwise.
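A back-of-envelope model of these crossing counts (our arithmetic, using the 50 ms inter-site links from the head-to-head evaluation later in the deck):

    def min_update_latency_ms(one_way_ms: float, crossings: int) -> float:
        """Lower bound: each mandatory wide area crossing costs at least
        one one-way delay; local-site computation is ignored."""
        return crossings * one_way_ms

    # (protocol, crossings at leader site, crossings elsewhere)
    for proto, at_leader, elsewhere in [("Paxos-based", 2, 3),
                                        ("BFT-inspired", 3, 4)]:
        print(f"{proto}: >= {min_update_latency_ms(50, at_leader):.0f} ms at "
              f"the leader site, >= {min_update_latency_ms(50, elsewhere):.0f} ms "
              f"elsewhere (50 ms links)")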

  24. Lessons Learned (2) • All protocol details have to be specified: • The Paxos papers lack most of the details… • Base operation: specified reasonably well. • Leader election: completely unspecified. • Reconciliation: completely unspecified. • Also not specified (but that is OK) are practical considerations such as retransmission handling and flow control. • The view change / leader election is the most important part, consistency-wise: • It determines the liveness criteria of the overall system.

  25. Example: Liveness Criteria • Strong L1: • If there exists a time after which there is always some set of running, connected servers S, where |S| is at least a majority, then if a server in the set initiates an update, some member of the set eventually executes the update. • L1: • If there exists a set consisting of a majority of servers, and a time after which the set does not experience any communication or process failures, then if a server in the set initiates an update, some member of the set eventually executes the update. • Weak L1: • If there exists a set consisting of a majority of servers, and a time after which the set does not experience any communication or process failures, AND the members of the set do not hear from any members outside of the set, then if a server in the set initiates an update, some member of the set eventually executes the update.

  26. What’s the difference? • Strong L1: • Allows any majority set. • Membership of the set can change rapidly, as long as its cardinality remains at least a majority. • L1: • Requires a stable majority set, but others (beyond the majority) can come and go. • Weak L1: • Requires a stable, isolated majority set.

  27. Outline • Project goals. • Byzantine replication – current state of the art. • Steward – a new hierarchical approach. • Confining the malicious attack effects to the local site. • BFT-inspired protocol for the local area site. • Threshold Cryptography for trusted sites. • Fault tolerant replication for the wide area. • Initial thinking and snags. • A Paxos-based approach. • Putting it all together. • Evaluation. • Summary.

  28. Steward Architecture [Diagram, repeated from slide 13: a local site on a local area network, with clients and server replicas 1..3f+1. Each replica runs a Local Area Byzantine Replication layer, a Wide Area Fault Tolerant Replication layer, and a Monitor. One replica acts as the wide area representative, connected to the wide area network; the others are wide area standbys.]

  29. Testing Environment • Platform: dual Intel Xeon 3.2 GHz, 64-bit, 1 GByte RAM, Linux Fedora Core 3. • The library relies on OpenSSL: • Used OpenSSL 0.9.7a, 19 Feb 2003. • Baseline operations: • RSA 1024-bit sign: 1.3 ms; verify: 0.07 ms. • 1024-bit modular exponentiation: ~1 ms. • Generating a 1024-bit RSA key: ~55 ms.
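A small harness to reproduce these baseline measurements (ours, using the pyca/cryptography package rather than OpenSSL 0.9.7a directly; absolute numbers on current hardware will differ from the 2005-era Xeon above):

    import time

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def avg_ms(fn, reps: int = 200) -> float:
        """Average wall-clock milliseconds per call of fn."""
        t0 = time.perf_counter()
        for _ in range(reps):
            fn()
        return (time.perf_counter() - t0) / reps * 1000

    msg = b"baseline measurement"
    priv = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    pub = priv.public_key()
    sig = priv.sign(msg, padding.PKCS1v15(), hashes.SHA256())

    print(f"sign:   {avg_ms(lambda: priv.sign(msg, padding.PKCS1v15(), hashes.SHA256())):.3f} ms")
    print(f"verify: {avg_ms(lambda: pub.verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())):.3f} ms")

    # 1024-bit modular exponentiation, the core operation of both plain
    # RSA and a threshold partial-signature computation.
    n = pub.public_numbers().n
    d = priv.private_numbers().d
    print(f"modexp: {avg_ms(lambda: pow(0x1234567, d, n)):.3f} ms")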

  30. Steward Expected Performance • Symmetric network. • 5 sites. • 16 replicas per site. • Total of 80 replicas. • Methodology: measure the time in a site, then run the wide area protocol between 5 entities. Each entity performs a busy-wait equal (conservatively) to the cost of the local site algorithm, including threshold cryptography. • Actual computers: 16 on the local area, and then, separately, 5 on the wide area. • Update-only performance (no disk writes).

  31. Steward Measured Performance • Symmetric network. • 5 sites. • 16 replicas per site. • Total of 80 replicas. • Methodology: the leader site has 16 replicas. Each other site has 1 entity that performs a busy-wait equal (conservatively) to the cost of a receiver site reply, including threshold cryptography; a sketch of this emulation follows. • Actual computers: 20. • Update-only performance (no disk writes).
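A minimal sketch (ours) of the busy-wait emulation both methodologies rely on:

    import time

    def busy_wait(ms: float) -> None:
        """Spin for `ms` milliseconds, emulating the local-site cost
        (Byzantine agreement plus threshold crypto) at an emulated site."""
        deadline = time.perf_counter() + ms / 1000.0
        while time.perf_counter() < deadline:
            pass

    # E.g., charge each emulated receiver site a conservative reply cost,
    # such as the ~11 ms threshold-crypto operation cited on slide 36.
    busy_wait(11.0)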

  32. Head to Head Comparison (1) • Symmetric network. • 5 sites. • 50 ms between each pair of sites. • 16 replicas per site. • Total of 80 replicas. • BFT broke after 6 clients. • SRS goal: factor of 3 improvement in latency. • Self-imposed goal: factor of 3 improvement in throughput. • Bottom line: both goals are met once the system has more than one client, and are considerably exceeded thereafter.

  33. Head to Head Zoom (1) • Symmetric network. • 5 sites. • 50 ms between each pair of sites. • 16 replicas per site. • Total of 80 replicas. • BFT broke after 6 clients. • SRS goal: factor of 3 improvement in latency. • Self-imposed goal: factor of 3 improvement in throughput. • Bottom line: both goals are met once the system has more than one client, and are considerably exceeded thereafter.

  34. Head to Head Comparison (2) • Symmetric network. • 5 sites. • 100 ms between each pair of sites. • 16 replicas per site. • Total of 80 replicas. • BFT broke after 7 clients. • SRS goal: factor of 3 improvement in latency. • Self-imposed goal: factor of 3 improvement in throughput. • Bottom line: both goals are met once the system has one client per site, and are considerably exceeded thereafter.

  35. Head to Head Zoom (2) • Symmetric network. • 5 sites. • 100 ms between each pair of sites. • 16 replicas per site. • Total of 80 replicas. • BFT broke after 7 clients. • SRS goal: factor of 3 improvement in latency. • Self-imposed goal: factor of 3 improvement in throughput. • Bottom line: both goals are met once the system has one client per site, and are considerably exceeded thereafter.

  36. Factoring Queries In • So far, we only considered updates. • This is the worst case scenario from our perspective. • How do queries factor into the game? • Best answer: just measure, but we had no time to build the necessary infrastructure. • Best answer for now: make a conservative prediction. • Steward: • A query is answered locally after an {f+1 out of n} threshold cryptography operation. Cost: ~11 ms. • BFT: • A query requires at least some remote answers in this setup. Cost: at least 100 ms (for the 50 ms network) or 200 ms (for the 100 ms network). • We could change the setup to include 6 local members in each site (for a total of 30 replicas). That would allow a local answer in BFT with a query cost similar to Steward's, but then BFT performance would essentially collapse on the updates. • Bottom line prediction: both goals will be met once the system has more than one client, and will be considerably exceeded thereafter. The back-of-envelope arithmetic follows.
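The slide's query arithmetic as a sketch (a restatement of its numbers, not a new measurement):

    # Steward answers a query locally after one {f+1 out of n} threshold
    # operation; BFT must wait for remote replies, costing at least one
    # wide area round trip in this setup.
    THRESHOLD_OP_MS = 11

    for one_way_ms in (50, 100):
        steward_ms = THRESHOLD_OP_MS
        bft_ms = 2 * one_way_ms  # round trip to remote replicas
        print(f"{one_way_ms} ms network: Steward ~{steward_ms} ms/query, "
              f"BFT >= {bft_ms} ms/query ({bft_ms / steward_ms:.0f}x)")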

  37. Scalability, Accountability and Instant Information Access for Network-Centric Warfare New ideas: • First scalable wide-area intrusion-tolerant replication architecture. • Providing accountability for authorized but malicious client updates. • Exploiting update semantics to provide instant and consistent information access. Impact: • Resulting systems with at least 3 times higher throughput, lower latency, and high availability for updates over wide area networks. • Clear path for technology transitions into military C3I systems such as the Army Future Combat System. Schedule (June 04 / Dec 04 / June 05 / Dec 05): • Component analysis & design. • Component implementation. • Component evaluation. • System integration & evaluation. • C3I model, baseline and demo. • Final C3I demo and baseline evaluation. http://www.cnds.jhu.edu/funding/srs/
