
A Key Management Scheme for Wireless Sensor Networks Using Deployment Knowledge








  1. A Key Management Scheme for Wireless Sensor Networks Using Deployment Knowledge Presenter: Todd Fielder

  2. Key Agreement Schemes • Trusted Server • Requires trusted infrastructure • Self-Enforcing • Asymmetric cryptography • Pre-Distribution • Key information is pre-distributed prior to deployment • In sensor networks, only a small portion of the keys is pre-distributed.

  3. Key Pre-distribution • Use only a subset of keys within the network and probabilistically guarantee a connected graph, with the guarantee depending on node density • Not all nodes will be connected • Deployment knowledge can increase this probability and the number of connected nodes • Nodes will be deployed in some order. • i.e. there is a higher probability that a node deployed at time t will be closer to other nodes deployed at time t than to nodes deployed at time (t+1).
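The probabilistic guarantee above can be made concrete: if each node draws m keys uniformly from a pool of P keys, two neighbors share at least one key with probability 1 - C(P-m, m)/C(P, m) (the standard random-key-pre-distribution analysis). A minimal sketch in Python, with the slide deck's later numbers used purely as illustration:

```python
from math import comb

def share_prob(pool_size: int, m: int) -> float:
    """Probability that two nodes, each holding m distinct keys drawn
    uniformly at random from a pool of pool_size keys, share at least
    one key: 1 - C(P-m, m) / C(P, m)."""
    return 1.0 - comb(pool_size - m, m) / comb(pool_size, m)

# Shrinking the effective pool (which is what deployment knowledge
# enables) sharply raises the probability that nearby nodes share a key.
print(share_prob(100_000, 100))  # large global pool: low probability
print(share_prob(1_770, 100))    # small per-group pool: near 1
```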

  4. Definitions and Assumptions • Static Nodes • Deployment is evenly distributed throughout the region. • Is this a safe assumption? • Deployment Point • The point at which a node is aimed to be deployed • The node may come to rest anywhere in an area around the deployment point, as described by a probability density function (pdf). • i.e. the point where the helicopter drops the node • Resident Point • The point near the deployment point where the sensor actually resides. • i.e. where the node lands.
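The deployment/resident point distinction can be sketched by sampling resident points from a pdf centered on the deployment point. A 2-D Gaussian is one common modeling choice; the sigma value below is illustrative, not taken from the slides:

```python
import random

def resident_point(deploy_x: float, deploy_y: float, sigma: float = 50.0):
    """Sample where a sensor actually lands (its resident point),
    modeling the pdf as a 2-D Gaussian centered on the deployment
    point. sigma is an assumed, illustrative spread."""
    return (random.gauss(deploy_x, sigma), random.gauss(deploy_y, sigma))

random.seed(1)
points = [resident_point(500.0, 500.0) for _ in range(10_000)]
mean_x = sum(x for x, _ in points) / len(points)
mean_y = sum(y for _, y in points) / len(points)
print(mean_x, mean_y)  # both close to the deployment point (500, 500)
```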

  5. Group-Based Deployment Model • A group of sensors is deployed at a single deployment point. • Increases the pdf (of being neighbors) within a group • Decreases the pdf between groups. • Under a uniform distribution policy, there is no knowledge about which nodes will be neighbors • Requires a larger key pool. • Decreases the probability of sharing keys. • This research distributes the groups uniformly over a two-dimensional grid of deployment points.

  6. Protocol • Key Pre-Distribution • The global key pool, S, is divided into t × n key pools, one per deployment group. • The goal is to let nearby key pools share keys, e.g. Si,j shares keys with the neighboring group's pool Si+1,j. • Each node carries a subset of m keys from its group's key pool.
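In code, the pre-distribution phase reduces to handing each node a random m-subset of its group's pool. A sketch, where the pool contents are placeholder key identifiers:

```python
import random

def assign_key_ring(group_pool: list, m: int, rng: random.Random) -> set:
    """Key pre-distribution: a node receives a random subset of m
    key identifiers from its group's key pool."""
    return set(rng.sample(group_pool, m))

rng = random.Random(42)
group_pool = list(range(1770))               # illustrative per-group pool
ring = assign_key_ring(group_pool, 100, rng)  # m = 100 keys per node
print(len(ring))
```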

  7. Phases 2 & 3 • Shared-Key Discovery • Broadcast the indices of held keys. • Set up secure links with neighbors that share a key. • Path-Key Establishment • Use previously established secure channels to set up keys with unconnected neighbors. • Allows intermediate nodes to determine keys. • Problem: an intermediate node may be compromised and choose a key known to the attacker. • The probability of securing a link between nodes within three hops is close to one. • Requires communication overhead • Between nodes • To determine who chooses the key
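Both phases can be sketched as set operations on the broadcast key indices. This is a simplification: real nodes exchange indices rather than keys, and a path key would be encrypted hop by hop over the existing secure links.

```python
def shared_key(ring_a: set, ring_b: set):
    """Shared-key discovery: compare broadcast key indices and pick a
    common one (here, deterministically the smallest) as the link key."""
    common = ring_a & ring_b
    return min(common) if common else None

def path_key(ring_a: set, ring_b: set, ring_c: set, fresh_key: int):
    """Path-key establishment: if A and B share no key, a node C with
    secure links to both can generate fresh_key and forward it over
    those links. As the slide notes, a compromised C learns this key."""
    if shared_key(ring_a, ring_c) is not None and shared_key(ring_b, ring_c) is not None:
        return fresh_key
    return None

a, b, c = {1, 2, 3}, {7, 8, 9}, {3, 7}
print(shared_key(a, b))              # None: no direct key
print(path_key(a, b, c, fresh_key=99))  # 99: C bridges A and B
```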

  8. Setting up Key Pools • Horizontally or vertically neighboring key pools share a·|Sc| keys, with 0 &lt; a &lt; 0.25. • Diagonal neighbors share b·|Sc| keys, with 0 &lt; b &lt; 0.25. • 4a + 4b = 1 • a and b are the overlapping factors and define the number of keys shared by neighboring groups. • Non-neighboring groups share no keys.
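The constraint 4a + 4b = 1 (arithmetically, the eight per-neighbor shares sum to exactly |Sc| keys) and the resulting share counts can be checked directly. A small sketch using the overlap factors quoted later in the slides:

```python
def overlap_counts(sc: int, a: float, b: float):
    """Keys shared with each horizontal/vertical neighbor (a*|Sc|) and
    each diagonal neighbor (b*|Sc|), after checking the scheme's
    constraint 4a + 4b = 1."""
    if abs(4 * a + 4 * b - 1.0) > 1e-9:
        raise ValueError("overlapping factors must satisfy 4a + 4b = 1")
    return round(a * sc), round(b * sc)

print(overlap_counts(1770, 0.167, 0.083))
```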

  9. Determining Overlapping Factors • a determines the keys shared between horizontal/vertical neighbors. • Connectivity(100) = 0.68 • b determines the keys shared with diagonal neighbors. • Connectivity(100) = 0.48

  10. Key Pool Size • Group S1,1 chooses |Sc| keys from S, then removes those keys from S. • For each cell S1,j, for j = 2…n, pick a·|Sc| keys from S1,j-1, then pick (1-a)·|Sc| fresh keys from the remaining pool. • Repeat for each subsequent row Si,j, also picking b·|Sc| keys from Si-1,j-1. • Flaw: there is no guarantee that a key will not percolate from one group to the next, since group (j+1) can pick arbitrary keys from group j, including keys that j itself inherited. • Causes non-neighboring groups to share keys.
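A simplified sketch of this construction: the first cell takes all fresh keys, later cells inherit from their left, upper, and upper-left neighbors and top up with fresh keys removed from the global pool. The counts and random choices are illustrative, the exact bookkeeping in the paper is more careful, and the sketch deliberately reproduces the percolation flaw, since inherited keys may themselves have been inherited from an earlier group:

```python
import random

def build_group_pools(global_keys, t, n, sc, a, b, seed=0):
    """Sketch of the pool setup: pool S[i][j] samples a*sc keys from
    its left and upper neighbors and b*sc keys from its upper-left
    diagonal neighbor, then fills up to sc keys with fresh keys
    removed from the global pool (counts rounded down)."""
    rng = random.Random(seed)
    fresh = list(global_keys)
    rng.shuffle(fresh)
    pools = {}
    for i in range(t):
        for j in range(n):
            pool = set()
            if j > 0:  # left neighbor
                pool |= set(rng.sample(sorted(pools[(i, j - 1)]), int(a * sc)))
            if i > 0:  # upper neighbor
                pool |= set(rng.sample(sorted(pools[(i - 1, j)]), int(a * sc)))
            if i > 0 and j > 0:  # upper-left diagonal neighbor
                pool |= set(rng.sample(sorted(pools[(i - 1, j - 1)]), int(b * sc)))
            while len(pool) < sc:  # top up with never-used fresh keys
                pool.add(fresh.pop())
            pools[(i, j)] = pool
    return pools

pools = build_group_pools(range(5000), t=3, n=3, sc=60, a=0.167, b=0.083)
print(len(pools[(0, 0)] & pools[(0, 1)]))  # at least the inherited a*sc keys
```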

  11. Experimental Setup • S = 100,000 keys; a = 0.167; b = 0.083. • Number of nodes = 10,000 • Deployment area = 1000m × 1000m • t = n = 10, so the grid has t × n = 100 cells, each 100m × 100m • Group size = number of nodes / number of cells = 100 nodes per group • Communication range (R) = 40m • |Sc| = 1770 keys per group pool

  12. Evaluation • Local Connectivity: probability that two neighboring nodes share a key. • m: number of keys stored per node

  13. Evaluation cont. • Global Connectivity: the ratio of the size of the largest connected component to the size of the whole graph. • Excludes nodes with no neighbor in communication range, since their isolation is due to deployment, not key distribution.
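This metric can be computed directly from the secure-link graph. A sketch using a small union-find (the helper name and toy graph are illustrative; nodes with no in-range neighbor are assumed already excluded from the input):

```python
from collections import Counter

def largest_component_fraction(nodes, secure_links):
    """Global connectivity as defined on the slide: size of the largest
    connected component of the secure-link graph divided by the number
    of nodes considered."""
    parent = {u: u for u in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in secure_links:
        parent[find(u)] = find(v)  # union the two components

    sizes = Counter(find(u) for u in nodes)
    return max(sizes.values()) / len(nodes)

# Toy graph: 5 nodes; node 4 shares no key with anyone.
print(largest_component_fraction([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3)]))  # 0.8
```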

  14. Communication Overhead • As the number of keys stored per node increases, the communication required to establish secure links decreases.

  15. Point of Uncertainty • If each group shares only 1770 keys, many keys are reused unnecessarily: 100 nodes per group × 100 keys per node = 10,000 key slots drawn from only 1770 distinct keys. • Do we need 100 keys per group? • Is group connectivity guaranteed to be 100%?

  16. Questions???
