

  1. A Distributed Algorithm for Managing Multi-target Identities in Wireless Ad-hoc Sensor Networks. Jaewon Shin, Leonidas J. Guibas, and Feng Zhao. Presented by Raghu Kiran Ganti for CS 851.

  2. Tracking Problem

  3. Outline of talk • Introduction • Mathematical formulation of problem • Multi-target identity update • Implementation on Wireless Ad-hoc Sensor Networks (WASN) • Evaluation • Discussion & Conclusions

  4. Introduction • Mathematical formulation of problem • Multi-target identity update • Implementation on Wireless Ad-hoc Sensor Networks (WASN) • Evaluation • Discussion & Conclusions

  5. Introduction • Problem statement: distributed multi-target identity management. Maintain information about who is who over time, given the targets’ position estimates. • Difficulty: exponential complexity in associating target position estimates with target identities.

  6. Related Work • An Algorithm for Tracking Multiple Targets (1979!) – IEEE Transactions on Automatic Control. • Develops a method for calculating association probabilities. • Primarily uses a Bayesian formulation for determining the probabilities. • Not related to WASN, but the ideas are very similar => tracking is not a new problem!

  7. Contributions of the paper • Main contribution – a rigorous mathematical framework for solving the multi-target identity management problem. • A new distributed representation for maintaining identity information. • A distributed algorithm of O(N²) complexity for updating the belief matrix. • Estimate non-local parameters of a physical phenomenon using local information – how feasible is it?

  8. Some new concepts • Identity Belief Matrix: matrix maintaining the identity information of each target. Ex: the 3×3 identity matrix [1 0 0; 0 1 0; 0 0 1] represents a belief matrix in which target 1 is at position 1 with probability 1, etc. • Mixing matrix: matrix used to update the local belief matrix. Ex: entry m12 is the probability that the target moved from position 1 at time k-1 to position 2 at time k.

  9. A Little Mathematics • Doubly stochastic matrix: a matrix A = (aij) such that aij ≥ 0, Σi aij = 1 for every column j, and Σj aij = 1 for every row i. • Statistical Entropy: a measure of variation defined on the probability distribution of observed events. Specifically, if P(a) is the probability of an event a, the entropy over all events a in A is H(A) = −Σa P(a) log P(a).
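
These two definitions are easy to check numerically. A minimal sketch (not from the paper; assumes NumPy and natural-log entropy):

```python
import numpy as np

def is_doubly_stochastic(A, tol=1e-9):
    """a_ij >= 0 and every row and every column sums to 1."""
    A = np.asarray(A, dtype=float)
    return bool((A >= -tol).all()
                and np.allclose(A.sum(axis=0), 1.0, atol=tol)
                and np.allclose(A.sum(axis=1), 1.0, atol=tol))

def entropy(p):
    """H(p) = -sum_a p(a) log p(a), with 0 log 0 taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

I3 = np.eye(3)                    # certain identities (COLU-like belief)
U3 = np.full((3, 3), 1.0 / 3.0)   # maximal uncertainty
print(is_doubly_stochastic(I3), is_doubly_stochastic(U3))  # True True
print(entropy(I3[:, 0]), entropy(U3[:, 0]))  # 0.0 and log 3 ≈ 1.0986
```

The identity matrix and the uniform matrix are the two extremes the later slides contrast: zero-entropy belief columns versus maximum-entropy ones.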

  10. Introduction • Mathematical formulation of problem • Multi-target identity update • Implementation on Wireless Ad-hoc Sensor Networks (WASN) • Evaluation • Discussion & Conclusions

  11. A Mathematical Framework – Target Configurations [Figure: three snapshots of target configurations – Crowded (COHU), Sparse (COLU), Sparse (COLU).]

  12. Target configurations • COLU – Configuration Of Low Uncertainty. • COHU – Configuration Of High Uncertainty. • Actions/decisions in COHU are more uncertain than those in COLU. • Let I = {1,2,…,N} be the target identities and X(k) = {x1(k), …, xN(k)} be the position estimates of the N targets at time k.

  13. Identity Mass Flow • Identity management – compute the correct permutation of I given X(k). • Maintain all possible permutations? – Exponential complexity! • Proposed idea – Identity Mass Flow (IMF). • IMF – the mass associated with an identity flows from X(k-1) to X(k), wholly or partially.

  14. [Figure: the current state at t = k and a new measurement at t = k+1.]

  15. [Figure: the identity mass at t = k splits between the measurements at t = k+1 with probabilities p and 1-p.] Mass is neither created nor destroyed – conservation of mass.

  16. [Figure: masses a and b from t = k arrive at one measurement at t = k+1, with a + b = 1.] The sum of all masses arriving at xi(k) is one.

  17. Definitions • The identity belief matrix B(k) is an N × N doubly stochastic matrix whose entry bij(k) represents the amount of identity mass from i є I that arrives at xj(k). The jth column bj(k) of B(k) is called the identity belief vector of xj(k): bj(k) = [ p(xj(k)’s ID is 1), p(xj(k)’s ID is 2), …, p(xj(k)’s ID is N) ]ᵀ.

  18. Definitions • The mixing matrix M(k) is an N × N doubly stochastic matrix whose entry mij(k) represents the probability that xj(k) originated from xi(k-1); M(k) is statistically independent of M(l) for all l ≠ k. • Theorem 1: Let B(k) and M(k) be the identity belief matrix and the mixing matrix at time k as defined above; then B(k+1) = B(k)M(k+1). • Matrix multiplication – compute intensive!
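
Theorem 1's update is a single matrix product. A small numerical sketch (the matrices here are illustrative examples, not from the paper):

```python
import numpy as np

# Illustrative belief matrix at time k: column j is the identity belief
# vector of x_j(k); e.g. x_1(k) has target 1's identity with prob 0.9.
B_k = np.array([[0.9, 0.1],
                [0.1, 0.9]])

# Illustrative mixing matrix: entry m_ij is the probability that
# x_j(k+1) originated from x_i(k).
M_next = np.array([[0.7, 0.3],
                   [0.3, 0.7]])

# Theorem 1: identity mass flows forward via one matrix product.
B_next = B_k @ M_next
print(B_next)                                  # ≈ [[0.66 0.34] [0.34 0.66]]
print(B_next.sum(axis=0), B_next.sum(axis=1))  # still doubly stochastic
```

A product of doubly stochastic matrices is doubly stochastic, so the mass-conservation invariants survive the update.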

  19. Updating B(k) using M(k) [Figure: a two-target example over t = k-1, k, k+1. B(k-1) = [1 0; 0 1]; with M(k-1) = [α 1-α; 1-α α], B(k) = [α 1-α; 1-α α]; with M(k) = [β 1-β; 1-β β], B(k+1) = [αβ + (1-α)(1-β), α(1-β) + (1-α)β; α(1-β) + (1-α)β, αβ + (1-α)(1-β)].]

  20. Uncertainty change in system • Lemma 1: Let πB(k) be the probability mass function over all the possible identity associations in B(k), then H(πB(k)) ≥ H(πB(k-1)) where H(.) is the statistical entropy of a probability mass function. • The proof follows from strict concavity of the entropy function.
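
Lemma 1's uncertainty growth can be illustrated numerically. The sketch below measures entropy on the belief columns rather than on the full association distribution πB(k) (a simplification); the fixed mixing matrix is a made-up example:

```python
import numpy as np

def mean_column_entropy(B):
    """Average entropy of the identity belief vectors (columns of B)."""
    H = 0.0
    for col in B.T:
        nz = col[col > 0]
        H -= (nz * np.log(nz)).sum()
    return H / B.shape[1]

# Hypothetical doubly stochastic mixing matrix, constant over time.
M = np.array([[0.8, 0.2],
              [0.2, 0.8]])
B = np.eye(2)               # start from certain identities (zero entropy)
prev = mean_column_entropy(B)
for k in range(10):
    B = B @ M               # Theorem 1 update
    H = mean_column_entropy(B)
    assert H >= prev - 1e-12   # uncertainty never decreases
    prev = H
print(B)  # drifts towards the uniform matrix [[.5 .5] [.5 .5]]
```

By concavity of entropy, each new column is a convex mixture of old columns, so the mean column entropy can only grow — a column-level analogue of the lemma.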

  21. Uncertainty – how does it affect the system? • Lemma 1 => without any additional information, uncertainty will grow until every identity association becomes equally likely. • Uncertainty will be lower in a COLU configuration. • Proper use of local information could reduce uncertainty in the IMF formulation.

  22. Computing the Mixing Matrix M(k) • M(k) – a collection of marginal association probabilities; it can be computed from the joint association probability, BUT that computation is exponential. • Proposed solution – compute a matrix L(k) such that l(i,j) = p(s(k) = |xj(k) – xi(k-1)|/ΔT), where s(k) is the speed information of the target. Since M(k) is doubly stochastic, L(k) must be transformed into a doubly stochastic matrix.

  23. Computing the Mixing Matrix • Done by an Iterative Scaling algorithm – presented later. • Issue to be addressed: how are the speeds of the targets calculated? Is it possible to estimate target speeds correctly?
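
One plausible way to fill in L(k) from the speed model l(i,j) = p(s(k) = |xj(k) – xi(k-1)|/ΔT) is sketched below. The Gaussian speed prior and its parameters are assumptions for illustration, not the paper's choice:

```python
import numpy as np

def likelihood_matrix(X_prev, X_curr, dt, v_nominal, sigma):
    """l(i, j) scores the speed |x_j(k) - x_i(k-1)| / dt implied by the
    association i -> j under a hypothetical Gaussian speed model
    N(v_nominal, sigma^2)."""
    n, m = len(X_prev), len(X_curr)
    L = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            speed = np.linalg.norm(X_curr[j] - X_prev[i]) / dt
            L[i, j] = np.exp(-0.5 * ((speed - v_nominal) / sigma) ** 2)
    return L  # non-negative, but NOT yet doubly stochastic

# Two well-separated targets drifting right; the crossed associations
# imply implausibly high speeds and get tiny likelihoods.
X_prev = np.array([[0.0, 0.0], [10.0, 0.0]])
X_curr = np.array([[1.0, 0.0], [11.0, 0.0]])
L = likelihood_matrix(X_prev, X_curr, dt=1.0, v_nominal=1.0, sigma=2.0)
print(L)  # diagonal dominates: each target most likely kept its track
```

The output L is only non-negative; the iterative scaling step described on the next slides is what turns it into the doubly stochastic M(k).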

  24. Introduction • Mathematical formulation of problem • Multi-target identity update • Implementation on Wireless Ad-hoc Sensor Networks (WASN) • Evaluation • Discussion & Conclusions

  25. Multi-target identity update • Local information must be used for updating the identity belief matrix. • The IMF approach is a natural setting for exploiting local information.

  26. [Figure: a two-target example. In the sparse (COLU) configurations the belief vectors are confident, e.g. (1, 0) and (0, 1) or (.9, .1) and (.1, .9); after the targets pass through the crowded (COHU) region, the vectors degrade to (.5, .5).]

  27. [Figure: the same two-target example when the targets remain sparse (COLU) throughout; the belief vectors stay confident, e.g. (.9, .1) and (.1, .9).]

  28. Bayesian Normalization • The above two examples are the two-target case: given one target’s probability, the other target’s probability can be calculated (doubly stochastic). • For the N-target case – use Bayesian normalization. • The desirable properties of a perfect solution are achieved by computing B(k) as the Bayesian posterior belief distribution.

  29. Bayesian Normalization • Bayesian posterior belief distribution: use the a priori belief and local data to obtain the new/posterior belief. • Assume the joint probability distribution over all possible N! associations at time k is available for all k. • Define K to be the set of indices associated with |π(ki)| > 2, where ki is the ith element of K.

  30. Bayesian Normalization • The Ri are a sequence of random variables associated with π(ki), taking values j є {1,…,|π(ki)|} with probability πj(ki). • A specific permutation is chosen according to the value of Ri with some probability. • A single identity association is a point ε in the joint event space S, with |S| = Πi |π(ki)|.

  31. Bayesian Normalization • The probability of the above event is: p((R1,…,R|K|) = ε) = p(R1 = ε1)…p(R|K| = ε|K|). • Given local evidence L, the posterior belief matrix B(k) can be calculated. • Theorem 2: Let Eij(k) be the subset of S satisfying ID(xj(k)) = i and L be the subset of S satisfying the local observation; then bij(k) = p(Eij(k) | L) = p(Eij(k) ∩ L) / p(L).

  32. Bayesian Normalization • Lemma 2: The local observation L does not increase the entropies of the columns, i.e. H(bj(k) | L) ≤ H(bj(k)) for every column j. • Theorem 3: Let bpq(k) be the entry that becomes 1 from local evidence L; then the columns with a zero at the pth entry and the rows with a zero at the qth entry do not change. • Theorem 3 => fewer updates to the belief matrix. • Bayesian normalization is not practical – introduce Iterative Scaling.

  33. Iterative Scaling – something wrong? The paper states: “We present a version of the Iterative Scaling algorithm to achieve a doubly-stochastic matrix B given an N × N non-negative matrix A.” The algorithm: normalize rows, normalize columns, check the termination condition.

  B := A; B_old := A;
  for k = 1 to maximum_number_of_iterations
      // normalize rows
      for i = 1 to number_of_rows
          row_sum := 0;
          for j = 1 to number_of_columns
              row_sum := row_sum + B(i,j);
          end
          for j = 1 to number_of_columns
              B(i,j) := B(i,j)/row_sum;
          end
      end
      // normalize columns
      for i = 1 to number_of_columns
          column_sum := 0;
          for j = 1 to number_of_rows
              column_sum := column_sum + B(j,i);
          end
          for j = 1 to number_of_rows
              B(j,i) := B(j,i)/column_sum;
          end
      end
      // check for termination
      if |B - B_old| < error
          terminate;
      end
      B_old := B;
  end

  34. Iterative Scaling • Divide each element in the ith row/column by the sum of that row/column. • Repeat the normalization until the error margin is small. • Empirical observations: • The algorithm converges to a unique doubly-stochastic matrix for a given initial matrix. • The ordering of row/column normalization does not affect convergence. • The total number of iterations is not affected by the size of the matrix.
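
The row/column normalization loop above is the classical Sinkhorn iteration. A compact runnable version (a sketch, not the paper's implementation):

```python
import numpy as np

def iterative_scaling(A, max_iter=1000, tol=1e-10):
    """Alternately normalize rows and columns of a non-negative matrix
    (Sinkhorn-style) until it is doubly stochastic to within tol.
    Assumes A has no all-zero row or column."""
    B = np.array(A, dtype=float)
    for _ in range(max_iter):
        B_old = B.copy()
        B /= B.sum(axis=1, keepdims=True)   # normalize rows
        B /= B.sum(axis=0, keepdims=True)   # normalize columns
        if np.abs(B - B_old).max() < tol:   # termination condition
            break
    return B

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
M = iterative_scaling(A)
print(M.sum(axis=0), M.sum(axis=1))  # both ≈ [1. 1.]
```

Sinkhorn's theorem guarantees convergence to a unique doubly stochastic matrix for strictly positive inputs, matching the empirical observations on the slide.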

  35. [Figure: convergence of iterative scaling – log|B(n) - B(n-1)| versus the number of iterations.]

  36. Iterative Scaling • The convergence rate is affected by how far the initial matrix is from being doubly stochastic. • Larger matrices tend to be closer to doubly stochastic than smaller ones. • Q – The first iteration gives a doubly-stochastic matrix; why do we need so many iterations? • Q – How does it do a correct update of B(k)?

  37. Introduction • Mathematical formulation of problem • Multi-target identity update • Implementation on Wireless Ad-hoc Sensor Networks (WASN) • Evaluation • Discussion & Conclusions

  38. Implementation on WASN • Quantity to be distributed – the belief matrix B(k). • A small number of nodes, called leaders, are active and responsible for maintaining/updating the information of interest. • When a leader is no longer “good” for information gathering, a new leader is selected and the information is handed off. • Each leader maintains B(k) and its position estimate xi(k).

  39. Implementation on WASN • When a leader observes local evidence about the ID of a target, it needs to send this information to the other leaders so that they can update their belief matrices. • Question – the above communication is multicast, and the paper does not mention how to achieve it! The authors assume a good group-management protocol exists for this purpose.

  40. [Flowchart for multi-target identity management: Sleep → Wake-up → Leader? → Do sensing → Sensing good enough? → Multiple measurements? → Does each measurement have a unique leader? (if not, ignore the other measurement) → Mixing matrix available? (if not, initialize normalization) → Compute M and send info to the other leaders → Update B, select the next leader, and hand off the data.]

  41. Example – Single Target [Figure: a single target tracked across leader hand-offs; the identity mass remains 1 at every step.]

  42. Example – Multiple Targets [Figure: two targets tracked across leader hand-offs, with belief vectors such as (1, 0) and (0, 1) or (.9, .1) and (.1, .9).]

  43. Introduction • Mathematical formulation of problem • Multi-target identity update • Implementation on Wireless Ad-hoc Sensor Networks (WASN) • Evaluation • Discussion & Conclusions

  44. Evaluation • Simulations are used; there is no real experimental setup. • The initial leaders are selected manually. • The next leader is selected based on its geographical position?? • Each node has a signal-processing module for signal classification. • Localization is achieved, and nodes know the other nodes’ relative positions.

  45. Simulation Example – At t = 0 [Figure: targets 1–4.]

  46. Simulation Example – At t = 7 [Figure: targets 1–4.]

  47. Simulation Example – At t = 10 [Figure: targets 1–4.]

  48. Introduction • Mathematical formulation of problem • Multi-target identity update • Implementation on Wireless Ad-hoc Sensor Networks (WASN) • Evaluation • Discussion & Conclusions

  49. Latest Work • Latest work – Tracking a Moving Object with a Binary Sensor Network (SenSys ’03). • Claim: with 1 bit of information from a sensor node, we can track objects! The bit conveys whether the object moves away from or towards the node. • Uses geometry to determine the target’s location. • The tracking algorithm is based on the particle filtering method – the location density function is represented as a set of random points that are updated based on sensor readings.

  50. [Figure: geometry of an object at X moving away from Sj and towards Si, with angles α and β.]
