Covert Channels and Anonymizing Networks
Ira S. Moskowitz --- NRL
Richard E. Newman --- UF
Daniel P. Crepeau --- NRL
Allen R. Miller --- just hanging out
Motivation
Anonymity --- what do you think/say? An optional desire or a mandated necessity?
Our interest is in hiding who is sending what to whom. Yet even if we have this type of "anonymity," one might still be able to leak information. Is this a failure to truly obtain anonymity, or an inherent flaw in the model/design?
Covert Channels
The information is leaked via a covert channel --- a communication path neither designed nor intended to transfer information.
A paranoid threat? Yes, but ....
This paper is a first step (for us) in tying anonymity and covert channels together.
MIXes
A MIX is a device intended to hide source/message/destination associations. A MIX can use crypto, delay, shuffling, padding, etc. to accomplish this.
Others have studied ways to "beat the MIX":
-- active attacks that flush the MIX
-- passive attacks that study probabilities
You all know this better than I :-)
Our Scenario
[Figure: two enclaves separated by MIX firewalls; Alice and the Clueless_i in Enclave 1 send over the overt (anonymous) channel to Enclave 2, while Eve observes the traffic, giving Alice a covert channel to Eve]
MIX firewalls separating 2 enclaves.
Timed MIX, total flush per tick.
Eve counts the # of messages per tick --- perfect sync, knows the # of Clueless_i.
The Clueless_i are IID; p = probability that a Clueless_i does not send a message.
Alice is clueless w.r.t. the Clueless_i.
Toy Scenario --- only Clueless_1
Alice can: not send a message (0), or send one (0c). Only two input symbols to the (covert) channel.
What does Eve see? A count in {0, 1, 2}:
if Alice sends 0: Eve sees 0 with probability p, 1 with probability q
if Alice sends 0c: Eve sees 1 with probability p, 2 with probability q
(A code sketch of this channel follows.)
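To make the toy channel concrete, here is a minimal Python sketch (the names `toy_channel` and `eve_distribution` are ours, not from the paper):

```python
import numpy as np

def toy_channel(p: float) -> np.ndarray:
    """2x3 channel matrix W[a][e] = P(Eve counts e | Alice's action a)."""
    q = 1.0 - p
    return np.array([
        [p,   q,   0.0],  # Alice sends 0 (nothing): count is 0 or 1
        [0.0, p,   q  ],  # Alice sends 0c: count is 1 or 2
    ])

def eve_distribution(x: float, p: float) -> np.ndarray:
    """Eve's output distribution when Alice stays silent with probability x."""
    alice = np.array([x, 1.0 - x])  # P(X=0) = x, P(X=0c) = 1-x
    return alice @ toy_channel(p)

print(eve_distribution(0.5, 0.7))   # [0.35, 0.5, 0.15]
```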
Discrete Memoryless Channel
[Figure: X --> anonymizing network --> Y]
X is the random variable representing Alice, the transmitter to the covert channel, with probability distribution P(X = 0) = x, P(X = 0c) = 1 - x.
Y represents Eve; its probability distribution is derived from X and the channel matrix.
In general, P(X = x_i) = p(x_i), and similarly p(y_k).
Entropy of X: H(X) = -∑_i p(x_i) log p(x_i)
Conditional entropy: H(X|Y) = -∑_k p(y_k) ∑_i p(x_i|y_k) log p(x_i|y_k)
Mutual information: I(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) (we use the latter)
Capacity is the maximum of I over the distribution of X.
For the toy scenario:
C = max_x { -( px log(px) + [qx + p(1-x)] log[qx + p(1-x)] + q(1-x) log[q(1-x)] ) - h(p) }
where h(p) = -( p log p + (1-p) log(1-p) ) is the binary entropy function.
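The capacity formula can be checked numerically. Below is a sketch that maximizes I(X,Y) = H(Y) - h(p) over Alice's bias x by simple grid search; the grid search and the helper names (`binary_entropy`, `entropy`, `capacity_toy`) are our choices, not the paper's method:

```python
import numpy as np

def binary_entropy(p: float) -> float:
    """h(p) = -p log2 p - (1-p) log2 (1-p), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

def entropy(dist: np.ndarray) -> float:
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    d = dist[dist > 0]
    return float(-(d * np.log2(d)).sum())

def capacity_toy(p: float, grid: int = 10001) -> tuple[float, float]:
    """Maximize I(X,Y) = H(Y) - h(p) over Alice's bias x (grid search)."""
    q = 1.0 - p
    best_c, best_x = -1.0, 0.0
    for x in np.linspace(0.0, 1.0, grid):
        h_y = entropy(np.array([p * x, q * x + p * (1 - x), q * (1 - x)]))
        if h_y - binary_entropy(p) > best_c:
            best_c, best_x = h_y - binary_entropy(p), x
    return best_c, best_x

c, x_star = capacity_toy(0.5)
print(f"C(0.5) = {c:.4f} bits/tick at x* = {x_star:.3f}")  # ~0.5 bits at x ~ 0.5
```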
General Scenario --- N Clueless_i
Eve now sees a count in {0, 1, ..., N+1}. The number of clueless senders transmitting in a tick is Binomial(N, q):
if Alice sends 0: P(Y = j | X = 0) = C(N, j) q^j p^(N-j), j = 0, ..., N
(so p^N --> 0, N p^(N-1) q --> 1, ..., q^N --> N)
if Alice sends 0c: P(Y = j | X = 0c) = C(N, j-1) q^(j-1) p^(N-j+1), j = 1, ..., N+1
(so p^N --> 1, ..., N q^(N-1) p --> N, q^N --> N+1)
(A code sketch follows.)
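Extending the toy sketch to N clueless senders is mechanical: the binomial distribution fills each input's row of the channel matrix, shifted by one position when Alice transmits. Again a sketch under our naming (`general_channel`, `capacity_general`), reusing `entropy` from the previous block:

```python
import math
import numpy as np

def general_channel(p: float, N: int) -> np.ndarray:
    """2 x (N+2) matrix: row 0 = Alice silent, row 1 = Alice sends 0c.
    With k of the N clueless senders transmitting, k ~ Binomial(N, q),
    Eve counts k messages if Alice is silent and k+1 if Alice sends."""
    q = 1.0 - p
    binom = np.array([math.comb(N, k) * q**k * p**(N - k) for k in range(N + 1)])
    W = np.zeros((2, N + 2))
    W[0, :N + 1] = binom  # counts 0..N
    W[1, 1:] = binom      # counts 1..N+1 (shifted by Alice's message)
    return W

def capacity_general(p: float, N: int, grid: int = 2001) -> tuple[float, float]:
    """Grid-search Alice's bias x; H(Y|X) = H(Binomial(N, q)) for either input."""
    W = general_channel(p, N)
    h_cond = entropy(W[0])  # same entropy for both rows (shifted support)
    best_c, best_x = -1.0, 0.0
    for x in np.linspace(0.0, 1.0, grid):
        c = entropy(np.array([x, 1 - x]) @ W) - h_cond
        if c > best_c:
            best_c, best_x = c, x
    return best_c, best_x
```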
Conclusions
• Capacity is highest when clueless traffic is very low or very high
• Capacity, as a function of p, is bounded below by C(0.5)
• Capacity monotonically decreases to 0 as N grows
• C(p) is a continuous function of p
• Alice's optimal bias is a function of p, and is always near 0.5
(A numeric illustration follows.)
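These claims can be spot-checked with the sketches above (illustrative only; exact values depend on grid resolution):

```python
# Capacity should shrink toward 0 as N grows, with Alice's optimal
# bias staying near 0.5 (per the conclusions above).
for N in (1, 2, 4, 8):
    c, x_star = capacity_general(0.5, N)
    print(f"N={N}: C = {c:.4f} bits/tick at x* = {x_star:.3f}")
```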
Future Work
• One MIX firewall --- distinguishable receivers
• Relax the IID assumption on the Clueless_i
• If Alice has knowledge of Clueless_i behavior…