Overview of the square root law for covert communication over AWGN channels: how Alice can communicate with Bob while keeping an adversary's probability of detecting her below a tolerable level, and the fundamental limit on how many bits she can send covertly.
Square Root Law for Communication with Low Probability of Detection on AWGN Channels
Boulat A. Bash, Dennis Goeckel, Don Towsley
Introduction
• Problem: communicate so that the adversary's detection capability is limited to a tolerable level
• Low probability of detection (LPD) communication, as opposed to protecting message content (encryption)
• Why? Lots of applications:
  • Communication itself looks suspicious
  • "Camouflage" for military operations
  • etc.
• Goal: fundamental limits of LPD communication
Scenario
• Alice uses a radio to covertly communicate with Bob
• They share a secret (the codebook)
• Willie attempts to detect whether Alice is talking to Bob
• Willie is passive: he doesn't actively jam Alice's channel
• Willie's problem: detect Alice
• Alice's problem: limit Willie's detection schemes
• Bob's problem: decode Alice's message
Main Result: The Square Root Law
• Given that Alice has to tolerate some risk of being detected, how many bits can she covertly send to Bob?
• Not many: O(√n) bits per n channel uses
• If she sends ω(√n) bits in n channel uses, either Willie detects her, or Bob is subject to decoding errors
• Intuition: Alice has to "softly whisper" to reduce detection, which limits how much she can send
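As a small numeric illustration of the scaling (the constant c is arbitrary, chosen only for illustration): the covert bit count grows like √n, so the covert *rate* in bits per channel use vanishes as n grows.

```python
import math

# Illustrative scaling only: under the square root law, the number of
# covert bits Alice can send in n channel uses grows like c * sqrt(n)
# for some constant c (c = 1 here is an arbitrary, hypothetical choice).
def covert_bits(n, c=1.0):
    return c * math.sqrt(n)

for n in [100, 10_000, 1_000_000]:
    # Total bits grow, but bits per channel use shrink toward zero:
    print(n, covert_bits(n), covert_bits(n) / n)
```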
Outline
• Introduction
• Channel model
• Hypothesis testing
• Achievability
• Converse
• Conclusion
Channel Model
• Alice transmits an n-symbol codeword x = (x_1, ..., x_n)
• Bob decodes from y_b, where y_b^(i) = x_i + z_b^(i) and the z_b^(i) are i.i.d. AWGN samples, N(0, σ_b²)
• Willie observes y_w, where y_w^(i) = x_i + z_w^(i) and the z_w^(i) are i.i.d. N(0, σ_w²)
• Willie must decide: is y_w just noise, or something else?
Statistical Hypothesis Testing
• Willie has n observations of Alice's channel and attempts to classify them as noise or covert data
• Null hypothesis H0: observations are noise
• Alternate hypothesis H1: Alice is sending covert signals

Willie's decision vs. ground truth:

                          Willie decides Noise (H0)    Willie decides Data (H1)
Alice is quiet (H0)       1 − α                        α = P(false alarm)
Alice x-mitting (H1)      β = P(miss)                  1 − β = P(detection)
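A useful sanity check on this table: any detector that ignores its observations and guesses "transmitting" with some probability p satisfies α + β = 1. That is why α + β close to 1 means Willie's detector is no better than random guessing. A quick Monte Carlo sketch (the probability p and trial count are arbitrary illustration values):

```python
import random

random.seed(0)

# A "blind" detector that ignores the channel and declares H1 (Alice
# transmitting) with probability p. For any such detector, the false-alarm
# probability is alpha = p and the miss probability is beta = 1 - p,
# so alpha + beta = 1: no better than random guessing.
def blind_detector_errors(p, trials=100_000):
    false_alarms = sum(random.random() < p for _ in range(trials))  # H0 true, decided H1
    misses = sum(random.random() >= p for _ in range(trials))       # H1 true, decided H0
    return false_alarms / trials, misses / trials

alpha, beta = blind_detector_errors(0.3)
print(round(alpha + beta, 2))  # ≈ 1.0, regardless of p
```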
Willie's Detector
• Willie picks ε > 0 (the confidence he demands of his detector)
• Willie uses a detector that minimizes the sum of error probabilities α + β
• Alice can lower-bound α + β ≥ 1 − ε by picking an appropriate distribution for her covert symbols
• On the detector's ROC curve, the diagonal α + β = 1 corresponds to random guessing
Achievability
• Alice can send O(√n) bits in n channel uses to Bob while maintaining α + β ≥ 1 − ε at Willie's detector, for any ε > 0
• Holds for Willie's AWGN channel to Alice
• Three-step proof:
  • Construction
  • Analysis of Willie's detector
  • Analysis of Bob's decoding error
Achievability: Construction
• Random codebook with average symbol power P_f
• M-bit messages W_1, W_2, ..., W_{2^M} map to n-symbol codewords:
  c(W_1)     = (x_{1,1},     x_{1,2},     x_{1,3},     ...,  x_{1,n})
  c(W_2)     = (x_{2,1},     x_{2,2},     x_{2,3},     ...,  x_{2,n})
  ⋮
  c(W_{2^M}) = (x_{2^M,1},   x_{2^M,2},   x_{2^M,3},   ...,  x_{2^M,n})
• Each symbol is drawn i.i.d. from a zero-mean Gaussian with power P_f
• Codebook revealed to Bob, but not to Willie
• Willie knows how the codebook is constructed, as well as n and P_f
• System obeys Kerckhoffs's principle: all security is in the key used to construct the codebook
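The construction above can be sketched in a few lines of NumPy. The sizes M and n are hypothetical illustration values; the achievability proof takes the symbol power P_f = O(1/√n), which is what the choice below mimics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sketch of the random codebook construction (all parameters illustrative).
M = 4                     # message length in bits -> 2^M codewords
n = 100                   # codeword length (channel uses)
P_f = 1.0 / np.sqrt(n)    # average symbol power shrinking like 1/sqrt(n)

# Each of the 2^M codewords has n symbols drawn i.i.d. N(0, P_f).
# (The Gaussian's variance is the symbol power, so scale = sqrt(P_f).)
codebook = rng.normal(0.0, np.sqrt(P_f), size=(2**M, n))

print(codebook.shape)          # (16, 100)
print(np.mean(codebook**2))    # empirical symbol power, close to P_f
```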
Achievability: Analysis of Willie's Detector
• Joint distributions for Willie's n observations:
  • P_0 = ∏ N(0, σ_w²) when Alice is quiet, since the AWGN is i.i.d.
  • P_1 = ∏ N(0, P_f + σ_w²) when Alice is transmitting, since Willie does not know Alice and Bob's codebook
• Bounding Willie's detection: α + β ≥ 1 − V_T(P_0, P_1), where V_T is the total variation (½ L1 norm) distance
• Pinsker's inequality relates V_T to the relative entropy: V_T(P_0, P_1) ≤ √(D(P_0‖P_1)/2)
• A Taylor series expansion of D(P_0‖P_1) shows it is small when P_f = O(1/√n), keeping α + β ≥ 1 − ε
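The chain of bounds above can be checked numerically. The sketch below uses the closed-form relative entropy between two zero-mean Gaussians (the noise variance σ_w² and the constant in P_f are hypothetical illustration values), multiplies it by n for the i.i.d. product distributions, and applies Pinsker's inequality to lower-bound α + β:

```python
import math

def kl_gaussian_0mean(var0, var1):
    # D( N(0,var0) || N(0,var1) ) in nats, closed form for zero-mean Gaussians
    return 0.5 * (math.log(var1 / var0) + var0 / var1 - 1.0)

sigma_w2 = 1.0                 # assumed noise variance at Willie (illustrative)
n = 10_000
P_f = 0.1 / math.sqrt(n)       # symbol power shrinking like 1/sqrt(n)

# n i.i.d. observations: relative entropies of product distributions add up
D_total = n * kl_gaussian_0mean(sigma_w2, sigma_w2 + P_f)

# Pinsker: V_T <= sqrt(D/2); any detector then satisfies alpha + beta >= 1 - V_T
V_T_bound = math.sqrt(D_total / 2.0)
print(1.0 - V_T_bound)         # lower bound on alpha + beta, close to 1
```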
Achievability: Analysis of Bob's Decoding Error
• Bob uses ML decoding to recover the message from y_b
• An error occurs if another codeword is closer to y_b than the transmitted one
• A standard random-coding argument shows the decoding error probability is small at this rate and power
• Therefore, Bob gets O(√n) bits per n channel uses
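Over AWGN with a known codebook, ML decoding reduces to picking the codeword at minimum Euclidean distance from the received vector. A minimal sketch (codebook sizes, powers, and noise variance are hypothetical illustration values, deliberately far from the covert regime so a single trial decodes correctly):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the paper's covert-regime values)
n, M = 200, 4
P_f, sigma_b2 = 0.5, 0.1

# Random Gaussian codebook shared by Alice and Bob
codebook = rng.normal(0.0, np.sqrt(P_f), size=(2**M, n))

sent = 7                                                    # transmitted message index
y = codebook[sent] + rng.normal(0.0, np.sqrt(sigma_b2), n)  # Bob's AWGN observation

# ML (minimum-distance) decoder: an error occurs iff some other
# codeword lands closer to y than the transmitted one
decoded = int(np.argmin(np.linalg.norm(codebook - y, axis=1)))
print(decoded == sent)  # True
```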
Relationship with Steganography
• Steganography: embed messages into a covertext; Bob and Willie then see a noiseless stegotext
• A square root law also holds in steganography (Ker, Fridrich, et al.):
  • O(√n) symbols can safely be modified in a covertext of size n
  • O(√n log n) bits can be embedded, due to the noiseless "channel"
• The similarity is due to the same hypothesis-testing mathematics
Outline
• Introduction
• Channel model
• Hypothesis testing
• Achievability
• Converse
• Conclusion
Converse
• When Alice tries to transmit ω(√n) bits in n channel uses, using an arbitrary codebook, either:
  • She is detected by Willie with arbitrarily low error probability, or
  • Bob's decoding error probability is bounded away from zero
• Applies to an arbitrary codebook with codewords of length n; Willie is oblivious to the design of Alice's system
• Two-step proof:
  • Willie detects codewords whose average symbol power is too high using a simple power detector
  • Bob cannot decode, with arbitrarily low error, codewords that carry ω(√n) bits at the low average symbol power that evades Willie's detector
Converse: Willie's Hypothesis Test
• Willie collects n independent readings y_w of his channel to Alice
• Interested in the hypothesis test: H0 (noise only) vs. H1 (Alice transmitting)
• Test statistic: average received symbol power S = (1/n) Σ_{i=1}^{n} (y_w^(i))²
• Test implementation: pick some threshold t
  • Accept H0 if S < t
  • Reject H0 if S ≥ t
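This power detector (radiometer) is easy to simulate. The sketch below estimates α and β by Monte Carlo for an illustrative, non-covert power level (noise variance, signal power, threshold, and trial counts are all hypothetical values), showing that at this power Willie detects Alice reliably:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: a signal power P large enough to be detected
n, sigma_w2, P = 1000, 1.0, 0.5
t = sigma_w2 + P / 2        # threshold halfway between the two mean powers
trials = 2000

def avg_power(x):
    return np.mean(x**2)    # Willie's test statistic S

# H0: noise only -> false alarm when S >= t
alpha = np.mean([avg_power(rng.normal(0, np.sqrt(sigma_w2), n)) >= t
                 for _ in range(trials)])
# H1: Gaussian codeword plus noise (total variance sigma_w2 + P) -> miss when S < t
beta = np.mean([avg_power(rng.normal(0, np.sqrt(sigma_w2 + P), n)) < t
                for _ in range(trials)])
print(alpha, beta)          # both near zero: Willie detects this power level
```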
Converse: Analysis
• Probability of false alarm: α = P(S ≥ t | Alice doesn't transmit); under H0, S concentrates around σ_w², so setting the threshold t slightly above σ_w² drives α → 0
• Probability of a missed detection: β = P(S < t | Alice transmits); when the codeword's average symbol power is large enough that S concentrates above t, β → 0
Converse: Alice Using Low-Power Codewords
• Suppose Alice uses a positive fraction of her codebook with average symbol power low enough to evade Willie's power detector
• Then Willie can't drive his detection errors to zero on those codewords
• Analyze Bob's decoding error via the converse of Shannon's channel coding theorem:
  • Sending ω(√n) bits in n channel uses requires a rate exceeding capacity at such low power
  • Therefore, Bob's decoding error is bounded away from zero
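The capacity side of this argument can be checked numerically: the AWGN capacity is C(P) = ½ log₂(1 + P/σ_b²) bits per channel use, so if evading detection forces P = c/√n, the total reliably decodable bits n·C(P) grow only like √n. The constants below are hypothetical illustration values:

```python
import math

# Total decodable bits over n channel uses at symbol power P = c/sqrt(n),
# using the AWGN capacity C(P) = 0.5*log2(1 + P/sigma_b2) bits per use.
# c and sigma_b2 are illustrative constants, not the paper's.
def total_bits(n, c=1.0, sigma_b2=1.0):
    P = c / math.sqrt(n)
    return n * 0.5 * math.log2(1.0 + P / sigma_b2)

for n in [10_000, 1_000_000]:
    # The ratio total_bits/sqrt(n) stays roughly constant: sqrt-n scaling
    print(n, total_bits(n), total_bits(n) / math.sqrt(n))
```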
Conclusion
• We proved a square root law for the LPD channel
• Future work:
  • Key efficiency: can show bounds on the length K of Alice and Bob's shared secret
  • Open problem: can K be linear?
  • Covert networks
Thank you! boulat@cs.umass.edu