Information-Theoretic Secrecy
CSCI381 Fall 2005, GWU. Reference: Stinson.
• Probability theory: Bayes' theorem
• Perfect secrecy: definition, some proofs, examples (one-time pad; simple secret sharing)
• Entropy: definition, Huffman coding property, unicity distance
Bayes' Theorem
If Pr[y] > 0, then
Pr[x|y] = Pr[x] Pr[y|x] / Σ_{x′ ∈ X} Pr[x′] Pr[y|x′]
What is the probability that the first die shows 2 when the sum of the two dice is 5?
What is the probability that the second die shows 3 when the product of the two dice is 6? When it is 5?
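A quick numerical check of both questions, as a sketch in Python (the enumeration code is mine, not from the slides):

```python
from fractions import Fraction

# All 36 equally likely outcomes of two fair six-sided dice.
outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

def cond_prob(event, given):
    """Pr[event | given], by counting equally likely outcomes."""
    given_set = [o for o in outcomes if given(o)]
    hits = [o for o in given_set if event(o)]
    return Fraction(len(hits), len(given_set))

# Sum = 5 arises from (1,4), (2,3), (3,2), (4,1), so Pr[first = 2 | sum = 5] = 1/4.
print(cond_prob(lambda o: o[0] == 2, lambda o: o[0] + o[1] == 5))   # 1/4

# Product = 6 arises from (1,6), (2,3), (3,2), (6,1), so Pr[second = 3 | prod = 6] = 1/4.
print(cond_prob(lambda o: o[1] == 3, lambda o: o[0] * o[1] == 6))   # 1/4

# Product = 5 arises only from (1,5), (5,1): the second die is never 3.
print(cond_prob(lambda o: o[1] == 3, lambda o: o[0] * o[1] == 5))   # 0
```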
(Im)Perfect Secrecy: Example
P = {1, 2, 3}, K = {K1, K2, K3}, C = {2, 3, 4, 5, 6}
Keys chosen equiprobably; Pr[1] = Pr[2] = Pr[3] = 1/3
Pr[c = 3] = ?
Pr[m | c = 3] = ?
Pr[k | c = 3] = ?
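The slide does not reproduce the encryption table, so the sketch below assumes the shift rule eKi(x) = x + i, a hypothetical choice that is at least consistent with C = {2, …, 6}:

```python
from fractions import Fraction
from collections import defaultdict

P = [1, 2, 3]
K = [1, 2, 3]                            # key Ki shifts by i (assumed rule)
enc = lambda k, x: x + k                 # eKi(x) = x + i, so C = {2, ..., 6}

pr_m = {x: Fraction(1, 3) for x in P}    # messages equiprobable
pr_k = {k: Fraction(1, 3) for k in K}    # keys equiprobable

# Pr[c] = sum over pairs (k, x) with eK(x) = c of Pr[k] Pr[x].
pr_c = defaultdict(Fraction)
for k in K:
    for x in P:
        pr_c[enc(k, x)] += pr_k[k] * pr_m[x]
print(dict(pr_c))                        # Pr[c = 3] = 2/9; C is not uniform

# Bayes: Pr[m | c = 3] = Pr[m] Pr[c = 3 | m] / Pr[c = 3].
for x in P:
    pr_c3_given_m = sum(pr_k[k] for k in K if enc(k, x) == 3)
    print(x, pr_m[x] * pr_c3_given_m / pr_c[3])
# Pr[m = 1 | c = 3] = Pr[m = 2 | c = 3] = 1/2, but Pr[m = 3 | c = 3] = 0 != 1/3.
```

Under this assumed rule the ciphertext changes the message distribution, so the system does not have perfect secrecy.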
(Im)Perfect Secrecy: Example
How should the above cipher be changed to improve the cryptosystem? What defines a good cryptosystem?
(Im)Perfect Secrecy Example: Latin Square
Assume all keys and messages equiprobable. What's good about this?
Pr[k|c] = ?
Pr[m|c] = ?
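The square itself is not shown on the slide, so as a sketch assume the 3×3 Latin square c = (k + x) mod 3 + 1 (rows = keys, columns = messages; each ciphertext appears exactly once per row and per column) and check the definition directly:

```python
from fractions import Fraction
from collections import defaultdict

# Assumed 3x3 Latin square: row = key, column = message, entry = ciphertext.
square = {(k, x): (k + x) % 3 + 1 for k in range(3) for x in range(3)}

pr = Fraction(1, 3)                  # keys and messages equiprobable
pr_c = defaultdict(Fraction)         # marginal Pr[c]
pr_mc = defaultdict(Fraction)        # joint Pr[m, c]
for (k, x), c in square.items():
    pr_c[c] += pr * pr
    pr_mc[(x, c)] += pr * pr

for (x, c), p in sorted(pr_mc.items()):
    print(f"Pr[m={x} | c={c}] = {p / pr_c[c]}")   # always 1/3 = Pr[m]
```

Every posterior equals the prior 1/3 (and by symmetry Pr[k|c] = 1/3 = Pr[k]), which is exactly what the Latin square structure buys.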
Perfect Secrecy: Definition
A cryptosystem has perfect secrecy if Pr[x|y] = Pr[x] ∀x ∈ P, ∀y ∈ C
a posteriori probability = a priori probability (posterior = prior)
Example: One-Time Pad
P = C = K = (Z2)^n
eK(x1, x2, …, xn) = (x1 + K1, x2 + K2, …, xn + Kn) mod 2, and dK = eK
Show that it provides perfect secrecy.
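A minimal byte-level sketch (bitwise XOR is exactly addition mod 2 in each bit position); the helper names are mine:

```python
import secrets

def otp(text: bytes, key: bytes) -> bytes:
    """One-time pad: output = text XOR key, i.e. addition mod 2, bit by bit."""
    assert len(key) == len(text), "key must be as long as the message"
    return bytes(t ^ k for t, k in zip(text, key))

msg = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(msg))   # fresh uniform key, used only once
ct = otp(msg, key)

# dK = eK: applying the same operation with the same key inverts it,
# since (x + K) + K = x (mod 2).
assert otp(ct, key) == msg
```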
Some Proofs: Thm. 2.4
Thm 2.4: Suppose (P, C, K, E, D) is a cryptosystem where |K| = |P| = |C|. Then the cryptosystem provides perfect secrecy if and only if every key is used with equal probability 1/|K| and, ∀x ∈ P and ∀y ∈ C, there is a unique key K such that eK(x) = y (e.g., a Latin square).
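A small helper (my own names) to test the unique-key condition on the two earlier sketches:

```python
def unique_key_condition(P, C, K, enc):
    """True iff for every x in P and y in C exactly one key K has eK(x) = y."""
    return all(sum(1 for k in K if enc(k, x) == y) == 1
               for x in P for y in C)

# The assumed Latin square satisfies it (|K| = |P| = |C| = 3) ...
print(unique_key_condition(range(3), [1, 2, 3], range(3),
                           lambda k, x: (k + x) % 3 + 1))    # True

# ... but the assumed shift example cannot: |C| = 5 > 3 = |K|.
print(unique_key_condition([1, 2, 3], [2, 3, 4, 5, 6], [1, 2, 3],
                           lambda k, x: x + k))              # False
```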
Entropy
H(X) = -Σi pi log2 pi
Example: pi = 1/n gives H(X) = log2 n
Examples: ciphertext and plaintext entropies for the earlier examples.
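A direct transcription of the definition (the convention 0 log 0 = 0 is standard):

```python
from math import log2

def entropy(probs):
    """H(X) = -sum_i p_i log2 p_i, with the convention 0 log 0 = 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([1/4] * 4))      # uniform, n = 4: log2 4 = 2.0 bits
# Plaintext vs. ciphertext entropy in the assumed shift example:
print(entropy([1/3] * 3))                     # H(P) = log2 3 ~ 1.585
print(entropy([1/9, 2/9, 3/9, 2/9, 1/9]))     # H(C) ~ 2.197
```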
Huffman Encoding
f: X* → {0, 1}*, i.e., a string of random variables maps to a string of bits.
E.g., X = {a, b, c, d}; f(a) = 1, f(b) = 10, f(c) = 100, f(d) = 1000
Huffman Encoding Algorithm
X = {a, b, c, d, e}
p(a) = 0.05, p(b) = 0.1, p(c) = 0.12, p(d) = 0.13, p(e) = 0.6
Code: a: 000, b: 001, c: 010, d: 011, e: 1
Average length = ? Entropy = ? (computed in the sketch below)
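A sketch of the merge-based construction (heapq-based; helper names are mine) that reproduces the code above and answers both questions:

```python
import heapq
from math import log2

def huffman_code(freqs):
    """Repeatedly merge the two least probable subtrees, prefixing 0/1."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, i, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, i, merged))   # i breaks probability ties
    return heap[0][2]

freqs = {"a": 0.05, "b": 0.1, "c": 0.12, "d": 0.13, "e": 0.6}
code = huffman_code(freqs)
avg = sum(p * len(code[s]) for s, p in freqs.items())
H = -sum(p * log2(p) for p in freqs.values())
print(code)       # {'a': '000', 'b': '001', 'c': '010', 'd': '011', 'e': '1'}
print(avg, H)     # average length 1.8, H(X) ~ 1.74, so H <= 1.8 <= H + 1
```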
Theorem
H(X) ≤ average length of Huffman encoding ≤ H(X) + 1
(stated without proof)
Properties of Entropy
• H(X) ≤ log2 n
• H(X, Y) ≤ H(X) + H(Y)
• H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y), where H(X|Y) = -Σx Σy p(y) p(x|y) log2 p(x|y)
• H(X|Y) ≤ H(X)
With proofs and examples.
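A numerical spot check of the properties on a toy joint distribution (the table is assumed, purely for illustration):

```python
from math import log2

# Assumed joint p(x, y); the omitted pair ("x2", "y1") has probability 0.
joint = {("x1", "y1"): 1/2, ("x1", "y2"): 1/4, ("x2", "y2"): 1/4}

def H(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

pX, pY = {}, {}
for (x, y), p in joint.items():
    pX[x] = pX.get(x, 0) + p
    pY[y] = pY.get(y, 0) + p

HXY, HX, HY = H(joint.values()), H(pX.values()), H(pY.values())
print(HXY <= HX + HY)    # subadditivity: 1.5 <= 0.811 + 1.0
print(HXY - HY <= HX)    # chain rule: H(X|Y) = H(X,Y) - H(Y) = 0.5 <= H(X)
```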
Theorem
H(K|C) = H(K) + H(P) - H(C)
Examples: the earlier (im)perfect squares
Proof: C = eK(P) is determined by (K, P), and P = dK(C) is determined by (K, C), so H(K, P, C) = H(K, P) = H(K, C). Then H(K|C) = H(K, C) - H(C) = H(K, P) - H(C) = H(K) + H(P) - H(C), using the independence of K and P in the last step.
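Checking the theorem numerically on the assumed shift example, with both sides computed independently:

```python
from math import log2
from collections import defaultdict

def H(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

# Assumed shift example again: 3 equiprobable keys and messages, c = x + k.
pr_kc = defaultdict(float)               # joint Pr[k, c]
pr_c = defaultdict(float)
for k in (1, 2, 3):
    for x in (1, 2, 3):
        pr_kc[(k, x + k)] += 1/9
        pr_c[x + k] += 1/9

lhs = H(pr_kc.values()) - H(pr_c.values())            # H(K|C) = H(K,C) - H(C)
rhs = H([1/3] * 3) + H([1/3] * 3) - H(pr_c.values())  # H(K) + H(P) - H(C)
print(lhs, rhs)   # both ~0.973 bits of key equivocation left after seeing c
```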
Language Entropy and Redundancy
HL = lim n→∞ H(P^n)/n (lies between 1.0 and 1.5 for English)
RL = 1 - HL/log2 |P| (the amount of "space" in a letter of English for other information)
Need, on average, about n ciphertext characters to break a substitution cipher, where
n = H(K) / (RL log2 |P|)
This n is the "unicity distance" of the cryptosystem.
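Plugging in the standard numbers for a monoalphabetic substitution cipher over English (HL = 1.25 is an assumed mid-range value):

```python
from math import log2, factorial

H_K = log2(factorial(26))     # key entropy: 26! equally likely keys, ~88.4 bits
H_L = 1.25                    # assumed language entropy, mid-range of [1.0, 1.5]
R_L = 1 - H_L / log2(26)      # redundancy of English, ~0.73
n = H_K / (R_L * log2(26))    # unicity distance
print(H_K, R_L, n)            # n ~ 25 ciphertext characters suffice
```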
Proof
• H(K|C^n) = H(K) + H(P^n) - H(C^n)
• H(P^n) ≈ n HL = n(1 - RL) log2 |P|, and H(C^n) ≤ n log2 |C| = n log2 |P|
• So H(K|C^n) ≥ H(K) - n RL log2 |P|, which drops to 0 at about n = H(K)/(RL log2 |P|)