Shannon's theory, part II. Ref.: Cryptography: Theory and Practice, Douglas R. Stinson
Shannon's theory • 1949, "Communication Theory of Secrecy Systems," Bell System Technical Journal. • Two issues: • What is the concept of perfect secrecy? Does any cryptosystem provide perfect secrecy? • It is achievable when a key is used for only one encryption • How do we evaluate a cryptosystem when many plaintexts are encrypted using the same key?
Perfect secrecy • Definition: A cryptosystem has perfect secrecy if Pr[x|y] = Pr[x] for all x ∈ P, y ∈ C • Idea: Oscar can obtain no information about the plaintext by observing the ciphertext y sent from Alice to Bob • Ex. The plaintext is a coin toss with Pr[Head] = Pr[Tail] = 1/2. Case 1: Pr[Head|y] = Pr[Tail|y] = 1/2, so the ciphertext reveals nothing. Case 2: Pr[Head|y] = 1, Pr[Tail|y] = 0, so the ciphertext reveals the plaintext completely.
Perfect secrecy when |K|=|C|=|P| • (P,C,K,E,D) is a cryptosystem where |K|=|C|=|P|. The cryptosystem provides perfect secrecy iff • every key is used with equal probability 1/|K|, and • for every x ∈ P, y ∈ C, there is a unique key K such that eK(x) = y • Ex. One-time pad in Z2: ciphertext 111 arises from plaintext 010 with key 101, and equally from plaintext 111 with key 000 (see the sketch below).
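A minimal sketch of the one-time pad over Z2 (illustrative code; the helper names are mine, not from the text). It reproduces the example above and checks the unique-key condition: for ciphertext 111, every plaintext corresponds to exactly one key.

```python
def otp_encrypt(plaintext_bits, key_bits):
    """One-time pad over Z2: XOR each plaintext bit with the matching key bit."""
    return [p ^ k for p, k in zip(plaintext_bits, key_bits)]

# The two pairs from the slide both yield ciphertext 111.
print(otp_encrypt([0, 1, 0], [1, 0, 1]))   # [1, 1, 1]
print(otp_encrypt([1, 1, 1], [0, 0, 0]))   # [1, 1, 1]

# Unique-key condition: for y = 111, each of the 8 plaintexts x has exactly
# one key K with e_K(x) = y, namely K = x XOR y, so y rules out no plaintext.
y = [1, 1, 1]
for x in ([a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)):
    k = [xi ^ yi for xi, yi in zip(x, y)]
    assert otp_encrypt(x, k) == y
```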
Outline • Introduction • One-time pad • Elementary probability theory • Perfect secrecy • Entropy • Properties of entropy • Spurious keys and unicity distance • Product system
Preview (1) • We want to know: the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough computing time (the opponent observes the ciphertext y^n of an n-character plaintext x^n encrypted under key K)
Preview (2) • That is, we want to know: how much information about the key is revealed by the ciphertext = conditional entropy H(K|C^n) • We need the tools of entropy
Entropy (1) • Suppose we have a discrete random variable X • What is the information gained by the outcome of an experiment? • Ex. Let X represent the toss of a coin, Pr[head]=Pr[tail]=1/2 • For a coin toss, we could encode head as 1 and tail as 0, i.e., 1 bit of information
Entropy (2) • Ex. Random variable X with Pr[x1]=1/2, Pr[x2]=1/4, Pr[x3]=1/4 • The most efficient encoding is to encode x1 as 0, x2 as 10, x3 as 11 • Lower probability means more uncertainty, more information, and a longer codeword: x1 (probability 1/2) gets a 1-bit codeword, x2 and x3 (probability 1/4) get 2-bit codewords
Entropy (3) • Notice: probability 2^(-n) => n bits; in general, probability p => -log2 p bits • Ex. (cont.) The average number of bits to encode X is (1/2)(1) + (1/4)(2) + (1/4)(2) = 1.5 bits
Entropy: definition • Suppose X is a discrete random variable which takes on values from a finite set X. Then the entropy of the random variable X is defined as H(X) = -∑_{x∈X} Pr[x] log2 Pr[x]
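A small helper evaluating this definition (illustrative code, not from the text); it reproduces the 1-bit coin toss and the 1.5-bit three-outcome example above.

```python
from math import log2

def entropy(probs):
    """H(X) = -sum_x Pr[x] * log2(Pr[x]); zero-probability outcomes contribute nothing."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([1/2, 1/2]))        # coin toss: 1.0 bit
print(entropy([1/2, 1/4, 1/4]))   # three-outcome example: 1.5 bits
```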
Entropy: example • Let P={a, b}, Pr[a]=1/4, Pr[b]=3/4. K={K1, K2, K3}, Pr[K1]=1/2, Pr[K2]=Pr[K3]=1/4. C={1, 2, 3, 4} with encryption matrix: eK1(a)=1, eK1(b)=2; eK2(a)=2, eK2(b)=3; eK3(a)=3, eK3(b)=4 • H(P) ≈ 0.81, H(K) = 1.5, H(C) ≈ 1.85
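A short sketch that recomputes these values, assuming the encryption matrix written above (the variable names are mine):

```python
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

pr_p = {'a': 1/4, 'b': 3/4}
pr_k = {'K1': 1/2, 'K2': 1/4, 'K3': 1/4}
enc = {('K1', 'a'): 1, ('K1', 'b'): 2,
       ('K2', 'a'): 2, ('K2', 'b'): 3,
       ('K3', 'a'): 3, ('K3', 'b'): 4}

# Pr[y] is the total probability of all (key, plaintext) pairs mapping to y.
pr_c = {}
for (k, x), y in enc.items():
    pr_c[y] = pr_c.get(y, 0) + pr_k[k] * pr_p[x]

print(round(entropy(pr_p.values()), 2))   # H(P) ~ 0.81
print(round(entropy(pr_k.values()), 2))   # H(K) = 1.5
print(round(entropy(pr_c.values()), 2))   # H(C) ~ 1.85
```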
Properties of entropy (1) • Def: A real-valued function f is a strictly concave function on an interval I if f((x+y)/2) > (f(x)+f(y))/2 for all x, y ∈ I, x ≠ y
Properties of entropy (2) • Jensen's inequality: Suppose f is a continuous strictly concave function on I, a1, …, an are positive reals with ∑ai = 1, and x1, …, xn ∈ I. Then ∑ai f(xi) ≤ f(∑ai xi), with equality iff x1 = … = xn
Properties of entropy (3) • Theorem: X is a random variable having a probability distribution which takes on the values p1, p2, …, pn, pi > 0, 1 ≤ i ≤ n. Then H(X) ≤ log2 n, with equality iff pi = 1/n for all i • A uniform random variable has the maximum entropy
Properties of entropy (4) • Proof: By Jensen's inequality with f(x) = log2 x, H(X) = ∑ pi log2(1/pi) ≤ log2(∑ pi · (1/pi)) = log2 n. Equality holds iff the values 1/pi are all equal, i.e., pi = 1/n for all i.
Entropy of a natural language (1) • HL: average information per letter of English 1. If the 26 letters were uniformly random, the entropy would be log2 26 ≈ 4.70 2. Taking single-letter frequencies into account, H(P) ≈ 4.19
Entropy of a natural language (2) 3. However, successive letters are correlated, e.g., digrams and trigrams Q: how do we measure the entropy of two or more random variables?
Properties of entropy (5) • Def: the joint entropy H(X,Y) = -∑x ∑y Pr[x,y] log2 Pr[x,y] • Theorem: H(X,Y) ≤ H(X) + H(Y), with equality iff X and Y are independent • Proof: Let pi = Pr[X=xi], qj = Pr[Y=yj], rij = Pr[X=xi, Y=yj]. Then H(X,Y) − H(X) − H(Y) = ∑ij rij log2(pi qj / rij) ≤ log2 ∑ij pi qj = log2 1 = 0, by Jensen's inequality.
Entropy of a natural language (3) 3. Let P^n be the random variable whose probability distribution is that of all n-grams of plaintext, and take HL as the limit of H(P^n)/n. tabulation of digrams => H(P^2)/2 ≈ 3.90 tabulation of trigrams => H(P^3)/3 … tabulation of n-grams => H(P^n)/n Empirically, 1.0 ≤ HL ≤ 1.5
Entropy of a natural language (4) • The redundancy of L is defined as RL = 1 − HL / log2 |P| • Taking HL = 1.25 and log2 26 ≈ 4.70 gives RL ≈ 0.75: English is about 75% redundant! • So we can, in principle, compress English text to about one quarter of its original length
Conditional entropy • For any fixed value y of Y, the remaining uncertainty about X is H(X|y) = -∑x Pr[x|y] log2 Pr[x|y] • Conditional entropy H(X|Y) = ∑y Pr[y] H(X|y): the average amount of uncertainty about X that remains after Y is observed (i.e., the information about X not revealed by Y) • Theorem: H(X,Y) = H(Y) + H(X|Y)
Theorem about H(K|C) (1) • Let (P,C,K,E,D) be a cryptosystem. Then H(K|C) = H(K) + H(P) − H(C) • Proof: H(K,P,C) = H(C|K,P) + H(K,P). Since the key and plaintext uniquely determine the ciphertext, H(C|K,P) = 0, so H(K,P,C) = H(K,P) = H(K) + H(P) (key and plaintext are independent).
Theorem about H(K|C) (2) • We have H(K,P,C) = H(K,P) = H(K) + H(P) • Similarly, since the key and ciphertext uniquely determine the plaintext, H(P|K,C) = 0, so H(K,P,C) = H(K,C) • Now H(K|C) = H(K,C) − H(C) = H(K,P,C) − H(C) = H(K) + H(P) − H(C)
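For the earlier two-plaintext example, this gives H(K|C) ≈ 1.5 + 0.81 − 1.85 ≈ 0.46 bits. A sketch that checks this by computing H(K|C) directly from the joint distribution (same assumed encryption matrix as before):

```python
from math import log2

pr_p = {'a': 1/4, 'b': 3/4}
pr_k = {'K1': 1/2, 'K2': 1/4, 'K3': 1/4}
enc = {('K1', 'a'): 1, ('K1', 'b'): 2,
       ('K2', 'a'): 2, ('K2', 'b'): 3,
       ('K3', 'a'): 3, ('K3', 'b'): 4}

# Joint distribution Pr[K, C] and marginal Pr[C].
pr_kc, pr_c = {}, {}
for (k, x), y in enc.items():
    p = pr_k[k] * pr_p[x]
    pr_kc[(k, y)] = pr_kc.get((k, y), 0) + p
    pr_c[y] = pr_c.get(y, 0) + p

# H(K|C) = -sum_{k,y} Pr[k,y] * log2(Pr[k|y]), with Pr[k|y] = Pr[k,y] / Pr[y].
h_k_given_c = -sum(p * log2(p / pr_c[y]) for (k, y), p in pr_kc.items())
print(round(h_k_given_c, 2))   # ~0.46, matching H(K) + H(P) - H(C)
```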
Results (1) • Define P^n and C^n to be the random variables for n-character plaintext and ciphertext strings; the key K encrypts x^n into y^n. Then H(K|C^n) = H(K) + H(P^n) − H(C^n) • Setting |P| = |C|, we have H(C^n) ≤ n log2 |P| and H(P^n) ≈ n HL = n(1 − RL) log2 |P|, so H(K|C^n) ≥ H(K) − n RL log2 |P|
Spurious keys (1) • Ex. Oscar obtains ciphertext WNAJW, which was encrypted using a shift cipher • K=5 gives plaintext river • K=22 gives plaintext arena • One is the correct key, and the other is spurious • Goal: prove a bound on the expected number of spurious keys
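A brute-force sketch of this example (illustrative code, not from the text): trying all 26 shift keys against WNAJW surfaces the two meaningful decryptions.

```python
def shift_decrypt(ciphertext, key):
    """Shift cipher over A-Z: subtract the key from each letter mod 26."""
    return ''.join(chr((ord(c) - ord('A') - key) % 26 + ord('A')) for c in ciphertext)

for key in range(26):
    candidate = shift_decrypt('WNAJW', key)
    if candidate in ('RIVER', 'ARENA'):   # the only English-looking candidates
        print(key, candidate)             # prints: 5 RIVER and 22 ARENA
```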
Spurious keys (2) • Given y ∈ C^n, define K(y) = {K : ∃ x ∈ P^n with Pr[x] > 0 and eK(x) = y}, the set of possible keys • The number of spurious keys is |K(y)| − 1 • The average number of spurious keys is s̄n = ∑y Pr[y] (|K(y)| − 1) = ∑y Pr[y] |K(y)| − 1
Relate H(K|C^n) to spurious keys (1) • By definition, H(K|C^n) = ∑y Pr[y] H(K|y) ≤ ∑y Pr[y] log2 |K(y)| ≤ log2 ∑y Pr[y] |K(y)| = log2(s̄n + 1), using Jensen's inequality
Relate H(K|C^n) to spurious keys (2) • We have derived H(K|C^n) ≥ H(K) − n RL log2 |P| • So log2(s̄n + 1) ≥ H(K) − n RL log2 |P|
Relate H(K|C^n) to spurious keys (3) • Theorem: Suppose |C| = |P| and keys are chosen equiprobably. Then the expected number of spurious keys satisfies s̄n ≥ |K| / |P|^(n RL) − 1 • As n increases, the term |K| / |P|^(n RL) tends to 0
Relate H(K|C^n) to spurious keys (4) • Setting s̄n = 0 and solving for n gives the unicity distance n0 ≈ log2 |K| / (RL log2 |P|): the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough computing time • For the substitution cipher, |P| = |C| = 26 and |K| = 26!
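Plugging in the numbers for the substitution cipher (a quick sketch, assuming RL ≈ 0.75 as above):

```python
from math import log2, factorial

key_bits = log2(factorial(26))            # log2|K| for the substitution cipher
redundancy = 0.75                         # R_L for English (as assumed above)
n0 = key_bits / (redundancy * log2(26))   # unicity distance
print(round(key_bits, 1), round(n0, 1))   # ~88.4 bits of key, n0 ~ 25 characters
```

So roughly 25 ciphertext characters suffice, on average, for a computationally unbounded opponent to pin down a substitution-cipher key.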
Product cryptosystem • S1 = (P,P,K1,E1,D1), S2 = (P,P,K2,E2,D2) • The product of the two cryptosystems is S1 × S2 = (P, P, K1 × K2, E, D) • Encryption: e(K1,K2)(x) = eK2(eK1(x)) • Decryption: d(K1,K2)(y) = dK1(dK2(y))
Product cryptosystem (cont.) • Two cryptosystems M and S commute if M × S = S × M • Idempotent cryptosystem: S^2 = S • Ex. the shift cipher (see the sketch below) • If a cryptosystem is not idempotent, then there is a potential increase in security by iterating it several times
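A quick sketch of why the shift cipher is idempotent (illustrative code): two successive shifts collapse into a single shift with key (k1 + k2) mod 26, so iterating the cipher adds nothing.

```python
def shift_encrypt(plaintext, key):
    """Shift cipher over A-Z: add the key to each letter mod 26."""
    return ''.join(chr((ord(c) - ord('A') + key) % 26 + ord('A')) for c in plaintext)

x, k1, k2 = 'RIVER', 5, 9
double = shift_encrypt(shift_encrypt(x, k1), k2)
single = shift_encrypt(x, (k1 + k2) % 26)
assert double == single     # S x S is again just the shift cipher: S^2 = S
print(double)
```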
How to find non-idempotent cryptosystems? • Thm: If S and M are both idempotent and they commute, then S × M is also idempotent: (S × M) × (S × M) = S × (M × S) × M = S × (S × M) × M = (S × S) × (M × M) = S × M • Idea: find simple S and M such that they do not commute; then S × M is possibly non-idempotent