Thomas Swift Week 4
Directed Information
• DI(X^N → Y^N) = ∑ I(X^i ; Y_i | Y^{i−1})
• Cumulative reduction in uncertainty of frame Y_i when the past frames Y^{i−1} of Y are supplemented by information about the past and present frames X^i of X.
• I(X ; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X) is the mutual information of X and Y.
• I(X ; Y | Z) = H(X|Z) − H(X|Y,Z) = H(X,Z) + H(Y,Z) − H(X,Y,Z) − H(Z) is the conditional mutual information of X and Y given Z.
• H(X) = −∑ p(x) log(p(x)) is the entropy of X.
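The definitions above translate directly into code when the probabilities are estimated by empirical frequencies. A minimal plug-in sketch (the function names are mine, not from the slides):

```python
from collections import Counter
from math import log2

def entropy(samples):
    """H(X) = -sum p(x) log2 p(x), with p estimated by frequency."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def joint_entropy(*seqs):
    """H(X1,...,Xk): entropy of the tuples formed position by position."""
    return entropy(list(zip(*seqs)))

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), equivalent to H(X) - H(X|Y)."""
    return entropy(list(xs)) + entropy(list(ys)) - joint_entropy(xs, ys)

def cond_mutual_info(xs, ys, zs):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)."""
    return (joint_entropy(xs, zs) + joint_entropy(ys, zs)
            - joint_entropy(xs, ys, zs) - entropy(list(zs)))
```

Sanity checks against the identities: I(X;X) = H(X), and I(X;Y|Z) = 0 when X and Y are conditionally independent given Z.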
Direct Implementation
• Assume pre-processing of the video features is already done; calculate DI for two vectors of numbers.
• Implement entropy, conditional entropy, and joint entropy.
• Implement conditional mutual information: I(X ; Y | Z) = H(X,Z) + H(Y,Z) − H(X,Y,Z) − H(Z).
• Extend joint entropy from 2 to 3 variables.
• Implement DI from CMI: DI(X^N → Y^N) = ∑ I(X^i ; Y_i | Y^{i−1}).
• Problem: how to handle vectors of differing lengths? Each term pairs X^i (i elements) with Y^{i−1} (i−1 elements).
• Alternate formula: DI(X^N → Y^N) = ∑ (H(Y^i) − H(Y^{i−1}) + H(X^i, Y^{i−1}) − H(X^i, Y^i)).
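One way around the varying-length problem is a first-order Markov simplification: approximate each term I(X^i ; Y_i | Y^{i−1}) by I(X_i ; Y_i | Y_{i−1}) and estimate a single conditional mutual information from the pooled triples (x_t, y_t, y_{t−1}). This is a sketch of that simplified estimator under my own assumption, not the implementation described on the slide:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Plug-in entropy of a list of (hashable) samples."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def cond_mutual_info(xs, ys, zs):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z), plug-in estimate."""
    return (entropy(list(zip(xs, zs))) + entropy(list(zip(ys, zs)))
            - entropy(list(zip(xs, ys, zs))) - entropy(list(zs)))

def di_markov1(x, y):
    """DI(X^N -> Y^N) ~ sum_i I(X_i ; Y_i | Y_{i-1}): under the Markov
    simplification every term is the same pooled CMI estimate, so we
    multiply one estimate by the number of terms."""
    xs = x[1:]    # present X_i
    ys = y[1:]    # present Y_i
    zs = y[:-1]   # past Y_{i-1}
    return len(xs) * cond_mutual_info(xs, ys, zs)
```

As expected for a directed quantity, a constant (uninformative) Y gives an estimate of 0, while Y identical to X gives a positive estimate.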
Preliminary Tests and Results
• X = [1 2 2 2 2 1 0 2 1 0], Y = [1 3 2 2 2 1 0 2 1 0]
• DI(X,Y) = 3.942488 (9 matches)
• Y' = [1 3 5 2 2 1 0 2 1 0] (3rd element now differs)
• DI(X,Y') = 3.554743 (8 matches)
• X' = [1 2 6 2 2 1 0 2 1 0] (3rd element differs)
• DI(X',Y) = 3.230232 (8 matches)
• A = [2 5 6 3 2 2 1 9 5 3], B = [7 6 7 3 8 9 3 7 2 0]
• DI(A,B) = 2.646439 (1 match)
• C = [37 41 2 9 11 98 56 23 30 11]
• DI(A,C) = 3.121928; why?? (0 matches)
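One hedged observation on the puzzling last result: 3.121928 is exactly the plug-in entropy of C's ten values (11 appears twice, the other eight values once each), which suggests the estimate may be collapsing to the entropy of the second sequence alone when there are no matches. A quick check, reusing a frequency-based entropy helper (my own, not from the slides):

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Plug-in entropy: H = -sum p(x) log2 p(x) with empirical p."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

C = [37, 41, 2, 9, 11, 98, 56, 23, 30, 11]
print(round(entropy(C), 6))  # → 3.121928, matching DI(A,C)
```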
What's Next?
• Refine the current direct implementation of DI.
• Research/implement other estimators of DI:
• Universal estimation
• Estimation from the neural-spikes paper