Ming Li University of Waterloo Information Distance from a Question to an Answer
An example of chain letters • Charles H. Bennett collected 33 copies between 1980 and 1997 and wanted to know their evolutionary history. • They have reached billions of people. • They are about 2000 characters long and mutate as they are copied. • Traditional phylogeny methods fail: • Can’t do multiple alignment due to translocations • No models of evolution • They are not alone: programs, music scores, genomes ...
A sample letter:
A very faded letter reveals the evolutionary path: ((copy)*mutate)*
An example of Question and Answer • How do we measure information distance between two concepts? • Or between a question and an answer?
Information Distance Bennett, Gacs, Li, Vitanyi, Zurek, STOC’93; Li et al: Bioinformatics, 17:2(2001), 149-154; Li et al, IEEE Trans. Info. Theory, 2004 • In the classical Newtonian world, we use length to measure distance: 10 miles, 2 km. • In the modern information world, what measure do we use for the distance between • Two documents? • Two genomes? • Two computer viruses? • Two junk emails? • Two (possibly plagiarized) programs? • Two pictures? • Two internet homepages? • A question and an answer?
A general theory must satisfy: • Application independent • Dominate all other theories • Useful in practice.
Outline • A theory of information distance • How to approximate information distance • Applications, including a question and answer system
The classical approaches do not work • None of the distances we know (Euclidean distance, Hamming distance, edit distance) is proper: for example, they do not reflect our intuition about pairs such as Austria and Byelorussia. • But from where shall we start? • We will start from first principles of physics, make no further assumptions, and derive a general theory of information distance.
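For concreteness, here is a minimal sketch of the classical edit (Levenshtein) distance the slide dismisses; the function name `edit_distance` is our own choice, not from the talk. It counts single-character insertions, deletions, and substitutions, which is exactly the notion that fails once texts undergo translocations.

```python
# Minimal Levenshtein edit distance (one of the classical distances
# the slide argues is insufficient for chain letters or genomes).
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]
```

A block move of a whole paragraph costs this measure as much as rewriting it character by character, which is why it cannot align translocated chain-letter text.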
Thermodynamics of Computing • Physical law: 1kT is needed to irreversibly process 1 bit (von Neumann, Landauer). • Reversible computation is free. [Figures: a computation box with input, output, and heat dissipation; a billiard-ball computer taking inputs A and B and producing A AND B, A AND NOT B, B AND NOT A reversibly]
Ultimate thermodynamic cost of erasing x: • “Reversibly compress” x to x*, the shortest description of x. • Then erase |x*| bits. • The longer you compute, the less heat dissipation. • Define the cost of converting between x and y: E(x,y) = min { |p| : U(x,p) = y, U(y,p) = x }. Fundamental Theorem: E(x,y) = max{ K(x|y), K(y|x) } Bennett, Gacs, Li, Vitanyi, Zurek STOC’93
Kolmogorov complexity • K(x)= length of the shortest program that outputs x • K(x|y)=length of shortest program for x given y. • Examples: • 1n has short program: for i=1 to n, print “1”. • A “random” x has long program: print “x”.
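K(x) is uncomputable, but any real compressor gives a computable upper bound on it, which is how the rest of the talk puts the theory to work. A minimal sketch using Python's zlib (our own proxy, not the talk's GenCompress); the conditional estimate via K(yx) − K(y) is a crude stand-in justified by the symmetry of information:

```python
import random
import zlib

def K_approx(x: bytes) -> int:
    # Upper bound on K(x), in bits: the length of a zlib encoding.
    # K itself is uncomputable; a compressor only ever bounds it from above.
    return 8 * len(zlib.compress(x, 9))

def K_cond_approx(x: bytes, y: bytes) -> int:
    # Crude proxy for K(x|y): the extra cost of compressing x after y.
    return max(K_approx(y + x) - K_approx(y), 0)

# The slide's two examples: a highly regular string vs. a "random" one.
regular = b"1" * 1000
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(1000))
```

As the slide's examples predict, `regular` compresses to a short program-like encoding while `noisy` barely compresses at all, and K(noisy | noisy) comes out near zero.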
E(x,y) ≤ max{C(x|y), C(y|x)}. Proof. Let k1 = C(x|y), k2 = C(y|x), assuming k1 ≤ k2. Define the bipartite graph G = (X U Y, E) • where X = {0,1}*x{0} • and Y = {0,1}*x{1} • E = {{u,v}: u in X, v in Y, C(u|v) ≤ k1, C(v|u) ≤ k2} X: ● ● ● ● ● ● … (degree ≤ 2^{k2+1}) Y: ○ ○ ○ ○ ○ ○ … (degree ≤ 2^{k1+1}) • We can partition E into at most 2^{k2+2} matchings M1, M2, …: • Each node u in X has at most 2^{k2+1} edges, and each node v in Y has at most 2^{k1+1} edges, so an edge (u,v) conflicts with fewer than 2^{k2+1} + 2^{k1+1} ≤ 2^{k2+2} other edges and can always be put in an unused matching. • Program P: has k2 and the index i such that Mi contains edge (x,y) • Generate Mi (by enumeration) • From Mi and x, recover y; from Mi and y, recover x. QED
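The matching-partition step of the proof can be demonstrated greedily in code. This is an illustrative sketch of the combinatorial fact only (not the actual program P): in a bipartite graph with left degrees ≤ d1 and right degrees ≤ d2, giving each edge the first color unused at both endpoints needs at most d1 + d2 − 1 colors, and every color class is a matching.

```python
# Greedy edge coloring: each color class forms a matching by construction,
# since a color is never reused at either endpoint of an edge.
def partition_into_matchings(edges):
    used_left, used_right = {}, {}
    color = {}
    for u, v in edges:
        c = 0
        while c in used_left.get(u, set()) or c in used_right.get(v, set()):
            c += 1                      # first color free at both endpoints
        color[(u, v)] = c
        used_left.setdefault(u, set()).add(c)
        used_right.setdefault(v, set()).add(c)
    return color
```

In the proof, d1 = 2^{k1+1} and d2 = 2^{k2+1}, so roughly 2^{k2+2} matchings suffice, and naming the matching containing edge (x,y) costs about k2 bits.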
Normalized Information Distance: d(x,y) = max{K(x|y), K(y|x)} / max{K(x), K(y)} First proposed, in slightly different form, in Li et al: Bioinformatics, 17:2(2001), 149-154. In this form in: Li et al, IEEE Trans. Info. Theory, 2004.
d(x,y) Properties • Theorem. d(x,y) is a nontrivial distance. It is symmetric, satisfies triangle inequality, etc. • Theorem. d(x,y) is universal: if x and y are “close” in any sense, then they are “close” under d(x,y). That is, for any reasonable computable distance D, there exists constant c, for all x,y, d(x,y) ≤ D(x,y) + c
For any computable D, for all x,y: d(x,y) ≤ D(x,y) + c. Proof ideas: Naively, by the density assumption |{y : |y| = n and D(x,y) ≤ d}| ≤ 2^{dn}, we have K(x|y), K(y|x) ≤ nD(x,y). So d(x,y) = max{K(x|y), K(y|x)} / max{K(x), K(y)} ≤ nD(x,y) / max{K(x), K(y)}. (1) Then we are stuck: this works only if K(x) or K(y) = n. To solve this, we first prove the Lemma: there exist shortest programs x* for x and y* for y such that K(x|y) ≤ K(x*|y*). Now, since max{K(x), K(y)} = max{|x*|, |y*|}, d(x,y) = max{K(x|y), K(y|x)} / max{K(x), K(y)} ≤ max{K(x*|y*), K(y*|x*)} / max{K(x), K(y)} ≤ max{|x*|, |y*|}·D(x,y) / max{|x*|, |y*|} ≤ D(x,y) + c.
Application 1: Reconstructing the History of Chain Letters • For each pair of chain letters (x,y) we computed d(x,y) by GenCompress, giving a distance matrix. • We then used a standard phylogeny program to construct their evolutionary history from the d(x,y) distance matrix. • The resulting tree is a perfect phylogeny: distinct features are all grouped together. Bennett, M. Li and B. Ma, Chain letters and evolutionary histories. Scientific American, 288:6(June 2003) (feature article), 76-81.
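The pipeline above can be sketched end to end on toy data. This is an assumption-laden miniature: zlib stands in for GenCompress, the NCD formula of Cilibrasi and Vitanyi stands in for d(x,y), and the "letters" are synthetic strings we invent here, not Bennett's corpus.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: a practical stand-in for d(x,y),
    # with zlib playing the role of GenCompress.
    cx, cy = len(zlib.compress(x, 9)), len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy "letters": two mutated copies of a common ancestor, one unrelated text.
ancestor = b"with love all things are possible. send twenty copies. " * 20
copy_a = ancestor.replace(b"love", b"luck")
copy_b = ancestor.replace(b"twenty", b"ninety")
unrelated = b"the quick brown fox jumps over the lazy dog. " * 25

letters = [copy_a, copy_b, unrelated]
matrix = {(i, j): ncd(letters[i], letters[j])
          for i in range(3) for j in range(3)}
```

A phylogeny program fed this matrix would group the two mutated copies together, since their pairwise distance is far smaller than either's distance to the unrelated text.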
with love all things are possible this paper has been sent to you for good luck. the original is in new england. it has been around the world nine times. the luck has been sent to you. you will receive good luck within four days of receiving this letter. provided, in turn, you send it on. this is no joke. you will receive good luck in the mail. send no money. send copies to people you think need good luck. do not send money as faith has no price. do not keep this letter. It must leave your hands within 96 hours. an r.a.f. (royal air force) officer received $470,000. joe elliot received $40,000 and lost them because he broke the chain. while in the philippines, george welch lost his wife 51 days after he received the letter. however before her death he received $7,755,000. please, send twenty copies and see what happens in four days. the chain comes from venezuela and was written by saul anthony de grou, a missionary from south america. since this letter must tour the world, you must make twenty copies and send them to friends and associates. after a few days you will get a surprise. this is true even if you are not superstitious. do note the following: constantine dias received the chain in 1953. he asked his secretary to make twenty copies and send them. a few days later, he won a lottery of two million dollars. carlo daddit, an office employee, received the letter and forgot it had to leave his hands within 96 hours. he lost his job. later, after finding the letter again, he mailed twenty copies; a few days later he got a better job. dalan fairchild received the letter, and not believing, threw the letter away, nine days later he died. in 1987, the letter was received by a young woman in california, it was very faded and barely readable. she promised herself she would retype the letter and send it on, but she put it aside to do it later. she was plagued with various problems including expensive car repairs, the letter did not leave her hands in 96 hours. 
she finally typed the letter as promised and got a new car. remember, send no money. do not ignore this. it works. st. jude
Phylogeny of 33 Chain Letters. Confirmed by VanArsdale’s study; answers an open question.
Application 2: Evolution of Species. Li et al: Bioinformatics, 17:2(2001) • Traditional methods work on a single gene: • Max. likelihood: multiple alignment, assumes statistical evolutionary models, computes the most likely tree. • Max. parsimony: multiple alignment, then finds the best tree, minimizing cost. • Distance-based methods: multiple alignment, then NJ; quartet methods; Fitch-Margoliash method. • Problems: different genes give different trees, alignment is manual, horizontally transferred genes confound the analysis, and genome-level events are not handled.
Whole Genome Phylogeny • Many complete genomes sequenced (400 eukaryote projects). • No evolutionary models • Multiple alignment not possible • Single-gene trees often give conflicting results. • Snel, Bork, Huynen: compare gene contents. Boore, Brown: gene order. Sankoff, Pevzner, Kececioglu: reversal/translocation. • All of the above are either too simplistic or NP-hard and need approximation anyway. • Our method using shared information is robust: • Uses all the information in the genome. • No need of an evolutionary model – universal. • No need of alignment • Special cases: gene contents, gene order, reversal/translocation
Eutherian Orders: • It has been a disputed issue which two of the three groups of placental mammals are closest: Primates, Ferungulates, Rodents. • In mtDNA, 6 proteins say primates are closer to ferungulates; 6 proteins say primates are closer to rodents. • Hasegawa’s group concatenated 12 mtDNA proteins from: rat, house mouse, grey seal, harbor seal, cat, white rhino, horse, finback whale, blue whale, cow, gibbon, gorilla, human, chimpanzee, pygmy chimpanzee, orangutan, sumatran orangutan, with opossum, wallaroo, platypus as outgroup (1998), using the max likelihood method in MOLPHY.
Eutherian Orders ... • We use the complete mtDNA genomes of exactly the same species. • We computed d(x,y) for each pair of species, and used Neighbor Joining in the MOLPHY package (and our own Hypercleaning software). • We constructed exactly the same tree, confirming that Primates and Ferungulates are closer to each other than either is to Rodents.
Evolutionary Tree of Mammals: Li et al: Bioinformatics, 17:2(2001)
Application 3. Parameter-free data mining. Perils of parameter-laden data mining algorithms: • Incorrect settings miss true patterns • Too much tuning leads to overfitting --- excellent performance on one dataset, fails badly on new but similar datasets. • Parameters impose presumptions on the data
Keogh, Lonardi, Ratanamahatana, KDD’04 • Time series clustering • Compared against 51 different parameter-laden measures from SIGKDD, SIGMOD, ICDM, ICDE, SSDB, VLDB, PKDD, PAKDD, the simple parameter-free shared information method outperformed all --- including HMM, dynamic time warping, etc.
Application 4: “Uncheatable” Plagiarism Test X. Chen, B. Francia, M. Li, B. McKinnon, A. Seker. IEEE Trans. Information Theory, 50:7(July 2004), 1545-1550. • The shared information measure also works for checking student program assignments. We have implemented the system SID. • Our system takes input on the web, strips user comments, and unifies variables. Unlike other programs, we openly advertise our method: we check the shared information between each pair. It is uncheatable because it is universal. • Available at http://genome.math.uwaterloo.ca/SID
Application 5: A language tree created using the UN’s Universal Declaration of Human Rights, by three Italian physicists, in Phys. Rev. Lett. & New Scientist
Application 6: Classifying Music • By Rudi Cilibrasi, Paul Vitanyi, Ronald de Wolf, reported in New Scientist, April 2003. • They took 12 Jazz, 12 classical, 12 rock music scores. Classified well. • Potential application in identifying authorship. • The technique's elegance lies in the fact that it is tone deaf. Rather than looking for features such as common rhythms or harmonies, says Vitanyi, "it simply compresses the files obliviously."
Other applications • C. Ane and M.J. Sanderson: Phylogenetic reconstruction • K. Emanuel, S. Ravela, E. Vivant, C. Risi: Hurricane risk assessment • Protein sequence classification • Fetal heart rate detection • Ortholog detection • Authorship, topic, domain identification • Worms and network traffic analysis
Question & Answer • Practical concerns: • Only partial matches are available, and these often do not satisfy the triangle inequality. For example, if x is very popular and y is not, then K(x|y) overwhelms K(y|x). • Nothing to compress: a question and an answer are both short. • Neighborhood density: some answers are much more popular than others.
The Min Theory • E_min(x,y) = the length of the smallest program p needed to convert between x and y, while keeping irrelevant information out of p. • Fundamental Theorem II: E_min(x,y) = min {K(x|y), K(y|x)}. All other development is similar to E(x,y). Define d_min(x,y) = min {K(x|y), K(y|x)} / min {K(x), K(y)}.
How to approximate d(x,y) (max and min versions) • Each term K(x|y) may be approximated by one of the following: • (a) Compression. • (b) Shannon-Fano code (Cilibrasi, Vitanyi): an object with probability p may be encoded by −log p + 1 bits. • (c) Mixed usage of (a) and (b) – in the question and answer application.
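Option (a) can be sketched directly for both versions of the distance. This is a minimal sketch under stated assumptions: zlib is our stand-in compressor, and C(y+x) − C(y) is a crude proxy for K(x|y); the function names are ours.

```python
import zlib

def C(x: bytes) -> int:
    # Compressed length: our computable stand-in for K.
    return len(zlib.compress(x, 9))

def K_cond(x: bytes, y: bytes) -> int:
    # Proxy for K(x|y): the extra cost of compressing x after y.
    return max(C(y + x) - C(y), 0)

def d_max(x: bytes, y: bytes) -> float:
    # Approximates d(x,y) = max{K(x|y), K(y|x)} / max{K(x), K(y)}.
    return max(K_cond(x, y), K_cond(y, x)) / max(C(x), C(y))

def d_min(x: bytes, y: bytes) -> float:
    # Approximates d_min(x,y) = min{K(x|y), K(y|x)} / min{K(x), K(y)}.
    return min(K_cond(x, y), K_cond(y, x)) / min(C(x), C(y))
```

Both measures come out near 0 for closely related strings and near 1 for unrelated ones; option (b) would replace C with −log p + 1 from occurrence statistics, and (c) mixes the two.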
Query-Answer System X. Zhang, Y. Hao, X. Zhu, M. Li • Adding conditions to normalized information distance, we built a Query-Answer system. • The information distance naturally measures • Good pattern matches – via compression • Frequently occurring items – via Shannon-Fano code • Mixed usage of the above two.
Summary • A robust method that works when there is no clear data model: English text, music, genomes. • A quick, primitive, and dirty way that (almost) always works when other methods don’t. • A solid theory behind it.
Collaborators & Credits: • Chain letters: C. Bennett, B. Ma • GenCompress: X. Chen, S. Kwong • DNACompress: X. Chen, B. Ma, J. Tromp • Tree programs: Jiang, Kearney, Zhang • Biological experiments: J. Badger • Plagiarism, SID: X. Chen, B. McKinnon, A. Seker • Question and Answer: X. Zhang, Y. Hao, X. Zhu