Without mathematics we cannot penetrate deeply into philosophy. Without philosophy we cannot penetrate deeply into mathematics. Without both we cannot penetrate deeply into anything. --- Leibniz
Understanding Algorithmic Information Theory Through Gregory Chaitin's Perspective Miguel Aguilar CS 4890 Final Project Spring 2013
Some of the most important work of Gregory Chaitin will be explored, and his work will be used to show how we can redefine both Information Theory and Algorithmic Information Theory. The topics covered include: Chaitin's Constant, Chaitin's Algorithm, and Chaitin's Incompleteness Theorem.
Who is Gregory Chaitin? • Gregory Chaitin is a mathematician and computer scientist. • The main focus of his work has been advancing our understanding of randomness. • Chaitin has significantly extended the work of both Gödel and Turing.
Another perspective on Chaitin's Work • Gregory Chaitin is more of a philosopher than a mathematician or a programmer. • He aims for a kind of metaphysical commitment that is rarely seen in pure mathematics. • He wrote and published a book that bundles his proofs as runnable LISP code.
Defining Chaitin's Halting Probability: • Consider the programs on a universal Turing machine U that require no input. U is taken to be prefix-free (self-delimiting), meaning no valid program is a prefix of another; this is what keeps the halting probability between 0 and 1. • We define P as the set of all programs on U that halt. • We let |p| be the length, in bits, of the string encoding a program p (any element of P).
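Putting these definitions together, the halting probability is the following sum (this formula does not appear on the slide, but it is Chaitin's standard definition):

```latex
\Omega = \sum_{p \in P} 2^{-|p|}
```

Each halting program p contributes 2^{-|p|}, the chance that |p| fair coin flips produce exactly the bit string p; because U is prefix-free, Kraft's inequality guarantees the sum converges to a value between 0 and 1.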
A Simple Explanation • Instead of looking at one program, we put all possible programs in a box and blindly pick one. • What is the probability that the chosen program halts? This probability is a definable real number. • Chaitin expresses this number as an infinite binary expansion, a base-2 notation well suited to information theory. • Chaitin also relates this construction to Borel's paradoxes of definable real numbers.
The result is an infinite binary number between 0 and 1 in which each bit behaves like an independent fair coin toss: either 1 or 0. • Whatever this real number turns out to be, it has a definite value. • Written in binary notation, this number is Chaitin's Constant Ω.
Facts about Ω • Ω is not only irrational but also an uncomputable real number. • No algorithm exists that can compute its digits. • Knowing all the even bits of Ω wouldn't help us get any of the odd bits of Ω! • Knowing the first million bits of Ω wouldn't help us get the million-and-first bit of Ω!
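Although Ω itself is uncomputable, it can be approximated from below: enumerate programs and add 2^{-|p|} for each one observed to halt. The sketch below illustrates the idea with a purely hypothetical toy "machine" whose halting rule is decidable (a program halts iff its bit string contains "11"); for the real Ω the halting problem is undecidable, so no such `halts` function can exist.

```python
from fractions import Fraction
from itertools import product

def halts(program: str) -> bool:
    # Toy halting rule, purely illustrative: this "machine" halts
    # iff the bit string contains "11".  For the real Omega, halting
    # is undecidable and no such function can be written.
    return "11" in program

def omega_lower_bound(max_len: int) -> Fraction:
    # Sum 2^-|p| over all halting programs of length <= max_len,
    # giving a monotonically growing lower bound on the toy "Omega".
    total = Fraction(0)
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if halts(p):
                total += Fraction(1, 2**n)
    return total
```

Note that this toy language is not prefix-free, so the running sum eventually exceeds 1 (at max_len = 4 it reaches 9/8); this is exactly why Chaitin's definition requires a self-delimiting machine.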
Chaitin's Algorithm: Graph-Coloring Register Allocation • A variable is live at a point in the program if it has been assigned a value that may be used later. • There is one node in the interference graph for each variable. • There is an edge between two nodes if the corresponding variables are live at the same point in the program.
The complexity of the Chaitin-Briggs algorithm is O(n²) because of the problem of spilling: Spill is invoked whenever Simplify fails to remove all nodes from the graph. • When the graph has no edges, the cost of a single iteration is only O(n).
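The simplify/select phases described above can be sketched as follows. This is an illustrative toy, not the full Chaitin-Briggs allocator (no coalescing, no rematerialization), and the function name `color_graph` and its spill heuristic are my own choices:

```python
def color_graph(nodes, edges, k):
    """Toy Chaitin-style simplify/select register-allocation sketch."""
    # Build the interference graph: an edge (u, v) means the
    # variables u and v are live at the same point.
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    # Simplify: repeatedly push a node with degree < k onto a stack.
    work = {n: set(adj[n]) for n in adj}
    stack = []
    while work:
        node = next((n for n in work if len(work[n]) < k), None)
        if node is None:
            # No low-degree node left: pick a spill candidate (here
            # simply the highest-degree node) and push it anyway.
            node = max(work, key=lambda n: len(work[n]))
        stack.append(node)
        for m in work[node]:
            work[m].discard(node)
        del work[node]

    # Select: pop nodes and assign the lowest colour (register)
    # not already used by a coloured neighbour.
    colours, spilled = {}, []
    while stack:
        n = stack.pop()
        used = {colours[m] for m in adj[n] if m in colours}
        free = [c for c in range(k) if c not in used]
        if free:
            colours[n] = free[0]
        else:
            spilled.append(n)  # would be rewritten with loads/stores
    return colours, spilled
```

For example, a chain of interferences a-b, b-c can be coloured with k = 2 registers, while a triangle of three mutually live variables forces one spill.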
Chaitin's Incompleteness Theorem • There exists a constant L (which depends only on the particular axiomatic system and the choice of description language) such that no statement of the form "K(s) ≥ L" (asserting that the Kolmogorov complexity of a specific string s is at least L) can be proved within the system.
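Restated in symbols (not on the slide, but the standard formulation): for a consistent, sufficiently strong formal theory T,

```latex
\exists L \;\; \forall s : \quad T \nvdash \; K(s) \geq L
```

where K(s) is the Kolmogorov complexity of the string s, i.e. the length of the shortest program that prints s.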
Surprise Test Paradox • A teacher tells a group of students that they will have a test on one weekday in the following week and that the test will be a surprise. • The students reason: the test can't be on Friday, the last day, because if it hadn't happened by Thursday everyone would expect it on Friday, so it would be no surprise. • With Friday eliminated, Thursday becomes the last possible day, so the same argument rules out Thursday, then Wednesday, and so on. • The students conclude that the test will NEVER happen. The next week, on Monday, the teacher gives a surprise test. Everything the teacher said came true.
The Paradox and Chaitin's Theorem • Chaitin's theorem says there is some length L such that we cannot prove that any particular string of bits needs a program longer than L to print it out. At least, this is so if math is consistent; if it's not consistent, you can prove anything! • On the other hand, there are only finitely many programs of length at most L, so they can print only finitely many numbers. Take a list of more numbers than that, say 1, 2, ..., N: at least one of them needs a program longer than L.
Assume exactly one such number exists: then we can go through all programs of length at most L, find those that print the other numbers on our list, and thus, by a process of elimination, find the number we are looking for. • But that means we've proved that this particular number can only be computed by a program of length > L, and Chaitin's theorem says that's impossible! At least, not if math is consistent.
So there can't be just one such number. At least, not if math is consistent. • We can keep playing this game, ruling out two, three, and so on, all the way up to N, until all possibilities are gone and we reach a contradiction. At least, if math is consistent. • Uncertainty: so if we could prove math is consistent, we'd know it's not!
A Brief Conclusion: • Ω is a real number with maximal information content. Each bit of Ω is a complete surprise! • Ω is maximally unknowable! Maximum entropy, maximum disorder! • We could measure the progress of mathematics by how many bits of Ω we can currently determine!
In the end it seems that we need both mathematics and philosophy to understand any complex problem, and to understand our world more deeply. In the end it seems Leibniz was correct:
Without mathematics we cannot penetrate deeply into philosophy. Without philosophy we cannot penetrate deeply into mathematics. Without both we cannot penetrate deeply into anything. --- Leibniz