Giorgi Japaridze
Strong alternatives to weak arithmetics
Wormshop, Moscow, 2017
1 Computability logic (CoL): a formal theory of computability in the same sense as classical logic is a formal theory of truth.

Classical logic:
• Central semantical concept: truth
• Formulas represent statements
• Provides a systematic answer to:
  1) Is P (always) true?
  2) Does the truth of P (always) follow from the truth of Q?

Computability logic:
• Central semantical concept: computability
• Formulas represent computational problems
• Provides a systematic answer to:
  1) Is P (always) computable?
  2) Does the computability of P (always) follow from the computability of Q?
  3) How to (always) compute P?
  4) How to (always) construct an algorithm for P from an algorithm for Q?
2 Classical statements --- special cases of computational problems
Classical truth --- a special case of computability
Classical logic --- a conservative fragment of computability logic
[Diagram: computability logic containing classical logic, “intuitionistic logic” and “linear logic” as fragments.]
3 Computational problem = game between the Machine and the Environment
Computability = winnability by the Machine
Formal definitions omitted (but they sure do exist!).
4 [Figure: an arbitrary game tree, with edges labeled by moves a, b, c, d, e, f, k, ...]
There are no procedural rules whatsoever. That is, there are no regulations governing the order in which the players can or should move!
5 Traditional computational problems as games
Computational problems in the traditional sense are nothing but functions (to be computed). Such problems can be seen as the following types of depth-2 games:
[Figure: a depth-2 game tree. The root (green); below it, the Environment's input moves 0, 1, 2, ... (red); below each input, the Machine's output moves 0, 1, 2, 3, ..., with exactly one green node per group.]
• Why is the root green? It corresponds to the situation where there was no input. The machine has nothing to answer for, so it wins.
• Why are the 2nd-level nodes red? They correspond to situations where there was an input but no output was generated by the machine. So the machine loses.
• Why does each group of 3rd-level nodes have exactly one green node? Because a function has exactly one (“correct”) value for each argument.
• What particular function is this game about? The successor function: f(n)=n+1.
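Below is a minimal Python sketch (my illustration, not part of the slides) of this depth-2 game for the successor function: an empty run is won by the Machine, a run with an input but no output is lost by it, and an answered input is won exactly when the answer is n+1. The run format (a list of ("env"/"machine", move) pairs) is an assumption of the sketch, not CoL's formal definition of runs.

```python
# Illustrative only: a depth-2 game for computing f(n) = n + 1,
# mimicking the win conditions described on the slide above.

def successor_game(run):
    """run: list of moves, e.g. [("env", 3), ("machine", 4)].
    Returns True iff the Machine wins the play."""
    env_input = None
    machine_output = None
    for player, move in run:
        if player == "env" and env_input is None:
            env_input = move            # Environment provides the input
        elif player == "machine" and env_input is not None:
            machine_output = move       # Machine provides the output
    if env_input is None:
        return True                     # no input: the Machine has nothing to answer for, so it wins
    if machine_output is None:
        return False                    # input but no output: the Machine loses
    return machine_output == env_input + 1  # exactly one correct ("green") output

# Example plays:
print(successor_game([]))                            # True
print(successor_game([("env", 3)]))                  # False
print(successor_game([("env", 3), ("machine", 4)]))  # True
```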
6 Elementary game = a game without any moves.
Classical propositions (predicates) = elementary games: “2+2=4”, “Snow is white”, ⊤; “2+2=5”, “Snow is black”, ⊥.
Classical logic = the elementary fragment of computability logic.
7 Choice conjunction: A0 ⊓ A1 --- the Environment chooses one of the two components (0 or 1), and the play continues as the chosen Ai.
Choice disjunction: A0 ⊔ A1 --- the same, except that the choice is made by the Machine.
Choice universal quantifier: ⊓xA(x) = A(0) ⊓ A(1) ⊓ A(2) ⊓ ...
Choice existential quantifier: ⊔xA(x) = A(0) ⊔ A(1) ⊔ A(2) ⊔ ...
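The following toy Python sketch (an illustration, not from the slides) shows how a play of a choice combination proceeds: the player who owns the choice names a component, and the play continues in that component; per CoL's conventions, a choice that is never made is lost by the player responsible for it. Representing games as Python predicates on runs is an assumption of the sketch.

```python
# Toy model (illustrative only): a game is a predicate on runs, True = the Machine wins.

def choice_conj(A0, A1):
    """A0 ⊓ A1: the Environment chooses the component."""
    def game(run):
        if not run:
            return True   # the Environment never chose, so the unresolved choice counts against it
        i, rest = run[0], run[1:]
        return (A0 if i == 0 else A1)(rest)
    return game

def choice_disj(A0, A1):
    """A0 ⊔ A1: the Machine chooses the component."""
    def game(run):
        if not run:
            return False  # the Machine never chose, so it loses
        i, rest = run[0], run[1:]
        return (A0 if i == 0 else A1)(rest)
    return game

# Elementary games with no moves: a true proposition is won by the Machine.
TRUE  = lambda run: True
FALSE = lambda run: False

EVEN_OR_ODD = choice_disj(FALSE, TRUE)   # e.g. "4 is odd ⊔ 4 is even"; the Machine should pick 1
print(EVEN_OR_ODD([1]))   # True: the Machine chose the true component
print(EVEN_OR_ODD([]))    # False: the Machine failed to choose
```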
8 The problem of computing a function f: ⊓x⊔y(y=f(x))
The problem of deciding a predicate p: ⊓x(¬p(x) ⊔ p(x))
[Figure: the corresponding game trees. In the first, the Environment chooses m and the game continues as ⊔y(y=f(m)), where the Machine responds with some n with n=f(m); in the second, the Environment chooses m and the Machine then chooses between ¬p(m) and p(m).]
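A hypothetical sketch of the Machine's strategies for these two games when f and p happen to be computable in the ordinary sense; plain function calls stand in for moves here, which is an assumption of the sketch rather than CoL's machinery.

```python
# Illustrative strategies, assuming f and p are given as ordinary computable Python functions.

def strategy_compute(f, x):
    """Machine's reply in ⊓x⊔y(y=f(x)): after the Environment names x, name y = f(x)."""
    return f(x)

def strategy_decide(p, x):
    """Machine's reply in ⊓x(¬p(x) ⊔ p(x)): choose the true disjunct (0 for ¬p(x), 1 for p(x))."""
    return 1 if p(x) else 0

print(strategy_compute(lambda n: n + 1, 7))      # 8: the successor game again
print(strategy_decide(lambda n: n % 2 == 0, 7))  # 0: "7 is not even"
```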
9 Negation ¬ --- the role switch operation.
[Figure: the game trees of a game G and of ¬G, obtained from G by interchanging the two players' roles.]

10 ¬Chess = Chess with the roles of the two players interchanged, i.e. Chess as seen from the opponent's side.
11 Parallel conjunction, parallel disjunction: A∧B, A∨B --- a parallel play of A and B.
To win a parallel conjunction you need to win in both of the components; to win a parallel disjunction you need to win in one of the components.
[Illustration: YOU (the Machine) playing two games in parallel against the ENVIRONMENT --- Peter and Paul.]
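As an illustration of parallel play combined with the role switch of negation, here is a sketch (mine, not from the slides) of the classical copy-cat idea for winning ¬A ∨ A, e.g. ¬Chess ∨ Chess: copy every Environment move from one component into the other, so the two plays remain mirror images, and the Machine automatically wins in one of the two components (assuming, as usual in this example, that the underlying game has no draws).

```python
# Toy copy-cat: to win ¬A ∨ A the Machine needs to win at least one component.
# Since roles are switched in ¬A, copying each Environment move into the other
# component keeps the two parallel plays identical; whichever side wins the
# underlying play, the Machine wins the corresponding component.

def copycat(environment_moves):
    """environment_moves: iterable of (component, move) pairs made by the Environment.
    Yields the Machine's responses: the same move played in the other component."""
    for component, move in environment_moves:
        other = 1 - component          # 0 = the ¬A component, 1 = the A component
        yield (other, move)

# Example: the Environment plays e4 in component 1 and c5 in component 0.
print(list(copycat([(1, "e2e4"), (0, "c7c5")])))
# [(0, 'e2e4'), (1, 'c7c5')]
```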
13 Reduction: A→B =df ¬A ∨ B
(Roles are interchanged in A, turning it into a computational resource that can be used in solving B.)
• Halts(x,y) = “Turing machine x halts on input y”
• Accepts(x,y) = “Turing machine x accepts input y”
Halting problem: ⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y))
Acceptance problem: ⊓x⊓y(Accepts(x,y) ⊔ ¬Accepts(x,y))
Reduction of the acceptance problem to the halting problem:
⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) → ⊓x⊓y(Accepts(x,y) ⊔ ¬Accepts(x,y))
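Here is a sketch (my illustration) of the strategy that wins this reduction: when asked about a pair (x, y) in the consequent, the Machine asks the same question of its halting-problem resource in the antecedent; on the answer “does not halt” it answers “does not accept”, and on “halts” it simulates x on y and answers accordingly. The callback interface and the simulate helper are assumptions of the sketch.

```python
# Illustrative strategy for
#   ⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y))  →  ⊓x⊓y(Accepts(x,y) ⊔ ¬Accepts(x,y))
# The halting resource is modeled as a callback the Machine may query for the given (x, y).

def acceptance_strategy(x, y, halting_resource, simulate):
    """Return True ("accepts") or False ("does not accept") for machine x on input y,
    using the antecedent as a resource.
    halting_resource(x, y) -> bool : the Environment's answer in the antecedent
    simulate(x, y)         -> bool : run machine x on input y to completion
                                     (safe only once we know it halts)."""
    if not halting_resource(x, y):
        return False          # x does not even halt on y, so it certainly does not accept y
    return simulate(x, y)     # x halts on y: a finite simulation tells us whether it accepts
```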
14 Blind quantifiers: ∀, ∃
∃xA(x) = ¬∀x¬A(x)
∀xA(x): no value for x is specified; the machine should play “blindly”, in a way that guarantees success in A(x) for every possible value of x.
Example: ∀x(Even(x) ⊔ Odd(x) → ⊓y(Even(x+y) ⊔ Odd(x+y)))
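A sketch (my illustration) of the Machine's winning strategy for the example formula: it never needs to know x itself. It waits for the Environment to resolve the choice in the antecedent (revealing the parity of x) and to pick y in the consequent, and then answers with the parity of x+y, computed from those two pieces of information alone.

```python
# Blind strategy for  ∀x(Even(x) ⊔ Odd(x)  →  ⊓y(Even(x+y) ⊔ Odd(x+y))).
# The Machine never learns x; it only learns x's parity (from the antecedent, where
# the Environment must resolve the choice) and the Environment's chosen y.

def blind_parity_strategy(x_is_even, y):
    """Return "Even" or "Odd" for x+y, given only the parity of x and the value of y."""
    y_is_even = (y % 2 == 0)
    return "Even" if (x_is_even == y_is_even) else "Odd"

# Works for every x of the stated parity, e.g.:
print(blind_parity_strategy(True, 5))   # Odd   (even + odd)
print(blind_parity_strategy(False, 5))  # Even  (odd + odd)
```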
15 Clarithmetics are axiomatic number theories based on CoL in the same sense as PA is based on classical logic.
Language: ¬, ∧, ∨, →, ∀, ∃, ⊓, ⊔ (the latter two both as connectives and as quantifiers); 0, ′, +, ×, = (x′ means x+1)
Systems CLA4, CLA5, CLA6 and CLA7 of clarithmetic share the same set of (nonlogical) axioms:
Peano axioms:
1. ∀x ¬(0=x′)
2. ∀x∀y (x′=y′ → x=y)
3. ∀x (x+0=x)
4. ∀x∀y (x+y′=(x+y)′)
5. ∀x (x×0=0)
6. ∀x∀y (x×y′=(x×y)+x)
7. ∀-closure of F(0) ∧ ∀x(F(x)→F(x′)) → ∀xF(x), for each elementary (not containing the choice operators ⊓, ⊔) formula F.
Extra-Peano axiom:
8. ⊓x⊔y (y=x′)
16 On top of the axioms (previous slide), CLA4-CLA7 have a single (nonlogical) rule of induction. This is the only point where the four systems differ.

CLA4-Induction (where F is polynomially bounded):
  from F(0), F(x)→F(2x) and F(x)→F(2x+1), conclude F(x).

CLA5-Induction (where F is polynomially bounded), CLA6-Induction (where F is exponentially bounded), CLA7-Induction (no conditions on F):
  from F(0) and F(x)→F(x+1), conclude F(x).
17 The standard concepts of time and space complexities are conservatively (and naturally) generalized to the interactive level.
Theorem. Under the standard arithmetical interpretation:
1. (Soundness:) Every theorem F of CLA4 represents an arithmetical problem with a polynomial time solution. Such a solution can be automatically extracted from a proof.
2. (Extensional completeness:) Every arithmetical problem with a polynomial time solution is represented by some theorem F of CLA4.
3. (Intensional completeness:) Every formula representing an arithmetical problem with a polynomial time solution is a theorem of CLA4+TA.
4. Similarly for CLA5, but with “polynomial space” instead of “polynomial time”.
5. Similarly for CLA6, but with “elementary recursive time (=space)”.
6. Similarly for CLA7, but with “primitive recursive time (=space)”.
18 The standard concepts of time and space complexities are conservatively (and naturally) generalized to the interactive level. A new complexity measure --- amplitude complexity --- is introduced. It is concerned with the sizes of the Machine's moves relative to the sizes of the Environment's moves.
The same kind of soundness and completeness theorem has been proven for the “tunable” version of clarithmetic, CLA11(A,S,T), with just four simple extra-Peano axioms and two rules (induction and comprehension). Here A, S, T are sets of (pseudo)terms used as bounds in those rules, and govern the amplitude, space and time complexities of the target “tricomplexity” class, respectively. These parameters can be tuned in a mechanical, brute force, “canonical” way to obtain a sound and complete instance of CLA11(A,S,T) with respect to the corresponding tricomplexity.
For instance, for linear amplitude + polylogarithmic space + polynomial time, we can choose
A = {all terms built from x using 0, ′, +};
S = {all terms built from |x| using 0, ′, +, ×};
T = {all terms built from x using 0, ′, +, ×}.
All reasonable tricomplexity classes can be captured in a similar way.
19 The following formulas are provable in CLA4. What does this say in view of the soundness of CLA4 w.r.t. polynomial time?
• ⊓x⊓y(x=y ⊔ x≠y)  --- “=” is polynomial time decidable
• ⊓x⊓y⊔z(z=x+y)  --- “+” is polynomial time computable
• ⊓x(⊔y(x=y×y) ⊔ ¬∃y(x=y×y))  --- “square root”, when it exists, is polynomial time computable
• ⊓x⊔y(p(x)↔q(y))  --- p is polynomial time reducible to q
Clarithmetics can be seen as declarative programming languages in an extreme sense: programming = proof-search, or even (after developing reasonable theorem-provers) programming = writing a single (goal) formula. Every proof/program is its own verification, and every formula/line is its own best possible comment.
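For a sense of what the polynomial time solutions encoded by such theorems amount to, here is an illustrative Python rendering of the corresponding strategies (my sketch; the extracted solutions themselves are interactive machines, not Python functions).

```python
# Illustrative polynomial-time strategies corresponding to the formulas above.
import math

def decide_equality(x, y):            # ⊓x⊓y(x=y ⊔ x≠y)
    return x == y

def compute_sum(x, y):                # ⊓x⊓y⊔z(z=x+y)
    return x + y

def square_root_if_any(x):            # ⊓x(⊔y(x=y×y) ⊔ ¬∃y(x=y×y))
    r = math.isqrt(x)
    return r if r * r == x else None  # None plays the role of choosing the "no square root" disjunct

print(decide_equality(5, 5), compute_sum(2, 3), square_root_if_any(49), square_root_if_any(50))
# True 5 7 None
```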
20 Advantages over weak (bounded) arithmetics

1. Clarithmetics achieve intensional completeness, while weak arithmetics can only achieve extensional completeness. Separating theories (and hence the associated complexity classes) intensionally is easier than extensionally!

2. The intensional strength of clarithmetics makes them more adequate as potential problem-solving tools (clarithmetic = programming language), for in such applications it is of course intensional rather than extensional strength that matters! When a clarithmetic is used as a programming language, it suffices to state the target (say, the function to be computed) in an ad hoc manner, and then look for a proof of the target (or ask a theorem-prover to do this). Weak arithmetics would require a lot of pre-processing, essentially amounting to already finding a solution of the problem before it is even stated.

3. Clarithmetics extend rather than restrict PA. Weak arithmetics “tamper with PA” and “throw out the baby with the bath water”. Preserving PA allows us to safely rely on our standard arithmetical intuitions when reasoning within clarithmetic. Reasoning in weak arithmetics is hard, as we need to pretend that we do not know certain things that we actually do know!

4. Clarithmetic tends to be flexible and scalable. In the other approaches one typically sees separate and ad hoc results/publications devoted to just some of the infinite variety of complexity classes systematically captured by CLA11(A,S,T). Also, the underlying “elementary part” (PA in our case) of clarithmetics can be varied without any concerns about losing computational adequacy. In weak arithmetics, adding new axioms immediately yields unsound theories.

5. Clarithmetics are computationally meaningful in the full generality of their language, and easy to understand in their own right.

6. Clarithmetics are more general, as they take things to the interactive level. The other approaches are merely about functions.

7. Clarithmetics tend to be simple and elegant. For comparison, Buss’s systems for polynomial time and space have to introduce a bunch of new function symbols and 30+ axioms, and even go to the second-order level. The same applies to similar later approaches to logspace etc. The latter often achieve completeness by adding axioms that arithmetize some nontrivial theory of computation; such axioms are huge formulas and lack any direct arithmetical meaning. They are about graphs, computations, etc. rather than about numbers (so, no wonder they yield completeness). For this very reason, they apparently fail to provide any new tools/insights for solving open problems of complexity theory. What is the point in simply translating complexity theory into arithmetic?!

An extensive survey of the subject at: www.csc.villanova.edu/~japaridz/CL/