Gradational Accuracy and non-classical logic: a graphical guide J. R. G. Williams, University of Leeds
Worlds and probability space NB: Until we generalize to non-classical probabilities, all the material here is an informal report of ideas and proofs from Joyce, “A Nonpragmatic Vindication of Probabilism”, Philosophy of Science (1998).
Possible worlds correspond in a one-one way with “truth value assignments”. (I’ll use the black/green circles for functions from each proposition to a number in [0,1])
We can find the truth-value assignments here! (extremal probability functions)
What is a (classical) probability?
• Think of dividing total credence (=1) amongst a number of complete, consistent, possible situations.
• Probabilities map one-one onto assignments of credences to worlds.
• The probability of a proposition P = the sum of the credence assigned to worlds where P is true.
Interesting fact to remember:
• Probabilities are “weighted averages” of truth value assignments.
• Take a “weighted average” of two probabilities; you get another probability…
• …because weighted averages of weighted averages are weighted averages!
The space of (classical) probabilities is the “closure under mixing” of the (classical) truth value assignments.
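To make the picture concrete, here is a minimal sketch in Python (my own illustration, not from the slides): two invented atomic propositions p and q, a few compounds, classical worlds as truth-value assignments, and probabilities as weighted averages of those assignments. All names and numbers are assumptions chosen for the example.

```python
from itertools import product

# Invented propositions: two atoms plus some compounds, each evaluated on an
# atomic valuation v (a dict giving the atoms' values).
ATOMS = ["p", "q"]
PROPS = {
    "p":    lambda v: v["p"],
    "~p":   lambda v: 1 - v["p"],
    "q":    lambda v: v["q"],
    "p&q":  lambda v: min(v["p"], v["q"]),
    "pvq":  lambda v: max(v["p"], v["q"]),
    "pv~p": lambda v: max(v["p"], 1 - v["p"]),
}

# Each classical world is a truth-value assignment: every proposition gets 0 or 1.
WORLDS = []
for values in product([0, 1], repeat=len(ATOMS)):
    v = dict(zip(ATOMS, values))
    WORLDS.append({name: prop(v) for name, prop in PROPS.items()})

def mix(assignments, weights):
    """Weighted average of truth-value (or probability) assignments."""
    return {name: sum(w * a[name] for w, a in zip(weights, assignments))
            for name in PROPS}

# A probability function: credence 1 divided over the four worlds; each
# proposition gets the total weight of the worlds where it is true.
P = mix(WORLDS, [0.1, 0.2, 0.3, 0.4])
print(round(P["pvq"], 3), round(P["pv~p"], 3))   # 0.9 1.0

# Closure under mixing: a weighted average of two probability functions is a
# weighted average of worlds, hence itself a probability function.
Q = mix(WORLDS, [0.25, 0.25, 0.25, 0.25])
R = mix([P, Q], [0.5, 0.5])
```

The later sketches below reuse these definitions (WORLDS, PROPS, mix).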
Distances in belief space We have a measure of “how accurate an arbitrary belief state is, given the way the world is”. This is the accuracy score. For our purposes, we will need to work with a more general notion: “how far apart two arbitrary belief states are”.
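The slides keep the score abstract; as a concrete stand-in, the sketches here use the Brier score (sum of squared gaps from the truth-value assignment), a standard example rather than anything the slides fix on.

```python
# Brier score: the inaccuracy of a belief state at a world is the sum of squared
# gaps between the belief state's credences and the world's truth values.
# Lower means more accurate. Reuses WORLDS and P from the sketch above.
def brier(world, belief):
    return sum((belief[name] - world[name]) ** 2 for name in world)

print(brier(WORLDS[0], P))   # how inaccurate P is if WORLDS[0] is the actual world
```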
[Diagrams: accuracy scores of belief states relative to a world (Score = m, Score = n); the construction C = W + (B - A) from belief states A, B and a world W.]
Distance(A,B) := Score(W,C) = Score(W, W+(B-A)) = k
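Continuing the sketch, the definition above can be written out directly; with the Brier score the choice of W drops out, so the distance is just the squared gap between the two belief states.

```python
# Distance between belief states A and B: translate their difference onto a
# world W (C = W + (B - A)) and score C against W.
def distance(A, B, W, score=brier):
    C = {name: W[name] + (B[name] - A[name]) for name in W}
    return score(W, C)

# With the Brier score the result does not depend on which world W we pick.
print(distance(P, Q, WORLDS[0]))
```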
Let G be the closest probability function to B.
The reductio argument for domination We’ve given a definition of the candidate probability function that is supposed to “accuracy dominate” B. G = the closest probability function to B (we can prove this is unique). Can we show it accuracy dominates B, or are we being fooled by pictures? Required to prove: for arbitrary W, the distance GW is less than the distance BW (i.e. G is closer to every world W than B is).
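Here is a numeric sketch of that construction (my own implementation, using scipy's general-purpose optimiser; nothing in the slides depends on this particular method): search over weighted averages of the worlds for the mixture that minimises the score-induced distance to B.

```python
import numpy as np
from scipy.optimize import minimize

def closest_probability(B, worlds, score=brier):
    """Find the mixture of worlds minimising the score-induced distance to B.

    With the Brier score this is projection of B onto the convex hull of the
    truth-value assignments. Reuses mix() and distance() from the sketches above.
    """
    n = len(worlds)
    objective = lambda w: distance(mix(worlds, w), B, worlds[0], score)
    res = minimize(objective,
                   x0=np.full(n, 1.0 / n),      # start from the uniform mixture
                   bounds=[(0.0, 1.0)] * n,     # each weight stays in [0, 1]
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
                   method="SLSQP")
    return mix(worlds, res.x)
```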
Suppose for reductio that G is further from W than B is (GW > BW).
Anything “on the line” between two probability functions, WG, is a probability function. (Probabilities are closed under “mixing”.)
Geometrically, since WG is longer than WB, we can see there’s a point on the line WG which is closer to B than G is. (Main task of the rigorous proof: check this to be so, given only the structure induced by the axioms for the score.)
Putting this together, we find a probability function G* closer to B than G is. This contradicts the construction of G. Reductio! So WG must be shorter than WB, i.e. G is closer to W than B is.
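Continuing the sketch, a quick numerical check of the conclusion on an invented incoherent belief state (it gives the conjunction more credence than either conjunct, and the classical tautology less than 1, so it is no probability):

```python
# An incoherent belief state, invented for the example.
B_bad = {"p": 0.3, "~p": 0.5, "q": 0.3, "p&q": 0.6, "pvq": 0.5, "pv~p": 0.8}
G = closest_probability(B_bad, WORLDS)

# Accuracy domination: at every classical world, G's Brier score should come
# out strictly lower (i.e. G is strictly more accurate) than B_bad's.
for W in WORLDS:
    print(round(brier(W, G), 3), "<", round(brier(W, B_bad), 3))
```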
What our proof depends on: • The fact we can think of worlds as truth value assignments • … and hence as (extreme) probabilities. • The fact that accuracy scores induce a distance among arbitrary probabilities. • The fact that any mixture (weighted average) of two probability functions is a probability function. • The fact that this “distance” behaves geometrically as you’d expect.
We now allow “non-classical” possible worlds (in addition to classical ones).
We can represent them as truth-value assignments, once again.
“Weighted averages” of classical worlds gave classical probabilities
Weighted averages of all worlds give new functions: “non-classical probabilities”.
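Continuing the sketch (my own example, evaluating the same min/max/1−x clauses on a degree-valued valuation, in the style of the Łukasiewicz case mentioned on the final slide): adding one non-classical world and mixing gives a generalized probability that no classical probability matches.

```python
# A non-classical world: the same connective clauses evaluated on a valuation
# that gives p an intermediate truth value.
fuzzy_v = {"p": 0.5, "q": 1.0}
FUZZY_WORLD = {name: prop(fuzzy_v) for name, prop in PROPS.items()}

# Mixing it with the classical worlds gives a generalized probability.
P_gen = mix(WORLDS + [FUZZY_WORLD], [0.1, 0.1, 0.1, 0.1, 0.6])
print(round(P_gen["pv~p"], 3))   # 0.7: the classical tautology drops below 1
```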
But the accuracy-domination argument relied only on the closure property, not on anything to do with classicality.
Any set “closed under averages” will be such that “accuracy domination” holds. “Generalized probabilities” are the minimal set (i) containing each (generalized) world, and (ii) closed under weighted averages.
We have a characterization of generalized probabilities, relative to a generalized notion of truth-value assignment. • We can prove an accuracy-domination theorem for these, just as for classical probabilities/classical truth-value assignments. • Can we find an illuminating axiomatization? • I know how to generalize the classical probability axioms, such that any weighted average as above must satisfy these axioms.
Generalized logic: A ⊨ B iff on no truth value assignment, |A| > |B|.
Axioms for generalized probability (mostly):
(1) If ⊨ B then P(B) = 1
(2) If A ⊨ then P(A) = 0
(3) If A ⊨ B then not: P(A) > P(B)
(4) P(A) + P(B) = P(A&B) + P(AvB)
In fact, (4) depends on our truth value assignments satisfying |A| + |B| = |A&B| + |AvB|. Where we instead have, e.g., an inequality on truth values, we get an inequality version of (4).
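A tiny check (my own) of that constraint for the min/max clauses used in the sketches above: min(a, b) + max(a, b) = a + b for any truth values, so (4) holds in its equality form there.

```python
# |A| + |B| = |A&B| + |AvB| holds whenever conjunction is min and disjunction
# is max, because min and max just redistribute the same two values.
for a in (0.0, 0.3, 1.0):
    for b in (0.0, 0.7, 1.0):
        assert a + b == min(a, b) + max(a, b)
```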
Applications
Take a logic characterized via a designated value (e.g. Kleene, LP, supervaluationist, intuitionist).
• If a model assigns X a designated value, its truth value is 1 (“true”).
• If a model assigns X a non-designated value, its truth value is 0 (“untrue”).
The probability theory axiomatized via this logic is related via accuracy domination to such truth-value assignments.
Take a logic characterized via a linear [0,1] “no drop in truth value” consequence relation (Łukasiewicz, degree-theoretic logics).
• Let truth value assignments simply be the values assigned by models.
The probability theory axiomatized via this logic is related via accuracy domination to such truth-value assignments.
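A sketch of the designated-value recipe in the first case (my own illustration; the three-valued model and its values are invented for the example):

```python
# Map a many-valued model to a {0,1} truth-value assignment via designated values.
def tv_from_model(model, designated):
    return {name: (1 if value in designated else 0) for name, value in model.items()}

# A three-valued model (values 0, 1/2, 1) with strong-Kleene connective clauses.
three_valued_model = {"p": 0.5, "~p": 0.5, "q": 1, "p&q": 0.5, "pvq": 1, "pv~p": 0.5}

print(tv_from_model(three_valued_model, designated={1}))        # Kleene: only 1 designated
print(tv_from_model(three_valued_model, designated={1, 0.5}))   # LP: both 1 and 1/2 designated
```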