Inductive Amnesia: The Reliability of Iterated Belief Revision
A Table of Opposites

Even | Odd
Straight | Crooked
Reliability | "Confirmation"
Performance | "Primitive norms"
Correctness | "Coherence"
Classical statistics | Bayesianism
Learning theory | Belief Revision Theory
The Idea • Belief revision is inductive reasoning • A restrictive norm prevents us from finding truths we could have found by other means • Some proposed belief revision methods are restrictive • The restrictiveness is expressed as inductive amnesia
Inductive Amnesia • No restriction on memory... • No restriction on predictive power... • But prediction causes memory loss... • And perfect memory precludes prediction! • Fundamental dilemma
Outline • I. Seven belief revision methods • II. Belief revision as learning • III. Properties of the methods • IV. The Goodman hierarchy • V. Negative results • VI. Positive results • VII. Discussion
Points of Interest • Strong negative and positive results • Short run advice from limiting analysis • 2 is magic for reliable belief revision • Learning as cube rotation • Grue
Part I Iterated Belief Revision
Bayesian (Vanilla) Updating • Propositions are sets of "possible worlds" • On new evidence E, revise by intersection: B′ = B * E = B ∩ E • Perfect memory • No inductive leaps
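A minimal sketch of vanilla updating, assuming worlds are hashable Python values and propositions are sets; the function and variable names are mine, not the talk's:

```python
# Vanilla (qualitative Bayesian) updating: propositions are sets of
# possible worlds, and revision is plain intersection.

def update(B: set, E: set) -> set:
    """B' = B * E = B intersect E."""
    return B & E

B = {"w1", "w2"}                 # current belief: the world is w1 or w2
E = {"w2", "w3"}                 # new evidence rules out w1 and w4

print(update(B, E))              # {'w2'} -- perfect memory, no inductive leap
print(update({"w1"}, {"w3"}))    # set()  -- epistemic hell on surprise
```

The second call previews the next slide: when the evidence is disjoint from the belief set, intersection leaves nothing.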
Epistemic Hell • Surprise! When new evidence E is disjoint from B, intersection yields the empty belief set: epistemic hell • The problem arises in: • Scientific revolutions • Suppositional reasoning • Conditional pragmatics • Decision theory • Game theory • Data bases
Ordinal Entrenchment (Spohn 88) • Epistemic state S maps worlds to ordinals • Belief state of S = b(S) = S⁻¹(0) • Determines "centrality" of beliefs • Model: orders of infinitesimal probability • (Figure: worlds stacked at levels 0, 1, 2, ..., ω, ω + 1; B = b(S) is the set at level 0)
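The entrenchment picture is easy to mock up. A sketch, assuming finite natural-number ranks stand in for the ordinals, with the helper name b read off the slide's b(S) = S⁻¹(0):

```python
# Spohn-style epistemic state: a map from worlds to "implausibility"
# levels (natural numbers suffice for this sketch; the real definition
# allows ordinals like omega).

def b(S: dict) -> set:
    """Belief state b(S) = S^{-1}(0): the most plausible worlds."""
    return {w for w, rank in S.items() if rank == 0}

S = {"w1": 0, "w2": 0, "w3": 1, "w4": 2}
print(b(S))   # {'w1', 'w2'} -- what the agent fully accepts
```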
Belief Revision Methods • * takes an epistemic state S and a proposition E to a new epistemic state S * E, with new belief state b(S * E)
Spohn Conditioning *C (Spohn 88) • On new evidence E contradicting b(S), conditions the entire entrenchment ordering on E • Perfect memory • Inductive leaps • No epistemic hell on consistent sequences • Epistemic hell on inconsistent sequences
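A sketch of one *C step under the same dict representation; treating inconsistent evidence as an exception is my rendering of epistemic hell, not the talk's:

```python
# Spohn conditioning *C: restrict the ranking to E and shift it rigidly
# down so the lowest E-world sits at level 0.  Refuted worlds are
# discarded, which is why inconsistent evidence sequences end in
# epistemic hell (no state survives).

def b(S):                        # belief state = rank-0 worlds
    return {w for w, r in S.items() if r == 0}

def cond(S: dict, E: set) -> dict:
    surviving = {w: r for w, r in S.items() if w in E}
    if not surviving:
        raise ValueError("epistemic hell: E contradicts every world")
    low = min(surviving.values())
    return {w: r - low for w, r in surviving.items()}

S  = {"w1": 0, "w2": 1, "w3": 2}
S2 = cond(S, {"w2", "w3"})       # w1 refuted
print(S2, b(S2))                 # {'w2': 0, 'w3': 1} {'w2'} -- inductive leap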
Lexicographic Updating *L (Spohn 88, Nayak 94) • Lift refuted possibilities above non-refuted possibilities, preserving order • Perfect memory on consistent sequences • Inductive leaps • No epistemic hell
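A sketch of one *L step, again with finite integer ranks; the dense renumbering is an implementation convenience of mine, not part of the definition:

```python
# Lexicographic updating *L: every E-world ends up strictly below every
# refuted world, with relative order preserved inside each group.

def lex(S: dict, E: set) -> dict:
    def renumber(group):                  # dense re-ranking within a group
        levels = sorted({S[w] for w in group})
        return {w: levels.index(S[w]) for w in group}
    inside  = renumber([w for w in S if w in E])
    outside = renumber([w for w in S if w not in E])
    lift = (max(inside.values()) + 1) if inside else 0
    return {**inside, **{w: r + lift for w, r in outside.items()}}

S = {"w1": 0, "w2": 1, "w3": 1, "w4": 2}
print(lex(S, {"w2", "w4"}))   # {'w2': 0, 'w4': 1, 'w1': 2, 'w3': 3}
```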
Minimal or "Natural" Updating *M (Spohn 88, Boutilier 93) • Drop the lowest possibilities consistent with the data to the bottom and raise everything else up one notch • Inductive leaps • No epistemic hell • But... (see the amnesia slides and the sketch that follows them)
Amnesia • What goes up can come down: under repeated *M revisions (first by E, then by E′), a world refuted by E can return to the bottom • Belief then no longer entails past data
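A sketch of *M as described two slides up, followed by a two-step run that exhibits the amnesia; the worlds and rankings are illustrative choices of mine:

```python
# Minimal ("natural") updating *M: only the most plausible E-worlds drop
# to the bottom; everything else rises one notch.  The second revision
# below shows the amnesia: the new belief {w1} is inconsistent with the
# earlier datum E = {w2, w3}.

def b(S):
    return {w for w, r in S.items() if r == 0}

def nat(S: dict, E: set) -> dict:
    low = min(r for w, r in S.items() if w in E)
    return {w: 0 if (w in E and r == low) else r + 1
            for w, r in S.items()}

S  = {"w1": 0, "w2": 1, "w3": 2}
S1 = nat(S, {"w2", "w3"})       # datum E rules out w1
print(b(S1))                     # {'w2'}
S2 = nat(S1, {"w1", "w3"})      # datum E' rules out w2
print(b(S2))                     # {'w1'} -- but w1 was refuted by E!
```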
The Flush-to-α Method *F,α (Goldszmidt and Pearl 94) • Send non-E worlds to a fixed level α and drop E-worlds rigidly to the bottom • Perfect memory on sequentially consistent data if α is high enough • Inductive leaps • No epistemic hell
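A sketch of one *F,α step, with a finite integer standing in for the fixed level α:

```python
# Flush-to-alpha *F,a: E-worlds slide rigidly down so their lowest member
# reaches level 0; every refuted world is parked at the fixed level alpha.
# Memory is safe only if alpha is high enough.

def flush(S: dict, E: set, alpha: int = 10) -> dict:
    low = min(r for w, r in S.items() if w in E)
    return {w: (r - low) if w in E else alpha for w, r in S.items()}

S = {"w1": 0, "w2": 1, "w3": 2}
print(flush(S, {"w2", "w3"}))   # {'w1': 10, 'w2': 0, 'w3': 1}
```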
Ordinal Jeffrey Conditioning *J,α (Spohn 88) • Drop E-worlds rigidly to the bottom; drop non-E worlds to the bottom and then jack them up to level α • Perfect memory on consistent sequences if α is large enough • No epistemic hell • Reversible • But...
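A sketch of one *J,α step under the same conventions; the rigid re-basing of the refuted block is what makes possible the backsliding described on the next slide:

```python
# Ordinal Jeffrey conditioning *J,a: both blocks are dropped rigidly to
# the bottom, and the refuted (non-E) block is then raised to start at
# level alpha.  Because the refuted block keeps its shape, the step is
# reversible -- but a refuted world that sat high up can land back near
# alpha, gaining plausibility.

def jeffrey(S: dict, E: set, alpha: int) -> dict:
    low_in  = min((r for w, r in S.items() if w in E), default=0)
    low_out = min((r for w, r in S.items() if w not in E), default=0)
    return {w: (r - low_in) if w in E else (r - low_out) + alpha
            for w, r in S.items()}

S = {"w1": 0, "w2": 1, "w3": 5}
print(jeffrey(S, {"w2", "w3"}, alpha=2))  # {'w1': 2, 'w2': 0, 'w3': 4}
```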
Empirical Backsliding • Ordinal Jeffrey conditioning can increase the plausibility of a refuted possibility: a refuted world that sat above level α comes back down toward α
The Ratchet Method *R,α (Darwiche and Pearl 97) • Like ordinal Jeffrey conditioning, except refuted possibilities move up by α from their current positions (from level β to β + α) • Perfect memory if α is large enough • Inductive leaps • No epistemic hell
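A sketch of one *R,α step; contrast the refuted world w3, which only ever moves up, with its behavior under *J,α above:

```python
# Ratchet updating *R,a: E-worlds drop rigidly to the bottom, and each
# refuted world is pushed UP by alpha from wherever it currently sits
# (level b goes to b + a), so refuted worlds never backslide.

def ratchet(S: dict, E: set, alpha: int) -> dict:
    low = min(r for w, r in S.items() if w in E)
    return {w: (r - low) if w in E else r + alpha for w, r in S.items()}

S = {"w1": 0, "w2": 1, "w3": 5}
print(ratchet(S, {"w2"}, alpha=2))   # {'w1': 2, 'w2': 0, 'w3': 7}
```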
Part II Belief Revision as Learning
Iterated Belief Revision • S0 * () = S0 • S0 * (E0, ..., En, En+1) = (S0 * (E0, ..., En)) * En+1 • (Figure: S0 --E0--> S1 --E1--> S2, with belief states b(S0), b(S1), b(S2))
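The recursion is just a left fold over the evidence sequence. A sketch, with a Spohn-conditioning step inlined so the example runs; any one-step operator of type (state, evidence) → state would do:

```python
from functools import reduce

# Iterated revision as a left fold, matching the slide's recursion:
#   S0 * ()             = S0
#   S0 * (E0,...,En+1)  = (S0 * (E0,...,En)) * En+1

def cond(S, E):                       # one-step Spohn conditioning
    surviving = {w: r for w, r in S.items() if w in E}
    low = min(surviving.values())
    return {w: r - low for w, r in surviving.items()}

def iterate(star, S0: dict, evidence: list) -> dict:
    return reduce(star, evidence, S0)

S0 = {"w1": 0, "w2": 1, "w3": 2}
print(iterate(cond, S0, [{"w2", "w3"}, {"w3"}]))   # {'w3': 0}
```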
A Very Simple Learning Paradigm • A mysterious system emits an outcome sequence, e.g. 0 0 1 0 0 ... • ε ranges over the possible infinite trajectories; ε|n is the initial segment of ε through stage n
Empirical Propositions • Empirical propositions are sets of possible trajectories • Some special cases: • [σ] = the proposition that the finite sequence σ has occurred (the "fan" of trajectories extending σ) • [k, n] = the proposition that outcome k occurs at stage n • {ε} = the proposition that the future trajectory is exactly ε
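A sketch of the three special cases as predicates on trajectories, with a trajectory represented as a function from stages to outcomes; the finite checking horizon in exactly() is a hack of mine, since equality of infinite trajectories is not decidable:

```python
# Empirical propositions as predicates on trajectories.

def fan(sigma):
    """[sigma]: trajectories extending the finite initial segment sigma."""
    return lambda e: all(e(n) == k for n, k in enumerate(sigma))

def at_stage(k, n):
    """[k, n]: trajectories with outcome k at stage n."""
    return lambda e: e(n) == k

def exactly(e0, horizon=100):
    """{e}: (approximately) the single trajectory e0, checked to a horizon."""
    return lambda e: all(e(n) == e0(n) for n in range(horizon))

zeros = lambda n: 0                                # the all-zero trajectory
print(fan([0, 0])(zeros), at_stage(1, 3)(zeros))   # True False
```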
Trajectory Identification • (*, S0) identifies ε ⇔ for all but finitely many n, b(S0 * ([0, ε(0)], ..., [n, ε(n)])) = {ε} • (Figure: as the data come in, the belief states b(S0), b(S1), b(S2), ... narrow within the fan of possible trajectories until they single out {ε})
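A toy identification run, assuming a finite candidate set of length-5 binary "trajectories" so the convergence can be watched directly; the simplicity-biased prior is my choice of S0, not the talk's:

```python
from itertools import product

# Worlds are candidate trajectories; the datum at stage n is [n, e(n)]
# restricted to the candidates.  Watch the belief state of Spohn
# conditioning converge to {e}.

def b(S):
    return {w for w, r in S.items() if r == 0}

def cond(S, E):
    surviving = {w: r for w, r in S.items() if w in E}
    low = min(surviving.values())
    return {w: r - low for w, r in surviving.items()}

candidates = list(product([0, 1], repeat=5))
S = {w: sum(w) for w in candidates}   # prior: fewer 1s = more plausible
target = (0, 1, 0, 1, 0)

for n in range(5):
    datum = {w for w in candidates if w[n] == target[n]}   # [n, e(n)]
    S = cond(S, datum)
    print(n, sorted(b(S)))
# The belief state narrows to {(0, 1, 0, 1, 0)} and stays there.
```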