
Representing Epistemic Uncertainty by means of Dialectical Argumentation


1. Representing Epistemic Uncertainty by means of Dialectical Argumentation
   Peter McBurney and Simon Parsons
   Agent Applications, Research and Technology (Agent ART) Group
   Department of Computer Science, University of Liverpool, Liverpool, UK
   {p.j.mcburney,s.d.parsons}@csc.liv.ac.uk
   Presentation to: Department of Computer Science, University of Liverpool, 6 February 2001

2. Nature of the problem
   • Problem: assessing the health risks of new chemicals and technologies.
   • Classical decision theory methods require:
     • Explicit delineation of all outcomes
     • Quantification of uncertainties and consequences.
   • But for most domains:
     • Scientific knowledge often limited (especially at outset)
     • Experimental evidence ambiguous and conflicting
     • No agreement on quantification.

3. Types of evidence for chemical carcinogenicity
   • Chemical structure comparison
   • Mutagenic tests on tissue cultures
   • Animal bioassays
   • Human epidemiological studies
   • Explication of biomedical causal pathways.
   • These different sources of evidence may conflict, e.g. formaldehyde.

4. Risk Assessment for chemical X (diagram)
   • Are there adverse health effects from exposure to chemical X?
   • What is the likelihood and size of impact?
   • What should be done about chemical X?

5. Argumentation to represent uncertainty
   • Two meanings of “argument”:
     • A case for a claim (a tentative proof)
     • A debate between people about a claim.
   • Our degree of certainty in a claim depends on the cases for and against it.
     • The more and stronger cases against, the less certainty.
     • A consensus in favour of a claim indicates the greatest certainty.
   • We can therefore represent uncertainty by means of dialectical argumentation.
   • We also require a mechanism for generating inferences from the dialectical status of a claim.

6. Philosophical underpinning
   • We have adopted an explicit philosophy of science:
   • Pera’s (1994) model of science as a 3-person game:
     • The Experimenter + Nature + The Scientific Community.
   • Feyerabend’s (1971) philosophy of science as epistemological anarchism:
     • There are no absolute standards which distinguish science from non-science
     • Standards differ by time, by discipline and by context.
   • We see two principles as necessary for an activity to be called “science”:
     • All claims are contestable by anyone (in the community)
     • All claims are defeasible, with reasoning always to the best explanation.

7. Pera’s Philosophy of Science (diagram)
   • The Experimenter proposes and undertakes an experiment.
   • Nature responds to the experiment.
   • The Scientific Community interprets the results of the experiment.

8. To model these, we need:
   • A theory of rational discourse between reasonable, consenting participants:
     • Hitchcock’s (1991) principles of rational mutual inquiry
     • The discourse ethics of Habermas and Alexy (1978).
   • A model for an argument:
     • Toulmin’s (1958) argument schema.
   • A means to formalize complex dialogues:
     • Walton and Krabbe’s (1995) characterization of different types of dialogues
     • Formal dialogue-games of Hamblin (1970, 1971) and MacKenzie (1979, 1990).

9. Hitchcock’s Principles
   • 18 principles of rational mutual discourse, for example:
     • Dialectification: the content and methods of dialogue should be decided by the participants.
     • Mutuality: no statement becomes a commitment of a participant unless he or she specifically accepts it.
     • Orderliness: one issue is raised and discussed at a time.
     • Logical pluralism: both deductive and non-deductive inference are permitted.
     • Rule-consistency: there should be no situation where the rules prohibit all acts, including the null act.
     • Realism: the rules must make agreement between participants possible.
     • Retraceability: participants must be free at all times to supplement, change or withdraw previous tentative commitments.
     • Role reversibility: the rules should permit the responsibility for initiating suggestions to shift between participants.

10. Alexy’s Discourse Rules
   • Rules for discourse over moral and ethical questions, for example:
     • Freedom of assembly
     • Common language
     • Freedom of speech
     • Freedom to challenge claims
     • Arguments required for claims
     • Freedom to challenge arguments
     • Freedom to disagree over modalities
     • Requirement for clarification and precization
     • Proportionate defence
     • No self-contradictions permitted.

11. Toulmin’s Argument Schema (diagram)
   • Data: X is a chemical of type T
   • Claim: X is carcinogenic to humans
   • Modality: probably
   • Warrant: most other Type T chemicals are carcinogenic to humans
   • Backing: epidemiological evidence for the others
   • Rebuttal: X is not carcinogenic to rats
   • Undercut (Pollock): epidemiological evidence not unambiguous
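
To make the schema concrete, here is a minimal sketch of a Toulmin-style argument as a data structure, populated with the example above. The class and field names are illustrative assumptions, not part of the Agora specification.

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """Illustrative container for the elements of Toulmin's schema."""
    data: str        # grounds offered in support
    claim: str       # conclusion being argued for
    modality: str    # qualifier, e.g. "probably"
    warrant: str     # rule licensing the step from data to claim
    backing: str     # support for the warrant itself
    rebuttals: list = field(default_factory=list)  # attacks on the claim
    undercuts: list = field(default_factory=list)  # attacks on the warrant (Pollock)

# The example from the slide:
arg = ToulminArgument(
    data="X is a chemical of type T",
    claim="X is carcinogenic to humans",
    modality="probably",
    warrant="Most other Type T chemicals are carcinogenic to humans",
    backing="Epidemiological evidence for the other Type T chemicals",
    rebuttals=["X is not carcinogenic to rats"],
    undercuts=["The epidemiological evidence is not unambiguous"],
)
```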

12. Walton and Krabbe’s Typology of Dialogues
   • Information-seeking dialogues: one participant seeks the answer to a question.
   • Inquiries: all participants collaborate to find the answer to a question.
   • Persuasions: one participant seeks to persuade other(s) of the truth of a proposition.
   • Negotiations: participants seek to divide a scarce resource.
   • Deliberations: participants collaborate to decide a course of action in some situation.
   • Eristic dialogues: participants quarrel verbally as a substitute for physical fighting.

13. Risk Assessment for chemical X (diagram)
   • Scientific dialogues: Are there adverse health effects from exposure to chemical X? What is the likelihood and size of impact?
   • Regulatory dialogue: What should be done about chemical X?

14. Risk Assessment Dialogues
   • Scientific dialogues:
     • Does exposure (in a certain way at certain dose levels) to chemical X lead to adverse health effects? If so, what is the likelihood and magnitude of impact?
     • A mix of: inquiries and persuasion dialogues.
   • A regulatory dialogue:
     • What regulatory actions (if any) should be taken regarding chemical X?
     • A mix of: inquiries, deliberations, negotiations and persuasion dialogues.

15. Dialogue Games
   • Games between 2+ players where each “moves” by uttering a locution.
   • Developed by philosophers to study fallacious reasoning.
   • Used in: agent dialogues (Parsons & Amgoud), software development (Stathis), modeling legal reasoning (Bench-Capon et al., Prakken).
   • Rules define the circumstances of:
     • Commencement of the dialogue
     • Permitted locutions
     • Combinations of locutions, e.g. one cannot assert a proposition and its negation (see the sketch after this list)
     • Commitment: when does a player commit to some claim?
     • Termination of the dialogue.
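
As a minimal sketch of how one combination rule might be enforced, the check below rejects an assert when the speaker's commitment store already contains the proposition's negation. The string encoding, the "not-" convention and the function names are illustrative assumptions, not the Agora's formal rules.

```python
def negation(p: str) -> str:
    """Syntactic negation under an assumed 'not-' naming convention."""
    return p[4:] if p.startswith("not-") else "not-" + p

def may_assert(commitments: set, proposition: str) -> bool:
    """Combination rule: a participant may not assert a proposition
    whose negation is already among their commitments."""
    return negation(proposition) not in commitments

store = {"not-P"}                 # commitments incurred by earlier moves
print(may_assert(store, "P"))     # False: P contradicts the commitment not-P
print(may_assert(store, "Q"))     # True: no conflicting commitment
```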

16. The Risk Agora
   • A formal framework for representing dialogues concerning the carcinogenic risk of chemicals:
     • Represent the arguments for and against a chemical being a carcinogen.
     • Represent the current state of scientific knowledge, including epistemic uncertainty.
     • Enable contestation and defence of claims and arguments.
     • Enable comparison and synthesis of arguments for specific claims.
     • Enable summary “snapshots” of the debate at any time.
   • We have fully specified the locutions and rules for a dialogue-game for scientific discourses.

17. Speaking in the Agora
   • Participants can:
     • Propose or assert claims, arguments, grounds, inference-rules, consequences
     • Modify each with modalities
     • Question or contest others’ proposals or assertions
     • Accept others’ proposals or assertions.
   • Examples of locutions:
     • propose(participant 1: (claim, modality))
     • assert(participant 1: (claim, modality))
     • show_arg(participant 1: (arg_for_claim, modalities))
     • contest(participant 2: propose(participant 1: (claim, modality)))
     • etc.
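
A minimal sketch of how such locutions might be represented as data, assuming simple types and field names chosen for illustration rather than taken from the formal specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Locution:
    speaker: str                         # e.g. "participant 1"
    move: str                            # propose / assert / show_arg / contest / query / accept
    content: tuple                       # e.g. (claim, modality)
    target: Optional["Locution"] = None  # earlier utterance being queried or contested

claim = ("X is carcinogenic to humans", "plausible")
p1 = Locution("participant 1", "propose", claim)
c2 = Locution("participant 2", "contest", claim, target=p1)
```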

18. Representing uncertainty in the Agora
   • We represent the degree of uncertainty in a claim by means of its dialectical argument status in the Agora.
   • We use a dictionary of labels due to Krause, Fox et al. (1998), with definitions modified slightly to allow for counter-counter-arguments.
   • This is an example; other modality dictionaries could be defined.
   • A claim is:
     • Open: no arguments presented yet for it or against it.
     • Supported: at least one grounded argument presented for it.
     • Plausible: at least one consistent, grounded argument presented for it.
     • Probable: at least one consistent, grounded argument presented for it, and no rebuttals or undercuts presented.
     • Accepted: at least one consistent, grounded argument presented for it, and any rebuttals or undercuts have been attacked with counter-arguments.
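
As a rough sketch of this label dictionary as a function, assume each argument for the claim is summarised by a (grounded, consistent) flag pair, and each rebuttal or undercut by a flag saying whether it has itself been counter-attacked; where a claim with no attacks vacuously satisfies both the Probable and Accepted definitions, the sketch returns Probable. These encoding choices are assumptions for illustration.

```python
def claim_label(args_for, attacks):
    """args_for: one (grounded, consistent) flag pair per argument for the claim.
    attacks: one flag per rebuttal/undercut presented, True if that attack
    has itself been attacked by a counter-argument."""
    if any(g and c for g, c in args_for):
        if not attacks:
            return "Probable"      # consistent grounded argument, no attacks yet
        if all(attacks):
            return "Accepted"      # every attack has been counter-attacked
        return "Plausible"         # some rebuttal or undercut still stands
    if any(g for g, _ in args_for):
        return "Supported"         # grounded argument, but not consistent
    return "Open"                  # nothing presented for the claim yet

print(claim_label([], []))                          # Open
print(claim_label([(True, True)], [True, False]))   # Plausible
print(claim_label([(True, True)], [True, True]))    # Accepted
```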

19. Debating experimental tests of claims
   • We also permit debate on:
     • The validity of experiments to test scientific claims.
     • The results of valid experiments.
   • An experimental test of a claim is:
     • Open: no evidence either way.
     • Invalid test: the scientific experiment is not accepted by the participants as a valid test of the claim.
     • Inconclusive test: the test is accepted as valid, but the results are not accepted as statistically significant support for the claim or against it.
     • Disconfirming instance: the test is accepted as evidence against the claim.
     • Confirming instance: the test is accepted as evidence for the claim.

20. Experimental status of claims
   • Claims are then assigned labels according to the extent that debate in the Agora accepts experimental evidence for and against them. A claim is:
     • Untested
     • Inconclusive
     • Refuted
     • Confirmed.
   • Experimental evidence in favour of a claim can be presented as an argument for the claim.
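
The slides leave the aggregation rule implicit, so the sketch below assumes one plausible reading: a claim stays Untested until some test is accepted as evidence or found inconclusive, is Confirmed or Refuted when all accepted evidence points one way, and is Inconclusive otherwise. This mapping is an assumption, not the Agora's stated rule.

```python
def experimental_status(test_outcomes):
    """test_outcomes: one label per experimental test, from slide 19:
    'open', 'invalid test', 'inconclusive test',
    'disconfirming instance', 'confirming instance'."""
    confirming = test_outcomes.count("confirming instance")
    disconfirming = test_outcomes.count("disconfirming instance")
    if confirming and not disconfirming:
        return "Confirmed"
    if disconfirming and not confirming:
        return "Refuted"
    if confirming and disconfirming:
        return "Inconclusive"      # accepted evidence points both ways
    if "inconclusive test" in test_outcomes:
        return "Inconclusive"      # valid tests, but no significant result
    return "Untested"              # only open or invalid tests so far
```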

21. Inference from the Agora
   • We define a claim as (defeasibly) true at time t if and only if it is Accepted in the Agora at time t. Otherwise, it is not (defeasibly) true at time t.
   • This notion of “truth” depends on the opinions of the participants in the Agora, which may change over time.
   • As more evidence is obtained and further arguments presented to the Agora, the truth status of a claim may change. Such changes may be non-monotonic.
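
A tiny illustration of this notion of truth and its non-monotonic revision; the label history used here is hypothetical.

```python
def is_defeasibly_true(label: str) -> bool:
    """Slide 21: a claim is (defeasibly) true iff its Agora label is Accepted."""
    return label == "Accepted"

# Hypothetical label history for a claim as evidence and arguments arrive:
history = ["Open", "Probable", "Accepted", "Plausible", "Accepted"]
print([is_defeasibly_true(label) for label in history])
# [False, False, True, False, True] -- truth is revised non-monotonically
```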

22. Formal properties of the Agora
   • The Agora dialogue-game rules comply with:
     • Alexy’s discourse rules
     • 15 of Hitchcock’s 18 principles.
   • Acceptability of claims is a game-theoretic semantics (Hintikka 1968):
     • “Truth” of a proposition depends on a participant in the Agora having a strategy to defeat any opponent in the dialogue-game associated with the proposition.
   • Inference from finite snapshots to the long run is well-founded:
     • We can place probabilistic bounds on the possibility of errors of inference from finite snapshots to values at infinity.
     • This is analogous to the Neyman-Pearson (1928) theory of statistical inference.

23. Inference from snapshots to infinite status (diagram, with apologies to Jackson Pollock)
   • The Agora debate unfolds over time, with snapshots taken at successive points.
   • Across snapshots, the status of claim P moves among Open, Probable, Accepted and Plausible.

24. Theorem: Stability of labels in the absence of new information.
   Let P be a claim. Suppose that:
   • A(P) is a consistent argument for P such that all rebuttals and undercuts against A(P) are themselves attacked by other arguments,
   • All arguments pertaining to P using the initial information and inference rules are eventually articulated by participants within the Agora, and
   • No new information concerning P is received by participants following commencement.
   Then the uncertainty label for P converges to “Accepted” as time goes to infinity.

25. Key Theorem: The probability of inference errors is bounded.
   Consider a claim P. Suppose that:
   • The uncertainty label for P converges to a limit at infinity,
   • A snapshot is taken at a time t after all relevant arguments related to P have been presented,
   • The uncertainty label of P at time t is “Accepted”, and
   • The probability of new information relevant to P arising after time t is less than ε, for some 0 < ε < 1.
   Then the probability that the uncertainty label for claim P at infinity is also “Accepted” is at least 1 - ε.
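
   For instance, under the theorem’s assumptions, if the probability of new information relevant to P arising after time t is below ε = 0.05, then the snapshot label “Accepted” at time t agrees with the label at infinity with probability at least 1 - ε = 0.95.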

26. Example
   • Assumptions:
     • K1: The chemical X is produced by the human body naturally (it is endogenous).
     • K2: X is endogenous in rats.
     • K3: An endogenous chemical is not carcinogenic.
     • K4: Bioassays of X on rats show significant carcinogenic effects.
   • Rules of inference:
     • R1 (And-Introduction): from P and Q, infer (P & Q).
     • R2 (Modus Ponens): from P and (P implies Q), infer Q.
     • R3: if a chemical is carcinogenic in an animal species, infer that it is also carcinogenic in humans.

27. Example (continued): a dialogue concerning the statement P = “X is carcinogenic to humans”
   • Snapshot status of claim P: Open
   • assert(Participant 1: (P, confirmed))
   • query(Participant 2: assert(Participant 1: (P, confirmed)))
   • show_arg(Participant 1: (K4, R3, P, (Confirmed, Valid, Confirmed)))
   • Snapshot status of claim P: Accepted
   • contest(Participant 2: assert(Participant 1: (P, confirmed)))
   • query(Participant 3: contest(Participant 2: assert(Participant 1: (P, confirmed))))
   • propose(Participant 2: (not-P, Plausible))
   • query(Participant 1: propose(Participant 2: (not-P, Plausible)))
   • show_arg(Participant 2: ((K1, K3), R2, not-P, (Confirmed, Probable, Valid, Plausible)))
   • Snapshot status of claim P: Plausible
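
A minimal sketch replaying this dialogue as data, showing how the snapshot status of P evolves; the per-move encoding is an illustrative assumption, and only the statuses marked on the slide are asserted (None means the slide records no snapshot at that move).

```python
moves = [
    ("Participant 1", "assert",   "(P, confirmed)",              None),
    ("Participant 2", "query",    "assert(P, confirmed)",        None),
    ("Participant 1", "show_arg", "(K4, R3) => P",               "Accepted"),
    ("Participant 2", "contest",  "assert(P, confirmed)",        None),
    ("Participant 3", "query",    "the contestation by P2",      None),
    ("Participant 2", "propose",  "(not-P, plausible)",          None),
    ("Participant 1", "query",    "propose(not-P, plausible)",   None),
    ("Participant 2", "show_arg", "((K1, K3), R2) => not-P",     "Plausible"),
]
status = "Open"                       # initial snapshot status of claim P
for speaker, move, content, snapshot in moves:
    status = snapshot or status       # carry the last marked snapshot forward
    print(f"{speaker:13s} {move:9s} {content:28s} status of P: {status}")
```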

28. What’s next
   • A model of a deliberation dialogue:
     • Dialogues about what action(s) to take.
     • We have proposed a model based on Wohlrapp’s (1998) retroflexive argumentation, a model of non-deductive inference (joint work with David Hitchcock).
   • Locutions specific to the regulatory domain:
     • We have proposed a first set using Habermas’ (1981) Theory of Communicative Action.
   • A means to combine different types of dialogue:
     • We have proposed a formalism using Parikh’s (1985) Game Logic, a version of Dynamic Modal Logic (the modal logic of processes).
   • A qualitative decision theory:
     • Will draw on Fox and Parsons (1998).

29. Other formal properties under exploration
   • Can we automate these dialogues?
   • Will automated dialogues ever terminate? Under what circumstances? After how many moves? (Computational complexity.)
   • When are two dialogues the same?
   • How do we assess the quality of a dialogue system?
   • How sensitive is the framework to changes in the game rules?

30. Thanks to
   • EPSRC:
     • Grant GR/L84117: Qualitative Decision Theory
     • Grant GR/N35441/01: Symposium on Argument and Computation
     • PhD studentship.
   • European Union Information Society Technologies Programme (IST):
     • Sustainable Lifecycles in Information Ecosystems (SLIE) (IST-1999-10948).
   • Trevor Bench-Capon, Computer Science Dept, University of Liverpool.
   • John Fox, Advanced Computation Laboratory, Imperial Cancer Research Fund, London.
   • David Hitchcock, Philosophy Dept, McMaster University, Hamilton, Ontario.
   • Anonymous referees (UAI, GTDT, AMAI).
