This lecture provides an overview of structured argumentation, including how arguments are constructed, attacked, and defeated. It also discusses the application of argumentation in the context of AI and law.
Spring School on Argumentation in AI & Law, Day 1 – Lecture 2: Structured Argumentation. Henry Prakken, Guangzhou (China), 10 April 2018
Overview • Structured argumentation: • Arguments • Attack • Defeat
From abstract to structured argumentation [abstract graph with arguments A, B, C, D, E] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n-person games. Artificial Intelligence, 77:321–357, 1995.
From abstract to structured argumentation • Dung defines the status of arguments given a set of arguments and a defeat relation • But we also need a theory of: • How arguments can be constructed • How arguments can be attacked (and defeated)
Attack on conclusion [diagram]: Pro — 'No safety instructions', so 'Employer breached duty of care'; together with 'Employee had work-related injury', by Rule 1: 'Employer is liable'. Con — 'Employee did not secure stairs', so 'Employee was careless'; by Rule 2: 'Employer is not liable'. The con argument attacks the pro argument on its conclusion.
Attack on premise [diagram]: the same two arguments, now with a further attacker — 'Injury caused by poor physical condition', so 'Employee had no work-related injury', which attacks the premise 'Employee had work-related injury' of the pro argument.
Attack on inference [diagram]: a testimony step is added ('Colleague says so'); the inference from the colleague's testimony is attacked by 'Colleague is not credible', which is in turn supported by 'C is friend of claimant'. (Also shown: 'Injury caused by poor physical condition' supporting 'Employee had no work-related injury'.)
Indirect defence [diagram]: 'Camera evidence' supports 'Employee secured the stairs', which attacks 'Employee did not secure stairs' and thereby indirectly defends 'Employer is liable'.
[The full example diagram again, now without highlighting: the liability argument, its counterargument, the premise attack backed by testimony, the credibility attack, and the camera-evidence attack.]
From abstract to structured argumentation [abstract graph with arguments A, B, C, D, E] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n-person games. Artificial Intelligence, 77:321–357, 1995.
[Diagram: the same liability example, now with the five structured arguments labelled A, B, C, D, E, showing how they instantiate the abstract framework of Dung (1995).]
Two accounts of the fallibility of arguments [photos: Nicholas Rescher, Tony Hunter, Robert Kowalski, John Pollock] • Plausible reasoning: all fallibility located in the premises • Assumption-based argumentation (Kowalski, Dung, Toni, …) • Classical argumentation (Cayrol, Besnard, Hunter, …) • Defeasible reasoning: all fallibility located in the defeasible inferences • Pollock, Loui, Vreeswijk, Prakken & Sartor, … • ASPIC+ combines these accounts
ASPIC+ framework: overview
• Argument structure: inference graphs where
• Nodes are wffs of a logical language L
• Links are applications of inference rules
• Rs = strict rules (φ1, ..., φn → φ); Rd = defeasible rules (φ1, ..., φn ⇒ φ)
• Reasoning starts from a knowledge base K ⊆ L
• Defeat: attack on conclusion, premise or inference, + preferences
• Argument acceptability based on Dung (1995)
Argumentation systems (with symmetric negation)
An argumentation system is a triple AS = (L, R, n) where:
• L is a logical language with negation (¬)
• R = Rs ∪ Rd is a set of strict (φ1, …, φn → φ) and defeasible (φ1, …, φn ⇒ φ) inference rules
• n: Rd → L is a naming convention for defeasible rules
Notation: −φ = ¬φ if φ does not start with a negation; −φ = ψ if φ is of the form ¬ψ
Knowledge bases
A knowledge base in AS = (L, R, n) is a set K ⊆ L, partitioned into K = Kn ∪ Kp with:
• Kn = necessary premises
• Kp = ordinary premises
Argumentation theories An argumentation theory is a pair AT = (AS, K) where AS is an argumentation system and K a knowledge base in AS.
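The three definitions above map naturally onto plain data structures. Below is a minimal Python sketch; all class and field names are my own assumptions for illustration, not an official ASPIC+ API.

```python
# Minimal sketch of AS = (L, R, n) and K = Kn ∪ Kp; names are illustrative assumptions.
from dataclasses import dataclass
from typing import FrozenSet, Optional, Tuple

@dataclass(frozen=True)
class Rule:
    antecedents: Tuple[str, ...]   # phi_1, ..., phi_n
    consequent: str                # phi
    defeasible: bool               # True: phi_1,...,phi_n => phi; False: strict ->
    name: Optional[str] = None     # n(r), only meaningful for defeasible rules

@dataclass(frozen=True)
class ArgumentationTheory:
    rules: FrozenSet[Rule]         # R = Rs ∪ Rd (strict and defeasible together)
    Kn: FrozenSet[str]             # necessary premises
    Kp: FrozenSet[str]             # ordinary premises

def neg(phi: str) -> str:
    """The -phi notation from the slides: -phi = ¬phi, and -(¬phi) = phi ('~' encodes ¬)."""
    return phi[1:] if phi.startswith("~") else "~" + phi
```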
Structure of arguments [photo: Gerard Vreeswijk]
An argument A on the basis of an argumentation theory is:
• φ if φ ∈ K, with Prem(A) = {φ}, Conc(A) = φ, Sub(A) = {φ}, DefRules(A) = ∅
• A1, ..., An → ψ if A1, ..., An are arguments such that there is a strict inference rule Conc(A1), ..., Conc(An) → ψ, with Prem(A) = Prem(A1) ∪ ... ∪ Prem(An), Conc(A) = ψ, Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A}, DefRules(A) = DefRules(A1) ∪ ... ∪ DefRules(An), TopRule(A) = Conc(A1), ..., Conc(An) → ψ
• A1, ..., An ⇒ ψ if A1, ..., An are arguments such that there is a defeasible inference rule Conc(A1), ..., Conc(An) ⇒ ψ, with Prem(A) = Prem(A1) ∪ ... ∪ Prem(An), Conc(A) = ψ, Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A}, DefRules(A) = DefRules(A1) ∪ ... ∪ DefRules(An) ∪ {Conc(A1), ..., Conc(An) ⇒ ψ}, TopRule(A) = Conc(A1), ..., Conc(An) ⇒ ψ
Example:
Rs: p, q → s; u, v → w
Rd: p ⇒ t; s, r, t ⇒ v
Kn = {q}, Kp = {p, r, u}
A1 = p, A2 = q, A3 = r, A4 = u
A5 = A1 ⇒ t
A6 = A1, A2 → s
A7 = A5, A3, A6 ⇒ v
A8 = A7, A4 → w
[Diagram: the tree of argument A8 and its sub-arguments.]
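To make the definition of arguments concrete, here is a small self-contained Python sketch (class and field names are assumptions, not the lecture's own code) that builds the example arguments A1–A8 and computes their premises and defeasible rules.

```python
# Hedged sketch: the running example, with Prem and DefRules computed recursively.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Rule:
    antecedents: Tuple[str, ...]
    consequent: str
    defeasible: bool

@dataclass(frozen=True)
class Arg:
    conc: str
    subs: Tuple["Arg", ...] = ()      # immediate sub-arguments
    top_rule: Optional[Rule] = None   # None for premise arguments

    def premises(self) -> set:
        if not self.subs:                      # base case: an element of K
            return {self.conc}
        prem = set()
        for s in self.subs:
            prem |= s.premises()
        return prem

    def def_rules(self) -> set:
        rules = {self.top_rule} if (self.top_rule and self.top_rule.defeasible) else set()
        for s in self.subs:
            rules |= s.def_rules()
        return rules

# Rs: p,q -> s and u,v -> w;  Rd: p => t and s,r,t => v;  Kn = {q}, Kp = {p, r, u}
r_s1 = Rule(("p", "q"), "s", defeasible=False)
r_s2 = Rule(("u", "v"), "w", defeasible=False)
r_d1 = Rule(("p",), "t", defeasible=True)
r_d2 = Rule(("s", "r", "t"), "v", defeasible=True)

A1, A2, A3, A4 = Arg("p"), Arg("q"), Arg("r"), Arg("u")
A5 = Arg("t", (A1,), r_d1)            # A1 => t
A6 = Arg("s", (A1, A2), r_s1)         # A1, A2 -> s
A7 = Arg("v", (A5, A3, A6), r_d2)     # A5, A3, A6 => v
A8 = Arg("w", (A7, A4), r_s2)         # A7, A4 -> w

print(A8.premises())                   # {'p', 'q', 'r', 'u'} (in some order)
print(len(A8.def_rules()))             # 2: A8 uses both defeasible rules
```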
Types of arguments
An argument A is:
• Strict if DefRules(A) = ∅
• Defeasible if not strict
• Firm if Prem(A) ⊆ Kn
• Plausible if not firm
The same example with the types of arguments applied: Rs: p, q → s; u, v → w; Rd: p ⇒ t; s, r, t ⇒ v; Kn = {q}, Kp = {p, r, u}; arguments A1–A8 as before. An argument A is strict if DefRules(A) = ∅, defeasible if not strict, firm if Prem(A) ⊆ Kn, plausible if not firm. For instance, A2 = q is strict and firm, whereas A8 is defeasible (it uses both defeasible rules) and plausible (its premises p, r, u are ordinary).
Attack
• A undermines B (on φ) if Conc(A) = −φ for some φ ∈ Prem(B) \ Kn
• A rebuts B (on B′) if Conc(A) = −Conc(B′) for some B′ ∈ Sub(B) with a defeasible top rule
• A undercuts B (on B′) if Conc(A) = −n(r) for some B′ ∈ Sub(B) with defeasible top rule r
• A attacks B iff A undermines or rebuts or undercuts B.
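A compact, hedged sketch of the three attack forms in Python. The dict-based argument representation (keys 'conc', 'prem', 'subs', 'def_top', 'rule_name') is an assumption for illustration only; 'subs' is meant to list all of Sub(B), including B itself.

```python
# Sketch of undermining, rebutting and undercutting attack; the data format is assumed.

def neg(phi):
    """-phi notation: -phi = ¬phi and -(¬phi) = phi (negation written '~')."""
    return phi[1:] if phi.startswith("~") else "~" + phi

def undermines(A, B, Kn):
    # A's conclusion contradicts an ordinary premise of B
    return any(A["conc"] == neg(p) for p in B["prem"] - Kn)

def rebuts(A, B):
    # A's conclusion contradicts the conclusion of a sub-argument with a defeasible top rule
    return any(A["conc"] == neg(Bp["conc"]) for Bp in B["subs"] if Bp["def_top"])

def undercuts(A, B):
    # A's conclusion denies the name n(r) of a defeasible rule applied in B
    return any(Bp["def_top"] and Bp["rule_name"] and A["conc"] == neg(Bp["rule_name"])
               for Bp in B["subs"])

def attacks(A, B, Kn):
    return undermines(A, B, Kn) or rebuts(A, B) or undercuts(A, B)
```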
The running example again, for reference: Rs: p, q → s; u, v → w; Rd: p ⇒ t; s, r, t ⇒ v; Kn = {q}, Kp = {p, r, u}; arguments A1–A8 as before.
Structured argumentation frameworks
Let AT = (AS, K) be an argumentation theory. A structured argumentation framework (SAF) defined by AT is a triple (Args, C, ≼) where:
• Args = {A | A is an argument on the basis of AT}
• C is the attack relation on Args
• ≼ is an ordering on Args (A ≺ B iff A ≼ B and not B ≼ A)
A c-SAF is a SAF in which all arguments have consistent premises.
Defeat
• A undermines B (on φ) if Conc(A) = −φ for some φ ∈ Prem(B) \ Kn
• A rebuts B (on B′) if Conc(A) = −Conc(B′) for some B′ ∈ Sub(B) with a defeasible top rule
• A undercuts B (on B′) if Conc(A) = −n(r) for some B′ ∈ Sub(B) with defeasible top rule r
A defeats B iff for some B′:
• A undermines or rebuts B on B′ and not A ≺ B′; or
• A undercuts B on B′
(For undermining, B′ is the attacked premise φ itself, so the condition reads: not A ≺ φ.)
Note: direct vs. subargument attack/defeat; preference-dependent vs. preference-independent attacks.
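A self-contained sketch of the defeat check under these definitions. The dict format and the prec() ordering are assumptions: 'subs' is Sub(B) including B itself, premise arguments carry 'premise': True and 'in_Kn' marking necessary premises, and prec(X, Y) stands for the strict ordering X ≺ Y.

```python
# Hedged sketch of ASPIC+-style defeat; argument encoding and prec() are assumed.

def neg(phi):
    return phi[1:] if phi.startswith("~") else "~" + phi

def defeats(A, B, prec):
    """A defeats B iff it undercuts some B', or undermines/rebuts some B' it is not worse than."""
    for Bp in B["subs"]:
        # undercutting attack succeeds regardless of preferences
        if Bp.get("def_top") and Bp.get("rule_name") and A["conc"] == neg(Bp["rule_name"]):
            return True
        # rebutting attack on B' succeeds unless A is strictly worse than B'
        if Bp.get("def_top") and A["conc"] == neg(Bp["conc"]) and not prec(A, Bp):
            return True
        # undermining attack on an ordinary premise, also preference-dependent
        if Bp.get("premise") and not Bp.get("in_Kn") and A["conc"] == neg(Bp["conc"]) and not prec(A, Bp):
            return True
    return False
```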
Abstract argumentation frameworks corresponding to SAFs
An abstract argumentation framework corresponding to a (c-)SAF = (Args, C, ≼) is a pair (Args, D), where D is the defeat relation on Args defined by C and ≼.
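Once the abstract framework (Args, D) is in hand, argument acceptability can be computed with Dung's semantics. Below is a minimal sketch of the grounded extension (function and variable names are my own), followed by a toy framework that is not the lecture's example.

```python
# Grounded extension of an abstract framework (args, defeat): least fixed point of
# F(S) = { a : every defeater of a is itself defeated by some member of S }.

def grounded_extension(args, defeat):
    attackers = {a: {x for (x, y) in defeat if y == a} for a in args}
    S = set()
    while True:
        new = {a for a in args
               if all(any((s, b) in defeat for s in S) for b in attackers[a])}
        if new == S:
            return S
        S = new

# Toy illustration (not the liability example): C defeats B, B defeats A.
args = {"A", "B", "C"}
defeat = {("B", "A"), ("C", "B")}
print(grounded_extension(args, defeat))   # {'A', 'C'}: C is unattacked and defends A
```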
Argument preference
• In general its origin is undefined
• General constraint: A ≺ B if B is strict-and-firm and A is defeasible or plausible
• Sometimes defined in terms of partial preorders ≤ (on Rd) and ≤′ (on Kp)
• Origins of ≤ and ≤′: domain-specific!
• Some possible criteria: probabilistic strength, legal priority rules, importance of legal or moral values, …
Which inference rules should we choose? A tradition in AI: the inference rules encode domain-specific knowledge A philosophically more well-founded approach: the inference rules express general patterns of reasoning Strict rules (+ axioms): a sound and complete proof theory of a ‘standard’ logic for L Defeasible rules: argument schemes.
Domain-specific vs. inference-general inference rules
• Domain-specific: d1: Bird ⇒ Flies; s1: Penguin → Bird; Penguin ∈ K
• Inference-general: Bird ⊃ Flies ∈ K; Penguin ⊃ Bird ∈ K; Penguin ∈ K; Rd = {φ, φ ⊃ ψ ⇒ ψ}; Rs includes {S → p | S ⊢PL p and S is finite}
[Diagrams: the corresponding argument trees for Flies under each encoding.]
Deriving the strict rules from a monotonic logic
For any logic L with a (monotonic) consequence notion ⊢L that is compact and satisfies Cut, define: S → p ∈ Rs iff S is finite and S ⊢L p
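For instance, with classical propositional logic as the monotonic base logic, membership of a strict rule in Rs amounts to an entailment check. Here is a tiny brute-force sketch using truth tables; the formula encoding and all names are assumptions for illustration.

```python
# Sketch: S -> p is a strict rule iff S |-PL p, checked by brute force over valuations.
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment: every valuation satisfying all premises satisfies the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(f(v) for f in premises) and not conclusion(v):
            return False
    return True

# Formulas encoded as functions from a valuation to a truth value (an assumed encoding).
bird = lambda v: v["bird"]
flies = lambda v: v["flies"]
bird_implies_flies = lambda v: (not v["bird"]) or v["flies"]

# {Bird, Bird ⊃ Flies} |-PL Flies, so "Bird, Bird ⊃ Flies -> Flies" belongs to Rs.
print(entails([bird, bird_implies_flies], flies, ["bird", "flies"]))   # True
```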
Argument(ation) schemes: general form [photo: Douglas Walton]
Premise 1, …, Premise n
Therefore (presumably), conclusion
But also: critical questions
Logical vs. dialogical aspects of argument schemes • Some critical questions ask "why this premise?" • Other critical questions ask "is there no exception?" • But the burden of proof is on the respondent to show that there are exceptions! • In a purely logical setting one cannot ask such questions; one can only state counterarguments
Argument schemes in ASPIC • Argument schemes are defeasible inference rules • Critical questions are pointers to counterarguments • Some point to undermining attacks • Some point to rebutting attacks • Some point to undercutting attacks
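As a concrete illustration of this slide: the expert-testimony scheme (cf. "what experts say is usually true" on the next slide) encoded as a named defeasible rule, with the critical question "is the expert biased?" pointing to an undercutter whose conclusion denies the rule's name n(r). Everything below is a hypothetical encoding of mine, not the lecture's own formalisation.

```python
# Hypothetical encoding of an argument scheme as a named defeasible rule, with a
# critical question realised as an undercutting counterargument (all strings assumed).

def neg(phi):
    return phi[1:] if phi.startswith("~") else "~" + phi

expert_scheme = {
    "name": "expert_testimony",                    # n(r): the rule's name in L
    "antecedents": ("E is an expert on domain D",
                    "E asserts that P",
                    "P is within domain D"),
    "consequent": "P",
    "defeasible": True,
}

# The critical question "Is E biased?" does not block the inference by itself; it points
# to a counterargument whose conclusion is -n(r), i.e. an undercutter of the scheme.
undercutter_conclusion = neg(expert_scheme["name"])   # '~expert_testimony'
print(undercutter_conclusion)
```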
Reasoning with defeasible generalisations
Scheme: P; if P then normally/usually/typically Q; so (presumably), Q
But defaults can have exceptions, and there can be conflicting defaults.
Examples:
• What experts say is usually true
• People with political ambitions are usually not objective about security
• People with names typical of country C usually have nationality C
• People who flee from a crime scene when the police arrive are normally involved in the crime
• Chinese tea is usually very good
Legal rule application
IF conditions THEN legal consequence
conditions
So, legal consequence
Legal rule application:critical questions • Is the rule valid? • Is the rule applicable to this case? • Must the rule be applied? • Is there a statutory exception? • Does applying the rule violate its purpose? • Does applying the rule have bad consequences? • Is there a principle that overrides the rule?
Analogy
Critical questions: Are there also relevant differences between the cases? Are there conflicting precedents?
Scheme: Relevantly similar cases should be decided in the same way; this case is relevantly similar to that precedent; therefore (presumably), this case should have the same outcome as the precedent.
Arguments from consequences
Critical questions: Does A also have bad (good) consequences? Are there other ways to bring about G? …
Scheme: Action A causes G; G is good (bad); therefore (presumably), A should (not) be done.
Example (arguments pro and con an action) [diagram]: Pro — 'Making spam a criminal offence reduces spam' and 'Reduction of spam is good', so 'We should make spam a criminal offence'. Con — 'Making spam a criminal offence increases workload of police and judiciary' and 'Increased workload of police and judiciary is bad', so 'We should not make spam a criminal offence'.
Example (arguments pro alternative actions) [diagram]: 'Making spam a criminal offence reduces spam' and 'Reduction of spam is good', so 'We should make spam a criminal offence'; 'Making spam civilly unlawful reduces spam' and 'Reduction of spam is good', so 'We should make spam civilly unlawful'.
Refinement: promoting or demoting legal/societal values
Critical questions: Are there other ways to cause G? Does A also cause something else that promotes or demotes other values? …
Scheme: Action A causes G; G promotes (demotes) legal/societal value V; therefore (presumably), A should (not) be done.
Example (arguments pro and con an action) [diagram]: Pro — 'Saving DNA of all citizens leads to solving more crimes' and 'Solving more crimes promotes security', so 'We should save DNA of all citizens'. Con — 'Saving DNA of all citizens makes more private data publicly accessible' and 'Making more private data publicly available demotes privacy', so 'We should not save DNA of all citizens'.
Example (arguments pro alternative actions) [diagram]: 'Saving DNA of all citizens leads to solving more crimes' and 'Solving more crimes promotes security', so 'We should save DNA of all citizens'; 'Having more police leads to solving more crimes' and 'Solving more crimes promotes security', so 'We should have more police'.