Speakers: Surinderjeet Singh, Ambuj Pushkar Ojha, Ilyeech Kishore Rapelli, Bhoor S Raj Meena
Human decision making is a non-trivial activity, and capturing it would be an AI marvel • Every human being has a notion of having made a decision • Before making any decision, people reason • We will look at human decision making with the intention of mimicking it through AI • First, let's look at some of the varying views of AI
Science of designing and building computer-based artifacts which can perform various human tasks • This view has few links with decision making • Any decision has previously been made by the designer of the system • All in all, the concept of 'decision' is in conflict with the idea of a program
Science aimed at mimicking human beings • Needs to incorporate human decision-making skills • Human beings have preferences and make subjective decisions • AI becomes a subjective science rather than a generic science
[Figure: a decision problem is the passage from the current state of the world to a more desirable (future) state.]
Before making a decision, the subject recognizes the current state, which contains information about the past and the present • Keeping his perception of the current state in mind, the subject tries to identify it with reference to his experience (recorded states) • The first phase of decision is thus to find one or more recorded states close to the perceived current state. This is called 'pattern matching' or 'diagnosis'
Let ‘E’ (expectations) be a representation of the future events uncontrolled and uninfluenced by the subject • Let ‘A’ denote the set of all possible actions • FS(Si, A, E) defines the set of all future states attainable from the current state Si • Depending on the various expectations, many states are attainable, with different probabilities • The preferred outcome, among all the outcomes OC, defines the action to be chosen (see the sketch below)
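A minimal sketch of this model in Python, assuming expectations are given as probabilities over uncontrolled events, a transition function plays the role of FS(Si, A, E), and preferences are encoded as a utility over outcomes (all names and numbers are illustrative, not from the paper):

```python
def choose_action(current_state, actions, expectations, transition, utility):
    """Pick the action whose expected utility over future states is highest.

    expectations: dict mapping event -> probability (sums to 1)
    transition(state, action, event) -> future state (the set FS(Si, A, E))
    utility(state) -> subjective preference for that outcome
    """
    best_action, best_eu = None, float("-inf")
    for a in actions:
        eu = sum(p * utility(transition(current_state, a, e))
                 for e, p in expectations.items())
        if eu > best_eu:
            best_action, best_eu = a, eu
    return best_action

# Toy usage: two actions, two uncontrolled events.
expectations = {"demand_high": 0.6, "demand_low": 0.4}
transition = lambda s, a, e: (s, a, e)          # future state = full history here
outcome_utility = {("start", "expand", "demand_high"): 10,
                   ("start", "expand", "demand_low"): -5,
                   ("start", "hold", "demand_high"): 3,
                   ("start", "hold", "demand_low"): 2}
print(choose_action("start", ["expand", "hold"], expectations,
                    transition, lambda s: outcome_utility.get(s, 0)))
```

With these toy numbers, 'expand' has expected utility 4.0 against 2.6 for 'hold', so it is the chosen action.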
[Figure: the full decision process. The perceived current state, encompassing past and present, is matched against the file of recorded states (diagnosis); look-ahead combines the actions A with (subjective) expectations E to produce recognized future states; (subjective) preferences P then select the chosen action.]
The current state may not be known with certainty, and may not be unique • Future states may not be known with certainty • The action (or alternative) set is not given and can change during the process of reasoning • Many real decision makers study just a small subset of the possible alternatives • The states of nature and their consequences are not easy to determine • The decision process is not linear; many backtracks can occur • The distinction between action and outcome may become vague
We may have very reactive systems, where each action is almost immediately followed by a modification of the state and a new decision • In such systems, the role of the environment is very weak • On the basis of the granularity of the time difference between consecutive decisions, there are two types of decisions: 'almost continuous decision' and 'discrete decision' • The significance of an outcome for a subject frequently involves many sub-actions leading to the outcome • A sequence of sub-actions intertwined with events gives a scenario
Diagnosis is the problem of recognizing the current state as accurately as possible. • The current state is subject to evaluation and comparison. • Classification is a tool for decision making. The method can be applied if the number of actions is finite and a mapping phi from states to actions exists. The decision is then phi(Si) = Aj, based on the attributes characterizing Si; the preimages phi-1(Aj) realize a partition of S. (A sketch follows.)
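A minimal sketch of classification as decision, with an illustrative phi inspired by the process-control examples on the next slide (kiln temperature, oil flow); the attribute names and thresholds are assumptions:

```python
# Classification as decision: phi maps the attributes characterizing a state
# to one of finitely many actions; grouping states by their image under phi
# realizes the partition phi^-1(Aj) of the state set S.

def phi(state):
    """Classify a state (a dict of attributes) into an action."""
    if state["temperature"] > 90:
        return "decrease_kiln_temperature"
    if state["oil_flow"] < 10:
        return "increase_oil_flow"
    return "no_action"

states = [{"temperature": 95, "oil_flow": 12},
          {"temperature": 80, "oil_flow": 5},
          {"temperature": 70, "oil_flow": 20}]

partition = {}
for s in states:
    partition.setdefault(phi(s), []).append(s)   # the preimages phi^-1(Aj)
print(partition)
```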
An expert system is essentially a diagnostic machine. Here we consider the semantics of its working. • The input facts describe a given situation (the current state), and the output is either a diagnosis (e.g., the patient is suffering from a certain illness (MYCIN), or the situation of the client is quite good (Risk Manager)) or a recommendation or an action (e.g., increase the flow of oil, or decrease the temperature of the kiln)
[Figure: simplified decision process. The mapping phi takes the current state, via diagnosis and preferences, directly to the chosen action and outcome set.] As shown in the figure, expert systems shunt the look-ahead step.
Rough Set Theory: the Rough Sets methodology provides definitions and methods for finding which attributes separate one class or classification from another • Since inconsistencies are allowed and membership in a set does not have to be absolute, the potential for handling noise gracefully is large
Example: • Consider a training table of proteins, each described by the condition attributes % Gln, % Pro and +Chg and labelled with its structure (the table itself is not reproduced in this transcript)
Using this training data, we want to use rough sets to derive rules that will enable us to determine the structure of a novel protein, given the attributes describing that protein. • In rough-set terminology, the structure is the decision attribute, and % Gln, % Pro and +Chg are the condition attributes.
Note that equivalence classes can contain ids that have different decision attributes (i.e. structures). • The next step is to construct a discernibility matrix (the matrix itself is not reproduced here):
The axes are the equivalence classes, and each cell contains the condition attributes that differentiate between those classes. • The relative discernibility functions here are: • f(E1) = (% Gln) AND (% Pro OR +Chg) • f(E2) = (% Gln) AND (% Gln OR % Pro OR +Chg) • f(E3) = (% Pro OR +Chg) AND (% Gln OR % Pro OR +Chg)
The relative reducts are calculated by taking the relative discernibility functions and removing superfluous attributes: • RED(E1) = (% Gln AND % Pro) OR (% Gln AND +Chg) • RED(E2) = % Gln • RED(E3) = % Pro OR +Chg • To derive rules from our reducts, we bind the condition attribute values of the equivalence class from which each reduct originated to the corresponding attributes of the reduct (the simplification step is sketched below)
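The 'removing superfluous attributes' step is ordinary Boolean simplification (absorption and distribution). A sketch using sympy to check the reducts above (sympy is my choice of tool, not part of the original material):

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, to_dnf

gln, pro, chg = symbols("gln pro chg")   # % Gln, % Pro, +Chg

# Relative discernibility functions from the example above.
f = {"RED(E1)": And(gln, Or(pro, chg)),
     "RED(E2)": And(gln, Or(gln, pro, chg)),
     "RED(E3)": And(Or(pro, chg), Or(gln, pro, chg))}

# Simplified disjunctive normal form drops the superfluous attributes,
# yielding the relative reducts.
for name, expr in f.items():
    print(name, "=", to_dnf(expr, simplify=True))
```

This prints (chg & gln) | (gln & pro), gln, and chg | pro, matching RED(E1), RED(E2) and RED(E3).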
Using the rules we have generated, we can determine the structure of novel proteins. • Each type of structure will have: • a lower approximation: the set of proteins which definitely have that structure • an upper approximation: the set of proteins which may possibly have that structure • a boundary region: the set of proteins whose structure cannot be proven either way • Proteins can belong to more than one set.
e.g., all of the proteins making up equivalence class E1 have the condition attributes 12%, 6% and 0.2 for % Gln, % Pro and +Chg respectively. We can feed those values into RED(E1) to derive the relevant rules: • from RED(E1): if Gln = 12 and Pro = 6 => structure = all-a • from RED(E1): if Gln = 12 and +Chg = 0.2 => structure = all-a • from RED(E2): if Gln = 8 => structure = all-b • from RED(E3): if Pro = 2 => structure = (2/3 chance of a+b, 1/3 chance of a/b) • from RED(E3): if +Chg = 0.12 => structure = (2/3 chance of a+b, 1/3 chance of a/b)
Difference between a Rough Set and a Crisp (Normal) Set • A rough set is a tuple <Pl, Pu>, where Pl and Pu are the crisp sets forming the lower and upper approximations of a protein structure • e.g., Pl(all-a) = {E1}, Pu(all-a) = {E1} → accuracy = 1; Pl(a+b) = {}, Pu(a+b) = {E3} → accuracy = 0 • Hence the rough set for the all-a protein structure is <{E1}, {E1}> (a sketch of computing approximations follows)
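A minimal sketch of computing lower/upper approximations and accuracy, with an assumed toy assignment of decision labels to the equivalence classes (chosen to reproduce the E1/E3 figures above):

```python
# Each equivalence class is mapped to the decision labels of its members.
classes = {
    "E1": ["all-a", "all-a"],        # consistent: every member is all-a
    "E2": ["all-b"],
    "E3": ["a+b", "a+b", "a/b"],     # inconsistent labels
}

def lower(concept):
    """Classes whose members ALL have the given structure (Pl)."""
    return {e for e, ls in classes.items() if all(l == concept for l in ls)}

def upper(concept):
    """Classes with at least one member of the given structure (Pu)."""
    return {e for e, ls in classes.items() if concept in ls}

for concept in ["all-a", "a+b"]:
    lo, up = lower(concept), upper(concept)
    acc = len(lo) / len(up) if up else 0.0
    print(concept, "Pl:", lo, "Pu:", up, "accuracy:", acc)
```

This reproduces Pl(all-a) = Pu(all-a) = {E1} with accuracy 1, and Pl(a+b) = {} with Pu(a+b) = {E3} and accuracy 0.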
What are Goals and Plans? • Planning with certainty • STRIPS • Soar • Planning with uncertainty • Non-monotonic logics • Decision-theoretic Planning
The goal is the 'outcome' that the decision maker wants to obtain. • Planning is, given a goal and the current state, finding a sequence of actions (or sub-actions) which leads from the current state to the goal. • We will then enrich the notions of goal and plan.
A goal reduced to a particular outcome does not involve other human notions such as intention and commitment • One way of encoding intention is preferences • Jon Doyle proposes priorities instead, but the distinction between preferences and priorities is not clear in many 'intelligent' systems that have used them
A more complex utility function is needed, capable of dealing with: • pursuing several goals • consumption of resources by the (sub-)goals satisfied • The utility function is simply the weighted sum of the utilities of the various goals, minus a term depending on the resources used by the partial outcomes possibly attained with probabilities (a worked reading is sketched below)
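One plausible reading of that utility function (my notation, not the paper's): sum, over the goals pursued, of weight × probability of attainment × goal utility, minus a cost term in the resources consumed.

```python
def utility(goals):
    """goals: list of (weight, probability, goal_utility, resources_used)."""
    gain = sum(w * p * u for w, p, u, r in goals)
    cost = sum(r for _, _, _, r in goals)     # a simple linear resource term
    return gain - cost

# Two goals pursued at once: a main goal and a secondary sub-goal.
print(utility([(0.7, 0.9, 10.0, 1.5),
               (0.3, 0.5, 6.0, 0.5)]))       # 0.63*10 + 0.15*6 - 2.0 = 5.2
```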
The state consists of two parts: the state of the environment, and the agent's state among all possible mental attitudes • A mental attitude consists of beliefs, desires and intentions, expressed possibly by probabilities, utilities and priorities respectively • If priorities are folded into utilities, and probabilities into expectations, this model is very similar to the decision-theoretic model described previously
Many planning algorithms are based on regressive search • Best-known example: STRIPS (Nilsson) • In STRIPS, a stack of (sub-)goals to be realized is maintained • Sub-goals, as given by the preconditions (LHS) of a rule leading to a goal, are pushed above that goal • This continues until the current state completely matches the preconditions of a sub-goal
Most applications of STRIPS operate in fully observable worlds, e.g. the robot block world • There are no preferences over goals; just one goal is set • Still, regressive search is a useful and very basic mechanism in decision theory, especially for human decision making, since thinking goal-driven is very human (a sketch of goal regression follows)
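A minimal sketch of STRIPS-style goal regression (an illustrative block-world encoding, not Nilsson's original): each operator has preconditions, an add list and a delete list, and the goal set is regressed through operators until the current state satisfies it.

```python
OPS = {
    "pickup(A)":  {"pre": {"ontable(A)", "clear(A)"},
                   "add": {"holding(A)"}, "del": {"ontable(A)", "clear(A)"}},
    "stack(A,B)": {"pre": {"holding(A)", "clear(B)"},
                   "add": {"on(A,B)"}, "del": {"holding(A)", "clear(B)"}},
}

def plan(state, goals, depth=10):
    """Regress the goal set through operators; return a list of action names."""
    if goals <= state:                       # preconditions fully matched
        return []
    if depth == 0:
        return None
    for name, op in OPS.items():
        if op["add"] & goals and not (op["del"] & goals):
            # Regressed goals: drop what the op achieves, require its pre.
            sub = (goals - op["add"]) | op["pre"]
            rest = plan(state, sub, depth - 1)
            if rest is not None:
                return rest + [name]         # sub-plan first, then this op
    return None

print(plan({"ontable(A)", "clear(A)", "clear(B)"}, {"on(A,B)"}))
# -> ['pickup(A)', 'stack(A,B)']
```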
AI proposes two solutions for decision making in dynamic worlds: • planning under uncertainty (or decision-theoretic planning) • reactive systems • Reactive systems can generate meaningful meta-behaviour from very individualistic decisions (e.g. the 'Invisible Hand' in economics)
Two views (as contrasted with ways) of planning under uncertainty: • the first is based on non-monotonic logics • the second is based on decision theory • Doyle and Wellman concluded, based on Arrow's impossibility theorem, that "there is probably no universally acceptable method for rationally resolving conflicts in default reasoning"
A default theory is a pair <D, W>. W is a set of logical formulae, called the background theory, that formalizes the facts known for sure. D is a set of default rules, each of the form PreReq : {Justif} / Conclusion. • The rule is interpreted as: if PreReq is true and {Justif} does not conflict with W, then Conclusion is believed to be true. • e.g. D = {Bird(X) : Flies(X) / Flies(X)}, W = {Bird(Penguin), Bird(Kingfisher), ~Flies(Penguin)} (applied in the sketch below)
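A minimal sketch of applying that default rule (the tuple encoding of predicates is my own assumption):

```python
# Bird(X) : Flies(X) / Flies(X) -- if X is a bird and "X flies" does not
# conflict with the background theory W, believe that X flies.
W = {("Bird", "Penguin"), ("Bird", "Kingfisher"), ("~Flies", "Penguin")}

def apply_flying_default(world):
    beliefs = set(world)
    for pred, x in world:
        if pred == "Bird" and ("~Flies", x) not in world:  # justification check
            beliefs.add(("Flies", x))
    return beliefs

print(apply_flying_default(W))
# Flies(Kingfisher) is believed; Flies(Penguin) is blocked by ~Flies(Penguin).
```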
No voting system can convert the ranked preferences of individuals into a community-wide ranking while also meeting a certain set of reasonable criteria when there are three or more discrete options to choose from. • e.g., one criterion: if C wins an election between A and C, and you introduce a new candidate B, then either C should still win or B should now win. • Preferences: 40 voters: A > B > C; 35 voters: B > C > A; 25 voters: C > A > B • Consider, "what if B wasn't running?" You would have had an election like this: 40 voters: A > C; 60 voters: C > A, so C wins 60-40 • With B in the race, plurality elects A with 40 first-place votes, so introducing B changes the winner from C to A, violating the criterion (checked below)
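A quick check of the arithmetic in this example, tallying the pairwise contests from the three voter blocs:

```python
from itertools import combinations

# 40 voters: A > B > C; 35 voters: B > C > A; 25 voters: C > A > B
ballots = [(40, ["A", "B", "C"]), (35, ["B", "C", "A"]), (25, ["C", "A", "B"])]

def prefer(x, y):
    """Number of voters ranking x above y."""
    return sum(n for n, order in ballots if order.index(x) < order.index(y))

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: {prefer(x, y)}-{prefer(y, x)}")
# C beats A 60-40 head to head, yet with B running, plurality elects A (40).
```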
Doyle and Wellman (1989) noticed that expressing a default rule may be interpreted as expressing a preference between propositions: the subject prefers to believe the conclusion R more than the negation of the justification Q • So, Doyle concludes, by this very similarity between preferences and default-logic rules, rational reasoning cannot be achieved by default logic, for the same reason Arrow's theorem rules out a universally acceptable way to aggregate preferences
Decision Machine • A one-to-one correspondence between the diagnosed current state and an action: a programmed decision machine • Suited to (almost) continuous decisions • But relating the current state directly to an action does not capture all the complexity of human decision • This can lead to improper decisions and undesirable effects
The human decision maker is always indispensable: the set of all possible current states cannot be described, either extensionally or intensionally • (Reason: unexpected states) • This is the central challenge in the development of decision support systems: • their designers are confronted with the paradoxical problem of developing systems capable of helping people in future situations that cannot be foreseen
The What-if Analysis: • The dissatisfaction stems from what we have identified as look-ahead reasoning • Events are often highly interdependent, and their probabilities remain unknown (real situation: the price of oil) • A second difficulty is predicting, or even identifying, all possible reactions of the other agents • The ability to perceive the future seems to be a phylogenetic acquisition • It includes the capacity for anticipation and the ability to decide against immediate short-term advantage to allow future gains
The brain is: 1) a predictive biological machine, 2) a simulator of action by reference to past experience, 3) a set of extremely specialized circuits • Free will: the capacity to internally simulate actions and make a decision; this is what-if analysis • What-if analysis is the basis of the human ability to perform look-ahead reasoning • Scenario reasoning: developing many scenarios and assessing, at least approximately, their probabilities; assessing the consequences of a choice in decision making amounts to this
It should produce two outputs: • all possible outcomes at a given horizon • the probability or plausibility of each outcome • Why machines are necessary here: • scenario reasoning may lead to a combinatorial explosion • it is almost impossible for humans to handle long, precise and diverse scenarios (see the sketch below)
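A minimal sketch of such a scenario-reasoning machine (the event model and outcome measure are illustrative assumptions): it enumerates every event sequence up to a horizon and accumulates each outcome's probability; the combinatorial explosion (branching ** horizon scenarios) is immediately visible.

```python
from itertools import product

events = {"up": 0.5, "flat": 0.3, "down": 0.2}   # one-step event probabilities
HORIZON = 3

outcomes = {}
for scenario in product(events, repeat=HORIZON):
    prob = 1.0
    level = 0                                     # toy outcome: net movement
    for e in scenario:
        prob *= events[e]
        level += {"up": 1, "flat": 0, "down": -1}[e]
    outcomes[level] = outcomes.get(level, 0.0) + prob

for level in sorted(outcomes):
    print(f"net movement {level:+d}: probability {outcomes[level]:.3f}")
print(len(events) ** HORIZON, "scenarios at horizon", HORIZON)
```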
Look-ahead Machine • Necessary capabilities: • 1) the ability to combine many actions and events (with probabilities or other measures) • 2) the ability to imagine the possible actions and to anticipate all possible reactions of the other agents or of nature • The imagination ability is provided by the file of recorded states • All possible events and reactions of the other agents are drawn from a set of memorized items • The intrinsic difficulty of forecasting is the main weakness of many formalized planning processes
Candidates for look-ahead machines: • 1) simulation machines • 2) DSSs (decision support systems) • Simulation machine: • a real industrial or social process is modeled on a reduced scale • some variables characterizing uncertain events are randomly fed into the system according to a given probability law • a way of looking ahead when it is impracticable to model the process completely (hard modeling) • subjects are insensitive to the implications of feedback when medium or long delays occur between a decision and its effects (a sketch follows)
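A minimal sketch of a simulation machine (a toy inventory model; every number and the probability law are assumptions): an uncertain variable is drawn from a given law and fed into a reduced-scale model of the process, and many runs give a picture of the possible futures.

```python
import random

def simulate(reorder_level, days=30):
    """One run of a toy inventory process; returns days out of stock."""
    stock, stockouts = 50.0, 0
    for _ in range(days):
        demand = max(0.0, random.gauss(10, 3))   # the given probability law
        stock -= demand
        if stock <= 0:
            stockouts += 1
            stock = 0.0
        if stock < reorder_level:
            stock += 40                           # replenishment (no delay)
    return stockouts

runs = [simulate(reorder_level=15) for _ in range(1000)]
print("expected days out of stock:", sum(runs) / len(runs))
```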
DSS: a DSS was earlier characterized as a multimodal, interactive system for performing an exploration; here it is conceived as a look-ahead machine • At the input-data level, this exploration is called what-if or sensitivity analysis • At the model level, heuristic search allows the decision maker to explore different types of models, to look ahead among the many possible situations that may occur (e.g. air traffic control) • A DSS is an incomplete look-ahead machine: the decision is left to the decision maker, who performs the numerous evaluations that occur along the exploration process • Rather than focusing on choice, designers would do better to make richer scenarios, by being able to produce and handle complex actions and situations
AI contributions to DSS: • the ability to put forward better and more sophisticated representations, allowing more complex states and reasoning to be handled • modeling of unstructured tasks (e.g. expert technical systems) • the junction between the set of recorded states and the generation of scenarios • this link necessarily encompasses some learning • the learning process plays a significant role in the human mind
Decision theory and AI complement each other and are just beginning to merge • AI has devoted much attention to diagnosis and to representing human knowledge, but not much work has gone into the look-ahead phase of decision making • As of now, most AI work starts after a decision has been made, but one cannot simulate human reasoning without taking into account the uncertainty and preferences that go into human decision making • In the end, we would like to say that there is no doubt that diagnosis-plus-look-ahead machines have a brilliant future, if not to mimic human reasoning, at least to support human decisions
References: • Jean-Charles Pomerol, "Artificial Intelligence and Human Decision Making", European Journal of Operational Research, 1997 • Wikipedia: Decision Trees, Rough Sets, Arrow's Impossibility Theorem • Rough Sets: http://www.pw.ntnu.no/~hgs/project/report/node38.html