Information • Instructor – Johnson Thomas • 325 North Hall • Tel: 918 594 8503 • E-mail: jpt@cs.okstate.edu • Office hours: Monday 5-7pm, Tuesday 2-4pm (alternates between Tulsa and Stillwater)
Information • TA – Zang, Yihong • E-mail: zangyihong@hotmail.com • Office hours – to be decided
Information • Book • Artificial Intelligence: A Modern Approach, Russell and Norvig, Prentice Hall
Information • Practical x 3 – 45 marks (3 x 15) • Homework x 3 – 30 marks (3 x 10) • Mid-Term – 30 marks • Final Examination – 35 marks
Introduction • What is AI? • Many definitions • Thought processes and reasoning • Behavior • Thought and behavior like humans • Ideal thought and behavior – rationality • A system is rational if it does the ‘right thing’ given what it knows
Introduction • Systems that think like humans • Computers with minds • Automation of activities that we associate with human thinking, e.g. decision-making, problem solving, learning, … • Systems that act like humans • Machines that perform the functions that require intelligence when performed by people • Make computers do things at which, at the moment, people are better
Introduction • Systems that think rationally • The study of mental faculties through the use of computational models • The study of the computations that make it possible to perceive, reason, and act • Systems that act rationally • Study of the design of intelligent agents • Concerned with intelligent behavior in artifacts
Introduction • Acting humanly – Turing test • The computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.
Introduction • Therefore, the computer would need to possess the following capabilities: • Natural language processing • Communication • Knowledge representation • Store knowledge • Automated reasoning • Use stored information to answer questions and draw new conclusions • Machine learning • Adaptation and pattern recognition/extrapolation
Introduction • Turing test – no physical interaction • Total Turing test • Interrogator can test subject’s perceptual abilities • Need computer vision to perceive objects • Interrogator can pass physical objects • Need Robotics to manipulate objects and move about
Introduction • Thinking humanly – cognitive modeling • Understand the workings of human minds • Once we have a theory of the mind, we can express the theory as a computer program • Cognitive science – brings together computer models from AI and experimental techniques from psychology
Introduction • Thinking rationally – laws of thought • Logic • Difficult in practice • Knowledge can be informal
Introduction • Acting rationally – rational agents • Agent – something that acts • More than just a program • Can perceive its environment • Can adapt to change • Operates under autonomous control • Persists over a prolonged time • Can take on another’s goals • Rational agent – acts to achieve the best possible outcome
History Subjects which have contributed to AI • Philosophy 428 BC – present • Mathematics c. 800 – present • Economics 1776 – present • Neuroscience 1861 – present • Psychology 1879 – present • Computer engineering 1940 – present • Control theory and cybernetics 1948 – present • Linguistics 1957 – present
History • First AI work – McCulloch and Pitts (1943) • Model of artificial neurons • Based on • Physiology and functions of the brain • Analysis of propositional logic • Turing’s theory of computation • Each neuron is ‘on’ or ‘off’ • Neuron switches ‘on’ in response to stimulation by a sufficient number of neighboring neurons
History • Hebb demonstrated an updating rule for modifying the connection strengths between neurons • Minsky and Edmonds built the first neural network computer – 1950 • Turing’s article in 1950 – “Computing Machinery and Intelligence” – introduced the Turing test, machine learning, genetic algorithms, reinforcement learning
History • Birthplace of AI – meeting organized by McCarthy at Dartmouth College in 1956 • Some early programs • Logic Theorist (LT) for proving theorems • General Problem Solver (GPS) – imitates human problem-solving protocols • Physical symbol system hypothesis – any system exhibiting intelligence must operate by manipulating data structures composed of symbols
History • Geometric theorem prover (1959) • Program for checkers • McCarthy (1958) • Defined Lisp • Invented time sharing • Advice Taker program – first program to embody general knowledge of the world • Minsky – a number of micro-world projects that require intelligence • Blocks world – rearrange blocks in a certain way, using a robot hand that can pick up one block at a time
History • Vision project • Vision and constraint propagation • Learning theory • Natural language understanding • Planning • Perceptrons
History • Use of domain-specific knowledge that allows larger reasoning steps • Dendral program – inferring molecular structure from the information provided by a mass spectrometer • Expert systems for medical diagnosis – MYCIN • No theoretical framework • Knowledge acquired by interviewing experts • Rules reflected uncertainty associated with medical knowledge
History • First commercial expert system R1 – DEC • For configuring computers • Every major U.S. corporation had expert systems • Japanese Fifth Generation project • US Microelectronics and Computer Technology Corporation (MCC) • Britain’s Alvey program
History • Hopfield – statistical approaches for neural networks • Hinton – neural net models of memory • so-called connectionist models (as opposed to symbolic models promoted earlier)
History • AI is now a science – based on rigorous theorems or hard experimental evidence
Applications • Autonomous planning and scheduling • NASA’s Remote Agent program • Controls scheduling of operations for a spacecraft • Plans are generated from high-level goals specified from the ground • Monitors operation of the spacecraft as the plans are executed • Game playing • IBM’s Deep Blue – defeated the world champion
Applications • Autonomous control • ALVINN computer vision system to steer a car • Diagnosis • Medical diagnosis programs based on probabilistic analysis • Logistics planning • Logistics planning and scheduling for transportation in the Gulf War
Applications • Robotics • Language understanding and problem solving
Agents • Agent – anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators • An agent is a function from percept histories to an action: f : P* → A
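As a rough illustration of f : P* → A (not part of the original slides), the mapping can be written as a function type in Python; Percept and Action below are placeholder types and do_nothing_agent is a made-up example:

```python
# Minimal sketch: an agent as a mapping from percept histories to actions.
from typing import Callable, Sequence

Percept = str          # placeholder percept type (assumption)
Action = str           # placeholder action type (assumption)

# An agent function takes the entire percept sequence observed so far
# and returns a single action: f : P* -> A.
AgentFunction = Callable[[Sequence[Percept]], Action]

def do_nothing_agent(percepts: Sequence[Percept]) -> Action:
    """A trivial agent function: ignores its percept history and idles."""
    return "NoOp"
```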
Agents • Human agent • Ears, eyes, … for sensors • Hands, legs, … for actuators • Robotic agent • Cameras … for sensors • Motors … for actuators • Software agents • Keystrokes, file contents, network packets … as sensory inputs • Display on the screen, writing files, sending network packets …
[Diagram: the agent receives percepts from the environment through its sensors, a “?” box inside the agent selects actions, and actuators carry those actions back to the environment]
Agents • Percept – refers to the agent’s perceptual inputs at any given instant • Percept sequence – complete history of everything the agent has perceived • An agent’s choice of action at any given instant can depend on the entire percept sequence observed to date • Agent function – maps any given percept sequence to an action. This function describes the behavior of an agent
Agents • Agent function – an abstract mathematical description • Agent program – a concrete implementation
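One way to make this distinction concrete is a table-driven agent program, which stores the agent function explicitly as a lookup table indexed by the percept history. A minimal sketch (the name make_table_driven_agent and the default "NoOp" action are illustrative, not from the slides):

```python
# Sketch of a table-driven agent program: the abstract agent function is
# represented as an explicit table keyed by the full percept history.
from typing import Dict, List, Tuple

def make_table_driven_agent(table: Dict[Tuple[str, ...], str]):
    """Return an agent program backed by an explicit lookup table."""
    percepts: List[str] = []            # percept history kept by the program

    def program(percept: str) -> str:
        percepts.append(percept)        # remember everything perceived so far
        # Look up the action for the whole history; idle if it is not listed.
        return table.get(tuple(percepts), "NoOp")

    return program

# Usage with two hypothetical table entries:
agent = make_table_driven_agent({("A,clean",): "right", ("A,dirty",): "suck"})
print(agent("A,dirty"))                 # -> suck
```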
Agents • Vacuum cleaner world • Two locations: squares A and B • The vacuum agent perceives which square it is in and whether there is dirt in the square • The vacuum agent can choose to move left, move right, suck up dirt, or do nothing • Agent function – if the current square is dirty, then suck, otherwise move to the other square • Tabulation of agent function
[Diagram: the vacuum cleaner world – two adjacent squares, A and B]
Agents • Partial tabulation of the simple agent function

Percept sequence                               Action
[A, clean]                                     right
[A, dirty]                                     suck
[B, clean]                                     left
[B, dirty]                                     suck
[A, clean], [A, clean]                         right
[A, clean], [A, dirty]                         suck
…                                              …
[A, clean], [A, clean], [A, clean]             right
[A, clean], [A, clean], [A, dirty]             suck
…                                              …
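The rule “if the current square is dirty, suck; otherwise move to the other square” can also be written directly as a short program instead of a table. A minimal sketch using the same percept values as the table above (the function name is illustrative):

```python
def reflex_vacuum_agent(percept):
    """Suck if the current square is dirty, else move to the other square."""
    location, status = percept          # e.g. ("A", "dirty")
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

# A few rows of the table above, reproduced by the rule:
assert reflex_vacuum_agent(("A", "clean")) == "right"
assert reflex_vacuum_agent(("A", "dirty")) == "suck"
assert reflex_vacuum_agent(("B", "clean")) == "left"
```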
Agents • Rational agent – does the right thing, that is, every entry in the table for the agent function is filled out correctly • Performance measure – criterion for success of an agent’s behavior • The agent produces a sequence of actions based on the percepts it received • This sequence of actions causes the environment to go through a sequence of states • If that sequence is desirable, the agent has performed well
Agents • Vacuum cleaner agent – measure performance by the amount of dirt cleaned up in a single eight-hour shift • Clean, dump dirt, clean, dump dirt …? • More suitable performance measure – reward the agent for having a clean floor • Design performance measures according to what one wants in the environment, not according to how one thinks the agent should behave
Agents • Rationality • What is rational at any given time depends on • performance measure that defines the criterion of success • Agent’s prior knowledge of the environment • Actions that the agent can perform • Agent’s percept sequence to date
Agents • Definition For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
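Very roughly, and in notation of my own rather than the slides’, the definition says that given the percept sequence $\vec{p}$ observed so far and the agent’s built-in knowledge $K$, a rational agent chooses

$$a^{*} = \operatorname*{arg\,max}_{a \in A} \; \mathbb{E}\big[\,\text{performance} \mid \vec{p},\, K,\, a\,\big],$$

where the expectation reflects the agent’s uncertainty about the environment (hence “expected to maximize”, not “guaranteed to maximize”).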
Agents • Vacuum cleaner agent – cleans the square if it is dirty and moves to the other square if it is not • Is this a rational agent?
Agents • What is its performance measure? • Performance measure awards one point for each clean square at each time step, over a lifetime of 1000 time steps • What is known about the environment? • Geography of the environment is known a priori. Dirt distribution and initial location of the agent are not known. Clean squares stay clean. Sucking cleans the current square. Left and right move the agent
Agents • What sensors and actuators does it have? • Available actions – left, right, suck, NoOp • Agent perceives its location and whether that location contains dirt • Under these conditions, the agent is rational
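To see the performance measure in action, here is a hypothetical simulation of the two-square world under the assumptions above (clean squares stay clean, one point per clean square per time step, 1000 steps); the setup and names are illustrative, not from the slides:

```python
import random

def run(agent, steps=1000):
    """Score one point per clean square per time step over the agent's lifetime."""
    status = {"A": random.choice(["clean", "dirty"]),
              "B": random.choice(["clean", "dirty"])}
    location = random.choice(["A", "B"])          # initial location is unknown
    score = 0
    for _ in range(steps):
        score += sum(1 for s in status.values() if s == "clean")
        action = agent((location, status[location]))
        if action == "suck":
            status[location] = "clean"            # clean squares stay clean
        elif action == "right":
            location = "B"
        elif action == "left":
            location = "A"
    return score

# print(run(reflex_vacuum_agent))   # reuses the reflex agent sketched earlier
```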
Agents • Note: • rationality maximizes expected performance • Perfection maximizes actual performance • Rational agent • Gathers information • Learns as much as possible from what it perceives
Agents • Task of computing the agent function • When the agent is being designed, some of the computation is done by its designers • When it is deliberating on its next action, the agent does more computation • The agent learns from experience – it does more computation as it learns
Agents • Autonomous – the agent should learn what it can to compensate for partial or incorrect prior knowledge • A vacuum cleaning agent that learns to foresee where and when additional dirt will appear will do better than one that does not • The agent will have some initial knowledge as well as an ability to learn. Initially it will make mistakes.
Agents • Specifying the task environment • Vacuum cleaning agent example • Performance measure, Environment, Actuators, Sensors (PEAS) – together these specify the task environment • Automated taxi driver • Performance measure – getting to the correct destination; minimizing fuel consumption, wear and tear, trip time, cost, and traffic law violations; maximizing safety, comfort, profits • Environment – variety of roads, traffic, pedestrians, road works, weather such as snow, etc.; interaction with the passengers
Agents • Actuators – accelerator, steering, braking; output to a display screen or voice synthesizer; communication with other vehicles • Sensors – controllable cameras, speedometer, odometer, accelerometer, engine and other system sensors, GPS, sensors to detect distances to other cars and obstacles, keyboard or microphone for the passenger to request a destination
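One informal way to keep a PEAS description alongside code is a plain record type; the sketch below simply restates the taxi entries from the slides (the PEAS dataclass itself is illustrative, not a standard API):

```python
# Sketch: a PEAS description of the automated taxi as a plain record.
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

taxi = PEAS(
    performance=["correct destination", "minimal fuel, wear, time, cost",
                 "no traffic law violations", "safety, comfort, profits"],
    environment=["roads", "traffic", "pedestrians", "road works",
                 "passengers", "weather such as snow"],
    actuators=["accelerator", "steering", "braking",
               "display / voice synthesizer", "vehicle-to-vehicle communication"],
    sensors=["cameras", "speedometer", "odometer", "accelerometer",
             "engine sensors", "GPS", "distance sensors", "keyboard / microphone"],
)
```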
Agents • Properties of task environments • Fully observable vs partially observable • Fully observable – if the agent’s sensors give it access to the complete state of the environment at each point in time • Partially observable – because of noisy and inaccurate sensors, or because parts of the state are missing from the sensor data