1. Introduction to AI and Intelligent Agents
Foundations of Artificial Intelligence
2. Some Definitions of AI
Building systems that think like humans
3. Some Definitions of AI (cont.)
Building systems that think rationally
The study of mental faculties through the use of computational models -- Charniak and McDermott, 1985
The study of the computations that make it possible to perceive, reason, and act -- Winston, 1992
Building systems that act rationally
A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes -- Schalkoff, 1990
The branch of computer science that is concerned with the automation of intelligent behavior -- Luger and Stubblefield, 1993
4. Thinking and Acting Humanly
Thinking humanly: cognitive modeling
Develop a precise theory of mind, through experimentation and introspection, then write a computer program that implements it
Example: GPS - General Problem Solver (Newell and Simon, 1961)
trying to model the human process of problem solving in general
Acting humanly
"If it looks, walks, and quacks like a duck, then it is a duck."
The Turing Test
The interrogator communicates by typing at a terminal with TWO other agents. The interrogator can say and ask whatever s/he likes, in natural language. If the interrogator cannot decide which of the two agents is a human and which is a computer, then the computer has achieved AI.
this is an OPERATIONAL definition of intelligence, i.e., one that gives an algorithm for testing objectively whether the definition is satisfied
5. Thinking and Acting Rationally
Thinking Rationally
Capture "correct" reasoning processes
A loose definition of rational thinking: an irrefutable reasoning process
How do we do this?
Develop a formal model of reasoning (formal logic) that always leads to the right answer
Implement this model
How do we know when we've got it right?
when we can prove that the results of the programmed reasoning are correct
soundness and completeness of first-order logic
Acting Rationally
Act so that desired goals are achieved
The rational agent approach (this is what we'll focus on in this course)
Figure out how to make correct decisions, which sometimes means thinking rationally and other times means having rational reflexes
correct inference versus rationality
reasoning versus acting; limited rationality
6. Turing's Goal
Alan Turing, Computing Machinery and Intelligence, 1950:
Can machines think?
How could we tell?
7. Turing's Imitation Game
Not new with Turing: Descartes implicitly proposed a test for distinguishing bête and homme based on the distinguishability of their verbal behaviors.
Descartes' view:
Animals are automata; animal behaviors are mechanical.
People, as revealed in their flexible verbal behaviors, are not mechanical.
Machines can't talk, and therefore can't think.
But the principal argument...which may convince us that the brutes are devoid of reason, is that...it has never yet been observed that any animal has arrived at such a degree of perfection as to make use of a true language; that is to say, as to be able to indicate to us by the voice, or by other signs, anything which could be referred to by thought alone, rather than to a mere movement of nature...; which may be taken for the true distinction between man and brute.
René Descartes, Letter to Henry More, 1647
The new problem has the advantage of drawing a fairly sharp line between the physical and intellectual capacities of a man.
The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include.
Alan Turing, Computing Machinery and Intelligence, 1950
8. Necessary versus Sufficient Conditions
Is the ability to pass a Turing Test a necessary condition of intelligence?
May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection. Turing, 1950
Is the ability to pass a Turing Test a sufficient condition of intelligence?
9. The Turing Syllogism If an agent passes a Turing Test, then it produces a sensible sequence of verbal responses to a sequence of verbal stimuli.
If an agent produces a sensible sequence of verbal responses to a sequence of verbal stimuli, then it is intelligent.
Therefore, if an agent passes a Turing Test, then it is intelligent.
10. Memorizing all possible answers? (Bertha's Machine)
11. Exponential Growth
Assume that each time the judge asks a question, she picks between two questions based on what has happened so far.

Questions asked    Possible responses
1                  2
2                  4
3                  8
4                  16
5                  32
6                  64
n                  2^n
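The doubling in the table above can be checked in a couple of lines (a trivial sketch; the function name is ours):

```python
# Each question doubles the number of possible interrogation histories.
def possible_responses(n: int) -> int:
    return 2 ** n

print([possible_responses(n) for n in range(1, 7)])   # [2, 4, 8, 16, 32, 64]
```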
12. Storage versus Length
13. Polynomial vs. exponential time complexity
14. The Compact Conception
If an agent has the capacity to produce a sensible sequence of verbal responses to an arbitrary sequence of verbal stimuli without requiring exponential storage, then it is intelligent.
Exponential storage is the hallmark of memorization (as opposed to determination).
The fact that there's no constant here is useful, as compression by constant factors can thus be allowed.
15. Size of the Universe
16. Storage Capacity of the Universe
Volume: (15 × 10^9 light-years)^3 = (15 × 10^9 × 10^16 meters)^3
Density: 1 bit per (10^-35 meters)^3
Total storage capacity: ≈ 10^184 bits < 10^200 bits < 2^670 bits
Critical Turing Test length: 670 bits < 670 characters < 140 words < 1 minute
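The arithmetic above is easy to verify. The sketch below uses the slide's own rough figures (a cube rather than a sphere, 1 light-year ≈ 10^16 m, one bit per Planck-scale cell); they are deliberate simplifications, not precise cosmology:

```python
from math import log10, log2

light_year_m = 1e16                  # ~10^16 meters per light-year
edge_m = 15e9 * light_year_m         # 15 billion light-years, in meters
volume_m3 = edge_m ** 3              # the slide cubes the length directly
cell_m3 = (1e-35) ** 3               # one bit per Planck-scale cell
bits = volume_m3 / cell_m3

print(f"capacity ~ 10^{log10(bits):.0f} bits")       # ~ 10^184
print(f"10^200 bits = 2^{200 * log2(10):.0f} bits")  # 2^664, i.e. below 2^670
```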
17. Some Sub-fields of AI
Problem solving
Lots of early success here
Solving puzzles
Playing chess
Mathematics (integration)
Uses techniques like search and problem reduction
Logical reasoning
Prove things by manipulating database of facts
Theorem proving
Automatic Programming
Writing computer programs given some sort of description
Some success with semi-automated methods
Some error detection systems
Automatic program verification
18. Some Sub-fields of AI
Language understanding and semantic modeling
One of the earliest problems
Some success within limited domains
How can we understand written/spoken language?
Includes answering questions, translating between languages, learning from written text, and speech recognition
Some aspects of language understanding:
Associating spoken words with actual words
Understanding language forms, such as prefixes/suffixes/roots
Syntax; how to form grammatically correct sentences
Semantics; understanding meaning of words, phrases, sentences
Context
Conversation
19. Some Sub-fields of AI
Pattern Recognition
Computer-aided identification of objects/shapes/sounds
Needed for speech and picture understanding
Requires signal acquisition, feature extraction, ...
Data mining and Information Retrieval
Expert Systems and Knowledge-based Systems
Designers often called knowledge engineers
Translate things that an expert knows and rules that an expert uses to make decisions into a computer program
Problems include
Knowledge acquisition (or how do we get the information)
Explanation (of the answers)
Knowledge models (what do we do with info)
Handling uncertainty
20. Some Sub-fields of AI
Planning, Robotics and Vision
Planning how to perform actions
Manipulating devices
Recognizing objects in pictures
Machine Learning and Neural Networks
Can we remember solutions, rather than recalculating them?
Can we learn additional facts from present data?
Can we model the physical aspects of the brain?
Classification and clustering
Non-monotonic Reasoning
Truth maintenance systems
21. Fundamental Techniques of AI
Knowledge Representation
Intelligence/intelligent behavior requires knowledge, which is:
Voluminous
Hard to characterize
Constantly changing
How can one capture formally (i.e., computerize) everything needed for intelligent behavior? Some questions...
How do you store all of that data in a useful way?
Can you get rid of some?
How can you store decision making steps?
Characteristics of good data representation techniques:
Captures general situation rather than being overly specific
Understandable by the people who provide it
Easily modified to handle errors, changes in data, and changes in perception
Of general use
22. Fundamental Techniques of AI
Search
How can we model the problem search space?
How can we move between steps in a decision making process?
How can you find the info you need in a large data set?
Given a choice of possible decision sequences, how do you pick a good one?
Heuristic functions
Given a goal, how do you figure out what to do (planning)?
Base-level versus meta-level reasoning
How can we reason about what step to take next (in reaching the goal)?
How much do we reason before acting?
23. AI in Everyday Life?
AI techniques are used in many common applications
Intelligent user interfaces
Search Engines
Spell/grammar checkers
Context sensitive help systems
Medical diagnosis systems
Regulating/controlling hardware devices and processes (e.g., in automobiles)
Voice/image recognition (more generally, pattern recognition)
Scheduling systems (airlines, hotels, manufacturing)
Error detection/correction in electronic communication
Program verification / compiler and programming language design
Web search engines / Web spiders
Web personalization and Recommender systems (collaborative/content filtering)
Personal agents
Customer relationship management
Credit card verification in e-commerce / fraud detection
Data mining and knowledge discovery in databases
Computer games
24. AI Spin-Offs
Many technologies widely used today were the direct or indirect results of research in AI:
The mouse
Time-sharing
Graphical user interfaces
Object-oriented programming
Computer games
Hypertext
Information Retrieval
The World Wide Web
Symbolic mathematical systems (e.g., Mathematica, Maple, etc.)
Very high-level programming languages
Web agents
Data Mining
25. What is an Intelligent Agent
An agent is anything that can
perceive its environment through sensors, and
act upon that environment through actuators (or effectors)
Goal: Design rational agents that do a good job of acting in their environments
success determined based on some objective performance measure
26. Example: Vacuum Cleaner Agent
27. What is an Intelligent Agent
Rational Agents
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
Performance measure: An objective criterion for success of an agent's behavior.
E.g., performance measure of a vacuum-cleaner agent could be amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
Definition of Rational Agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Omniscience, learning, autonomy
Rationality is distinct from omniscience (all-knowing with infinite knowledge)
Choose action that maximizes expected value of perf. measure given percept to date
Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration)
An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)
28. What is an Intelligent Agent
Rationality depends on
the performance measure that defines degree of success
the percept sequence - everything the agent has perceived so far
what the agent knows about its environment
the actions that the agent can perform
Agent Function (percepts ==> actions)
Maps from percept histories to actions: f: P* → A
The agent program runs on the physical architecture to produce the function f
agent = architecture + program
Action := Function(Percept Sequence)
If (Percept Sequence) then do Action
Example: A Simple Agent Function for Vacuum World
If (current square is dirty) then suck
Else move to adjacent square
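The two-line agent function above can be written out directly. This is a minimal sketch, assuming a two-square world labeled A and B and percepts of the form (location, status):

```python
def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":                          # if current square is dirty then suck
        return "Suck"
    return "Right" if location == "A" else "Left"  # else move to the adjacent square

print(vacuum_agent(("A", "Dirty")))   # Suck
print(vacuum_agent(("B", "Clean")))   # Left
```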
29. What is an Intelligent Agent
Limited Rationality
Optimal (i.e. best possible) rationality is NOT perfect success: limited sensors, actuators, and computing power may make this impossible
Theory of NP-completeness: some problems are likely impossible to solve quickly on ANY computer
Both natural and artificial intelligence are always limited
Degree of Rationality: the degree to which the agent's internal "thinking" maximizes its performance measure, given
the available sensors
the available actuators
the available computing power
the available built-in knowledge
30. PEAS Analysis
To design a rational agent, we must specify the task environment
PEAS Analysis:
Specify Performance Measure, Environment, Actuators, Sensors
Example: Consider the task of designing an automated taxi driver
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
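One convenient way to keep a PEAS analysis machine-readable is a plain record. The sketch below encodes the taxi example; the class and field names are our own convention, not part of the PEAS framework:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(len(taxi.sensors))
```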
31. PEAS Analysis: More Examples
Agent: Medical diagnosis system
Performance measure: Healthy patient, minimize costs, lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's answers)
Agent: Part-picking robot
Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
32. PEAS Analysis: More Examples
Agent: Internet Shopping Agent
Performance measure??
Environment??
Actuators??
Sensors??
33. Environment Types
Fully observable (vs. partially observable):
An agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic):
The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic).
Episodic (vs. sequential):
The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
34. Environment Types (cont.)
Static (vs. dynamic):
The environment is unchanged while an agent is deliberating (the environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does).
Discrete (vs. continuous):
A limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multi-agent):
An agent operating by itself in an environment.
35. Environment Types (cont.)
36. Structure of an Intelligent Agent
All agents have the same basic structure:
accept percepts from environment
generate actions
A Skeleton Agent:
Observations:
agent may or may not build percept sequence in memory (depends on domain)
performance measure is not part of the agent; it is applied externally to judge the success of the agent
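The skeleton agent described above can be sketched as follows (the function names are ours): it optionally accumulates the percept sequence in memory and defers the decision to a supplied program, and the performance measure appears nowhere inside it, since that is applied externally.

```python
def make_skeleton_agent(program, keep_history=True):
    memory = []
    def agent(percept):
        if keep_history:            # building the percept sequence is optional,
            memory.append(percept)  # depending on the domain
            return program(memory)
        return program([percept])
    return agent

counter = make_skeleton_agent(lambda percepts: len(percepts))
print(counter("dirt"), counter("wall"))   # 1 2
```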
37. Looking Up the Answer?
A Template for a Table-Driven Agent:
Why can't we just look up the answers?
The disadvantages of this architecture
infeasibility (excessive size)
lack of adaptiveness
How big would the table have to be?
Could the agent ever learn from its mistakes?
Where should the table come from in the first place?
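A minimal version of the table-driven template, assuming the table is keyed by the entire percept history. Even this toy table needs one entry per possible history, which is exactly the infeasibility the questions above point to:

```python
def table_driven_agent_factory(table):
    percepts = []
    def agent(percept):
        percepts.append(percept)                 # the whole history is the key
        return table.get(tuple(percepts), "NoOp")
    return agent

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = table_driven_agent_factory(table)
print(agent(("A", "Dirty")), agent(("A", "Clean")))   # Suck Right
```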
38. Agent Types
Simple reflex agents
are based on condition-action rules and implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.
Reflex Agents with memory (Model-Based)
have internal state which is used to keep track of past states of the world.
Agents with goals
are agents which in addition to state information have a kind of goal information which describes desirable situations. Agents of this kind take future events into consideration.
Utility-based agents
base their decision on classic axiomatic utility-theory in order to act rationally.
39. A Simple Reflex Agent
We can summarize part of the table by formulating commonly occurring patterns as condition-action rules:
Example:
if car-in-front-brakes
then initiate braking
Agent works by finding a rule whose condition matches the current situation
rule-based systems
But, this only works if the current percept is sufficient for making the correct decision
There are three phases inside the loop here: figure out how the environment has changed, figure out what is the best action, figure out how this action changes the environment.
The key advantage of this architecture is that the "interpret" function identifies "equivalence classes" of percepts: many different percepts correspond to the SAME environmental situation, from the point of view of what the agent should DO. Therefore the table of rules can be much smaller than the lookup table above.
It is not rational for an agent to pay attention to EVERY aspect of the environment.
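The "interpret" equivalence-classing and rule matching can be sketched as follows; the percept format and rule set are illustrative assumptions, not a real driving system:

```python
def interpret(percept):
    # Many raw percepts collapse onto one abstract situation (assumed here:
    # a percept is a dict of raw sensor readings).
    return "car-in-front-brakes" if percept.get("lead_car_brake_lights") else "clear"

RULES = [
    ("car-in-front-brakes", "initiate-braking"),  # the slide's example rule
    ("clear", "keep-driving"),
]

def reflex_agent(percept):
    situation = interpret(percept)
    for condition, action in RULES:   # fire the first rule whose condition matches
        if condition == situation:
            return action

print(reflex_agent({"lead_car_brake_lights": True}))   # initiate-braking
```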
40. Example: Simple Reflex Vacuum Agent
41. Agents that Keep Track of the World
Updating internal state requires two kinds of encoded knowledge
knowledge about how the world changes (independent of the agent's actions)
knowledge about how the agent's actions affect the world
But, knowledge of the internal state is not always enough
how to choose among alternative decision paths (e.g., where should the car go at an intersection)?
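The state-update loop described above can be sketched like this, with the two kinds of knowledge (world evolution and action effects) folded into one user-supplied `update_state(state, last_action, percept)` model; the vacuum-style example is a toy assumption, not a full world model:

```python
def make_model_based_agent(update_state, rules):
    state, last_action = {}, None
    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # track the world
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
    return agent

agent = make_model_based_agent(
    update_state=lambda s, act, p: {**s, "dirty": p == "dirt"},
    rules=[(lambda s: s["dirty"], "Suck"), (lambda s: True, "Move")],
)
print(agent("dirt"), agent("clean"))   # Suck Move
```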
Requires knowledge of the goal to be achieved

LEARNING IN INTELLIGENT AGENTS
With the reflex architecture, if the table of rules prescribes the wrong action, and the agent discovers this and changes the table, it has automatically generalized from its specific experience. Generalization is a key phenomenon in learning. Generalization always requires previous "background" knowledge to direct it.
All complex intelligent agents will have a lot of background knowledge preprogrammed, because they do not have the time to receive enough experience and feedback from the environment to allow them to learn to behave correctly starting from scratch.
In linguistics this is called the "poverty of stimulus" argument. If you calculate how many sentences a young child hears before it starts to speak correct English, the number is too few to allow it to "guess" the grammar of English. Therefore the baby must have a so-called universal natural language grammar preprogrammed into it by its genes. This argument is controversial, but there is scientific agreement that background knowledge of some sort (often very hidden and implicit) is necessary for learning in humans and AI systems.
42. Agents with Explicit Goals
43. Agents with Explicit Goals (cont.)
Knowing the current state is not always enough.
State allows an agent to keep track of unseen parts of the world, but the agent must update state based on knowledge of changes in the world and of effects of own actions.
Goal = description of desired situation
Examples:
Decision to change lanes depends on a goal to go somewhere (and other factors);
Decision to put an item in shopping basket depends on a shopping list, map of store, knowledge of menu
Notes:
Search (Russell Chapters 3-5) and Planning (Chapters 11-13) are concerned with finding sequences of actions to satisfy a goal.
Reflexive agent concerned with one action at a time.
Classical Planning: finding a sequence of actions that achieves a goal.
Contrast with condition-action rules: involves consideration of the future, "what will happen if I do ...?" (a fundamental difference).
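That "what will happen if I do ..." consideration can be sketched as one-step lookahead against a transition model `result(state, action)`; the lane-change example below is a made-up illustration:

```python
def goal_based_choice(state, actions, result, goal_test):
    for action in actions:
        if goal_test(result(state, action)):   # simulate the future, not just react
            return action
    return None

# Lanes numbered from 0; the goal is to be in lane 1 (the exit lane).
result = lambda lane, action: lane + (1 if action == "change-right" else 0)
print(goal_based_choice(0, ["stay", "change-right"], result,
                        lambda lane: lane == 1))   # change-right
```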
44. A Complete Utility-Based Agent
45. Utility-Based Agents (cont.)
A preferred world state has higher utility for the agent (utility = the quality of being useful)
Examples
quicker, safer, more reliable ways to get where you're going;
price comparison shopping
bidding on items in an auction
evaluating bids in an auction
Utility function: state ==> U(state) = measure of happiness
Search (goal-based) vs. games (utilities).
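Utility-based selection can be sketched as follows: U maps outcome states to a single "happiness" number and the agent maximizes it. The route costs and weights below are invented for illustration:

```python
def utility_based_choice(state, actions, result, U):
    return max(actions, key=lambda a: U(result(state, a)))

routes = {"highway": {"time": 20, "risk": 3}, "back-roads": {"time": 35, "risk": 1}}
U = lambda s: -s["time"] - 10 * s["risk"]   # trade speed against safety
print(utility_based_choice(None, list(routes), lambda _, a: routes[a], U))   # back-roads
```

Unlike a goal test, which only separates success from failure, U ranks all outcomes, so the agent can trade quality for price exactly as in the shopping example that follows.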
46. Shopping Agent Example
Navigating: Move around store; avoid obstacles
Reflex agent: store map precompiled.
Goal-based agent: create an internal map, reason explicitly about it, use signs and adapt to changes (e.g., specials at the ends of aisles).
Gathering: Find and put into the cart the groceries it wants; needs to induce objects from percepts.
Reflex agent: wander and grab items that look good.
Goal-based agent: shopping list.
Menu-planning: Generate shopping list, modify list if store is out of some item.
Goal-based agent: required; what happens when a needed item is not there? Achieve the goal some other way. e.g., no milk cartons: get canned milk or powdered milk.
Choosing among alternative brands
utility-based agent: trade off quality for price.
47. General Architecture for Goal-Based Agents
Simple agents do not have access to their own performance measure
In this case the designer will "hard wire" a goal for the agent, i.e. the designer will choose the goal and build it into the agent
Similarly, unintelligent agents cannot formulate their own problem
this formulation must be built-in also
The while loop above is the "execution phase" of this agent's behavior
Note that this architecture assumes that the execution phase does not require monitoring of the environment.

GOALS AND GOAL FORMULATION
Often the first step in problem-solving is to simplify the performance measure that the agent is trying to maximize. Formally, a "goal" is a set of desirable world-states. "Goal formulation" means ignoring all other aspects of the current state and the performance measure, and choosing a goal.
Example: if you are in Arad (Romania) and your visa will expire tomorrow, your goal is to reach Bucharest airport.
48. Learning Agents
49. Search and Knowledge Representation
Goal-based and utility-based agents require representation of:
states within the environment
actions and effects (effect of an action is transition from the current state to another state)
goals
utilities
Problems can often be formulated as a search problem
to satisfy a goal, agent must find a sequence of actions (a path in the state-space graph) from the starting state to a goal state.
To do this efficiently, agents must have the ability to reason with their knowledge about the world and the problem domain
which path to follow (which action to choose from) next
how to determine if a goal state is reached, OR how to decide if a satisfactory state has been reached.
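Finding "a path in the state-space graph from the starting state to a goal state" can be sketched as a breadth-first search; the graph and its labels below are a toy example of ours, not a specific problem from the course:

```python
from collections import deque

def find_path(graph, start, goal):
    frontier = deque([[start]])   # each frontier entry is a partial path
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path           # the sequence of states satisfying the goal
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                   # no path: the goal is unreachable

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(find_path(graph, "S", "G"))   # ['S', 'A', 'G']
```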
50. Intelligent Agent Summary
An agent perceives and acts in an environment. It has an architecture and is implemented by a program.
An ideal agent always chooses the action which maximizes its expected performance, given the percept sequence received so far.
An autonomous agent uses its own experience rather than built-in knowledge of the environment provided by the designer.
An agent program maps from a percept to an action and updates its internal state.
Reflex agents respond immediately to percepts.
Goal-based agents act in order to achieve their goal(s).
Utility-based agents maximize their own utility function.
51. Exercise
Do Exercise 1.3, on page 30.
You can find out about the Loebner Prize at:
http://www.loebner.net/Prizef/loebner-prize.html
Also (for discussion) look at exercise 1.2 and read the material on the Turing Test at:
http://plato.stanford.edu/entries/turing-test/
Read the article by Jennings and Wooldridge (Applications of Intelligent Agents). Compare and contrast the definitions of agents and intelligent agents as given by Russell and Norvig (in the textbook) and in the article.
52. Exercise: News Filtering Internet Agent
uses a static user profile (e.g., a set of keywords specified by the user)
on a regular basis, searches a specified news site (e.g., Reuters or AP) for news stories that match the user profile
can search through the site by following links from page to page
presents a set of links to the matching stories that have not been read before (matching based on the number of words from the profile occurring in the news story)
(1) Give a detailed PEAS description for the news filtering agent
(2) Characterize the environment type (as being observable, deterministic, episodic, static, etc.).