Chapter 6: The language of thought hypothesis
Ways of developing PSSH
• The 4 claims are compatible with different ways of thinking about physical symbol systems and how the system manipulates them
  – diagrammatic symbol structures (e.g. WHISPER)
  – language-like symbol structures (e.g. language of thought theory)
Motivations for LOTH
• Philosophy – explaining how causation by content is possible
• Cognitive science – required for a computational approach to practical reasoning, perception, and concept learning
3 basic claims
• Psychological states are realized by physical states
  – applies to both personal-level states (e.g. beliefs) and subpersonal states (e.g. states of the early visual system)
• Psychological states represent the world
• Psychological states enter into causal relations
  – with other psychological states and ultimately with behavior
The challenge of causation by content
• The challenge of explaining how psychological states can enter into causal relations in virtue of their content (how they represent the world)
• Causal interactions are interactions between physical objects (e.g. populations of neurons)
• Content properties are not physical properties
• Danger of content properties being epiphenomenal (soprano example)
Content and vehicle 1
• Philosophers typically distinguish between
  – the content of a belief (how the world is believed to be)
  – the vehicle of the belief (the physical object that realizes the belief in the CNS)
• Analogy between the meaning of a sentence and the spoken sounds/written inscription
Content and vehicle 2
• This distinction between content and vehicle applies to the full range of representations in the CNS
  – personal-level representations: beliefs, desires and other propositional attitudes
  – subpersonal representations: computational states of modules, individual neurons etc.
Content and structure
• Mental representations have contents that can be either true or false
• Truth-evaluable contents can be expressed by declarative sentences
• Declarative sentences report possible states of affairs
• The content is true just if the possible state of affairs reported is actual
States of affairs
• States of affairs have structure – e.g.
  – an object having a particular property (e.g. St Louis county has a population of approximately 1M)
  – two objects standing in a relation (e.g. St Louis is east of Kansas City)
Theories of content
• Two ways of thinking about the content of mental representations
  – coarse-grained: contents correspond to states of affairs
  – fine-grained: contents correspond to ways of thinking about states of affairs
• Either way, contents are typically viewed as having structure
Structure and the LOT
• The LOT hypothesis is a hypothesis about the vehicles of mental representations
• The vehicles of propositional attitudes have a structure that is isomorphic to the structure of their contents
  – equivalently, isomorphic to the structure of the sentences that express those contents
• This structure at the level of the vehicle is what explains the possibility of causation by content
Three claims
• Causation through content is ultimately determined by causal interactions between physical structures
• These physical structures have sentence-like structure, which determines how they are built up and how they interact with each other
• Causal relations between sentences in the language of thought respect logical/rational relations between the contents of those sentences
Logic and mental causation
• Causation by content exploits rational/logical connections between contents
  – content of desire: I will not lose money
  – content of belief: if I buy shares then I will lose money
  – content of intention: I will not buy shares
• My belief and desire cause my intention in virtue of logical relations between the relevant contents (made explicit in the sketch below)
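To make the logical relation explicit, here is a minimal formalization of the shares example (my own notation, not part of the slides), writing B for "I buy shares" and L for "I lose money". The step from belief and desire to intention is an instance of modus tollens:

```latex
% Formal sketch of the shares example (illustrative only).
% B = "I buy shares", L = "I lose money".
\begin{align*}
\text{Content of belief:}    &\quad B \rightarrow L \\
\text{Content of desire:}    &\quad \neg L \\
\text{Content of intention:} &\quad \neg B \quad \text{(by modus tollens)}
\end{align*}
```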
Problem of causation by content
• How do causal interactions between the physical vehicles of mental representations preserve/exploit logical relations between the contents of those mental representations?
• LOT answer:
  – the LOT is like a formal language
  – this allows the LOT to exploit the relation between syntax and semantics that we find in a formal language
Formal languages
• Examples
  – propositional calculus
  – predicate calculus
  – theories that have the predicate calculus as their underlying logic, e.g. the theory of arithmetic and the theory of Turing machines coded into the theory of arithmetic
Syntax
• Syntax of a formal language
  – an alphabet of basic symbols of various types, e.g. predicates, names
  – rules for combining basic symbols into complex symbols according to their type, e.g. rules governing well-formed formulas (wffs)
  – rules for manipulating those complex symbols, e.g. rules of inference
• Rules identify symbols in terms of their formal (typographic) features
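As an illustration only (nothing here comes from the slides, and the LOT itself is of course not a programming language), the following Python sketch shows the three ingredients for a toy propositional calculus: an alphabet of typed symbols, formation rules for well-formed formulas, and an inference rule (modus ponens) that manipulates formulas purely by their formal shape.

```python
from dataclasses import dataclass

# Formation rules: a formula is an atom, a negation, or a conditional.
@dataclass(frozen=True)
class Atom:
    name: str                      # basic symbols, e.g. "B", "L"

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class If:
    antecedent: "Formula"
    consequent: "Formula"

Formula = Atom | Not | If          # the well-formed formulas of this toy language

def modus_ponens(premise1: Formula, premise2: Formula):
    """Purely formal rule: from (P -> Q) and P, infer Q.
    The rule inspects only the shape of the symbols, never their meaning."""
    if isinstance(premise1, If) and premise1.antecedent == premise2:
        return premise1.consequent
    return None

# Example: from "if B then L" and "B" the rule licenses "L".
belief = If(Atom("B"), Atom("L"))
print(modus_ponens(belief, Atom("B")))   # Atom(name='L')
```

The rule never consults what B or L mean; which inferences it licenses is fixed entirely by typographic structure, which is the point of the slide.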
Semantics
• Semantics provides an interpretation for the symbols of the formal language
  – objects are assigned to names
  – sets of objects are assigned to 1-place relations
  – sets of n-tuples of objects are assigned to n-place relations
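Again purely illustrative (the domain, names, and predicate extensions are invented for the example): an interpretation in the style just described, reusing the St Louis/Kansas City example from the earlier slide.

```python
# A toy interpretation for a tiny predicate-calculus fragment (illustrative only).
domain = {"st_louis", "kansas_city", "chicago"}

# Names are assigned objects in the domain.
names = {"a": "st_louis", "b": "kansas_city"}

# A 1-place relation symbol is assigned a set of objects.
one_place = {"City": {"st_louis", "kansas_city", "chicago"}}

# A 2-place relation symbol is assigned a set of ordered pairs (2-tuples).
two_place = {"EastOf": {("st_louis", "kansas_city"), ("chicago", "kansas_city")}}

def true_atomic(predicate: str, *name_args: str) -> bool:
    """Evaluate an atomic sentence such as City(a) or EastOf(a, b)."""
    objects = tuple(names[n] for n in name_args)
    if len(objects) == 1:
        return objects[0] in one_place[predicate]
    return objects in two_place[predicate]

print(true_atomic("City", "a"))         # True:  st_louis is in the extension of City
print(true_atomic("EastOf", "a", "b"))  # True:  (st_louis, kansas_city) is in EastOf
print(true_atomic("EastOf", "b", "a"))  # False: (kansas_city, st_louis) is not
```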
Two types of logical relation
• Logical deducibility
  – sentence p is deducible from a set of sentences Γ iff there is a sequence of legitimate symbol manipulations that leads from some subset of Γ to p
• Logical consequence
  – sentence p is a consequence of a set of sentences Γ iff there is no way of interpreting the symbols in Γ and p that makes all of Γ true and p false
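In standard logical notation (added here for reference; the slides themselves do not introduce symbols), deducibility is the syntactic, single-turnstile relation and consequence the semantic, double-turnstile relation:

```latex
% Standard notation for the two relations (added for reference).
\begin{align*}
\Gamma \vdash p  &\quad \text{$p$ is deducible from $\Gamma$ (syntactic: legitimate symbol manipulations)}\\
\Gamma \models p &\quad \text{$p$ is a consequence of $\Gamma$ (semantic: no interpretation makes all of $\Gamma$ true and $p$ false)}
\end{align*}
```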
Completeness and soundness
• Two basic results about (first-order) logics (and hence about theories that can be expressed in first-order logics)
  – Soundness: if p is derivable from Γ then p is a consequence of Γ
  – Completeness: if p is a consequence of Γ then p is derivable from Γ
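In the notation just introduced, the two results together say that the syntactic and the semantic relation coincide:

```latex
% Soundness and completeness for first-order logic.
\begin{align*}
\text{Soundness:}    &\quad \Gamma \vdash p \;\Rightarrow\; \Gamma \models p\\
\text{Completeness:} &\quad \Gamma \models p \;\Rightarrow\; \Gamma \vdash p\\
\text{Together:}     &\quad \Gamma \vdash p \;\Longleftrightarrow\; \Gamma \models p
\end{align*}
```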
Implications for LOT
• We can think about logical relations between contents of mental representations in terms of logical consequence
• We can think about causal relations between the vehicles of mental representations in terms of logical deducibility (physical transformations that implement syntactic rules)
• Soundness and completeness ensure that consequence and deducibility always go together
Cognitive science arguments
• Basic strategy
  – CogSci treats information-processing as a form of computation
  – we need the LOT as a medium for computational information-processing
• Applications
  – practical decision-making
  – concept acquisition
  – language-learning
LOT and language-learning
• Language learning is essentially a process of hypothesis formation and testing
  – we need the LOT as a medium for formulating and modifying the hypotheses
• The hypotheses are truth-rules – e.g. “a is F” is true iff b is G (where a = b and ‘F’ and ‘G’ refer to the same set of objects)
• This means that the LOT must be at least as expressively powerful as the language being learnt
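A deliberately toy Python sketch of the hypothesis-testing picture (the predicate names, extensions, and observations are all invented for illustration): the learner entertains candidate truth-rules pairing the new word with an already-available LOT predicate and keeps the one that fits the observed uses.

```python
# Toy model of learning a word by testing candidate truth-rules (illustrative only).
# Each candidate is a rule of the form: "x is <word>" is true iff LOT-predicate applies to x.

# Hypothetical, already-available LOT predicates, modelled by their extensions.
lot_predicates = {
    "ROUND*":  {"ball", "orange", "coin"},
    "RED*":    {"firetruck", "stop_sign", "ball"},
    "EDIBLE*": {"orange", "apple"},
}

# Observed examples of how speakers apply the word "red".
observations = [
    ("firetruck", True), ("stop_sign", True),
    ("orange", False), ("coin", False), ("ball", True),
]

def consistent(lot_extension: set[str], data: list[tuple[str, bool]]) -> bool:
    """A truth-rule survives testing if it classifies every observed case correctly."""
    return all((obj in lot_extension) == applies for obj, applies in data)

surviving = [name for name, ext in lot_predicates.items() if consistent(ext, observations)]
print(surviving)   # ['RED*'] -- only the rule "x is red" is true iff RED*(x) survives
```

Note that the sketch presupposes exactly the slide's conclusion: learning succeeds only if a coextensive LOT predicate (here RED*) is already available to the learner.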
Problems
• How plausible is it to treat language-learning as a process of translation?
• How do we learn the meaning of ‘red’? Either
  – ‘a is red’ is true iff a is red*, or
  – starting with paradigm cases of red objects and then learning what other objects are relevantly similar
Unwelcome implications?
• Fodor argues that most natural language words have atomistic meanings
  – failure of dictionary definitions/analyses in terms of necessary and sufficient conditions
• This means that there have to be words in the LOT corresponding to almost all words in, say, English
• This huge LOT vocabulary has to be innate