How (not) to Explain Concepts Steven Horst Wesleyan University www.wesleyan.edu/~shorst
Preliminaries • An early version of a paper, some parts perhaps not quite brought to term • Use of slides, information in multiple modalities
How (not) to Explain Concepts: Overview • How not to explain the semantics of concepts -- two familiar approaches that bark up the wrong tree • The lineaments of a new account • Continuity with animal cognition • The Discrimination Engine • Realization through neural nets • Modularity and incremental gains • Philosophical Payoffs
Part I Two Familiar Problems with Accounts of Concepts
Problem 1: The "Logical" Approach • Conceptual semantics can be handled in the same way as the semantics of predicates • A "semantic theory" is a mapping from expressions in a language onto their extensions (e.g., D. Lewis on "Languages and Language") • Tarskian version • Direct assignment of primitive denotation • Recursive rules for expressions
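To make the "mapping" picture concrete, here is a minimal Python sketch of a semantic theory in this style: a direct assignment of primitive denotations plus recursive rules for complex expressions. The toy domain, vocabulary, and extensions are invented for illustration and are not drawn from Lewis or Tarski.

```python
# A toy "semantic theory" in the logical style: a mapping from expressions
# onto extensions, with recursive rules for complex expressions.
# Domain, vocabulary, and extensions are invented for illustration.

domain = {"bessie", "flicka", "rex"}

# Direct assignment of primitive denotation (predicate -> extension)
denotation = {
    "cow":   {"bessie"},
    "horse": {"flicka"},
    "dog":   {"rex"},
}

def extension(expr):
    """Recursively compute the extension of an expression in a tiny predicate language."""
    if isinstance(expr, str):                     # primitive predicate
        return denotation[expr]
    op, *args = expr
    if op == "and":                               # intersection of extensions
        return extension(args[0]) & extension(args[1])
    if op == "or":                                # union of extensions
        return extension(args[0]) | extension(args[1])
    if op == "not":                               # complement relative to the domain
        return domain - extension(args[0])
    raise ValueError(f"unknown operator: {op}")

print(extension(("or", "cow", "horse")))          # {'bessie', 'flicka'}
print(extension(("not", "dog")))                  # {'bessie', 'flicka'}
```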
An Example -- Fodor’s Causal Covariation Account • Basic idea: the semantic value of a “symbol in mentalese” is its (characteristic) cause • More formally: there is an asymmetric causal covariation relation between cows and symbols that mean ‘cow’, and this explains why ‘cow’-symbols mean ‘cow’.
Problems • At best, an explanation of meaning-assignments, not of meaningfulness • Account (putatively) distinguishes things that mean cow from those that mean horse • Does not distinguish things that are meaningful from those that are meaningless--causal covariation is pandemic • E.g., there is a causal covariation relation between cows and cowpies, but cowpies do not mean ‘cow’. Such a theory only explains meaning-assignments once meaning is already in the picture to begin with!
Generalization • More generally, a mapping is not enough to explain semantics (i.e., semanticity) • Specifies, but does not explain, meaning-assignments (cf. Simon Blackburn, Hartry Field) • Mapping alone is weaker than meaning • Mappings are cheap -- indefinitely many possible "interpretations" of a language.
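The "mappings are cheap" point can be made vivid with the same kind of toy vocabulary (names invented): every permutation of the domain induces another perfectly well-defined mapping from predicates to extensions, and nothing in the mappings themselves singles out the intended one.

```python
# "Mappings are cheap": each permutation of the domain yields another
# well-defined interpretation of the same vocabulary. Toy names are invented.

from itertools import permutations

domain = ["bessie", "flicka", "rex"]
intended = {"cow": {"bessie"}, "horse": {"flicka"}, "dog": {"rex"}}

interpretations = []
for perm in permutations(domain):
    relabel = dict(zip(domain, perm))             # a relabelling of the domain
    interpretations.append({pred: {relabel[x] for x in ext}
                            for pred, ext in intended.items()})

# 6 interpretations (including the intended one), all equally good "mappings"
print(len(interpretations))
```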
Why is “Formal Semantics” Attractive? • 20th century attention to philosophy of language, semantics, largely stems from interests of logicians • Special interests of logicians • Truth • truth-preserving inference • Completeness • Consistency
Leads to odd features of the "languages" logicians talk about • Only sentences with truth values are talked about (cf. Austin 1962) • Desire for/assumption of bivalence • Fuzzy predicates problematic for extensional approach • Tarskian definition not possible for languages with indexicals, reference to expressions in the object language. • Linguistic change, idiolectic variation only accommodated by changes/differences in entire language
Historical Extremes • Some Positivists called for “reform” of natural language • Quine -- don’t know what I mean by ‘rabbit’ • Davidson: we each speak our own language (Then what is English? How is communication possible?) • Amazing to linguists that these issues are largely ignored by philosophers
Limits of Logical Approach • Logical approach not good for talking about non-assertoric utterances (nor uses of concepts in things other than judgments) • Many features of actual languages and concepts seem problematic • Fuzziness/vagueness (predicates & concepts) • Indeterminacy (predicates & concepts) • Non-alethic felicity conditions (utterances/thoughts) • Context-dependence (Edidin) • Failure of bivalence, sorites paradoxes (statements/judgments) • Cartwright on scientific laws
Analysis of Problem 1 • Prevailing approach to semantics in analytic philosophy has been guided by the interests of logicians • As if we were asking not "What is language like?" but "What would language have to be like if it were to accommodate certain virtues pertaining to truth and inference?"
Analysis of Problem 1 • Prevailing approach to semantics in analytic philosophy has been guided by the interests of logicians • OK so far as it goes • Other possible theoretical perspectives • Pragmatics/sociolinguistics • Ethology/animal behavior • Psychology • Evolution • Dynamic systems, cybernetics
Suggestion • Set aside logically-inspired approach • Try other approaches • See if things that were problematic become more transparent • …will try to implement this different approach in second half of talk
Second Problem--Too Much or Too Little • Two basic kinds of approaches to concepts • Rich Views: Those that look at concepts within rich context of human mind -- hold that concepts are inseparable from other features of human mentation • Consciousness, natural language, reasoning • Searle, Brandom, Blackburn, Wittgenstein • Reductive Views: Those that stress continuities with animal cognition, computation or some other kind of system, reduce concepts to something else
Rich Views--Claims and Appeal • Claims: • Cannot have concepts without other things in human mentation (e.g., consciousness, inference, natural language) • Intuitive appeal: • Not clear that one would call something a concept if we knew it lacked these other things • Not clear how to individuate concepts semantically without these other things (e.g., could it mean ‘bachelor’ if one didn’t infer ‘male’ or ‘unmarried’?)
Rich Views--Problems • Obvious continuities between human and animal cognition call for explanation • Biological • Behavioral • Things in animals seem concept-like • Tracking kinds and variable properties • My cat seems to be able to tell dogs from mice, animate mice from inanimate • Re-identification • My cat can identify some individuals (e.g., me) • Behavior cued to kind- and property-differences
Reductive Views • Take some set of features of information-processing or animal cognition and treat these as an analysis of concepts in us, e.g.: • Concepts are "just" discriminative abilities • Thoughts are "just" symbolic representations • Concepts are "just" symbol types in a language of thought • Languages are just functions from terms to their extensions
Reductive Views--Problems • Not clear that our concepts would be what they are without inferential relations, self-reference, consciousness, language • (fine-grained) semantic individuation tied to inferential commitments • Role of division of linguistic labor • Doesn’t seem right to say human concepts are just animal cognition “plus” an add-on: the phylogenetically older elements are transformed by being taken up into a new kind of system.
A Dilemma • Neither rich nor reductive views seem wholly satisfactory • Seems to present a choice between the idea that lower-level theories explain everything (reduction) or nothing • Can one find a middle way which • Stresses continuities with animal precursors of human thought • Gives some explanatory insight • Is compatible with the constitutive role that inferential, linguistic, and phenomenological features seem to play in human conceptuality?
A Way Out • Explanation of concepts in terms of features continuous with animal cognition is not a philosophical analysis (in terms of necessary and sufficient conditions) • It is scientific explanation involving idealization
Idealization • Take a rich phenomenon (say, moving bodies -- dynamics) • Idealize away from some set of factors that do in fact matter in vivo (e.g., mechanical interactions involved in wind resistance, impetus) • …to arrive at a more accurate understanding of what is left over (e.g., gravity)
[Figure: the actual (noisy) trajectory of a projectile, shaped by electromagnetism and wind (mechanical force); idealization away from these factors yields Galileo's parabolic trajectory of projectiles]
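As a numerical illustration of the idealizing move, the sketch below (parameter values invented) compares Galileo's drag-free parabolic range with a crude simulation that adds a linear drag term: the idealization does not describe the noisy trajectory, but it isolates the gravitational invariant.

```python
# Idealization illustrated: Galileo's drag-free parabola vs. a trajectory
# simulated with a crude linear drag term. Parameter values are invented.

import math

g, v0, theta = 9.81, 30.0, math.radians(45)   # gravity, launch speed, launch angle
k, dt = 0.05, 0.01                            # made-up drag coefficient, time step

def ideal_range():
    """Range of the idealized (drag-free) parabolic trajectory."""
    return v0 ** 2 * math.sin(2 * theta) / g

def simulated_range():
    """Range with a simple linear drag force, integrated by Euler steps."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:
        vx += -k * vx * dt                    # drag slows horizontal motion
        vy += (-g - k * vy) * dt              # gravity plus drag vertically
        x += vx * dt
        y += vy * dt
    return x

print(f"idealized range: {ideal_range():.1f} m")
print(f"with drag:       {simulated_range():.1f} m")
```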
Idealizations • Do not • aspire to tell the whole story about a system • Necessarily describe how things actually behave • Provide adequate basis for predictions • May not be computable (3-body problem) • May not be factorable (feedback systems) • Chaotic systems • Are not properly understood as universally quantified claims about actual behavior of objects and events
Idealizations • Do • Provide true stories (pace Cartwright) about real invariants in nature
Application to concepts... • Leave the word ‘concept’ for the rich things that go on in us. • Investigate the continuities under the name proto-concepts (reached by idealization away from consciousness, etc.) • Leave open the question of whether the kind ‘concept’ is • Protoconceptuality plus add-ons • Determined essentially by relations to other things like consciousness and reasoning.
[Figure: concepts in us stand in a rich web of relations to inference, language, and consciousness; idealizing away from language, inference, and consciousness leaves proto-concepts]
Part II Lineaments of a Non-Reductive Account of (Proto)Concepts (i.e., concepts in us, seen under the idealizing move, and their precursors in the animal kingdom)
Stage 1: Discrimination • Basic suggestion: protoconcepts are first and foremost things employed in the enterprise of the discrimination of environmentally salient conditions within the life of a homeostatic system (organism). • Requires some system within organism capable of some set of states that covary with salient states of affairs -- SCHEMAS • These states must be exploitable in control of behavior • More than a purely informational relation--tracks salient affordances (only very sophisticated animals can track “properties” in any general way!)
Causal Covariation [Figure: environmental conditions (A, B) covary with internal states (S1, S2) of the system]
Discrimination [Figure: an environmental condition (A) drives a discriminator state (S1) that feeds internal regulators and "action" centers] • Takes place only in a homeostasis engine • DISCRIMINATOR must respond to salient states of affairs • Must have further connections in a feedback loop driving behavior on the basis of discrimination • Note non-reductive definition -- something is a discriminator by dint of its role in a more complex system
Simple Example -- The Fly • “Roughly speaking, the fly’s visual apparatus controls its flight through a collection of about five independent, rigidly inflexible, very fast responding systems (the time from visual stimulus to change of torque is only 21 ms). For example, one of these systems is the landing system; if the visual field “explodes” fast enough (because a surface looms nearby), the fly automatically “lands” toward its center. If this center is above the fly, the fly automatically inverts to land upside down. When the feet touch, power to the wings is cut off.” • [Reichardt and Poggio, 1976, 1979; Poggio and Reichardt 1976, reported in Marr 1982, pages 32-33]
[Figure: a discriminator circuit linked to a motor control circuit by excitatory and inhibitory connections]
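A minimal sketch of a trigger-driven controller in the spirit of Marr's description (thresholds and routine names invented, and no claim to model the actual fly circuitry): the discriminator responds to a fast-expanding visual field and drives motor routines directly, with no intervening object representation.

```python
# A trigger-driven "landing system" sketch: a looming discriminator directly
# drives motor routines. Thresholds and routine names are invented.

LOOM_THRESHOLD = 5.0   # made-up rate of visual-field expansion that triggers landing

def landing_controller(expansion_rate, loom_center_above_fly, feet_touching):
    """Map a few fly-relevant signals directly onto motor commands."""
    if feet_touching:
        return "cut power to wings"           # contact ends the routine
    if expansion_rate > LOOM_THRESHOLD:       # discriminator fires: a surface looms
        if loom_center_above_fly:
            return "invert and land upside down"
        return "land toward the center of expansion"
    return "keep flying"

print(landing_controller(8.0, loom_center_above_fly=True, feet_touching=False))
print(landing_controller(8.0, loom_center_above_fly=False, feet_touching=False))
print(landing_controller(0.5, loom_center_above_fly=False, feet_touching=False))
```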
Fly -- No Real Representations • “it is extremely unlikely that the fly has any explicit representations of the visual world around him—no true conception of a surface, for example, but just a few triggers and some specifically fly-centered parameters.” (Marr, p. 34) • What might this mean?
2 Kinds of Schemas • Object-oriented schemas • Contain elements that covary with • Objects • Properties of objects • Interface-oriented schemas • Elements covary with relations at boundaries between organism and environment, not articulated into components that represent the relata.
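One way to picture the contrast is as a difference in data structure (the field names below are invented): an interface-oriented schema carries only organism-relative signals, while an object-oriented schema is articulated into elements that covary with objects and their properties.

```python
# Two kinds of schemas sketched as data structures; field names are invented.

from dataclasses import dataclass, field

@dataclass
class InterfaceSchema:
    """Covaries with a relation at the organism/environment boundary,
    not factored into object and property."""
    looming_rate: float
    loom_center_above: bool

@dataclass
class ObjectSchema:
    """Separate elements track the object, its kind, and its variable properties."""
    object_id: str
    kind: str
    properties: dict = field(default_factory=dict)

fly_state = InterfaceSchema(looming_rate=8.0, loom_center_above=True)
cat_state = ObjectSchema("mouse_3", "mouse", {"animate": True, "in_reach": False})
print(fly_state)
print(cat_state)
```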
[Figure: "surface approaching" -- the state of affairs to which the discriminator is attuned is a fly-relevant affordance]
“Representations” • A technical and stipulative definition • ‘representation’ =df an element in a schema whose function is to covary with objects or properties of objects. • Note that none of the ordinary associations of ‘representation’ are intended to be operative -- syntax, language
Flies have no representations • Flies have no representations, but only interface-oriented schemas. • Perception, cognition and action do not seem to be distinguished in the fly: the motor control mechanisms are directly driven by perceptual stimuli, without any apparent intervening level at which cognition takes place. • The fly's brain contains a distinction device, but what it distinguishes are fly-relevant ecological conditions that are not factored out into states of affairs involving objects and properties.
Fly “semantics” • Either the fly has no semantics at all, or else there is no distinction between semantics and pragmatics for flies:
The activation of the fly's "landing system" might be equally well (or badly) described by us as a REPORT ("there is a surface coming up") or as a WARNING ("Brace for impact, laddie!").
Differences in Higher Animals (1): Types of Proto-Concept • Seems to involve inner models that have elements that track objects (bird, updrafts, worm) • Seems to track kinds of things • In some species, ability to model states of objects (dead/alive, in heat/not, etc.) • In social animals, ability to re-identify particular individuals of the same kind • Recombinability of these elements grounds generativity, productivity of thought
Note parallels between grammatical classes and representational abilities of animals • Track objects -- definite descriptions • Track kinds -- common nouns • Track states -- verbs and adjectives • Track individuals -- proper nouns • But note that these kinds of representational abilities seem to be present in nonlinguistic animals -- productivity does not require language or syntax
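The recombinability claim from the previous slide can be given a toy illustration (all items invented): a small stock of tracked individuals, kinds, and states recombines into many distinct representable situations, which is one way to see how productivity can arise without language or syntax.

```python
# Recombinability grounds productivity: a few tracked elements combine into
# many distinct representable situations. All particular items are invented.

from itertools import product

individuals = ["this_one", "that_one"]
kinds = ["dog", "mouse", "bird"]
states = ["animate", "inanimate", "moving", "still"]

situations = list(product(individuals, kinds, states))
print(len(situations), "combinable situations from",
      len(individuals) + len(kinds) + len(states), "elements")   # 24 from 9
```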
Differences in Higher Animals (2): Learning • Lots of ways (architectures) to implement discrimination circuits • Different (harder) problem of learning -- requires a discrimination ENGINE • Mere circuit-planning not enough • Rule-based systems have proved bad at learning • Accomplished in terrestrial animals through particular kinds of nervous systems
Neural Networks and Neural Modeling in Cognitive Psych. • Attempts to model psychology based on architectural features of the brain • Often models only coarse-grained features • Distributed processing • Massively parallel connections • Hebbian learning
Neural Networks and Neural Modeling in Cognitive Psych. • Features of cognition "fall out" of the model • Learning discrimination of salient (i.e., reinforced-for) features comes naturally • Plasticity of learning new discriminations • Adjustment of existing discriminations • Loosening/tightening vigilance • Fuzziness of predicates
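A minimal, reward-gated Hebbian-style learner (stimuli, learning rate, and threshold invented; not a model of any particular published architecture) to show how a learned discrimination of reinforced-for features can fall out of a simple network:

```python
# A crude reward-gated Hebbian-style learner: weights on inputs that are
# co-active with reinforced stimuli get strengthened, so the unit comes to
# discriminate the salient pattern. All values are invented for illustration.

import random

random.seed(0)
N_FEATURES = 4
weights = [0.0] * N_FEATURES
RATE, THRESHOLD = 0.1, 0.5

def responds(stimulus):
    """The unit 'fires' if the weighted sum of inputs crosses a threshold."""
    return sum(w * x for w, x in zip(weights, stimulus)) > THRESHOLD

def update(stimulus, reinforced):
    """Strengthen weights on active inputs when the stimulus is reinforced-for,
    weaken them mildly otherwise (a Hebbian-flavored, reward-gated rule)."""
    for i, x in enumerate(stimulus):
        weights[i] += RATE * x if reinforced else -0.5 * RATE * x

# Salient (reinforced-for) stimuli share the first two features; distractors don't.
for _ in range(200):
    if random.random() < 0.5:
        update([1, 1, random.randint(0, 1), 0], reinforced=True)
    else:
        update([0, random.randint(0, 1), 1, 1], reinforced=False)

print("responds to salient pattern:   ", responds([1, 1, 0, 0]))   # True
print("responds to distractor pattern:", responds([0, 0, 1, 1]))   # False
```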
Neural Networks and Protoconcepts: Some Claims • Protoconcepts are elements within a discrimination engine • In terrestrial animals capable of conditioning, this engine is realized through a neural net architecture. • Some features of animal cognition to be understood in terms of task of discrimination • Others are artifacts of the realizing system