Seeing Patterns and Learning to Do Things, and what that has to do with language • David Tuggy, SIL-Mexico
Is “the language faculty” a black box? • Is language something totally different from the rest of what we do in our minds? • If not, how are they connected? What does what we do mentally in general tell us about language? • (And what does language tell us about our mental capacities and activities generally?)
Cognitive Grammar (CG) claims that much that we find in language dovetails with what we know about other aspects of cognition. • Language is amazing, but it is not totally different from or unrelated to the rest of our mental activities. • Non-linguistic cognition is pretty amazing too.
Outline • I plan to divide this talk into two sections: I. We have amazing abilities to • Acquire complex and flexible habits (learn to do things) • Compare and categorize experiences (see patterns and apply them in novel ways) II. Understanding these abilities can clarify our understanding of what language is and how it functions. In particular • We should be careful not to simplify by setting learning and the application of patterns against each other as if they were mutually exclusive.
We are good at learning to do things • Think about what is involved in driving a car. • One way to assess it is to consider what it would take to teach a robot to do the same. • There are a host of more-basic skills that must be mastered, that are recruited into the skill of driving.
We are good at learning to do things • For instance (on the perception side of things): • Binocular visual perception: triangulation and depth perception. • Perception of 3-dimensional space and assessment of your position in it. • Calculation of your, and your car’s, motion, rate of motion, direction of motion, etc. • Calculation of other vehicles’, and pedestrians’, etc. motion, rate, direction, etc.
We are good at learning to do things • (still on the perception side of things): • Perception of where the parts of your body are with respect to each other and to the immediate surroundings (like the car seat, gear shift, steering wheel.) • Hearing car sounds, horns, and road noise, and evaluation of their significance. • Seeing and recognizing details like turn signals and brake lights. • Knowing where your mirrors are, and how to interpret what you see in them.
We are good at learning to do things • More on the motor side of things: • Turning your head and eyes for optimum seeing. • Knowing how to move other body parts. • Knowing how to move (without watching) to the controls of the car and then move the controls. • Assessing what your motions will do. • Assessing and controlling how hard, fast and far the motions will/should carry. • Adjusting all of the above to the perceptions mentioned before, in real time.
We are good at learning to do things • These (and other) skills are combined and coordinated in various sophisticated, highly flexible ways, into such higher-order skills as: • Starting and accelerating • Steering to the right or to the left • Staying on the road and in your lane • Shifting gears • Slowing and stopping, not running into cars ahead • Changing lanes, passing other vehicles • Obeying traffic signals and signs.
We are good at learning to do things • These in turn are likely to form part of such ordinary activities as: • Going to work. • Running an errand. • Visiting your parents. • The whole package is so complex that it takes considerable time to learn to do it well • We continue to upgrade and relearn these skills even after we have mastered them.
We are good at learning to do things • They become so ordinary to us that we can do them on “autopilot”, as it were, hardly paying any attention to what we are doing, much less taking in the full complexity of it all. • We adapt them with exquisite precision to new situations. • The consequences of doing them poorly are likely to be lethal. • Yet we regularly and almost unthinkingly trust ourselves and thousands of others to do them right (or at least well enough).
We are good at learning to do things • Besides the perceptual and motor-related skills there are more “autonomous” ones: • E.g. evaluation of other cars’ motions and inferences about their drivers’ intentions, reading signs, judgment of the passage of time, calculation of odds, calculations about what speed to take a corner or a speed bump at, and comparison of the result with the anticipated situation, and so on.
We are good at learning to do things • Language is a set of skills of this sort. • It involves coordinating hugely complex muscular, perceptual, and “autonomous” cognitive skills.
We are good at learning to do things • Many levels of such skills are recruited as parts of other, higher-level skills. • We can say “I wouldn’t have believed that she would have said anything of the sort” and understand it in context while hardly paying any attention to it, certainly without consciously realizing its enormous complexity.
We are good at learning to do things • CG recognizes this, defining a language as a “structured inventory of conventionalized linguistic units”. • A “unit” is a skill we have mastered, a cognitive routine we can run through without having to put “constructive effort” into it. • There are hierarchies upon hierarchies of such skills involved in our use of language.
What is it that we learn to do? • An important point is that though we learn these skills (linguistic or otherwise) from our experiences, they cannot be equated with particular actual experiences. • Not every neuron that fired will fire again in exactly the same way the next time we implement the skill (e.g. of perceiving a car braking on the road ahead, or of saying “she wouldn’t’ve said it”).
What is it that we learn to do? • Rather these are patternsof activation. • They permit a certain amount of “slop” or leeway. • This “slop” or leeway is extremely important. • It is what permits us to recognize a new situation as one of a kind we’ve seen before. • It also permits us to act in a new way that is nevertheless one of a kind we have done before.
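The “slop” or leeway just described can be sketched in code. This is a minimal illustration, not anything from the talk itself: a learned pattern is modeled as a set of expected features, and a new experience counts as an instance of the pattern if it shares enough of them. All feature names and the threshold are invented for the example.

```python
def matches(pattern, experience, leeway=0.25):
    """True if the experience shares enough of the pattern's features.

    `leeway` is the fraction of pattern features allowed to be absent --
    the "slop" that lets a novel situation still count as a known kind.
    """
    shared = len(pattern & experience)
    return shared >= len(pattern) * (1 - leeway)

# A learned pattern: a car braking on the road ahead (features invented).
braking_car = {"red lights", "car ahead", "gap shrinking", "daytime"}

# A new experience: night-time braking -- not identical, but close enough.
tonight = {"red lights", "car ahead", "gap shrinking", "night"}

print(matches(braking_car, tonight))  # 3 of 4 features shared -> True
```

With no leeway at all (`leeway=0`), only exact repeats of past experience would ever match, which is exactly why the leeway matters.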
Extracting patterns • CG talks about this in terms of all the “units” being “schematic” to one degree or another. • Think “Schema” = “Pattern”. • There are higher-level (more abstract) and lower-level (more specific) patterns, and patterns of many kinds • It is “patterns all the way down”, as far as language is concerned.
Extracting patterns • Schemas arise as experiences are compared and commonalities noted. • A schema embodies the commonalities of its subcases. • Consider the (already schematic yet still rather specific) concept of a pencil.
Extracting patterns • As this concept is compared to the similar concept of a ballpoint pen, there are notable similarities.
Extracting patterns • These similarities together constitute a schema (pattern) we can call ‘writing instrument’.
Extracting patterns • This kind of relationship is traditionally represented in CG by an arrow from schema to subcase: A → B means “A is schematic for B; B is a subcase of A.”
Extracting patterns • This relationship is by nature asymmetrical. • Every specification of the schema (pattern) holds true of the subcases; • Not vice versa.
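Both points above (extraction of the schema from its subcases, and the asymmetry of the relationship) can be sketched with feature sets. This is a hypothetical illustration of the pencil/pen example: the specific features listed are invented, but the operations track the text: the schema is the intersection of its subcases, and every schema specification holds of each subcase, not vice versa.

```python
# Subcases, as sets of specifications (feature names invented).
pencil = {"elongated", "hand-held", "leaves a trace", "wooden", "erasable"}
pen    = {"elongated", "hand-held", "leaves a trace", "uses ink", "has a cap"}

# Extraction: the schema embodies the commonalities of its subcases.
writing_instrument = pencil & pen  # {'elongated', 'hand-held', 'leaves a trace'}

# Asymmetry: every specification of the schema holds of each subcase...
assert writing_instrument <= pencil and writing_instrument <= pen

# ...but not vice versa: each subcase adds specifications of its own.
assert not (pencil <= writing_instrument)
```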
Extracting patterns • There is an interesting sense in which either the subcase(s) or the pattern can be seen as “basic” to the other.
Extracting patterns • (1) The schema is extracted from, and comes into being because of, the subcases. In this sense the system is built “bottom-up”. • (2) Once it is established (learned), the schema legitimizes (sanctions) its subcases in “top-down” fashion.
Applying patterns productively • Particularly, a well-established schema can sanction novel structures. • This includes “partial sanction”, where the “subcase” contradicts some of the schema’s specifications.
Extracting and applying patterns • This is the way linguistic rules work under CG. • Rules are simply schemas. Applying a rule is letting the rule sanction a more specific subcase. • If the subcase is a new one, the rule is applied productively. • Like any other linguistic structures, rules are part of the language to the extent that they are learned conventionally (thus known, and known to be known, by all in the relevant group). • Once learned, they can sanction novel structures.
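Sanctioning, including the “partial sanction” mentioned above, can be sketched in the same feature-set idiom. This is an assumed formalization, not CG’s own: full sanction means the candidate satisfies every specification of the schema; partial sanction means it satisfies some but not all.

```python
def sanction(schema, subcase):
    """Classify how a schema sanctions a candidate subcase."""
    satisfied = len(schema & subcase)
    if satisfied == len(schema):
        return "full"       # every specification of the schema holds
    if satisfied > 0:
        return "partial"    # some specifications hold, others do not
    return "none"

# A schema/rule, with invented specifications.
rule = {"elongated", "hand-held", "leaves a trace"}

print(sanction(rule, {"elongated", "hand-held", "leaves a trace", "digital"}))  # full
print(sanction(rule, {"elongated", "leaves a trace", "wall-mounted"}))          # partial
```

Note the first candidate is novel (it adds "digital"), yet is fully sanctioned: this is the rule applying productively.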
Learning and using patterns • E.g. a kid may learn the words sugary and salty, and by comparing them, extract a schema FOOD-y. • FOOD-y is a nascent rule, and the child may use it to invent new words like vinegary or orangey.
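The FOOD-y story can be played out as a toy program (the details are assumed for illustration, glossing over spelling adjustments like doubled consonants): check that the learned pairs share the pattern STEM + “y”, then apply that pattern productively to new stems.

```python
# Learned (stem, adjective) pairs the child has mastered as units.
learned = [("sugar", "sugary"), ("salt", "salty")]

# Extraction: comparing the pairs reveals the commonality  STEM -> STEM + "y".
assert all(adj == stem + "y" for stem, adj in learned)

def food_y(stem):
    """Apply the extracted FOOD-y schema productively to a new stem."""
    return stem + "y"

print(food_y("vinegar"))  # vinegary
print(food_y("orange"))   # orangey
```

The novel outputs are sanctioned by the schema even before they are themselves learned, which is the point the following slides develop.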
Learning & Patterns • From all of this it should be clear that learning things (establishing units) and extracting schemas (making generalizations) and applying them are not mutually exclusive activities.
Learning & Patterns • Everything we have learned (e.g. all the established structures in the diagram below) is a generalization (a schema, a pattern).
Learning & Patterns • The schemas aren’t much good to us until we have learned them (mastered them as units). • Once we have done so, we can use them to come up with new subcases, which may in turn be learnt.
Learning & Patterns • Different people can learn slightly different units, as long as their system is close enough to somebody else’s that they can talk. • Vinegary or orangey may be learned, but if not, they are still understandable because they are sanctioned by the schema (rule) FOOD-y.
Learning & Patterns • Knowing (having mastered) a schema and knowing (having mastered) a subcase are not mutually exclusive propositions. • To the contrary, knowing the subcases helps extract the schema, and knowing the schema reinforces the subcases.
The traditional contrast between Regularity and Irregularity • (Shifting gears —downshifting??—) • In most linguistics of the last 100 years, the contrast between what is regular and what is irregular is given enormous importance. • (Regular = according to rule, i.e. it fits a schema)
The traditional contrast between Regularity and Irregularity • It is often considered important to maximize the regular and minimize the irregular in our models of language (so as to be “scientific”). • The problem is that it has been assumed that only irregular things are learned.
The traditional contrast between Regularity and Irregularity • It is assumed that: Regular = systematic = predictable = produced by rule. • Irregular = idiosyncratic = arbitrary = learned • There is assumed to be a dichotomy between these two categories.
The traditional contrast between Regularity and Irregularity • This difference is typically made into part of the architecture of linguistics. The regular/predictable is the province of grammar, the irregular is the province of the lexicon.
The traditional contrast between Regularity and Irregularity • The system assumes nice neat “modules”. • It is therefore considered important to establish if a particular kind of structure is to be accounted for “in the grammar” or “in the lexicon.”
The traditional contrast between Regularity and Irregularity • Structures are taken to be of fundamentally different sorts, and to be processed in very different ways, depending on whether they are “in the grammar” or “in the lexicon”.
The traditional contrast between Regularity and Irregularity • So, if a word like sugary, or a phrase like over the top, could be produced by rule, the presumption is that in fact it is produced by rule.
The traditional contrast between Regularity and Irregularity • The schema is real, the subcases are epiphenomenal. • In effect, if you first learned the specific structure, as soon as you learn how to produce it by rule, you forget it and remember only the rule.
The traditional contrast between Regularity and Irregularity • All members of the category alike are produced by the rule rather than learned. • This is justified because it makes the model simpler and more predictive. (Science is all about prediction, right?)
The traditional contrast between Regularity and Irregularity • Now this was so obviously wrong for many words that the model was modified: morphology (word-formation) was distinguished from syntax (“real” grammar), • because (oversimplifying) many examples of morphological rules had so clearly been learned.
The traditional contrast between Regularity and Irregularity • As a result, morphological structures and rules were taken to be different in kind from syntactic structures and rules; they were taken care of in a different “module”.
The CG view • For CG, the four dimensions of the traditional distinction (regular, systematic, predictable, produced by rule) are gradual, and though they tend to line up, they are not exactly parallel.
The CG view • The distinction between what is produced by rule and what is learned is of especial interest to CG. • It is the only one of these four that is directly cognitive (dealing with how the system processes the structure).
The CG view • It is closely tied to the two abilities we have been discussing. • Producing something by rule is using a schema to sanction it, especially if it is not itself (yet) learnt. • Learning is (of course) learning: routinizing a skill, making a sequence of cognitive activations into a unit, then recalling that unit, as needed, from cognitive storage (memory).
The computer analogy • A standard (and largely useful) way to talk about these issues is on the analogy of a computer. • Learned information is analogous to what is stored on the hard drive, and information produced by rule is analogous to information produced by a program and not stored.
The computer analogy • This makes it less than immediately obvious that the distinction is one of degree. • What degree is there between information that is on the hard disk and information that is not? • In a sense, none.
The computer analogy • But that’s like saying there is an absolute, binary, modular difference between the word giraffe written here on the screen and giraffe here, or in a book. • It’s true in a sense, but for most purposes it’s much more important to see that it’s the same word (pattern = schema) either place.
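The stored-vs-computed picture, and the deck’s rejoinder that the result is the same unit either way, can be sketched with a simple cache. This is an illustrative analogy only (the plural rule and the cache are stand-ins, not a claim about processing): the first use of a form is “produced by rule”; after that it is “recalled from storage”, and for most purposes it is the same word either way.

```python
cache = {}  # "learned" units, recalled from storage rather than rebuilt

def plural(noun):
    """Return a plural form, by recall if mastered, else by rule."""
    if noun in cache:          # a mastered unit: just recall it
        return cache[noun]
    form = noun + "s"          # produced by rule (the schema)
    cache[noun] = form         # ...and thereby learned as a unit
    return form

first = plural("giraffe")   # produced by rule on first use
second = plural("giraffe")  # recalled from storage thereafter
assert first == second == "giraffes"  # the same word (pattern) either place
```

The same item migrates from one status to the other without changing its content, which is why CG treats the learned/produced-by-rule contrast as one of processing degree rather than of kind.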