What are we testing…when we think we are testing listening? John Field Universities of Bedfordshire and Cambridge ISLI, University of Reading 12th November, 2013
With thanks to Cambridge English Language Assessment and the ISLI, University of Reading, for their support for the research cited here
A theme of the talk • Teachers and EFL materials writers tend to favour standard test formats: partly to prepare learners for international tests and partly through lack of alternatives • But the needs of teachers and the designers of international high-stakes tests are clearly very different • Testers aim for: • Reliability (avoiding ‘subjective judgement’) • Ease of marking • Teachers / local testers need: • In-depth information about the listening skills of individual learners, so that testing can lead into instruction.
What do we think we test? • Comprehension. • So what do we mean by comprehension? • ‘Understanding’ • Understanding what? • Giving correct answers to comprehension questions
What do we think we test? 2 • ‘Listening for gist’ • ‘Listening for information’ • ‘Listening for main idea’ • ‘Local listening’ • ‘Global listening’
What do we think we test? 3 [CEFR B2 goals] • ‘Can understand standard spoken language, live or broadcast, on both familiar and unfamiliar topics normally encountered in personal, social, academic or vocational life. Only extreme background noise, inadequate discourse structure and/or idiomatic usage influence the ability to understand • Can understand the main ideas of propositionally and linguistically complex speech on both concrete and abstract topics delivered in a standard dialect, including technical discussions in his/her field of specialisation • Can follow extended speech and complex lines of argument provided the topic is familiar and the direction of the talk is signposted by explicit markers’.
OK, so… • That tells me what L2 listeners should aim for under my teaching/testing… • But how are they supposed to get there? • These descriptors define the input or output in listening, but say nothing about the process we are trying to test. • They cannot be said to support assessment for learning.
The need for a cognitive approach • There is a new interest among testers in what goes on in the mind of the test taker. • We need to know whether high-stakes tests actually test what they claim to test. Can a listening test, for example, accurately predict the ability of a test taker to study at an English-medium university? • At local level, we need to use tests to diagnose learner problems so that the tests can feed into learning. This is especially true of listening.
Cognitive validity asks… • Does a test elicit from test takers the kind of process that they would use in a real-world context? In the case of listening, are we testing the kinds of process that listeners would actually use? • Or do the recordings and formats that we use lead test takers to behave differently from the way they would in real life?
Two possible approaches • A. Ask learners to report on the processes they adopted when taking a test (e.g. by explaining how they got their answers) • B. Use a model of listening that is supported by evidence from psychology. Match the processes produced by a test against the model.
Learner report 1: Location • Item: A demand for golf courses attracted the interest of both ………… and businessmen. • Key: There was … enormous interest amongst landowners not to mention businessmen • S: I think I um + I the key words. I think most + most useful for me is the ‘businessmen’ • R: right • S: because when I heard this before + I heard I heard ‘landowners’ and ‘businessmen’ • R: so you you recognised the the word ‘landowners’ • S: oh yeah • R: and [it was] close to the word ‘businessmen’ • S: yeah this is ever close so I think maybe
Conclusion • Test takers listen out for words from the (written) items in order to locate where the answer is likely to be.
Learner report 2: Order R: is there anything that you heard that helped you? S: I have the problem about that because I am concentrate on the two of the questions so … I didn’t realise R: so S: his his + he’s already go to the 9 R: right ok so you were still listening out for number 8 S: yeah and number 7 Professional Development for IATEFL 2013
Conclusions Learners recognise and exploit the convention that questions are asked in the same order as the recording. This provides them with a rough sequential outline of the recording before they even begin to listen. If a listener fails to find the answer to one question, he/she may go on to miss the answer to the next one as well.
What a written set of items provides The items in (e.g.) a gap-filling task potentially provide a candidate with: • An outline of what the recording covers • A set of gapped sentences that follow the sequence of the recording • Key words with which to locate information • Sequences which may echo the wording of the recording or the order of words
Learner report 3: Prominent words Correct answer: Tom suggests that golf courses could also be used as NATURE RESERVES S: number 13 is I’m not sure but um + he said ‘crack’ R: you heard the word ‘crack’? S: crack …but I don’t know the meaning of ‘crack’ R: er you know it seemed to be an important word S: yes I think so R: ok + how did you spell ‘crack’ if if you don’t know the S: c-r-a-c-k R: right so you guessed the spelling did you? S: I guess yes Most importantly, courses should be designed to attract rather than drive away wildlife.
Conclusion Learners sometimes simply listen out for prominent words – even if they do not understand them. This is partly a reflection of their level. At level B1 and below, listeners are very dependent upon picking up salient words rather than chunks or whole utterances. This tendency is increased by the use of gap-filling tasks, which focus attention at word level.
General conclusions • a. Conventional test formats provide much more information than is available in real-world contexts (and do so in a written form) BUT… • b. Conventional test formats may also be more demanding than real-life listening because of divided attention effects, where the learner has to read and listen or read, write and listen.
Recordings Does the input impose similar listening demands to those of a real-world speaker?
Natural speech (Recording: Level B2) • To what extent do these recordings resemble authentic everyday speech?
Recording origin • Authentic • Scripted • Semi-scripted / re-recorded • Improvised ‘All tests are based on authentic situations’ (Cambridge ESOL PET Handbook)
Why re-recorded material? • Exam boards prefer this type because it enables them to • ‘Reduce noise’ • Control speech rate • Simplify vocabulary and grammar if necessary • Introduce distractors • Eliminate redundancy (or add it with single-play tasks)
Some conclusions on studio recordings • Actors adapt their delivery to fit punctuation. • They pause regularly at the ends of clauses • There are few hesitation pauses. • No overlap between speakers
Speaker variables • Accent • Speech rate: speed and consistency • Pausing • Level and placing of focal stress • Number of speakers • Pitch of voice; familiarity of voice • Precision of articulation
Normalisation and testing L2 listening • Test takers need time to adjust (normalise) to the voice of an unfamiliar speaker. Best not to focus questions on the opening 10 seconds of a longer recording. • Because of the need to normalise, it is best not to have too many speakers in a test recording. Listening difficulty increases as the number of voices increases beyond one male and one female (Brown & Yule, 1983). • Adapting to voices is cognitively demanding. Testers must bear in mind the cognitive demands of normalising to speech rate and voice pitch. Is it fair to add to those demands by featuring a variety of accents?
Tasks Does the task elicit processes which resemble those that a listener would use in a real-world listening event?
Task types in international tests • Multiple choice • Gap filling • True/False/Don’t know • Multiple matching: Identify which of the five speakers is a lorry driver / a politician / a musician • Visual multiple choice Examination boards recognise that all of these have their drawbacks, which is why they argue for a mixture of tasks
Multiple choice questions You hear an explorer talking about a journey he’s making. How will he travel once he is across the river? A. by motor vehicle B. on horseback C. on foot (FCE Handbook, 2008: 60)
Recording 1 (FCE Sample Test 1:1) • The engine’s full of water at the moment, it’s very doubtful if any of the trucks can get across the river in this weather. The alternative is to carry all the stuff across using the old footbridge, which is perfectly possible …and then use horses rather than trucks for the rest of the trip all the way instead of just the last 10 or 15 kilometres as was our original intention. We can always pick up the vehicles again on the way down…
Conclusion Conventional formats require the listener to: • Map from written information to spoken • Eliminate negative possibilities as well as identify positive ones (esp. with MCQ and T/F) • Read and write as well as listen (esp. gap-filling) • Engage in complex tasks which take us well beyond listening (esp. multiple matching)
The task: solutions for the teacher / local tester
Suggestions for using conventional tasks • Provide items after a first playing of the recording and before a second. This ensures more natural listening, without preconceptions or advance information other than general context. • Keep items short. Loading difficulty on to items (especially MCQ ones) just biases the test in favour of reading rather than listening. • Items should avoid echoing words in the recording • Favour tasks (e.g. multiple matching) that allow items to ignore the order of the recording and to focus on global meaning rather than local detail.
More natural tasks • Ignore the questions in the coursebook or present them orally. • Ask questions and get answers in the first language • Use whole class oral summary (What have you understood so far?), then replay the recording • At lower levels of English, ask learners to transcribe small parts of a recording • At higher levels, use note-taking and reporting back • Get learners to work in pairs and compare notes
Items: What to target in a listening test?
Five phases of listening (Field 2008): Decoding → Word search → Parsing → Meaning construction → Discourse construction (taking the listener from speech signal, to words, to meaning)
Targets An item in a test can target any of these levels: • Decoding: She caught the (a) 9.15 (b) 9.50 (c) 5.15 (d) 5.50 train. • Lexical search: She went to London by ……. • Factual information: Where did she go and how? • Meaning construction: Was she keen on going by train? • Discourse construction. What was the main point made by the speaker?
Targeting levels of listening • In theory, a good test should target all levels of listening in order to provide a complete picture of the test taker’s command of all the relevant processes. • In practice, higher levels may be too demanding in the early stages of L2 listening. Novice listeners focus quite heavily on word-level decoding, which does not leave them enough spare attention to give to wider meaning. • In addition, certain test formats may tap almost exclusively into one level. Gap-filling is a good example
Higher processes (Field 2008): PROPOSITION → (enrich meaning) → MEANING REPRESENTATION → (handle information) → DISCOURSE REPRESENTATION
Implications for testing Questions can and should be asked at three levels: • Factual: local information • Meaning in context: requiring the listener to relate what the speaker says to the context or to draw conclusions which are not expressed by the speaker • Discourse: showing a global understanding of what was said (including speaker intentions etc.)
Meaning representation • The listener has to: • Relate what was said to its context • Enrich the meaning (drawing upon world knowledge) • Make inferences • Resolve reference (she, it, this, did so) • Interpret the speaker’s intentions All of these indicate possible question types
Discourse building / handling information • Choose: Is it important? Is it relevant? • Connect: How is it linked to the last utterance? • Compare: Is what I think I heard consistent with what was said so far? • Construct: What is the overall line of argument?
Why is information handling omitted in present test design? • Choose: the tester chooses which information points to focus on – sometimes choosing points that are not central to the recording • Connect: Much testing focuses on single points, with no connection to those before and after • Compare: Tests rarely ask learners to check information (for example, comparing two accounts of an accident) • Construct: Tests rarely seek evidence that learners can construct an outline based upon macro- and micro-points / headings and subheadings
Solutions for local testers Ask questions at discourse level: • What is the main point of the recording? / Give three main points. • What is the connection between Point A and Point B? • Complete a skeleton summary of the text with main points and sub-points Ask learners to compare two recordings for similarities and differences Ask learners to summarise a recording orally or in the form of notes (in L1 or L2)
Some thoughts on teacher testing of listening and its impact on teaching
The inflexibility of high-stakes tests Large-scale high-stakes tests have major constraints which prevent them from testing listening in a way that fully represents the skill. • Reliability and ease of marking • Highly controlled test methods, using traditional formats that the candidate knows • Little scope for attention to individual variation or alternative answers