
What are we testing… when we think we are testing listening?



Presentation Transcript


  1. What are we testing… when we think we are testing listening? John Field Universities of Bedfordshire and Cambridge ISLI, University of Reading 12th November, 2013

  2. With thanks to Cambridge English Language Assessment and the ISLC, University of Reading, for their support for the research cited here

  3. A theme of the talk • Teachers and EFL materials writers tend to favour standard test formats: partly to prepare learners for international tests and partly through lack of alternatives • But the needs of teachers and the designers of international high-stakes tests are clearly very different • Testers aim for: • Reliability (avoiding ‘subjective judgement’) • Ease of marking • Teachers / local testers need: • In-depth information about the listening skills of individual learners, so that testing can lead into instruction.

  4. What do we think we test? • Comprehension. • So what do we mean by comprehension? • ‘Understanding’ • Understanding what? • Giving correct answers to comprehension questions

  5. What do we think we test? 2 • ‘Listening for gist’ • ‘Listening for information’ • ‘Listening for main idea’ • ‘Local listening’ • ‘Global listening’

  6. What do we think we test? 3 [CEFR B2 goals] • ‘Can understand standard spoken language, live or broadcast, on both familiar and unfamiliar topics normally encountered in personal, social, academic or vocational life. Only extreme background noise, inadequate discourse structure and/or idiomatic usage influence the ability to understand • Can understand the main ideas of propositionally and linguistically complex speech on both concrete and abstract topics delivered in a standard dialect, including technical discussions in his/her field of specialisation • Can follow extended speech and complex lines of argument provided the topic is familiar and the direction of the talk is signposted by explicit markers’.

  7. OK, so… • That tells me what L2 listeners should aim for under my teaching/testing… • But how are they supposed to get there? • These descriptors define the input or output in listening, but say nothing about the process we are trying to test. • They cannot be said to support assessment for learning.

  8. A cognitive account

  9. The need for a cognitive approach • There is a new interest among testers in what goes on in the mind of the test taker. • We need to know whether high-stakes tests actually test what they claim to test. Can a listening test, for example, accurately predict the ability of a test taker to study at an English-medium university? • At local level, we need to use tests to diagnose learner problems so that the tests can feed into learning. This is especially true of listening.

  10. Cognitive validity asks… • Does a test elicit from test takers the kind of process that they would use in a real-world context? In the case of listening, are we testing the kinds of process that listeners would actually use? • Or do the recordings and formats that we use lead test takers to behave differently from the way they would in real life?

  11. Two possible approaches • A. Ask learners to report on the processes they adopted when taking a test (e.g. by explaining how they got their answers) • B. Use a model of listening that is supported by evidence from psychology. Match the processes produced by a test against the model.

  12. Listening to learners

  13. Learner report 1: Location • Item: A demand for golf courses attracted the interest of both ………… and businessmen. • Key: There was … enormous interest amongst landowners not to mention businessmen • S: I think I um + I the key words. I think most + most useful for me is the ‘businessmen’ • R: right • S: because when I heard this before + I heard I heard ‘landowners’ and ‘businessmen’ • R: so you you recognised the the word ‘landowners’ • S: oh yeah • R: and [it was] close to the word ‘businessmen’ • S: yeah this is ever close so I think maybe

  14. Conclusion • Test takers listen out for words from the (written) items in order to locate where the answer is likely to be.

  15. Learner report 2: Order • R: is there anything that you heard that helped you? • S: I have the problem about that because I am concentrate on the two of the questions so …I didn’t realise • R: so • S: his his + he’s already go to the 9 • R: right ok so you were still listening out for number 8 • S: yeah and number 7

  16. Conclusions Learners recognise and exploit the convention that questions are asked in the same order as the recording. This provides them with a rough sequential outline of the recording before they even begin to listen. If a listener fails to find the answer to one question, he/she may go on to miss the answer to the next one as well.

  17. What a written set of items provides The items in (e.g.) a gap-filling task potentially provide a candidate with: • An outline of what the recording covers • A set of gapped sentences that follow the sequence of the recording • Key words with which to locate information • Sequences which may echo the wording of the recording or the order of words

  18. Learner report 3: Prominent words • Correct answer: Tom suggests that golf courses could also be used as NATURE RESERVES • S: number 13 is I’m not sure but um + he said ‘crack’ • R: you heard the word ‘crack’? • S: crack …but I don’t know the meaning of ‘crack’ • R: er you know it seemed to be an important word • S: yes I think so • R: ok + how did you spell ‘crack’ if if you don’t know the • S: c-r-a-c-k • R: right so you guessed the spelling did you? • S: I guess yes • From the recording: Most importantly, courses should be designed to attract rather than drive away wildlife.

  19. Conclusion Learners sometimes simply listen out for prominent words – even if they do not understand them. This is partly a reflection of their level. At level B1 and below, listeners are very dependent upon picking up salient words rather than chunks or whole utterances. This tendency is increased by the use of gap-filling tasks, which focus attention at word level.

  20. General conclusions • a. Conventional test formats provide much more information than is available in real-world contexts (and do so in a written form) BUT… • b. Conventional test formats may also be more demanding than real-life listening because of divided attention effects, where the learner has to read and listen or read, write and listen.

  21. Recordings Does the input impose similar listening demands to those of a real-world speaker?

  22. Natural speech (Recording, Level B2) • To what extent do these recordings resemble authentic everyday speech?

  23. Recording origin • Authentic • Scripted • Semi-scripted / re-recorded • Improvised. ‘All tests are based on authentic situations’ (Cambridge ESOL PET Handbook)

  24. Why re-recorded material? • Exam boards prefer this type because it enables them to • ‘Reduce noise’ • Control speech rate • Simplify vocabulary and grammar if necessary • Introduce distractors • Eliminate redundancy (or add it with single-play tasks)

  25. Some conclusions on studio recordings • Actors adapt their delivery to fit punctuation. • They pause regularly at the ends of clauses • There are few hesitation pauses. • No overlap between speakers

  26. Speaker variables • Accent • Speech rate: speed and consistency • Pausing • Level and placing of focal stress • Number of speakers • Pitch of voice; familiarity of voice • Precision of articulation

  27. Normalisation and testing L2 listening • Test takers need time to adjust (normalise) to the voice of an unfamiliar speaker. Best not to focus questions on the opening 10 seconds of a longer recording. • Because of the need to normalise, it is best not to have too many speakers in a test recording. Listening difficulty increases as the number of voices increases beyond one male and one female (Brown & Yule, 1983). • Adapting to voices is cognitively demanding. Testers must bear in mind the cognitive demands of normalising to speech rate and voice pitch. Is it fair to add to those demands by featuring a variety of accents?

  28. Tasks Does the task elicit processes which resemble those that a listener would use in a real-world listening event?

  29. Task types in international tests • Multiple-choice • Gap-filling • True/False/Don’t know • Multiple matching: Identify which of the five speakers is a lorry driver / a politician / a musician • Visual multiple choice Examination boards recognise that all of these have their drawbacks, which is why they argue for a mixture of tasks.

  30. Multiple choice questions You hear an explorer talking about a journey he’s making. How will he travel once he is across the river? A. by motor vehicle B. on horseback C. on foot (FCE Handbook, 2008: 60)

  31. Recording 1 (FCE Sample Test 1:1) • The engine’s full of water at the moment, it’s very doubtful if any of the trucks can get across the river in this weather. The alternative is to carry all the stuff across using the old footbridge, which is perfectly possible …and then use horses rather than trucks for the rest of the trip all the way instead of just the last 10 or 15 kilometres as was our original intention. We can always pick up the vehicles again on the way down…


  33. Conclusion Conventional formats require the listener to: • Map from written information to spoken • Eliminate negative possibilities as well as identify positive ones (esp. with MCQ and T/F) • Read and write as well as listen (esp. gap-filling) • Engage in complex tasks which take us well beyond listening (esp. multiple matching)

  34. The task: solutions for the teacher / local tester

  35. Suggestions for using conventional tasks • Provide items after a first playing of the recording and before a second. This ensures more natural listening, without preconceptions or advance information other than general context. • Keep items short. Loading difficulty on to items (especially MCQ ones) just biases the test in favour of reading rather than listening. • Items should avoid echoing words in the recording • Favour tasks (e.g. multiple matching) that allow items to ignore the order of the recording and to focus on global meaning rather than local detail.

  36. More natural tasks • Ignore the questions in the coursebook or present them orally. • Ask questions and get answers in the first language • Use whole class oral summary (What have you understood so far?), then replay the recording • At lower levels of English, ask learners to transcribe small parts of a recording • At higher levels, use note-taking and reporting back • Get learners to work in pairs and compare notes

  37. Items: What to target in a listening test?

  38. Five phases of listening (Field 2008): Decoding • Word search • Parsing • Meaning construction • Discourse construction. Together these take the listener from the speech signal, through words, to meaning.

  39. Targets An item in a test can target any of these levels: • Decoding: She caught the (a) 9.15 (b) 9.50 (c) 5.15 (d) 5.50 train. • Lexical search: She went to London by ……. • Factual information: Where did she go and how? • Meaning construction: Was she keen on going by train? • Discourse construction. What was the main point made by the speaker?

  40. Targeting levels of listening • In theory, a good test should target all levels of listening in order to provide a complete picture of the test taker’s command of all the relevant processes. • In practice, higher levels may be too demanding in the early stages of L2 listening. Novice listeners focus quite heavily on word-level decoding, which does not leave them enough spare attention to give to wider meaning. • In addition, certain test formats may tap almost exclusively into one level. Gap-filling is a good example.

  41. Higher-level listening

  42. Higher processes (Field 2008): proposition → enrich meaning → meaning representation → handle information → discourse representation

  43. Implications for testing Questions may and should be asked at three levels: • Factual: local information • Meaning in context: requiring the listener to relate what the speaker says to the context or to draw conclusions which are not expressed by the speaker • Discourse: showing a global understanding of what was said (including speaker intentions etc.)

  44. Meaning representation • The listener has to: • Relate what was said to its context • Enrich the meaning (drawing upon world knowledge) • Make inferences • Resolve reference (she, it, this, did so) • Interpret the speaker’s intentions. All of these indicate possible question types.

  45. Discourse building / handling information: Choose (Is it important? Is it relevant?) • Connect (How is it linked to the last utterance?) • Compare (Is what I think I heard consistent with what was said so far?) • Construct (What is the overall line of argument?)

  46. Spread of targets

  47. Why is information handling omitted in present test design? • Choose: the tester chooses which information points to focus on – sometimes choosing points that are not central to the recording • Connect: Much testing focuses on single points, with no connection to those before and after • Compare: Tests rarely ask learners to check information (for example, comparing two accounts of an accident) • Construct: Tests rarely seek evidence that learners can construct an outline based upon macro- and micro-points / headings and subheadings

  48. Solutions for local testers Ask questions at discourse level: • What is the main point of the recording? / Give three main points. • What is the connection between Point A and Point B? • Complete a skeleton summary of the text with main points and sub-points Ask learners to compare two recordings for similarities and differences Ask learners to summarise a recording orally or in the form of notes (in L1 or L2)

  49. Some thoughts on teacher testing of listening and its impact on teaching

  50. The inflexibility of high-stakes tests Large-scale high-stakes tests have major constraints which prevent them from testing listening in a way that fully represents the skill. • Reliability and ease of marking • Highly controlled test methods, using traditional formats that the candidate knows • Little attention possible to individual variation or alternative answers
