A Customized Speaking Rubric: Design & Implementation Devrim Uygan & Eylem Mengi School of Languages/Sabanci University Istanbul/Turkey
Who are we? • Sabanci University • English Preparatory Program at SL • 650 student intake • 4 Routes (A1, A2, B1, B2) • Speaking 10% (assessed twice at 5%) • A1 & A2 (Interlocutors) • B1 & B2 (Group Discussion)
One of the products of a large-scale research project by the Spoken English Research Group:
OUTLINE • The Birth: What we did before creating the new criteria • First Baby Steps: How we actually made the criteria • First Interactions: Our key principles • Baby Going to Kindergarten: Getting feedback • Gaining Independence: Training teachers and standardization • Flying the Nest: Soon!
We identified/analyzed the needs of: • The FDY students • The FDY Instructors • The faculty members
We did a literature review and defined/identified: • Key concepts • Exit-level descriptors • Speaking objectives • Main areas of development (Idea Development, Interaction, Fluency & Coherence, Use of Language) and subskills for each area • Key principles for designing tasks and rubrics
Why a customized rubric? ! Speaking is a productive skill, and the speaking exam has elements of progress, achievement and proficiency tests, so it is hard to find an off-the-shelf speaking rubric that caters for local needs. ! The assessment task types determine the specifications within the descriptors. ! How much emphasis is given to speaking within the course/TLP determines the expected outcomes, and therefore the performances described in the descriptors.
The problems we had • The descriptors in each band were confusing: a. Words were too general (e.g. "speaks clearly" (A) // "has a reasonable range of language to give clear descriptions" (B)) b. Lacked parallelism between bands c. 0–9 bands (full of lengthy descriptions with vague language)
2. Examiners could not differentiate between bands. 3. Students could fit into different bands for different areas, and the scoring scheme did not allow this. 4. Areas overlapped (e.g. range is a part of all areas); as a result, language could receive 75% of the overall grade.
Our KEY Principles (How we formulated our rubrics) • Chose columns (areas) that matched our objectives: a. Idea development // Task performance b. Interaction // Task performance c. Fluency and coherence d. Use of language 2. Generic but local/route-specific 3. Calibrated with CEFR descriptors (but also specific to our local conditions)
3. Holistic but analytical enough 4. Reader-friendly language a. Parallel language b. Avoided negative statements (focus on what the student CAN do) c. Focus on performance (not speaking ability)
5. User-friendly a. Easy to score b. No need to give additional feedback c. Score can be converted to a grade over 100 d. No need to take detailed notes of student performances
Developing the rubric together • Collecting feedback, revising • Training, standardization