Assessing Academic Writing with a Pragmatic Email Task Randall Rebman Northern Arizona University
Literature Review • Pragmatic competence is part of the most notable models of communicative competence (Bachman & Palmer, 2010; Canale, 1983; Canale & Swain, 1980) • The testing of second language pragmatics is an underexplored area in second language assessment (Roever, 2011)
Target Domain • A sample of writing tasks outlined in Grabe & Kaplan’s (1996) taxonomy • Notes and memoranda • Lecture notes, reports (expository) • Letters including speech acts such as refusals, requests, and recommendations • Recounts & Narratives • Argumentative Essays
Rationale for Choice of Tasks Building on the writing framework for the TOEFL developed by Cumming et al. (2000), this test domain is made up of three tasks: • Independent invention task • Interdependent task • Interdependent situation-based task
Purpose of Test Development • To place L2 students in different levels of writing ability. • To decide whether students matriculate from the intensive English program into the university. • To include a representation of the writing tasks that second language writers will be required to produce in university contexts (Bridgeman & Carlson, 1983).
Why an Email Task? • In our own classes we see students struggle with using the proper conventions of email for communication. • There is a potential for positive washback (Crusan, 2010) in that teachers may begin to prioritize teaching the register and genre features of emails. • Some writing assessment researchers have argued for more situation-based writing tasks (Cumming et al., 2000), but such tasks need to be prototyped for local contexts (Weigle, 2002). • Adding an email task to a writing test can expand the range of the construct of academic writing that is assessed and provide an additional writing sample.
Limited Test Domain • Integrated task: summary of a chart • Independent task: prompt-based argumentative essay • Situation-based task: request to a professor
Research Questions for Test Trialing of Email Task #1 • Can the same rater produce consistent ratings of an email writing task using a new rubric? • Is the email response task testing academic writing ability in a different way than the integrated and independent writing tasks?
Research Questions for Test Trialing of Email Task #2 • Can different raters produce consistent ratings of an email writing task using a new rubric? • Is the email response task testing academic writing ability in a different way than the integrated and independent writing tasks?
Test Trialing #1 & #2 Participants • Trial #1: n = 174 • Trial #2: n = 103 • International students mainly representing China, Saudi Arabia, Japan, Korea, and Kuwait • Ages ranged from 18 to 24 • All were pre-university students required to take the English placement test to determine level placement in the Program of Intensive English or advancement to the university
Methods • A holistic scale for the prototype email task was created using the empirical method (Weigle, 2002) • Quantification: a 6-point rubric operationalized the construct of writing ability on the email task, resulting in a score of 0–5 given by a single rater
Methods • A Spearman rank-order correlation coefficient (rho) is used to measure intra-rater reliability and inter-rater reliability for RQ1. • A Spearman rank-order correlation coefficient is used to measure the consistency among the email, integrated, and independent tasks for RQ2. • Spearman rho was chosen over the Pearson coefficient because the data are not truly continuous (Hatch & Lazaraton, 1990), as sketched below.
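To make the analysis concrete, here is a minimal Python sketch of a Spearman rank-order correlation between two sets of ratings, using scipy.stats.spearmanr. The scores and variable names are invented placeholders, not the study's data; the critical-value check simply mirrors the decision rule reported on the results slides.

```python
# Minimal sketch: Spearman rank-order correlation (rho) between two
# sets of ratings, as used for rater reliability (RQ1) and for the
# correlations between writing tasks (RQ2). The score vectors below
# are hypothetical placeholders, not the study's data.
from scipy.stats import spearmanr

ratings_a = [3, 4, 2, 5, 3, 1, 4, 2]  # e.g., rater 1 (or first rating occasion)
ratings_b = [3, 5, 2, 4, 3, 2, 4, 1]  # e.g., rater 2 (or second rating occasion)

rho, p_value = spearmanr(ratings_a, ratings_b)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")

# Decision rule reported on the results slides: compare the observed
# rho against a critical value at alpha = .05 for the given df.
RHO_CRITICAL = 0.364  # critical value cited on the slides
print("significant" if abs(rho) >= RHO_CRITICAL else "not significant")
```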
Methods Email task scale scoring criteria • Language use • Grammatical and lexical features • Register awareness, including appropriate forms of address • Genre markers specific to emails • Topical relevance • Task completion
Task Characteristics • Task Prompt Directions (3 minutes): Read the question below. Plan, write, and revise an email. Use the space below to prepare and write your email. You may begin now. • Question: You are new at XXX University. You do not know what classes to take. Write an email to Professor Smith to do the following: 1) introduce yourself 2) explain your problem 3) ask for advice
Results of Test Trial #1 • Descriptive statistics for the email task [table not reproduced]. Note. CI = confidence interval; LL = lower limit; UL = upper limit. • Correlation coefficient for intra-rater reliability [table not reproduced]. Note. df = 172; alpha = .05; rho critical = .364; N = number of pairs.
Results of Test Trial #1 • Descriptive statistics across writing tasks [table not reproduced]. Note. CI = confidence interval; LL = lower limit; UL = upper limit. • Correlations between writing tasks [table not reproduced]. Note. df = 172; alpha = .05; rho critical = .364.
Results of Test Trial #2 • Descriptive statistics [table not reproduced]. Note. CI = confidence interval; LL = lower limit; UL = upper limit. • Correlation coefficient for inter-rater reliability [table not reproduced]. Note. df = 172; alpha = .05; rho critical = .364; N = number of pairs.
Results of Test Trial #2 • Descriptive statistics across writing tasks [table not reproduced]. Note. CI = confidence interval; LL = lower limit; UL = upper limit. • Correlations between writing tasks [table not reproduced]. Note. df = 172; alpha = .05; rho critical = .364; N = number of pairs.
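Since the descriptive-statistics tables report means with 95% confidence intervals (LL/UL), here is a minimal sketch of how such an interval can be computed for one task's scores. The score vector is a hypothetical placeholder, not the study's data.

```python
# Minimal sketch: mean, SD, and a 95% confidence interval (LL, UL) for
# one task's scores, matching the quantities reported in the
# descriptive tables. The scores are invented placeholders.
import numpy as np
from scipy import stats

scores = np.array([2, 3, 3, 4, 2, 5, 3, 4, 1, 3])  # hypothetical 0-5 email-task scores

mean = scores.mean()
sd = scores.std(ddof=1)              # sample standard deviation
sem = stats.sem(scores)              # standard error of the mean
ll, ul = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)

print(f"M = {mean:.2f}, SD = {sd:.2f}, 95% CI [{ll:.2f}, {ul:.2f}]")
```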
Discussion • Students had higher mean scores on the email task than on the other two task types in test trial #2 • What does this mean for implementing the new task? • The dispersion of test scores did not distinguish students by writing ability. • The task appears to be too simple, or the scale made it too easy to get a high score. • There is also the possibility that the test takers in test trial #2 were more familiar with the conventions of an email than those in test trial #1
Implications • The email task could be improved by adding complexity to the task design. • This could be done by giving learners more input to respond to, such as a sample email from a professor to which they must respond • The scale must be revised to better distinguish the criteria expected for different bands of the rubric • A sample of emails to faculty members could be gathered to identify pragmatic features lacking in the current task design • Future research needs to determine whether responses to an email task produce different textual features.
References
Bachman, L. F., & Palmer, A. (2010). Language assessment in practice. Oxford: Oxford University Press.
Bridgeman, B., & Carlson, S. (1983). Survey of academic writing tasks required of graduate and undergraduate students (TOEFL Research Report No. 15). Princeton, NJ: Educational Testing Service.
Canale, M. (1983). From communicative competence to communicative language pedagogy. In J. Richards & R. Schmidt (Eds.), Language and communication (pp. 2–27). London: Longman.
Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1, 1–47.
Crusan, D. (2010). Assessment in the second language writing classroom. Ann Arbor: University of Michigan Press.
Cumming, A., Kantor, R., Powers, D., Santos, T., & Taylor, C. (2000). TOEFL 2000 writing framework: A working paper (TOEFL Monograph Series Report No. 18). Princeton, NJ: Educational Testing Service.
Grabe, W., & Kaplan, R. (1996). Theory and practice of writing. New York: Longman.
Hatch, E., & Lazaraton, A. (1990). The research manual: Design and statistics for applied linguistics. Boston: Heinle & Heinle.
Roever, C. (2011). Testing of second language pragmatics: Past and future. Language Testing, 28(4), 463–481.
Weigle, S. C. (2002). Assessing writing. Cambridge: Cambridge University Press.