
Research design: The backbone of academic inquiry



  1. CUE Forum 2008 Research design: The backbone of academic inquiry Peter Neff – Doshisha University Matthew Apple – Nara National College of Technology David Beglar – Temple University Japan Olympic Memorial Youth Center November 1, 2008, 1:15 - 2:50 p.m.

  2. Overview • Introduction: The importance of good research design • Approaching the study • Developing the study design Break for Q&A • Designing the right instrument • Implementing your design • Conclusion Further Q&A

  3. Introduction: The importance of good research design

  4. Poor design vs. good design: The poorly-designed study • Hit on an idea, dive right in • Do no background research • Throw together a survey and give it to a group of unwary participants • Collect data, then ponder how to analyze it • Run to a colleague for help • Fish around for the most “interesting” findings • Pray to get published

  5. Poor design vs. good design: The well-designed study • Hit on an idea, do background research • Formulate relevant, specific, practical RQs • Consider participants, context, and data analysis in advance • Decide on/develop the instrument; pilot and revise it • Decide on appropriate pre/post-test instruments • Plan the stages and structure of data collection • Prepare participants adequately • …Then carry out the study

  6. The importance of good design • A well-designed study provides many benefits: • Demonstrates researcher knowledge • Ties the study to an underlying philosophy • Provides a clear path for the researcher(s) • Helps avoid mishaps of previous studies

  7. The importance of good design • Other benefits • Leads to more concrete results, more definitive conclusions • Improves chances of publication • Raises the status of SLA as a field of inquiry

  8. A word about mixed methods designs • The great quan-qual debate • Mixed methods – the “best” of both worlds • Add a qualitative component to a quantitatively-oriented study: • Participant interviews • Observational, audiovisual data • Open-ended survey questions • Plan for qualitative analyses (text analysis, response coding)
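
As a concrete illustration of the last point (response coding), here is a minimal sketch, in Python and not from the presentation, of tallying theme codes assigned to open-ended responses. The code labels and responses are invented:

```python
# Tally hypothetical theme codes assigned during response coding.
from collections import Counter

# Each open-ended response has already been coded with theme labels.
coded_responses = [
    ["motivation", "anxiety"],
    ["motivation"],
    ["peer_pressure", "anxiety"],
    ["motivation", "peer_pressure"],
]

counts = Counter(code for response in coded_responses for code in response)
for code, n in counts.most_common():
    print(f"{code}: {n}")
```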

  9. Part 1: Approaching the Study

  10. Approaching the study • Hitting upon research ideas • Review of the literature • Formulating research questions

  11. Hitting upon research ideas • Identify the topic in a few words • Reflect on “doability” of research • Can I research this? • Should I research this? • Am I interested in researching this? • Review of the literature can help redefine and revise ideas

  12. Identifying the topic: Hints for starting to narrow • Pose a short question using “what” or “how” • Write a short title that consists of one sentence under 12 words • Ask a friend or colleague to read your topic and gauge their reactions • Draft research questions to see if the topic can be adequately explored

  13. A “researchable” topic • “Can I do this in my current situation?” • “Does this concern people at other institutions?” • “Does this add to the current body of research related to this topic?” • “Does this study contribute something from a unique perspective?”

  14. Filtering “probably not so good” ideas: • To boldly go where no research has gone before… (The “Star Trek” idea) • My theory is clearly better than X (The “Stephen Krashen is so wrong” idea) • My classroom is totally unique (The “I don’t need theory” idea) • This is a really cool technology / methodology / textbook (The “I am primarily a teacher” idea)

  15. Filtering ideas: A few hints • Review research designs and statistical techniques • Review teaching methods and overall SLA research results • Evaluate access to potential study participants • Plan time for material creation, study design, and implementation

  16. Review of the literature • Relate the study to the continuing “dialogue” in current research • Find a “gap” in the literature • Provide a framework for the importance of the study

  17. Review of the Literature: Finding a “gap” in knowledge • “We do not know enough about X…” • “This way of looking at X has never been done…” • “This way of learning about X has not been duplicated in my context” • “Previous research has inadequately explored X…”

  18. Finding literature: Some hints • Google Scholar using key words or researcher names • Scour recent literature review articles • Check for “cited” numbers online • Get access to university databases • Refer to recently published articles • After the year 2000 • During the previous 2 to 3 years • Examine “outside the field” articles

  19. Finding literature: Separating the wheat… • “Top tier” journal articles • Most-often-cited articles • Recent articles • Research articles (not reviews) • Books / Edited book-articles • Major international conference papers • Dissertations / dissertation abstracts

  20. …from the chaff • “In-house” journal articles • Articles from “proceedings” books • Online journal articles with only .html versions • Unedited books from small publishers • Newspaper and magazine articles • Web pages • Anecdotal evidence • Your own previous papers for an MA course

  21. Research questions: A few useful guidelines • Naturally flow from the literature review • Strongly connected to the topic • At least two or three (not one)… • …but not five or six or more • As specific as possible • Directly concern variables in the study • Do not contain yes/no question words

  22. RQs: What not to ask • “Is X true/false?” • “Will X happen if…?” • “Does X cause Y?” • “What do participants think of X?” • “Why does X happen?”

  23. RQs: What to ask • “What differences exist between…” • “Compared to X, how does Y…?” • “To what degree do X and Y differ…?” • “When X is controlled for Y…, how does Z…?” • “What are underlying patterns among…?” • “To what degree does X predict Y?”
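
One of these, “To what degree does X predict Y?”, maps naturally onto a regression analysis. Below is a minimal Python sketch under invented data (weekly study hours predicting test scores); it uses statistics.linear_regression, available from Python 3.10:

```python
from statistics import correlation, linear_regression  # Python 3.10+

# Invented data: weekly study hours (X) and test scores (Y).
hours = [2, 4, 5, 7, 8, 10]
score = [55, 60, 62, 70, 74, 80]

fit = linear_regression(hours, score)
r = correlation(hours, score)
print(f"score = {fit.slope:.2f} * hours + {fit.intercept:.2f}")
print(f"r = {r:.2f}, r^2 = {r**2:.2f} (proportion of variance in Y explained)")
```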

  24. Part 2: Developing the Study Design

  25. Research design • Cross-sectional design: A design in which data are collected from a sample at only one point in time. • Longitudinal design: A design in which data are collected at more than one point in time.

  26. Randomized Control-Group Pretest-Posttest Design
      Experimental Group 1:  T1 → Xa (Method a) → T2
      Experimental Group 2:  T1 → Xb (Method b) → T2
      Control Group:         T1 → (no treatment) → T2

  27. Randomized Control-Group Pretest-Posttest Design • Reasonably strong conclusions can be reached about the effects of the treatments. • Problem 1: Within-session variation (e.g., different teachers or room conditions) may intervene. • The solution? Randomly assign participants, times, and places to the experimental and control conditions. • Problem 2: The pretest may interact with the treatment. This potential problem is dealt with in the next design.
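
The random-assignment step mentioned above is straightforward to do programmatically. A minimal Python sketch, with invented participant IDs and the group labels from the design:

```python
import random

# Hypothetical participant roster; a real study would use its own.
participants = [f"P{i:02d}" for i in range(1, 31)]
random.shuffle(participants)

group_names = ["Experimental (Method a)", "Experimental (Method b)", "Control"]
assignments = {name: [] for name in group_names}
for i, pid in enumerate(participants):
    assignments[group_names[i % 3]].append(pid)  # deal out like cards

for name, members in assignments.items():
    print(name, "->", ", ".join(sorted(members)))
```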

  28. Randomized Solomon Six-Group Design
      Pretested (random assignment):    T1 → Xa (Method a) → T2
      Pretested (random assignment):    T1 → Xb (Method b) → T2
      Pretested (random assignment):    T1 → (no treatment) → T2
      Unpretested (random assignment):       Xa (Method a) → T2
      Unpretested (random assignment):       Xb (Method b) → T2
      Unpretested (random assignment):       (no treatment) → T2

  29. Randomized Solomon Six-Group Design • This design amounts to doing the experiment twice: once with and once without pretesting. • It is possible to know what effects, if any, are associated with pretesting. • If the results of the “two experiments” are consistent, greater confidence can be placed in the findings.

  30. Counterbalanced Design • This design is useful when randomization is not possible and intact groups must be used.

  31. Counterbalanced design • The counterbalanced design rotates out the participants’ differences (e.g., one group has more aptitude or motivation than the other groups) by exposing each group to all variations of the treatment. • Order-of-presentation effects are controlled. • Primary weakness: The possibility of carryover effects from one treatment to the next exists. Allowing time between treatments can alleviate this problem.
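
The rotation itself is a simple Latin-square pattern. A minimal Python sketch; “Method c” and the group names are placeholders added to complete the illustration (the slides name only Methods a and b):

```python
# Each intact group receives every treatment, in a rotated order.
treatments = ["Method a", "Method b", "Method c"]
groups = ["Group 1", "Group 2", "Group 3"]

for g, group in enumerate(groups):
    order = treatments[g:] + treatments[:g]  # rotate by g positions
    print(group, "->", " then ".join(order))
```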

  32. Control-Group Time-Series Design
      Experimental Group 1:  T1 T2 T3 T4 → Xa (Method a) → T5 T6 T7 T8
      Experimental Group 2:  T1 T2 T3 T4 → Xb (Method b) → T5 T6 T7 T8
      Control Group:         T1 T2 T3 T4 → (no treatment) → T5 T6 T7 T8

  33. Control-Group Time-Series Design • This design allows the researcher to determine growth over time, and the effect of an intervention. • The presence of a control group increases the trustworthiness of the results because the possibility of a contemporary event causing any gains can be determined.

  34. Control-Group Time-Series Design • This design can be extended by exposing the participants to the intervention on multiple occasions. • This approach is more sensitive to partial gains in knowledge and tests the strength of the intervention more than once, thus giving the researcher a more accurate understanding of the effectiveness of the intervention.
      Experimental Group 1:  T1 T2 → Xa → T3 T4 → Xa → T5 T6
      Experimental Group 2:  T1 T2 → Xb → T3 T4 → Xb → T5 T6
      Control Group:         T1 T2 →      T3 T4 →      T5 T6
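
A first look at growth in such a design is simply the mean score per group at each test occasion. A minimal Python sketch on invented scores (four occasions shown for brevity):

```python
from statistics import mean

# Invented scores: scores[group][occasion] holds participants' scores
# at that test occasion (T1..T4 here).
scores = {
    "Experimental (Xa)": [[50, 52, 49], [55, 58, 54], [64, 66, 62], [70, 72, 69]],
    "Control":           [[51, 50, 52], [53, 54, 52], [55, 56, 54], [57, 58, 55]],
}

for group, occasions in scores.items():
    means = [f"{mean(occ):.1f}" for occ in occasions]
    print(f"{group}: " + " -> ".join(means))
```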

  35. Q&A Break

  36. Part 3: Designing the Right Instrument

  37. Instrument Design • Commonly used instruments in SLA research • Scored tests • Rater scores • Surveys • Interviews • Consider your eventual data analysis when developing instruments

  38. Instruments - Scored tests
      Pluses:
      • Quantitative items (M/C, Cloze/C-tests): simple to score for large numbers of participants; easier to analyze
      • Qualitative items (short answer, timed essays): good complement to quantitative scores; can provide more in-depth assessment of participants’ abilities
      Minuses:
      • Quantitative items: limited to one type of data
      • Qualitative items: take more time/effort to score; rater bias

  39. Instruments – Performance ratings • An assessment of participants’ performance in an assigned task • Tasks may include presentations, interviews, written essays • Performances can be scored using a Likert-scale, a rubric, or holistically • Usually scored by at least two “expert” raters; sometimes also by peers

  40. Performance ratings • Rating criteria should be concretely established with little ambiguity • Avoid including too many (or too few) criteria for one performance task • All raters should undergo a “normative” training session prior to assessment • Use models to train raters • Avoid single-score holistic ratings
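
One way to check whether a norming session worked is to compare the raters’ scores afterwards. A minimal Python sketch with invented 1-6 rubric scores; the statistics shown (exact agreement, mean difference, Pearson r) are common illustrative choices, not ones prescribed by the presenters:

```python
from statistics import correlation, mean  # correlation: Python 3.10+

# Invented rubric scores from two trained raters on eight performances.
rater_a = [4, 5, 3, 6, 2, 4, 5, 3]
rater_b = [4, 4, 3, 5, 2, 4, 5, 4]

pairs = list(zip(rater_a, rater_b))
exact = sum(a == b for a, b in pairs) / len(pairs)
print(f"Exact agreement: {exact:.0%}")
print(f"Mean difference (A - B): {mean(a - b for a, b in pairs):+.2f}")
print(f"Consistency (Pearson r): {correlation(rater_a, rater_b):.2f}")
```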

  41. Instruments - Surveys • Often used for: • Collecting learner history data (L2 study experience, other background info) • Assessing participants’ attitudes towards a predetermined construct (language learning motivation, anxiety using the L2) • Determining reactions to an experimental treatment (teaching methods, innovative learning tasks)

  42. Survey making • For non-advanced learners, surveys should be in their L1 • Build in redundancy: include multiple questions for each concept area • Questions should be simply worded; avoid negative or confusing wording • Depending on the purpose of the survey, 20-40 items per session is a good range to shoot for

  43. Survey making • Any survey used in a serious study should be piloted in advance • It is acceptable to make adjustments to an existing instrument • Likert-scale items should usually have between 4 and 6 choices • A few qualitative questions can provide a nice complement to quantitative instruments
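
A routine part of piloting is an internal-consistency check on each multi-item concept area. A minimal Python sketch of Cronbach’s alpha on invented Likert responses:

```python
from statistics import pvariance

# Invented pilot responses: rows are participants, columns are four
# Likert items (1-5) intended to tap the same construct.
responses = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(responses[0])  # number of items
item_variances = [pvariance(col) for col in zip(*responses)]
total_variance = pvariance([sum(row) for row in responses])
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # ~0.93 for these invented data
```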

  44. Instruments - Interviews • Interviews can provide an excellent qualitative component to a larger study • It is not necessary to interview all participants • a subsample as small as 10-20% can be acceptable • Use your best judgment on participants’ language ability • For intermediate-and-above learners, L2 interviews are often fine

  45. Conducting interviews • Inform students they are being interviewed, obtain consent • Record unobtrusively • “Warm up” the participants before getting into the heart of the interview • Collect more data than you need

  46. Validating Instruments

  47. Instrument Validity • The construct = The heart of the matter • What construct do you wish to measure? • How do you define the construct? • What are its component parts? Do they form a unified whole?

  48. Operationalizing the construct: The items • Conceptualize the construct as a continuum: easy—difficult items and less able—more able persons. • How have other researchers measured the construct? • Write original or adapted items. • Cover the estimated range of your participants. • Write 50% more items than you intend to use. This will allow you to “cherry pick” the best items as well as items at various levels of difficulty.

  49. Operationalizing the Construct: The Items

      More able            |  More difficult
      persons              |  items
                           |
      x                    |  item 1
      xx                   |  item 2   item 3
      xxx                  |  item 4   item 5
      xxxx                 |  item 6   item 7   item 8
      xxxx                 |  item 9   item 10  item 11
      xxx                  |  item 12  item 13
      xx                   |  item 14
      x                    |  item 15
                           |
      Less able            |  Less difficult
      persons              |  items

  50. Operationalizing the Construct: The Items • After piloting the items, statistically analyze the results. • Examine dimensionality, item difficulty, and item content. • Select the best items to make an efficient, highly reliable instrument.
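
For the item-difficulty step, a classical item analysis is the simplest starting point. A minimal Python sketch on invented 0/1 pilot scores; the dimensionality check mentioned above would need a further analysis (e.g., factor or Rasch analysis) not shown here:

```python
from statistics import correlation  # Python 3.10+

# Invented pilot data: rows = participants, columns = items (1 = correct).
scores = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 1, 1],
]

totals = [sum(row) for row in scores]
for i, item in enumerate(zip(*scores), start=1):
    difficulty = sum(item) / len(item)      # proportion correct
    disc = correlation(list(item), totals)  # simple item-total index
    print(f"Item {i}: difficulty = {difficulty:.2f}, discrimination = {disc:.2f}")
```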
