
Experimental Design


Presentation Transcript


  1. Experimental Design • Nathan Mazzei and Amanda Pacheco

  2. Description- From the experts… • “the researcher creates two (or more) ‘populations’ by using different treatments with two samples drawn randomly from a parent population” (Gorard, 2001)

  3. Continued… • “Preconceived plan: ‘those ideas, issues, principles, and techniques peculiar to those investigations in which the control of natural processes is actually attempted and directly observed’” (Wiersma, 1976). • Note that the design is preconceived; that is, it is conceived before actually doing the research.

  4. McMillan & Wergin (2010) argue that experimental research is referred to as the "gold standard" among researchers across disciplines (p. 60). • They state that the U.S. Department of Education set this standard in 2003.

  5. Gersten, Baker, & Lloyd (2000) argue that the amount of experimental research is at a 30-year low in the field of special education research. • They further stress the need for more research on interventions using experimental research methods.

  6. In experimental design, researchers test a treatment on two randomized groups of participants. • The researchers conduct the research to test a hypothesis. • One group receives the treatment while the other group does not. • Any difference in results between the groups reflects the treatment’s effect.

  7. Educational example • Research on the effectiveness of a particular reading intervention. • Researchers would randomly select students and divide them into two groups. • They would give one group the reading intervention, and the other group the standard curriculum. • They would assess students before and after the intervention, and compare the two groups to determine the effectiveness of the intervention.

  8. Non-educational example • Research on the effectiveness of a particular drug. • Researchers would randomly select and divide participants into two groups. • They would give one group the drug, and the other group a placebo. • They would evaluate the health of the participants before and after the trial, and compare the two groups to determine the effectiveness of the drug.
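The trial procedure described in the two examples above can be sketched in code. This is a minimal simulation, not a real study: the participant pool, outcome scores, and assumed 5-point treatment effect are all hypothetical, and the helper `measure_outcome` is invented for illustration.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 20 participant IDs
participants = list(range(20))

# Randomly divide participants into two equal groups:
# one receives the treatment, the other a placebo (control)
random.shuffle(participants)
treatment_group = participants[:10]
control_group = participants[10:]

def measure_outcome(pid, treated):
    """Hypothetical post-trial health score for participant `pid`.

    Assumes a baseline score around 50 and a treatment effect of
    about 5 points; both numbers are made up for this sketch.
    """
    baseline = 50 + random.gauss(0, 5)   # pre-trial health level
    effect = 5 if treated else 0         # assumed effect of the drug
    noise = random.gauss(0, 2)           # measurement noise
    return baseline + effect + noise

treatment_scores = [measure_outcome(p, True) for p in treatment_group]
control_scores = [measure_outcome(p, False) for p in control_group]

# Compare the two groups' mean outcomes to estimate the effect
diff = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Estimated treatment effect: {diff:.1f} points")
```

The key design point the slides emphasize appears in the code as `random.shuffle` before splitting: assignment to treatment or control is random, so the only systematic difference between the groups is the treatment itself.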

  9. Key Terms

  10. Dependent Variable • Definition- “the variable being affected or assumed to be affected by the independent variable” (Wiersma, 1986, p. 452) • Examples- scores on a pre/post-test, or a health evaluation

  11. Independent Variable • Definition- “A variable that affects (or is assumed to affect) the dependent variable under study and is included in the research design so that its effect can be determined” (Wiersma, 1986, p. 453). • Examples- the reading intervention or the drug.

  12. Reliability • Definition- the experiment is reliable if all the criteria in the research experiment are met thoughtfully and with fidelity (Wiersma, 1986; Gorard, 2001; Creswell, 2003) • If the experiment is replicated, will researchers yield the same results? (Creswell, 2003)

  13. Validity • Definition- “The extent to which a measurement instrument measures what it is supposed to measure” (Wiersma, 1986). • Are the pre/post-test scores a reflection of the intervention and not an outside variable?

  14. Generalizability • Definition- “making a claim that the results are true for other individuals, interventions, measures, and contexts” (McMillan & Wergin, 2010) • Can the results apply outside of the study?

  15. Random Sampling • Definition- "a probability sample in that every population member has a nonzero probability of selection. In a random sample, this probability is the same for all population members" (Wiersma, 1986, p. 264)
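Wiersma's definition above maps directly onto a simple random sample, which can be sketched in a few lines. The population of 1000 student IDs is hypothetical; the point is only that `random.sample` gives every population member the same nonzero chance of selection.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical parent population of 1000 student IDs
population = list(range(1000))

# Draw a simple random sample of 50: every member has the same
# probability of selection, matching Wiersma's definition
sample = random.sample(population, k=50)

# random.sample draws without replacement, so all 50 are distinct
print(len(sample), len(set(sample)))
```

In practice, sampling frames are rarely this clean, but the principle carries over: selection must depend only on chance, not on any characteristic of the student.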

  16. Criteria • Clear purpose of study • Links to previous research • Clear hypothesis/research question • Random sample • Clear independent and dependent variables • Clear data collection (McMillan & Wergin, 2010)

  17. Criteria • Generalizability • The experimental treatment directly correlates with the outcome. • Identified bias • Includes limitations • The study can be repeated with similar results (McMillan & Wergin, 2010)

  18. If all the criteria mentioned above are met thoughtfully, then the experiment can be considered reliable and valid. • The study is considered valuable research in the field.
