
Steps and Validity in Experimental Research: A Comprehensive Guide

This chapter provides a step-by-step approach to conducting experimental research, including problem identification, hypothesis formulation, experimental design, data analysis, and report preparation. It also explores internal and external validity, threats to validity, and different types of research designs. The chapter concludes with a discussion on common sources of error and ways to control them.


Presentation Transcript


  1. Chapter 8: Experimental Research (Conducting & Reading Research, Baumgartner et al.)

  2. Steps in Experimental Research
  • Stating the research problem
  • Determining if the experimental approach is appropriate
  • Specifying the independent variable(s)
  • Specifying all potential dependent variables
  • Stating the tentative hypotheses
  • Determining the availability of measures for potential dependent variables
  • Pausing to consider the success potential of the research
  • Identifying the full potential of intervening variables
  • Making a formal statement of the research hypotheses
  • Designing the experiment
  • Making a final estimate of the success potential of the study
  • Conducting the study as planned in steps 1 through 11
  • Analyzing the data
  • Preparing a research report

  3. Internal and External Validity
  • Internal validity: How valid are the findings within the study?
  • External validity: How much can the findings be inferred or generalized to other populations, settings, or treatments?
  • A study must have good internal validity to have good external validity, but good internal validity does not guarantee good external validity.

  4. Threats to Internal Validity
  • History
  • Maturation
  • Testing
  • Instrumentation
  • Statistical regression
  • Selection
  • Experimental mortality
  • Interaction of selection and maturation or history

  5. Threats to External Validity
  • Interaction effect of testing
  • Interaction effects of selection bias and experimental treatment
  • Reactive effects of the experimental setting
  • Multiple-treatment interference

  6. Types of Designs
  • Designs: the variety of ways a research study may be structured or conducted
  Three main types of designs:
  • True experimental designs
  • Quasi-experimental designs
  • Preexperimental designs

  7. True Experimental Designs
  • Random sampling of participants, random assignment to groups, all threats to internal validity controlled (see the sketch after this slide)
  • Not always possible
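
As an illustration of the random assignment mentioned on this slide (not part of the original presentation), here is a minimal Python sketch that splits a randomly sampled list of participants into treatment and control groups. The participant IDs and group labels are hypothetical.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them evenly into treatment and control groups."""
    rng = random.Random(seed)
    shuffled = participants[:]      # copy so the original list is untouched
    rng.shuffle(shuffled)           # random order removes systematic assignment bias
    midpoint = len(shuffled) // 2
    return {
        "treatment": shuffled[:midpoint],
        "control": shuffled[midpoint:],
    }

# Hypothetical participant IDs drawn by random sampling from the population.
sample = [f"P{i:02d}" for i in range(1, 21)]
groups = randomly_assign(sample, seed=42)
print(groups["treatment"])
print(groups["control"])
```

Shuffling before splitting is what distinguishes random assignment from simply dividing the list in the order participants were recruited.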

  8. Quasi-Experimental Designs
  • Lack either random sampling or random assignment
  • Control the threats to internal validity of history, maturation, testing, instrumentation, selection, and experimental mortality

  9. Preexperimental Designs
  • No random sampling of participants, limited groups, control few threats to validity

  10. Methods of Control
  • Physical manipulation: the researcher controls all aspects of participants' environment and experience
  • Selective manipulation
  • Statistical techniques (see the sketch after this slide)
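
To make the "statistical techniques" bullet concrete (this example is not from the slides), below is a minimal sketch of statistical control using an ANCOVA-style regression: a pretest score is included as a covariate when estimating the treatment effect. The data and column names are hypothetical, and the pandas and statsmodels libraries are assumed to be available.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: pretest score, group assignment, and posttest score.
df = pd.DataFrame({
    "pretest":  [52, 60, 47, 55, 63, 49, 58, 61, 50, 57],
    "group":    ["treatment", "control"] * 5,
    "posttest": [68, 62, 60, 70, 66, 58, 72, 64, 63, 69],
})

# Including the pretest as a covariate statistically controls for initial
# differences between groups that random assignment may not equalize in
# small samples.
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(model.summary())
```

The coefficient on the group term then estimates the treatment effect adjusted for where participants started, rather than the raw difference in posttest means.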

  11. Common Sources of Error
  • Hawthorne Effect: participants in an experiment may perform differently because they know they are in a study
  • Placebo Effect: participants receiving a treatment believe the treatment will have an effect (control with a single-blind approach)
  • "John Henry" Effect: the control group might try harder in an attempt to outperform the experimental group
  • Rating Effects: halo effect, overrater or underrater error, central tendency error
  • Experimental Bias Effect: control with a double-blind approach (see the sketch after this slide)
  • Participant-Researcher Interaction Effect: gender, age, and similar factors
  • Post Hoc Error: assuming a cause-and-effect relationship where one does not exist
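
As a hypothetical illustration of the double-blind approach mentioned above (not from the slides), the sketch below hides group membership behind neutral codes, so that neither participants nor the researchers recording outcomes can tell who received the experimental treatment until the code key is opened for analysis. All names and IDs are made up for the example.

```python
import random

def blind_assignments(participant_ids, seed=None):
    """Assign participants to two arms and hide the arm names behind neutral codes."""
    rng = random.Random(seed)
    ids = participant_ids[:]
    rng.shuffle(ids)
    half = len(ids) // 2
    # The key maps neutral codes to real arm names; it stays sealed until analysis.
    key = {"A": "experimental", "B": "control"}
    assignments = {pid: "A" for pid in ids[:half]}
    assignments.update({pid: "B" for pid in ids[half:]})
    return assignments, key

# Researchers and participants only ever see the codes A and B during the study.
assignments, key = blind_assignments([f"P{i:02d}" for i in range(1, 11)], seed=7)
print(assignments)   # e.g. {'P03': 'A', ...}
print(key)           # opened only after data collection is complete
```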
