
The Basics of Experimentation



Presentation Transcript


  1. The Basics of Experimentation Ch7 – Reliability and Validity

  2. Reliability • Reliability – consistency and dependability; a reliable measure should yield similar results each time it is used. • Interrater reliability – different observers score the same behavior. • Test-retest reliability – measure the same behavior twice with the same test. • Interitem reliability – different parts of a test measuring the same variable are consistent.
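
A minimal sketch of how test-retest reliability is usually quantified, using invented scores for the same subjects tested on two occasions; the reliability coefficient is simply the Pearson correlation between the two administrations (numpy is assumed to be available).

```python
import numpy as np

# Hypothetical scores for the same six subjects tested twice with the same instrument
time1 = np.array([12, 15, 9, 20, 14, 17])
time2 = np.array([13, 14, 10, 19, 15, 18])

# Test-retest reliability is typically reported as the Pearson correlation
# between the first and second administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")
```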

  3. Reliability • Interitem reliability – internal consistency; e.g., in a multiple-item questionnaire that measures a single construct variable, internal consistency is evaluated across the set of items using statistical tests. • Split-half reliability – split the test into two halves and compute the correlation (coefficient of reliability) between the halves. • Cronbach’s α – based on the correlation of each test item with every other item.
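
A rough sketch of both statistics, assuming a hypothetical respondents-by-items score matrix; the split-half coefficient is corrected to full test length with the Spearman-Brown formula, and α is computed from the item and total-score variances.

```python
import numpy as np

def split_half(items):
    """Split-half reliability: correlate odd-item and even-item half scores,
    then apply the Spearman-Brown correction to full test length."""
    items = np.asarray(items, dtype=float)
    odd = items[:, ::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

def cronbach_alpha(items):
    """Cronbach's alpha from a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-respondent, 4-item questionnaire
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 5, 4, 5],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
print(split_half(scores), cronbach_alpha(scores))
```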

  4. Validity • Validity – did you measure what you intended to measure? • Face validity – the procedure is self-evident; works with nonconstruct variables that can be directly manipulated and measured (e.g., measuring pupil size with a ruler). The least stringent type of validity. • Content validity – does the content of the measure fairly reflect the content of the variable we are trying to measure?

  5. Evaluating Operational Definitions • Content validity • Does the content of our measure reflect the qualities of the variable we want to measure? • Content validity also means that the test does not measure other, unrelated qualities.

  6. Evaluating Operational Definitions • Predictive validity – do measures of a dependent variable predict actual behavior or performance? E.g., a questionnaire may measure a person’s desire to affiliate, but will they actually seek out others when given the opportunity?
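
A minimal sketch of how predictive validity might be checked for the affiliation example above, using hypothetical questionnaire scores and a hypothetical count of social contacts each person later initiated; scipy's linregress gives both the correlation and the slope relating score to behavior.

```python
from scipy import stats

# Hypothetical data: self-reported desire to affiliate vs. social contacts later initiated
desire_to_affiliate = [22, 35, 18, 40, 27, 31, 15]
contacts_initiated = [3, 6, 2, 8, 4, 5, 1]

# Predictive validity: does the questionnaire score predict the actual behavior?
result = stats.linregress(desire_to_affiliate, contacts_initiated)
print(f"r = {result.rvalue:.2f}, slope = {result.slope:.2f} contacts per scale point")
```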

  7. Evaluating Operational Definitions • Concurrent validity – is like predictive validity in that it compares scores on a measure with an outside criterion, but it is comparative rather than predictive; you compare scores with another known standard for the variable being studied.

  8. Elevated Plus Maze Used as a measure of anxiety in rodents. Anxious animals spend more time in the enclosed arms and less time on the open arms.

  9. Zero Maze

  10. Open Field Test Thigmotaxis (time spent near the wall of an open field) is another measure of anxiety. If concurrent validity is high, then scores on the plus maze should be highly correlated with thigmotaxis scores in the open field.
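
A quick sketch of that concurrent-validity check, with hypothetical per-animal scores (percent of session time on the open arms and percent of time near the wall); because low open-arm time and high thigmotaxis both indicate anxiety, a strong negative correlation here would support concurrent validity.

```python
import numpy as np

# Hypothetical per-animal scores (percent of session time)
open_arm_time = np.array([35, 12, 48, 8, 25, 30, 15])  # elevated plus maze
thigmotaxis = np.array([55, 82, 40, 90, 65, 60, 78])    # open field, time near the wall

# Low open-arm time and high thigmotaxis both indicate anxiety, so a strong
# negative correlation between the two measures supports concurrent validity.
r = np.corrcoef(open_arm_time, thigmotaxis)[0, 1]
print(f"concurrent validity r = {r:.2f}")
```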

  11. Evaluating Operational Definitions • Construct validity – deals with the transition from theory to research application. Start with a general idea of the qualities of the construct, then convert the idea into an empirical test. • Have I successfully created a measure of the construct of interest? • For example, rats have a natural fear of predation that is associated with increased anxiety in open areas, so we design a test that measures the tendency to avoid open areas.

  12. Evaluating Operational Definitions • Construct validity – tests of construct validity are statistical and theoretical. Do the data make sense in the context of the overall theoretical framework? • Intelligence – maze-bright versus maze-dull rats.

  13. Internal Validity • The degree to which a causal relationship can be established between the antecedent condition and behavior. • Three concepts tied to the problem of internal validity: • Extraneous variables • Confounding • Threats to internal validity

  14. Extraneous Variables • Factors that are not the main focus of the experiment. Other variables besides the IV and DV may change throughout the experiment: • Differences among subjects • Time of day of testing • Order of testing • Inconsistent treatment • Experimenter’s fatigue • Equipment failures

  15. Confounding • When the value of an extraneous variable changes systematically across different conditions of the experiment. • Changes we see in the DV can be explained equally well by the IV or by the extraneous variable.
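
A toy simulation of the idea, under invented numbers: suppose all control subjects happen to be tested in the morning and all treatment subjects in the afternoon, the treatment itself does nothing, and afternoon testing lowers scores; the resulting group difference is real but uninterpretable, because the IV and the extraneous variable change together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confounded design: the treatment has NO effect, but every
# treatment subject is tested in the afternoon, which lowers scores by ~5 points.
control = rng.normal(50, 3, 20)          # tested in the morning
treatment = rng.normal(50, 3, 20) - 5    # tested in the afternoon (time-of-day effect only)

# The observed difference can be attributed to the IV (treatment) or to the
# extraneous variable (time of day); this design cannot tell them apart.
print(f"group difference = {control.mean() - treatment.mean():.1f} points")
```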

  16. This is the Threatdown!

  17. Threats to internal validity • History (outside events affect a group tested together) • Maturation (boredom or fatigue) • Testing (previous administration of a test) • Instrumentation (a feature of the measuring instrument changes) • Statistical regression (subjects assigned to conditions based on extreme scores) • Selection (no random assignment) • Subject mortality (subjects drop out) • Selection interactions (if subjects were not randomly assigned to groups, a selection threat could interact with any of the other threats, affecting one group but not others)
