Participant Observation
• A method of doing field research, also called ethnography or participant observation; a qualitative research approach
• The researcher is socialized into the social setting, i.e., going where the action is and simply listening, watching, and jotting down notes
• The researcher participates in a role in the field and makes observer comments (a subjective view)
• Field observations are collected as field notes (an objective view)
Interview Schedule
• An interview is a piece of social interaction in which one person asks another a number of questions and the other person gives answers
• A qualitative interview is essentially a conversation, e.g., a face-to-face interview, focus group, or telephone interview
• Types: structured (standardized) and semi-structured
• A structured interview schedule is similar to a paper-and-pencil questionnaire; each can be converted into the other
Content Analysis
• The study of recorded human communications
• Examples: newspapers, magazines, web pages, poems, books, songs, paintings, speeches, letters, e-mail messages, laws, constitutions, etc.
• Any technique that makes inferences by systematically and objectively identifying special characteristics of messages, which may be manifest or latent
• Manifest content: the visible, surface content of the communication; the intended meaning (a minimal coding sketch follows below)
• Latent content: the underlying, unintended meaning; requires corroboration
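Because manifest coding works only with the visible surface of a text, it lends itself to simple automation. Below is a minimal Python sketch with a hypothetical two-category codebook; the categories, indicator words, and documents are illustrative assumptions, not a validated coding scheme. Latent coding, by contrast, requires human judgment and corroboration.

```python
from collections import Counter
import re

# Hypothetical coding scheme: category -> indicator words
# (an assumption for illustration, not a standard codebook)
CODEBOOK = {
    "family": {"mother", "father", "parent", "child"},
    "work": {"job", "office", "salary", "employer"},
}

def code_manifest(text: str) -> Counter:
    """Tally how often each category's indicator words appear in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for category, indicators in CODEBOOK.items():
            if word in indicators:
                counts[category] += 1
    return counts

documents = [
    "My mother and father both worked; the office was a second home.",
    "The employer raised every salary after the strike.",
]
for doc in documents:
    print(code_manifest(doc))
# Counter({'family': 2, 'work': 1})
# Counter({'work': 2})
```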
Summary
• "Content analysis can be fruitfully employed to examine virtually any type of communication" (Abrahamson, 1983, p. 286)
• As a consequence, it can focus on either qualitative or quantitative aspects of communication messages
RELIABILITY:
• The degree to which a test consistently measures whatever it measures
• Kirk and Miller (1986) distinguish three types:
• (i) Quixotic: a single method of observation continually yields an unvarying measurement, e.g., every observer told to say the same thing; trivial (FBI stories, etc.)
• (ii) Diachronic: stability of observations over time; weakness: nothing is fixed, things change
• (iii) Synchronic: similarity of observations within the same time period; the most important type
Solution to the problem of reliability:
• Carefully report the methodology used in gathering the data
• Use double-coding as a means of checking reliability (Miles and Huberman, 1994), i.e.:
• two or more researchers coding the same field data (inter-coder reliability), or
• one researcher coding the same segment of data at two different times (intra-coder reliability)
Calculation of Reliability
• Reliability = number of agreements / (number of agreements + number of disagreements)
• A level of 90% or above is most desirable (a worked example follows below)
• Reliability is much easier to assess than validity
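As a worked example, here is a short Python sketch that applies the formula above to two coders' labels for the same ten segments of field data; the codes and segments are hypothetical.

```python
# Percent agreement between two coders:
# reliability = agreements / (agreements + disagreements)
# The labels below are hypothetical codes for ten field-note segments.
coder_a = ["family", "work", "work", "family", "other",
           "family", "work", "other", "family", "work"]
coder_b = ["family", "work", "family", "family", "other",
           "family", "work", "other", "work", "work"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
total = len(coder_a)  # agreements + disagreements
reliability = agreements / total
print(f"Inter-coder reliability: {reliability:.0%}")  # 80%; 90%+ is desirable
```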
VALIDITY:
• The degree to which a test measures what it is supposed to measure, i.e., it confirms how plausible the collected data are
• Kenneth Pike (1969) coined the emic and etic concepts to explain validity in qualitative research
• Emic: studying behavior from inside the system, i.e., using local concepts, e.g., family, culture, etc.
• Etic: studying behavior from outside the system, i.e., using pan-cultural concepts, e.g., circumcision of males
Modifying an imposed etic to achieve a valid emic perspective
• Generate the emic content of the etic construct, i.e., take the etic construct and interpret its emic content, e.g., polygamy (R. W. Brislin, 1976)
• The researcher can also use triangulation, i.e., multiple methods of data collection:
• open-ended techniques, and
• participant observation
Reliability vs. Validity in Quantitative Research:
• Similar to qualitative research, because both deal with measurement
RELIABILITY:
• Means consistency or dependability
• Example: a weight scale; one steps on it and reads 150 as the weight
• If one repeats this and gets the same weight each time, then the scale is reliable
• Also focuses on measurement, or instrumentation, and is addressed in a variety of ways: test-retest, equivalent-forms, and split-half
Test-Retest:
• The degree to which scores are consistent over time
• Example: the relationship between SAT scores in 2005 and 2006, i.e., administering the SAT to the same group of high school seniors at two different times and consistently obtaining the same scores (see the sketch below)
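In practice, this consistency is expressed as a correlation between the two sets of scores. Below is a minimal Python sketch with hypothetical SAT scores; equivalent-forms reliability (next slide) is computed the same way, from two forms administered at the same time.

```python
# Test-retest reliability: correlate two administrations of the same test.
# Scores are hypothetical; a coefficient near 1.0 means scores are
# consistent over time.
from statistics import correlation  # Pearson's r, Python 3.10+

scores_2005 = [1100, 1250, 980, 1400, 1180]
scores_2006 = [1120, 1230, 1000, 1390, 1200]

r = correlation(scores_2005, scores_2006)
print(f"Test-retest reliability: r = {r:.2f}")
```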
Equivalent-Forms
• Administering two different forms of the same test, e.g., the SAT, to the same group at the same time
• The most acceptable estimate of reliability
• Therefore, the most commonly used in research
Split-Half
• Items on the instrument are divided into comparable halves
• E.g., a scale divided so that the first half yields the same score as the second
• Looks at internal consistency (a worked sketch follows below)
• Weakness: it is difficult to ensure that the two halves are equivalent
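The split and the usual length correction can be computed directly. The sketch below uses hypothetical item scores, an odd-even split (one common way of making the halves comparable), and the Spearman-Brown correction, which adjusts for each half being only half the length of the full test.

```python
# Split-half reliability: correlate the two halves, then apply the
# Spearman-Brown correction. Item scores are hypothetical.
from statistics import correlation  # Pearson's r, Python 3.10+

# Rows = respondents, columns = eight test items (1 = correct, 0 = wrong)
items = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
]

half_a = [sum(row[0::2]) for row in items]  # odd-numbered items
half_b = [sum(row[1::2]) for row in items]  # even-numbered items

r_half = correlation(half_a, half_b)
r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown correction
print(f"Split-half reliability: {r_full:.2f}")  # 0.93 for these data
```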
VALIDITY:
• Measuring what you think you are measuring
Content (Face) Validity:
• The degree to which a test measures an intended content area, e.g., achievement tests
• Example: a measure of knowledge of parenting skills could be validated by consulting experts such as social workers and parents
• The judgment is dependent upon the knowledge of the experts
Criterion Validity:
• Describes the extent to which a correlation exists between the measuring instrument and another standard; empirical evidence
• E.g., the relationship between college board examination scores and students' academic success in college
• Two measures must be taken: the measure of the test itself and the criterion to which the test is related (a worked sketch follows below)
• E.g., a program to help pregnant teenagers succeed in high school, with a criterion such as SAT scores as a comparison
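As with reliability, the evidence is a correlation coefficient, here between the instrument and the external criterion. A minimal Python sketch with hypothetical examination scores and first-year college GPAs (both data sets are assumptions for illustration):

```python
# Criterion validity: correlate the measure with an external criterion.
# A substantial positive r is empirical evidence that the examination
# predicts academic success.
from statistics import correlation  # Pearson's r, Python 3.10+

board_scores = [1100, 1250, 980, 1400, 1180, 1320]
first_year_gpa = [2.9, 3.4, 2.5, 3.8, 3.0, 3.5]

r = correlation(board_scores, first_year_gpa)
print(f"Criterion validity coefficient: r = {r:.2f}")
```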
Construct Validity:
• The degree to which a test measures an intended hypothetical construct, i.e., a non-observable trait, such as intelligence, that explains behavior
• Involves testing hypotheses; deductive
• The most difficult type of validity to establish
Difference between Reliability and Validity
• Reliability: the degree to which a measurement procedure produces similar outcomes when it is repeated
• E.g., gender, birthplace, mother's name: these should always be measured the same
• Validity: tests for determining whether a measure is measuring the concept that the researcher thinks is being measured, i.e., "Am I measuring what I think I am measuring?"
Note:
• A valid test is always reliable, but a reliable test is not necessarily valid
• E.g., a test intended to measure a concept such as positivism that instead measures only the word (the noun) is invalid
• Reliability is much easier to assess than validity