URBP 204A QUANTITATIVE METHODS I Survey Research II

Presentation Transcript


  1. URBP 204A QUANTITATIVE METHODS I Survey Research II Gregory Newmark San Jose State University (This lecture is based on Chapters 4 & 6 of Earl Babbie’s The Practice of Social Research, 10th Edition. All uncredited cartoons are from CAUSEweb.org by J.B. Landers.)

  2. Research Purposes • Exploration • Description • Explanation

  3. Research Purposes • Exploration – to ‘get to know’ a new topic • Examples • “What is transit oriented development?” • “What is equity planning?” • Purposes • To satisfy researcher’s curiosity • To test the feasibility of more extensive study • To develop methods to be employed in future study • Benefits and Drawbacks • Breaks new ground and yields new insights • Seldom provides satisfactory answers

  4. Research Purposes • Description – to describe observations • Examples • “What are the characteristics of TOD in California?” • “How does this equity planning program work?” • Purposes • To describe situations and events • To chronicle activities • Benefits and Drawbacks • Answers questions of what, where, when, and how • Does not answer why

  5. Research Purposes • Explanation – to explain phenomena • Examples • “Why are TODs not more successful in California?” • “Why does this equity planning program work so well?” • Purposes • To explain things • To understand why something occurred • Benefits and Drawbacks • Answers why, but hinges on establishing causality

  6. Explanation How do you know if your explanation is credible?

  7. Nomothetic Causality • Real Criteria • Correlation • A relationship exists between the variables • “Housing values are negatively related to commute length” • Time Order • Cause precedes effect • “Gender impacts political opinions” • Non-Spurious (Non-Coincidental) • The effect cannot be explained by a third variable • “Height is not explained by Zodiac sign”
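
To see the correlation and non-spuriousness criteria in numbers, here is a minimal Python sketch (simulated data, not from the lecture): two variables are driven only by a shared third variable, so they correlate with each other, but the partial correlation controlling for that third variable is near zero.

```python
# A minimal sketch of checking correlation vs. non-spuriousness.
# x and y are caused only by a shared lurking variable z.
import math
import random

random.seed(204)

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def partial_corr(a, b, c):
    """First-order partial correlation of a and b, controlling for c."""
    r_ab, r_ac, r_bc = pearson(a, b), pearson(a, c), pearson(b, c)
    return (r_ab - r_ac * r_bc) / math.sqrt((1 - r_ac ** 2) * (1 - r_bc ** 2))

z = [random.gauss(0, 1) for _ in range(1000)]      # lurking variable
x = [v + random.gauss(0, 0.5) for v in z]          # caused by z only
y = [v + random.gauss(0, 0.5) for v in z]          # caused by z only

print(f"corr(x, y)           = {pearson(x, y):.2f}")       # sizable
print(f"partial corr given z = {partial_corr(x, y, z):.2f}")  # near zero
```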

  8. Nomothetic Causality • Why is correlation not enough for causality here?

  9. Nomothetic Causality • Does this cartoon pass the causality criteria?

  10. Nomothetic Causality • False Criteria • Complete Causation • Nomothetic explanations are usually incomplete • “Three common factors for driving a Prius are . . .” • Exceptional Cases • Nomothetic explanations are not voided by exceptions • “I am an environmentalist, but I don’t drive a Prius.” • Majority of Cases • Nomothetic explanations are still valid even if the effect occurs in only a minority of cases • “Most environmentalists don’t drive a Prius.”

  11. Nomothetic Causality • Necessary Causes • Represents a condition that must be present for the effect to follow • “You need to be female to get pregnant” • “You need to go to college to get a college degree” • Sufficient Causes • Represents a condition that guarantees effect will follow • “Skipping the final exam will result in automatic failure” • “Participating in our survey will get you a t-shirt”

  12. Nomothetic Causality • Unit of Analysis • The what or whom being studied • Individuals, groups, organizations, social artifacts, etc. • Typically, but not always, the same as the unit of observation • For example, a study of couples might observe both partners separately • Data from individual units can be aggregated • “32 percent of our sample is college educated” • “This neighborhood predominantly does not vote”
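
To make the aggregation point concrete, here is a small Python sketch with invented records (the names and numbers are hypothetical): individual-level observations are rolled up into sample-level and neighborhood-level figures, so the unit of observation is the person while the unit of analysis can be the group.

```python
# Invented individual-level records (unit of observation = person).
people = [
    {"id": 1, "neighborhood": "Downtown", "college": True},
    {"id": 2, "neighborhood": "Downtown", "college": False},
    {"id": 3, "neighborhood": "Eastside", "college": True},
    {"id": 4, "neighborhood": "Eastside", "college": True},
    {"id": 5, "neighborhood": "Eastside", "college": False},
]

# Sample-level aggregate (unit of analysis = the whole sample).
share = sum(p["college"] for p in people) / len(people)
print(f"{share:.0%} of the sample is college educated")

# Neighborhood-level aggregates (unit of analysis = neighborhood).
by_hood = {}
for p in people:
    by_hood.setdefault(p["neighborhood"], []).append(p["college"])
for hood, flags in by_hood.items():
    print(f"{hood}: {sum(flags) / len(flags):.0%} college educated")
```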

  13. Nomothetic Causality • Faulty Reasoning regarding Units of Analysis • Ecological Fallacy • Drawing conclusions about individuals based on observations of groups • “Younger neighborhoods support recycling, therefore younger people are more likely to support recycling” • Reductionism • Explaining complex phenomena with narrow concepts • Oversimplification • “What is the single cause of the American Revolution?” • “Economists see everything in economic terms.”

  14. Nomothetic Causality Ecological Fallacy

  15. Nomothetic Causality Reductionism

  16. Time Dimension of Research • Cross-Sectional Studies • Observations are all made at one point in time • E.g. our FWBT survey • Longitudinal Studies • Observations are extended over time • Trend Studies track a characteristic • Cohort Studies track a subpopulation • Panel Studies track the same people (the panel) • Problem of panel attrition (respondents dropping out of the study)
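
A quick sketch of the panel-attrition problem, using hypothetical respondent IDs: the same panel is tracked across waves, and the retention rate shrinks as people drop out of the study.

```python
# Hypothetical respondent IDs observed in each survey wave.
waves = {
    "wave_1": {101, 102, 103, 104, 105, 106, 107, 108},
    "wave_2": {101, 102, 103, 105, 106, 108},
    "wave_3": {101, 103, 105, 108},
}

panel = waves["wave_1"]  # the original panel
for name, respondents in waves.items():
    retained = panel & respondents
    print(f"{name}: {len(retained)} of {len(panel)} retained "
          f"({len(retained) / len(panel):.0%})")
```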

  17. The Research Proposal • Problem/objective of research • Relevance of research • Literature review • Research question • Hypothesis • Variables of interest • Research method(s) • Types of data to be collected – methods, sources • Plan for data analysis • Outline of research report – main chapters, sub-chapters • References/Bibliography • Schedule • Budget

  18. Composite Measures • Some concepts are not easily represented by a single variable • Particularly attitudes and orientations • E.g. religiosity, alienation, prejudice, etc. • Composite measures can help • Can combine information from several variables • Can expand the range of variation • Can be useful for data analysis

  19. Indexes vs. Scales • Both are ordinal measures of variables • Both are composite measures of variables • Indexes are the simple accumulation of scores • “We award a point for each task completed and tally those scores to create an index value” • “Dow Jones Industrial Average” • Scales assign scores to patterns of responses • Consider differences in the intensity of responses • Have some sort of logical or empirical order • “We assess the health risk to humans of a mix of pollutants and then award scores on a scale from least to most harmful.”
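
A minimal sketch of the index idea, with invented items: one point is awarded per item endorsed and the points are tallied, so every item counts equally regardless of its intensity. A scale, by contrast, would order or weight the items by intensity, as the Guttman sketch further below makes concrete.

```python
# Invented yes/no items for a hypothetical civic-participation index.
responses = {
    "signed_petition": True,
    "attended_meeting": False,
    "contacted_official": True,
    "volunteered": True,
}

# An index is the simple accumulation of item scores.
index_score = sum(responses.values())
print(f"Participation index: {index_score} out of {len(responses)}")
```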

  20. Indexes vs. Scales Consider an index measure that could be contradicted by a scale measure. For example, if you indexed how conservative politicians are by counting their votes for conservative bills, you might incorrectly rank a politician who voted for several moderately conservative bills as more conservative than a politician who voted against those same bills because they were not conservative enough.

  21. Index Construction • Item selection • What should we include in our index? • Face validity, Unidimensionality, Specificity, Variance • Examination of empirical relationships • Are the variables related to each other? Items that relate to each other are generally useful. • No relation – the item probably should not be included • Too close a relation – one of the redundant pair probably should be dropped • Index Validation • Internal Validation (Item analysis) • Each item should make an independent contribution • No item should be perfectly correlated with another item • External Validation • The index should correlate with other presumed indicators of the variable • “People who score strongly on the index should score strongly on other related measures.”
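
A hedged sketch of the item-analysis step, with invented 0/1 responses: each item is correlated with the sum of the remaining items. A near-zero item-rest correlation suggests the item does not belong; a near-perfect one suggests it is redundant.

```python
# Item-rest correlations for a hypothetical three-item index.
import math

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Rows are respondents, columns are index items scored 0/1 (invented data).
items = {
    "item_a": [1, 1, 0, 1, 0, 1, 0, 0],
    "item_b": [1, 1, 0, 1, 0, 0, 1, 0],
    "item_c": [1, 0, 0, 1, 1, 1, 0, 0],
}

names = list(items)
for name in names:
    # Sum of all the other items, respondent by respondent.
    rest = [sum(items[other][i] for other in names if other != name)
            for i in range(len(items[name]))]
    print(f"{name}: item-rest correlation = {pearson(items[name], rest):.2f}")
```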

  22. Scale Construction • Bogardus Social Distance Scale • Determines the willingness of people to participate in social relations of varying degrees of closeness with other kinds of people • (Least extreme) • 1. Are you willing to permit immigrants to live in your country? • 2. Are you willing to permit immigrants to live in your community? • 3. Are you willing to permit immigrants to live in your neighborhood? • 4. Are you willing to permit immigrants to live next door to you? • 5. Would you permit your child to marry an immigrant? • (Most extreme) • E.g., agreement with item 3 implies agreement with items 1 and 2. • Guttman Scale • The more general case of the approach above • Binary (Yes/No) responses to a set of questions can be ranked so that someone answering yes to a more difficult question will also answer yes to an easier one • A coefficient of reproducibility is used because few Guttman scales are perfect
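
Because few Guttman scales are perfect, the coefficient of reproducibility summarizes how well the observed response patterns match the ideal cumulative patterns. The sketch below uses invented data and assumes a common counting rule: compare each respondent's answers to the ideal pattern implied by their total score, count mismatches as errors, and take CR = 1 − errors / total responses.

```python
# Items are ordered from least extreme (easiest to agree with) to most extreme.
patterns = [
    (1, 1, 1, 0, 0),   # perfectly cumulative
    (1, 1, 0, 0, 0),   # perfectly cumulative
    (1, 0, 1, 0, 0),   # deviates from the cumulative ideal (2 errors)
    (1, 1, 1, 1, 1),   # perfectly cumulative
]

errors = 0
for pattern in patterns:
    score = sum(pattern)                                # number of "yes" answers
    ideal = [1] * score + [0] * (len(pattern) - score)  # ideal cumulative pattern
    errors += sum(o != i for o, i in zip(pattern, ideal))

total_responses = len(patterns) * len(patterns[0])
cr = 1 - errors / total_responses
print(f"Errors: {errors}, coefficient of reproducibility: {cr:.2f}")
```

By convention, a coefficient of reproducibility of roughly 0.90 or higher is usually taken to indicate an acceptable Guttman scale.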

  23. Scale Construction • Thurstone Scale • Measures the intensity structure among indicators by using ‘judges’ to assign scores • Likert Scale • Uses standardized response categories in survey questionnaires to determine the relative intensity of different items • Semantic Differential • Two opposing possibilities are given and the respondent selects a gradation between those poles
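
A minimal sketch of scoring a Likert-type battery (the items and answers are invented): standardized response categories map to 1–5, negatively worded items are reverse-coded, and the item scores are summed or averaged.

```python
# Standardized Likert response categories mapped to numeric scores.
CATEGORIES = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
              "Agree": 4, "Strongly agree": 5}

answers = {                                            # one respondent's answers
    "Transit should be expanded":          "Agree",
    "Parking should be cheaper downtown":  "Disagree",  # negatively worded item
    "I would ride a bus if it ran often":  "Strongly agree",
}
reverse_coded = {"Parking should be cheaper downtown"}

scores = []
for item, answer in answers.items():
    value = CATEGORIES[answer]
    if item in reverse_coded:
        value = 6 - value          # reverse-code: 1 <-> 5, 2 <-> 4
    scores.append(value)

print(f"Summed score: {sum(scores)}, "
      f"mean item score: {sum(scores) / len(scores):.2f}")
```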

  24. Scale Construction Likert Scale

  25. Scale Construction Semantic Differential Scale

  26. Typology Classification of observations with respect to two or more attributes.
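
A small sketch of building a typology, with invented attributes: each observation is classified by the combination of two attributes, yielding a four-cell typology.

```python
# Cross-classify invented city observations on two attributes.
from collections import Counter

cities = [
    {"name": "A", "density": "high", "transit": "good"},
    {"name": "B", "density": "high", "transit": "poor"},
    {"name": "C", "density": "low",  "transit": "good"},
    {"name": "D", "density": "low",  "transit": "poor"},
    {"name": "E", "density": "high", "transit": "good"},
]

typology = Counter((c["density"], c["transit"]) for c in cities)
for (density, transit), count in sorted(typology.items()):
    print(f"{density}-density / {transit}-transit: {count} cities")
```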
