Dimension Reduction Methods
statistical methods that provide information about point scatters in multivariate space • "factor analytic methods" • simplify complex relationships between cases and/or variables • making it easier to recognize patterns
How? • identify and describe 'dimensions' that underlie the input data • may be more fundamental than those directly measured, and yet hidden from view • reduce the dimensionality of the research problem • benefit = simplification; reduce the number of variables you have to worry about • identify sets of variables with similar "behaviour"
Basic Ideas • imagine a point scatter in multivariate space: • the specific values of the numbers used to describe the variables don’t matter • we can do anything we want to the numbers, provided they don’t distort the spatial relationships that exist among cases • some kinds of manipulations help us think about the shape of the scatter in more productive ways
[Figure: two-dimensional x–y scatter with the means x̄ and ȳ marked] orthogonal regression… • imagine a two-dimensional scatter of points that shows a high degree of correlation…
Why bother? • more "efficient" description • the 1st new axis captures the max. variance • the 2nd axis captures the max. amount of residual variance, at right angles (orthogonal) to the first • the 1st axis may capture so much of the information content in the original data set that we can ignore the remaining axis
other advantages… • you can score original cases (and variables) in new space, and plot them… • spatial arrangements may reveal relationships that were hidden in higher dimension space • may reveal subsets of variables based on correlations with new axes…
[Figure: scatter of length vs. width measurements; the new orthogonal axes are interpretable as "size" and "shape"]
[Figure: plot of vessel categories (candelero = ritual, cooking, storage/cooking, service?) arranged along RITUAL–DOMESTIC and PRIVATE–PUBLIC axes]
Principal Components Analysis (PCA) why: • clarify relationships among variables • clarify relationships among cases when: • significant correlations exist among variables how: • define new axes (components) • examine correlation between axes and variables • find scores of cases on new axes
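The "how" steps above can be sketched in a few lines of numpy (an illustrative sketch, not the slides' own software; the two-variable data below are simulated): standardize the variables, eigendecompose the correlation matrix to define the new axes, then derive loadings and case scores.

```python
import numpy as np

def pca_from_correlation(X):
    """PCA on the correlation matrix: returns eigenvalues,
    component loadings, and case scores on the new axes."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize variables
    R = np.corrcoef(Z, rowvar=False)                  # correlation matrix
    evals, evecs = np.linalg.eigh(R)                  # eigh returns ascending order
    order = np.argsort(evals)[::-1]                   # re-sort: largest first
    evals, evecs = evals[order], evecs[:, order]
    loadings = evecs * np.sqrt(evals)                 # variable-component correlations
    scores = Z @ evecs                                # cases scored on new axes
    return evals, loadings, scores

# simulated correlated measurements (e.g., length and width of artifacts)
rng = np.random.default_rng(0)
length = rng.normal(10, 2, 200)
width = 0.6 * length + rng.normal(0, 1, 200)
X = np.column_stack([length, width])
evals, loadings, scores = pca_from_correlation(X)
```

Each column of `loadings` holds the correlations between the original variables and one component, so the sum of squared loadings on a component reproduces that component's eigenvalue.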
[Figure: path diagram linking variables x1–x4 to components pc1 and pc2] • component loading: the correlation (r, ranging from −1 through 0 to 1) between a variable and a component • eigenvalue: sum of all squared loadings on one component
eigenvalues • the sum of all eigenvalues = 100% of the variance in the original data • proportion accounted for by each eigenvalue = ev/n (n = # of vars.) • with a correlation matrix, the variance of each variable = 1 • so if an eigenvalue < 1, it explains less variance than one of the original variables • but .7 may be a better threshold… • 'scree plots' show the trade-off between loss of information and simplification
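These rules of thumb are easy to apply in code; the eigenvalues below are invented for illustration (a hypothetical 6-variable correlation-matrix PCA):

```python
import numpy as np

# hypothetical eigenvalues from a 6-variable correlation-matrix PCA
evals = np.array([2.9, 1.4, 0.8, 0.4, 0.3, 0.2])
prop = evals / evals.sum()       # proportion of variance per component (ev/n)
cum = np.cumsum(prop)            # cumulative variance retained (scree trade-off)
keep_kaiser = evals > 1.0        # eigenvalue-1 rule: beats one original variable
keep_07 = evals > 0.7            # the more permissive 0.7 threshold
```

Here the eigenvalue-1 rule keeps two components while the 0.7 threshold keeps three; the cumulative proportions show that three components retain 85% of the original variance.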
J. Yellen – San ethnoarchaeology (1977) • CAMP: the camp identification number (1–16) • LENGTH: the total number of days the camp was occupied • INDIVID: the number of individuals in the principal period of occupation of the camp; note that not all individuals were at the camp for the entire LENGTH of occupation • FAMILY: the number of families occupying the site • ALS: the absolute limit of scatter; the total area (m²) over which debris was scattered • BONE: the number of animal bone fragments recovered from the site • PERS_DAY: the actual number of person-days of occupation; not simply INDIVID × LENGTH, since not all individuals were at the camp for the entire time
Correspondence Analysis (CA) • like a special case of PCA that transforms a table of numerical data into a graphic summary • hopefully a simplified, more interpretable display, leading to a deeper understanding of the fundamental relationships/structure inherent in the data • a map of basic relationships, with much of the "noise" eliminated • usually reduces the dimensionality of the data…
CA – basic ideas • derived from methods of contingency table analysis • most suited for analysis of categorical data: counts, presence-absence data • possibly better to use PCA for continuous (i.e., ratio) data • but, CA makes no assumptions about the distribution of the input variables…
simultaneously R and Q mode analysis • derives two sets of eigenvalues and eigenvectors ( CA axes; analogous to PCA components) • input data is scaled so that both sets of eigenvectors occupy very comparable spaces • can reasonably compare both variables and cases in the same plots
CA output • CA (factor) scores • for both cases and variables • percentage of total inertia per axis • like variance in PCA; relates to the dispersal of points around an average value • inertia not accounted for by the displayed axes shows up as distortion in the graphic display • loadings • correlations between rows/columns and axes • which of the original entities are best accounted for by which axis?
"mass" • as in PCA, new axes maximize the spread of observations in rows/columns • spread is measured as inertia, not variance • based on a "chi-squared" distance, assessed separately for cases and variables (rows and columns) • contributions to the definition of CA axes are weighted on the basis of row/column totals • ex: pottery counts from different assemblages; larger collections will have more influence than smaller ones
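A minimal CA computation consistent with these ideas, sketched in numpy (the pottery counts below are invented for illustration): convert the table to proportions, form chi-squared standardized residuals, and take their SVD. The squared singular values are the principal inertias per axis, and the row/column masses weight both the residuals and the scores.

```python
import numpy as np

# hypothetical pottery counts: rows = assemblages, columns = vessel types
N = np.array([[30., 10.,  5.],
              [10., 25., 15.],
              [ 5., 15., 35.]])
n = N.sum()
P = N / n                                  # correspondence matrix (proportions)
r, c = P.sum(axis=1), P.sum(axis=0)        # row and column masses
# chi-squared standardized residuals: observed minus expected, mass-weighted
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
inertia = sv ** 2                          # principal inertia per CA axis
row_scores = (U * sv) / np.sqrt(r)[:, None]      # principal coords: cases
col_scores = (Vt.T * sv) / np.sqrt(c)[:, None]   # principal coords: variables
```

Because rows and columns are scaled by their own masses, both sets of scores occupy comparable spaces and can be plotted together; the total inertia equals the table's chi-squared statistic divided by the grand total.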
“Israeli political economic concerns” residential codes: As/Af (Asia or Africa) Eu/Am (Europe or America) Is/AA (Israel, dad lives in Asia or Africa) Is/EA (Israel, dad lives in Europe or America) Is/Is (Israel, dad lives in Israel)
“Israeli political economic concerns” “worry” codes ENR Enlisted relative SAB Sabotage MIL Military situation POL Political situation ECO Economic situation OTH Other MTO More than one worry PER Personal economics
Data > Frequency > COUNT • Statistics > Data Reduction > CA
Multidimensional Scaling (MDS) • aim: define a low-dimension space that preserves the distances between cases in the original high-dimension space… • closely related to CA/PCA, but with an iterative location-shifting procedure… • may produce a lower-dimension solution than CA/PCA • not simultaneously Q and R mode…
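The iterative location-shifting procedure can be sketched as a naive metric MDS (an illustrative gradient-descent sketch, not a production routine; the target distances below come from an invented 3-4-5 triangle): start from random point locations and repeatedly shift each point to reduce the stress, i.e. the squared mismatch between current and target distances.

```python
import numpy as np

def metric_mds(D, dim=2, iters=2000, lr=0.01, seed=0):
    """Naive metric MDS: iteratively shift point locations to reduce
    stress = sum over pairs of (d_ij - D_ij)^2."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = rng.normal(size=(n, dim))              # random starting configuration
    for _ in range(iters):
        diff = X[:, None, :] - X[None, :, :]
        d = np.sqrt((diff ** 2).sum(axis=-1))  # current inter-point distances
        np.fill_diagonal(d, 1.0)               # avoid /0; diff there is zero anyway
        # gradient of stress with respect to each point's coordinates
        grad = (((d - D) / d)[:, :, None] * diff).sum(axis=1)
        X -= lr * grad                         # shift locations downhill
    return X

# target distances from an invented 3-4-5 right triangle
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
gaps = pts[:, None, :] - pts[None, :, :]
D = np.sqrt((gaps ** 2).sum(axis=-1))
X = metric_mds(D)
```

The recovered configuration may be rotated or reflected relative to the original points; only the inter-point distances are preserved, which is all MDS promises.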
[Figure: distance matrices for points A–D compared under 'non-metric' and 'metric' MDS]
Discriminant Analysis (DFA) • aims: • calculate a function that maximizes the ability to discriminate among 2 or more groups, based on a set of descriptive variables • assess variables in terms of their relative importance and relevance to discrimination • classify new cases not included in the original analysis
[Figure: scatter of cases on var A vs. var B]
DFA • # of DFs = # of groups − 1 (at most; also limited by the # of variables) • each subsequent function is orthogonal to the previous ones • each is associated with an eigenvalue that reflects how much 'work' that function does in discriminating between groups • stepwise vs. complete DFA
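A minimal sketch of the eigenvalue machinery behind DFA (Fisher's formulation, in numpy; the grouped data below are simulated): the discriminant functions are eigenvectors of W⁻¹B, where W is the within-group and B the between-group scatter matrix, and only min(groups − 1, # of variables) eigenvalues are meaningfully non-zero.

```python
import numpy as np

def discriminant_functions(X, y):
    """Fisher-style DFA sketch: eigen-decompose W^-1 B, where W is the
    within-group and B the between-group scatter matrix."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    W = np.zeros((p, p))
    B = np.zeros((p, p))
    for g in classes:
        Xg = X[y == g]
        mg = Xg.mean(axis=0)
        W += (Xg - mg).T @ (Xg - mg)                       # within-group scatter
        B += len(Xg) * np.outer(mg - grand_mean, mg - grand_mean)  # between-group
    evals, evecs = np.linalg.eig(np.linalg.solve(W, B))
    order = np.argsort(evals.real)[::-1]                   # strongest function first
    return evals.real[order], evecs.real[:, order]

# simulated data: 3 groups described by 4 variables
rng = np.random.default_rng(1)
groups = [rng.normal(m, 1.0, size=(30, 4))
          for m in ([0, 0, 0, 0], [3, 0, 1, 0], [0, 3, 0, 1])]
X = np.vstack(groups)
y = np.repeat([0, 1, 2], 30)
evals, evecs = discriminant_functions(X, y)
```

With 3 groups, only the first two eigenvalues are non-zero, and their relative sizes show how much discriminating 'work' each function does.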
Figure 6.5: Factor structure coefficients: These values show the correlation between Miccaotli ceramic categories and the first two discriminant functions. Categories exhibiting high positive or negative values are the most important for discriminating among A-clusters.
Figure 6.4: Case scores calculated for the first two functions generated by discriminant analysis, using Miccaotli A-cluster membership as the grouping variable and posterior estimates of ceramic category proportions as discriminating variables.
Figure 6.6: Factor structure coefficients generated by four separate DFA analyses using binary grouping variables derived from Miccaotli A-cluster memberships. A single discriminant function is associated with each A-cluster.