Slides by JOHN LOUCKS St. Edward’s University
Chapter 3, Part B Descriptive Statistics: Numerical Measures • Measures of Distribution Shape, Relative Location, and Detecting Outliers • Exploratory Data Analysis • Measures of Association Between Two Variables • The Weighted Mean and Working with Grouped Data
Measures of Distribution Shape,Relative Location, and Detecting Outliers • Distribution Shape • z-Scores • Chebyshev’s Theorem • Empirical Rule • Detecting Outliers
Distribution Shape: Skewness • An important measure of the shape of a distribution is called skewness. • The formula for computing skewness for a data set is somewhat complex. • Skewness can be easily computed using statistical software. • The median provides the preferred measure of location when the data are highly skewed.
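As the slide notes, skewness is normally computed with software. A minimal Python sketch, assuming SciPy is installed; the data values are illustrative, not the apartment-rent sample used later:

```python
# Minimal sketch: sample skewness with SciPy (illustrative data, not the
# 70 apartment rents used later in these slides).
from scipy.stats import skew

data = [42, 45, 47, 48, 50, 52, 55, 61, 75]   # small right-skewed sample

# bias=False applies the adjusted Fisher-Pearson formula that most
# statistical packages report for sample skewness.
print(round(skew(data, bias=False), 2))       # positive value => skewed right
```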
Distribution Shape: Skewness • Symmetric (not skewed) • Skewness is zero. • Mean and median are equal. [Relative frequency histogram; Skewness = 0]
Distribution Shape: Skewness • Moderately Skewed Left • Skewness is negative. • Mean will usually be less than the median. [Relative frequency histogram; Skewness = -.31]
Distribution Shape: Skewness • Moderately Skewed Right • Skewness is positive. • Mean will usually be more than the median. [Relative frequency histogram; Skewness = .31]
Distribution Shape: Skewness • Highly Skewed Right • Skewness is positive (often above 1.0). • Mean will usually be more than the median. [Relative frequency histogram; Skewness = 1.25]
Distribution Shape: Skewness • Example: Apartment Rents Seventy efficiency apartments were randomly sampled in a small college town. The monthly rent prices for these apartments are listed in ascending order on the next slide.
Distribution Shape: Skewness • Example: Apartment Rents [Table of the 70 monthly rents, listed in ascending order, shown on the original slide]
Distribution Shape: Skewness [Relative frequency histogram of the apartment rents; Skewness = .92]
z-Scores The z-score is often called the standardized value. It denotes the number of standard deviations a data value xi is from the mean: zi = (xi − x̄)/s, where x̄ is the sample mean and s is the sample standard deviation.
z-Scores • An observation’s z-score is a measure of the relative location of the observation in a data set. • A data value less than the sample mean will have a z-score less than zero. • A data value greater than the sample mean will have a z-score greater than zero. • A data value equal to the sample mean will have a z-score of zero.
z-Scores • z-Score of Smallest Value (425): z = (425 − 490.80)/54.74 = −1.20 Standardized Values for Apartment Rents
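A minimal sketch of the standardization step, using the sample statistics reported in these slides (x̄ = 490.80, s = 54.74):

```python
# Minimal sketch: z-score of the smallest rent (425), using the sample mean
# and standard deviation reported in these slides.
x_bar, s = 490.80, 54.74

def z_score(x):
    """Number of standard deviations x lies from the sample mean."""
    return (x - x_bar) / s

print(round(z_score(425), 2))   # about -1.20
```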
Chebyshev’s Theorem At least (1 - 1/z2) of the items in any data set will be within z standard deviations of the mean, where z is any value greater than 1. Chebyshev’s Theorem tells us the proportion of data values that must be within a specified number of standard deviations of the mean regardless of the shape of the distribution of the data.
Chebyshev’s Theorem • At least 75% of the data values must be within z = 2 standard deviations of the mean. • At least 89% of the data values must be within z = 3 standard deviations of the mean. • At least 94% of the data values must be within z = 4 standard deviations of the mean.
Chebyshev’s Theorem For example: Let z = 1.5 with x̄ = 490.80 and s = 54.74. At least (1 − 1/(1.5)²) = 1 − 0.44 = 0.56, or 56%, of the rent values must be between x̄ − z(s) = 490.80 − 1.5(54.74) = 409 and x̄ + z(s) = 490.80 + 1.5(54.74) = 573. (Actually, 86% of the rent values are between 409 and 573.)
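A short sketch of the same kind of check in Python; the data values here are illustrative, since the 70 individual rents are not reproduced in this text:

```python
# Minimal sketch: comparing Chebyshev's bound for z = 1.5 with the observed
# proportion (illustrative data, not the actual 70 rents).
import statistics

data = [409, 425, 440, 450, 465, 470, 480, 490, 500, 515, 525, 540, 560, 573, 615]
x_bar, s = statistics.mean(data), statistics.stdev(data)

z = 1.5
lower, upper = x_bar - z * s, x_bar + z * s
within = sum(lower <= x <= upper for x in data) / len(data)

print(f"Chebyshev guarantees at least {1 - 1/z**2:.0%}; observed {within:.0%}")
```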
Empirical Rule For data having a bell-shaped distribution: • 68.26% of the values of a normal random variable are within +/- 1 standard deviation of its mean. • 95.44% of the values of a normal random variable are within +/- 2 standard deviations of its mean. • 99.72% of the values of a normal random variable are within +/- 3 standard deviations of its mean.
Empirical Rule [Bell-shaped curve centered at the mean μ, with intervals μ ± 1σ (68.26%), μ ± 2σ (95.44%), and μ ± 3σ (99.72%) marked on the x-axis]
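A quick sketch that checks these percentages against simulated bell-shaped data (the mean and standard deviation used in the simulation are arbitrary):

```python
# Minimal sketch: empirical-rule percentages on simulated normal data.
import random

random.seed(1)
mu, sigma = 500, 50                                 # arbitrary choices
values = [random.gauss(mu, sigma) for _ in range(100_000)]

for k in (1, 2, 3):
    within = sum(mu - k * sigma <= v <= mu + k * sigma for v in values) / len(values)
    print(f"within {k} std dev(s): {within:.2%}")   # about 68.26%, 95.44%, 99.72%
```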
Detecting Outliers • An outlier is an unusually small or unusually large value in a data set. • A data value with a z-score less than -3 or greater than +3 might be considered an outlier. • It might be: • an incorrectly recorded data value • a data value that was incorrectly included in the data set • a correctly recorded data value that belongs in the data set • Check for outliers before analyzing data.
Detecting Outliers • The most extreme z-scores are -1.20 and 2.27 • Using |z| > 3 as the criterion for an outlier, there are no outliers in this data set. Standardized Values for Apartment Rents
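A minimal sketch of the |z| > 3 screen on illustrative data with one planted extreme value (not the rent sample):

```python
# Minimal sketch: flagging data values with |z| > 3 as possible outliers
# (illustrative data; the value 120 is planted as an extreme observation).
import statistics

data = [47, 49, 50, 52, 51, 48, 50, 53, 49, 51,
        50, 48, 52, 49, 51, 50, 47, 53, 50, 49, 120]
x_bar, s = statistics.mean(data), statistics.stdev(data)

outliers = [x for x in data if abs((x - x_bar) / s) > 3]
print(outliers)   # [120]
```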
Exploratory Data Analysis • Five-Number Summary • Box Plot
Five-Number Summary 1. Smallest Value 2. First Quartile 3. Median 4. Third Quartile 5. Largest Value
Five-Number Summary First place the data in ascending order. Lowest Value = 425 First Quartile = 445 Median = 475 Third Quartile = 525 Largest Value = 615
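A minimal sketch using NumPy percentiles on illustrative data (not the full 70-rent sample); quartile conventions differ slightly across packages, so software values can differ a little from hand calculations:

```python
# Minimal sketch: five-number summary via percentiles (illustrative data).
import numpy as np

data = np.array([425, 430, 445, 450, 460, 475, 480, 510, 525, 540, 575, 615])
summary = {
    "min":    data.min(),
    "Q1":     np.percentile(data, 25),
    "median": np.median(data),
    "Q3":     np.percentile(data, 75),
    "max":    data.max(),
}
print(summary)
```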
Box Plot • A box is drawn with its ends located at the first and third quartiles. • A vertical line is drawn in the box at the location of the median (second quartile). [Box plot on a rent scale from 375 to 625: Q1 = 445, Q2 = 475, Q3 = 525]
Box Plot • Limits are located (not drawn) using the interquartile range (IQR). • Data outside these limits are considered outliers. • The location of each outlier is shown with the symbol “*”.
Box Plot • The lower limit is located 1.5(IQR) below Q1. Lower Limit: Q1 - 1.5(IQR) = 445 - 1.5(80) = 325 • The upper limit is located 1.5(IQR) above Q3. Upper Limit: Q3 + 1.5(IQR) = 525 + 1.5(80) = 645 • There are no outliers (values less than 325 or greater than 645) in the apartment rent data.
Box Plot • Whiskers (dashed lines) are drawn from the ends of the box to the smallest and largest data values inside the limits. Smallest value inside limits = 425 Largest value inside limits = 615 [Box plot on a rent scale from 375 to 625, with whiskers extending from 425 to 615]
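A short sketch of the limit calculation, using the quartiles reported on these slides (Q1 = 445, Q3 = 525):

```python
# Minimal sketch: 1.5 * IQR limits from the slides' quartiles.
q1, q3 = 445, 525
iqr = q3 - q1                      # interquartile range = 80

lower_limit = q1 - 1.5 * iqr       # 325
upper_limit = q3 + 1.5 * iqr       # 645
print(lower_limit, upper_limit)
```

In practice the plot itself would come from a plotting library; matplotlib’s boxplot, for example, applies the same 1.5 × IQR whisker rule by default.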
Measures of Association Between Two Variables • Covariance • Correlation Coefficient
Covariance The covariance is a measure of the linear association between two variables. Positive values indicate a positive linear relationship. Negative values indicate a negative linear relationship. Values near zero indicate no linear relationship. The variables can have a non-linear relationship which is not detected by the covariance measure.
Covariance The covariance is computed as follows: for samples: sxy = Σ(xi − x̄)(yi − ȳ)/(n − 1) for populations: σxy = Σ(xi − μx)(yi − μy)/N
Correlation Coefficient Correlation is a measure of linear association and not causation. Just because two variables are highly correlated, it does not mean that one variable is the cause of the other.
Correlation Coefficient The correlation coefficient is computed as follows: for samples: rxy = sxy/(sx sy) for populations: ρxy = σxy/(σx σy)
Correlation Coefficient The coefficient can take on values between -1 and +1. Values near -1 indicate a strong negative linear relationship. Values near +1 indicate a strong positive linear relationship. Values near zero indicate no linear relationship. The variables can have a non-linear relationship.
Covariance and Correlation Coefficient A golfer is interested in investigating the relationship, if any, between driving distance and 18-hole score.
Average Driving Distance (yds.)   Average 18-Hole Score
            277.6                          69
            259.5                          71
            269.1                          70
            267.0                          70
            255.6                          71
            272.9                          69
Covariance and Correlation Coefficient
      x        y     x − x̄    y − ȳ    (x − x̄)(y − ȳ)
   277.6      69     10.65    -1.0        -10.65
   259.5      71     -7.45     1.0         -7.45
   269.1      70      2.15     0             0
   267.0      70      0.05     0             0
   255.6      71    -11.35     1.0        -11.35
   272.9      69      5.95    -1.0         -5.95
Average    267.0    70.0                Total: -35.40
Std. Dev.  8.2192   .8944
Covariance and Correlation Coefficient • Sample Covariance: sxy = Σ(xi − x̄)(yi − ȳ)/(n − 1) = -35.40/(6 − 1) = -7.08 • Sample Correlation Coefficient: rxy = sxy/(sx sy) = -7.08/((8.2192)(.8944)) = -.9631 The sample correlation coefficient indicates a strong negative linear association between average driving distance and average 18-hole score.
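A minimal sketch that reproduces both numbers from the six (x, y) pairs in the table above:

```python
# Minimal sketch: sample covariance and correlation for the golf data
# (x = average driving distance in yards, y = average 18-hole score).
import statistics

x = [277.6, 259.5, 269.1, 267.0, 255.6, 272.9]
y = [69, 71, 70, 70, 71, 69]

n = len(x)
x_bar, y_bar = statistics.mean(x), statistics.mean(y)

s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)
r_xy = s_xy / (statistics.stdev(x) * statistics.stdev(y))

print(round(s_xy, 2), round(r_xy, 4))   # -7.08 and about -0.9631
```

On Python 3.10+ the same results are available directly from statistics.covariance(x, y) and statistics.correlation(x, y).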
The Weighted Mean andWorking with Grouped Data • Weighted Mean • Mean for Grouped Data • Variance for Grouped Data • Standard Deviation for Grouped Data
Weighted Mean • When the mean is computed by giving each data value a weight that reflects its importance, it is referred to as a weighted mean. • In the computation of a grade point average (GPA), the weights are the number of credit hours earned for each grade. • When data values vary in importance, the analyst must choose the weight that best reflects the importance of each value.
Weighted Mean x̄ = Σwixi / Σwi where: xi = value of observation i wi = weight for observation i In the numerator, you sum the products of the weights and the data values. In the denominator, you sum the weights.
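A minimal sketch of the GPA-style weighted mean; the grades and credit hours below are made up purely for illustration:

```python
# Minimal sketch: weighted mean (GPA style). Grade points are weighted by
# credit hours; the specific grades and hours here are illustrative.
grades = [4.0, 3.0, 3.0, 2.0]      # data values x_i (grade points)
hours  = [3, 4, 3, 2]              # weights w_i (credit hours)

weighted_mean = sum(w * x for w, x in zip(hours, grades)) / sum(hours)
print(round(weighted_mean, 2))     # sum of products / sum of weights
```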
Grouped Data • The weighted mean computation can be used to obtain approximations of the mean, variance, and standard deviation for grouped data. • To compute the weighted mean, we treat the midpoint of each class as though it were the mean of all items in the class. • We compute a weighted mean of the class midpoints using the class frequencies as weights. • Similarly, in computing the variance and standard deviation, the class frequencies are used as weights.
Mean for Grouped Data • Sample Data: x̄ = ΣfiMi / n • Population Data: μ = ΣfiMi / N where: fi = frequency of class i Mi = midpoint of class i
Sample Mean for Grouped Data Given below is the previous sample of monthly rents for 70 efficiency apartments, presented here as grouped data in the form of a frequency distribution.
Sample Mean for Grouped Data The grouped-data approximation of the sample mean is $493.21. This approximation differs by $2.41 from the actual sample mean of $490.80.
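A minimal sketch of the ΣfiMi / n calculation; because the frequency table itself did not carry over into this text, the class midpoints and frequencies below are assumed values used only to illustrate the mechanics:

```python
# Minimal sketch: approximating the sample mean from grouped data.
# The midpoints and frequencies are illustrative assumptions, not the
# actual rent frequency distribution from the slides.
midpoints   = [429.5, 449.5, 469.5, 489.5, 509.5]   # class midpoints M_i
frequencies = [8, 17, 12, 8, 7]                      # class frequencies f_i

n = sum(frequencies)
grouped_mean = sum(f * m for f, m in zip(frequencies, midpoints)) / n
print(round(grouped_mean, 2))
```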
Variance for Grouped Data • For sample data: s² = Σfi(Mi − x̄)²/(n − 1) • For population data: σ² = Σfi(Mi − μ)²/N
Sample Variance for Grouped Data (continued)
Sample Variance for Grouped Data • Sample Variance: s² = 208,234.29/(70 – 1) = 3,017.89 • Sample Standard Deviation: s = √3,017.89 ≈ $54.94 This approximation differs by only $.20 from the actual standard deviation of $54.74.
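The same caveat applies here: a sketch of the grouped-data variance and standard deviation with assumed midpoints and frequencies (not the slides' actual table):

```python
# Minimal sketch: approximating the variance and standard deviation from
# grouped data (same illustrative midpoints/frequencies as the mean sketch).
import math

midpoints   = [429.5, 449.5, 469.5, 489.5, 509.5]
frequencies = [8, 17, 12, 8, 7]

n = sum(frequencies)
x_bar = sum(f * m for f, m in zip(frequencies, midpoints)) / n

s2 = sum(f * (m - x_bar) ** 2 for f, m in zip(frequencies, midpoints)) / (n - 1)
s = math.sqrt(s2)
print(round(s2, 2), round(s, 2))
```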