
How accurately can you (1) predict Y from X, and (2) predict X from Y?

Explore the accuracy of predicting Y from X and vice versa using joint probability density functions and covariance.


Presentation Transcript


  1. How accurately can you (1) predict Y from X, and (2) predict X from Y? [Figure: a joint p.m.f. displayed on a grid with X and Y each taking values 1 through 6; the nonzero probabilities shown are 4/9, 2/9, 1/9, 1/9, and 1/9.]

  2. How accurately can you (1) predict Y from X, and (2) predict X from Y? [Figure: a joint p.m.f. displayed on a grid with X and Y each taking values 1 through 6; the nonzero probabilities shown are 1/12, 1/12, 1/12, 1/6, 1/6, 1/6, 1/12, 1/12, and 1/12.]

  3. How accurately can you (1) predict Y from X, and (2) predict X from Y? [Figure: a joint p.m.f. displayed on a grid with X and Y each taking values 1 through 6; the nonzero probabilities shown alternate between 2/17 and 1/17.]

  4. How accurately can you (1) predict Y from X, and (2) predict X from Y? [Figure: a joint p.m.f. displayed on a grid with X and Y each taking values 1 through 6; the five nonzero probabilities shown are each 1/5.]

  13. For continuous type random variables (X, Y), the definitions of joint probability density function (joint p.d.f.), independence of X and Y, and mathematical expectation are each analogous to those for discrete type random variables, with summation signs replaced by integral signs. The covariance between random variables X and Y is

Cov(X,Y) = E[(X − μ_X)(Y − μ_Y)] = E[XY − μ_Y X − μ_X Y + μ_X μ_Y] = E(XY) − μ_Y E(X) − μ_X E(Y) + μ_X μ_Y = E(XY) − μ_X μ_Y .

The correlation between random variables X and Y is

ρ = Cov(X,Y) / (σ_X σ_Y) .

Note: A proof that −1 ≤ ρ ≤ 1 will be available shortly. Consider the equation of a line y = a + bx which comes "closest" to predicting the values of the random variable Y from the random variable X, in the sense that E{[Y − (a + bX)]²} is minimized.

  14. We let

k(a,b) = E{[Y − (a + bX)]²} = E{[(Y − μ_Y) − b(X − μ_X) − (a − μ_Y + bμ_X)]²}
= E[(Y − μ_Y)² + b²(X − μ_X)² + (a − μ_Y + bμ_X)² − 2b(Y − μ_Y)(X − μ_X) − 2(Y − μ_Y)(a − μ_Y + bμ_X) + 2b(X − μ_X)(a − μ_Y + bμ_X)]
= σ_Y² + b²σ_X² + (a − μ_Y + bμ_X)² − 2bσ_XY − 2(0) + 2(0)
= σ_Y² + b²σ_X² + (a − μ_Y + bμ_X)² − 2bσ_XY ,

where σ_XY = Cov(X,Y). To minimize k(a,b), we set the partial derivatives with respect to a and b equal to zero. (Note: This is textbook exercise 4.2-5.)

∂k/∂a = 2(a − μ_Y + bμ_X) = 0
∂k/∂b = 2bσ_X² + 2(a − μ_Y + bμ_X)μ_X − 2σ_XY = 0

(Multiply the first equation by μ_X, subtract the resulting equation from the second equation, and solve for b; then substitute in place of b in the first equation to solve for a.)

  15. Note: A proof that −1 ≤ ρ ≤ 1 is available by observing that all values of k(a,b), and in particular its minimum value, must be nonnegative. Solving the two equations gives

b = ρ σ_Y / σ_X and a = μ_Y − ρ (σ_Y / σ_X) μ_X .

The least squares line for predicting Y from X is

y = μ_Y − ρ (σ_Y / σ_X) μ_X + ρ (σ_Y / σ_X) x , which can be written y = μ_Y + ρ (σ_Y / σ_X)(x − μ_X) .

The least squares line for predicting X from Y can be written

x = μ_X + ρ (σ_X / σ_Y)(y − μ_Y) .

Note that the point (μ_X, μ_Y) is always on each of the least squares lines.
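As a quick sanity check of these formulas, here is a minimal Python sketch (not part of the original slides; the function name is mine) that builds both least squares lines from the five moments. The numbers plugged in are the Exercise 1 moments computed on the following slides.

```python
# A small helper: given the five moments, return the coefficients of
# y = a + b*x (predicting Y from X) and x = c + d*y (predicting X from Y).
def least_squares_lines(mu_x, mu_y, sig_x, sig_y, rho):
    b = rho * sig_y / sig_x      # slope for predicting Y from X
    a = mu_y - b * mu_x          # intercept; line passes through (mu_X, mu_Y)
    d = rho * sig_x / sig_y      # slope for predicting X from Y
    c = mu_x - d * mu_y
    return (a, b), (c, d)

# Exercise 1 moments: means 19/12, variances 35/144, correlation -1/35.
s = (35 / 144) ** 0.5
print(least_squares_lines(19/12, 19/12, s, s, -1/35))
```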

  16. Return to Class Exercise #1: Using the joint p.m.f., E(X − Y) = (1−1)(1/6) + (1−2)(1/4) + (2−1)(1/4) + (2−2)(1/3) = 0. Alternatively, E(X − Y) = E(X) − E(Y) = 19/12 − 19/12 = 0. E(X + Y) can be interpreted as the mean of the total weight of candy in the bag; E(X − Y) can be interpreted as the mean of how much more the red candy in the bag weighs than the green candy. E(XY) = (1)(1)(1/6) + (1)(2)(1/4) + (2)(1)(1/4) + (2)(2)(1/3) = 5/2, so Cov(X,Y) = E(XY) − E(X) E(Y) = 5/2 − (19/12)(19/12) = −1/144.

  17. 1. - continued

ρ = Cov(X,Y) / (σ_X σ_Y) = (−1/144) / (√(35/144) · √(35/144)) = −1/35.

The least squares line for predicting Y from X has slope b = ρ σ_Y / σ_X = (−1/35) · √(35/144) / √(35/144) = −1/35, so it is y = 19/12 − (1/35)(x − 19/12). The least squares line for predicting X from Y has slope b = ρ σ_X / σ_Y = −1/35, so it is x = 19/12 − (1/35)(y − 19/12).
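The slide's arithmetic can be reproduced exactly with Python's fractions module. The joint p.m.f. below is read off the E(XY) terms above, so treat it as a reconstruction rather than something stated explicitly in the transcript.

```python
from fractions import Fraction as F

# Joint p.m.f. of Exercise 1, read off the E(XY) computation above.
pmf = {(1, 1): F(1, 6), (1, 2): F(1, 4), (2, 1): F(1, 4), (2, 2): F(1, 3)}

def E(g):
    """Expectation of g(X, Y) under the joint p.m.f."""
    return sum(p * g(x, y) for (x, y), p in pmf.items())

mu_x, mu_y = E(lambda x, y: x), E(lambda x, y: y)     # 19/12 each
cov = E(lambda x, y: x * y) - mu_x * mu_y             # -1/144
var_x = E(lambda x, y: x * x) - mu_x ** 2             # 35/144
var_y = E(lambda x, y: y * y) - mu_y ** 2             # 35/144
rho = float(cov) / float(var_x * var_y) ** 0.5        # about -1/35
print(cov, rho)
```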

  18. 2. - continued Return to Class Exercise #2:

Cov(X,Y) = E(XY) − E(X) E(Y) = 16/3 − (7/3)(7/3) = −1/9, and ρ = (−1/9) / (√(5/9) · √(5/9)) = −1/5.

The least squares line for predicting Y from X has slope b = ρ σ_Y / σ_X = (−1/5) · √(5/9) / √(5/9) = −1/5, so it is y = 7/3 − (1/5)(x − 7/3). The least squares line for predicting X from Y is x = 7/3 − (1/5)(y − 7/3).

  19. 3. - continued Return to Class Exercise #3:

E(XY) = Σ_{x=1..3} Σ_{y=1..3} (xy)(xy/36) = Σ_{x=1..3} Σ_{y=1..3} (xy)(x/6)(y/6) = [Σ_{x=1..3} x(x/6)] [Σ_{y=1..3} y(y/6)] = E(X) E(Y) = (7/3)(7/3) = 49/9.

Cov(X,Y) = E(XY) − E(X) E(Y) = 49/9 − (7/3)(7/3) = 0, so ρ = 0 and b = ρ σ_Y / σ_X = 0. The least squares line for predicting Y from X is y = 7/3, and the least squares line for predicting X from Y is x = 7/3.

  20. 6. - continued Return to Class Exercise #6: Since f(x,y) ≠ f₁(x)f₂(y), the random variables X and Y are not independent (as we previously noted). Using the joint p.m.f.,

E(XY) = (1)(1)(1/6) + (3)(1)(1/6) + (2)(2)(1/6) + (3)(2)(1/6) + (2)(1)(1/3) = 3.

Cov(X,Y) = E(XY) − E(X) E(Y) = 3 − (13/6)(4/3) = 1/9, and ρ = (1/9) / (√(17/36) · √(2/9)) = 2/√34.

The least squares line for predicting Y from X has slope b = ρ σ_Y / σ_X = (2/√34) · √(2/9) / √(17/36) = 4/17, so it is y = 4/3 + (4/17)(x − 13/6).

  21. The least squares line for predicting X from Y has slope b = ρ σ_X / σ_Y = (2/√34) · √(17/36) / √(2/9) = 1/2, so it is x = 13/6 + (1/2)(y − 4/3). Reading the joint p.m.f. off the E(XY) terms above (f(1,1) = 1/6, f(2,1) = 1/3, f(3,1) = 1/6, f(2,2) = 1/6, f(3,2) = 1/6, so that f₁(1) = 1/6, f₁(2) = 1/2, f₁(3) = 1/3): the conditional p.m.f. of Y | X = 1 places probability 1 on y = 1; the conditional p.m.f. of Y | X = 2 is 2/3 at y = 1 and 1/3 at y = 2; the conditional p.m.f. of Y | X = 3 is 1/2 at y = 1 and 1/2 at y = 2. (A sketch of this computation follows.)
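A short sketch of how these conditional p.m.f.s can be computed mechanically. The joint p.m.f. is again my reconstruction from the E(XY) terms on the previous slide, not something the transcript states directly.

```python
from fractions import Fraction as F

# Joint p.m.f. of Exercise 6 as reconstructed from the E(XY) terms above.
pmf = {(1, 1): F(1, 6), (2, 1): F(1, 3), (3, 1): F(1, 6),
       (2, 2): F(1, 6), (3, 2): F(1, 6)}

def cond_y_given_x(pmf, x0):
    """Return the conditional p.m.f. y -> f(x0, y) / f1(x0)."""
    f1 = sum(p for (x, _), p in pmf.items() if x == x0)   # marginal f1(x0)
    return {y: p / f1 for (x, y), p in pmf.items() if x == x0}

for x0 in (1, 2, 3):
    print(x0, cond_y_given_x(pmf, x0))
# x0=1: {1: 1}; x0=2: {1: 2/3, 2: 1/3}; x0=3: {1: 1/2, 2: 1/2}
```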

  22. 7. - continued Return to Class Exercise #7:

E(XY) = ∫∫ xy f(x,y) dx dy = ∫₀² ∫₀² (x²y + xy²)/8 dx dy = ∫₀² [(2x³y + 3x²y²)/48]_{x=0}^{x=2} dy = ∫₀² (4y + 3y²)/12 dy = [(2y² + y³)/12]_{y=0}^{y=2} = 4/3.

Cov(X,Y) = E(XY) − E(X) E(Y) = 4/3 − (7/6)(7/6) = −1/36, and ρ = (−1/36) / (√(11/36) · √(11/36)) = −1/11.

  23. The least squares line for predicting Y from X has slope b = ρ σ_Y / σ_X = (−1/11) · √(11/36) / √(11/36) = −1/11, so it is y = 7/6 − (1/11)(x − 7/6). The least squares line for predicting X from Y is x = 7/6 − (1/11)(y − 7/6). For 0 < x < 2, the conditional p.d.f. of Y | X = x is h(y|x) = f(x,y)/f₁(x) = [(x + y)/8] / [(x + 1)/4] = (x + y) / (2(x + 1)) for 0 < y < 2, since f₁(x) = ∫₀² (x + y)/8 dy = (x + 1)/4.
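For the continuous exercises the moments can be checked numerically. The sketch below uses scipy.integrate.dblquad and assumes the joint p.d.f. f(x,y) = (x + y)/8 on 0 < x < 2, 0 < y < 2 implied by the integrand on the previous slide.

```python
from scipy.integrate import dblquad

f = lambda x, y: (x + y) / 8    # assumed joint p.d.f. of Exercise 7

def E(g):
    # dblquad integrates func(y, x): y is the inner variable here.
    val, _ = dblquad(lambda y, x: g(x, y) * f(x, y),
                     0, 2, lambda x: 0, lambda x: 2)
    return val

mu_x, mu_y = E(lambda x, y: x), E(lambda x, y: y)    # 7/6 each
cov = E(lambda x, y: x * y) - mu_x * mu_y            # about -1/36
var_x = E(lambda x, y: x * x) - mu_x ** 2            # about 11/36
print(cov, cov / var_x)   # since sigma_X = sigma_Y here, cov/var_x = rho = -1/11
```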

  24. 8. - continued Return to Class Exercise #8: Since f(x,y) = f₁(x)f₂(y), the random variables X and Y are independent. With f(x,y) = (y − 1)/(2x²) for 1 < x, 1 < y < 3,

E(XY) = ∫∫ xy f(x,y) dx dy = ∫₁³ ∫₁^∞ xy (y − 1)/(2x²) dx dy = [∫₁³ (y² − y)/2 dy] [∫₁^∞ (1/x) dx] = (7/3)(∞) = ∞.

Since E(XY) = ∞, Cov(X,Y) and ρ do not exist.

  25. Since ρ does not exist, neither least squares line exists. Because X and Y are independent, each conditional distribution equals the corresponding marginal: for 1 < x, the conditional p.d.f. of Y | X = x is f₂(y) = (y − 1)/2 for 1 < y < 3, and for 1 < y < 3, the conditional p.d.f. of X | Y = y is f₁(x) = 1/x² for 1 < x. Hence for 1 < x, E(Y | X = x) = E(Y) = 7/3 and Var(Y | X = x) = Var(Y) = 2/9, while for 1 < y < 3, E(X | Y = y) = E(X) = ∫₁^∞ x (1/x²) dx = ∞, so E(X | Y = y) and Var(X | Y = y) do not exist.

  26. 9. - continued Return to Class Exercise #9: Since f(x,y) ≠ f₁(x)f₂(y), the random variables X and Y are not independent (as we previously noted).

E(XY) = ∫∫ xy f(x,y) dx dy = ∫₀¹ ∫₀^{2y} xy (5xy²/2) dx dy = ∫₀¹ ∫₀^{2y} (5x²y³/2) dx dy = ∫₀¹ [(5x³y³)/6]_{x=0}^{x=2y} dy = ∫₀¹ (20y⁶/3) dy = [20y⁷/21]_{y=0}^{y=1} = 20/21.

Cov(X,Y) = E(XY) − E(X) E(Y) = 20/21 − (10/9)(5/6) = 5/189, and ρ = (5/189) / (√(110/567) · √(5/252)) = √(2/11).

  27. The least squares line for predicting Y from X has slope b = ρ σ_Y / σ_X = √(2/11) · √(5/252) / √(110/567) = 3/22, so it is y = 5/6 + (3/22)(x − 10/9). The least squares line for predicting X from Y has slope b = ρ σ_X / σ_Y = √(2/11) · √(110/567) / √(5/252) = 4/3, so it is x = 10/9 + (4/3)(y − 5/6) = (4/3)y. For 0 < x < 2, the conditional p.d.f. of Y | X = x is h(y|x) = f(x,y)/f₁(x) = (5xy²/2) / [5x(8 − x³)/48] = 24y² / (8 − x³) for x/2 < y < 1, since f₁(x) = ∫_{x/2}^{1} (5xy²/2) dy = 5x(8 − x³)/48.
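The same numerical check works on the non-rectangular support of Exercise 9, where the inner limits 0 < x < 2y depend on the outer variable y. The density 5xy²/2 is the one read off the E(XY) integrand above.

```python
from scipy.integrate import dblquad

f = lambda x, y: 2.5 * x * y ** 2    # assumed joint p.d.f. of Exercise 9

def E(g):
    # Outer variable y runs over (0, 1); inner variable x over (0, 2y).
    # dblquad's integrand takes (inner, outer) = (x, y) here.
    val, _ = dblquad(lambda x, y: g(x, y) * f(x, y),
                     0, 1, lambda y: 0, lambda y: 2 * y)
    return val

mu_x, mu_y = E(lambda x, y: x), E(lambda x, y: y)    # 10/9 and 5/6
cov = E(lambda x, y: x * y) - mu_x * mu_y            # 5/189, about 0.0265
print(mu_x, mu_y, cov)
```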

  28. 10. (a) Suppose f₁(x) is the p.d.f. for the random variable X having space S₁, suppose f₂(y) is the p.d.f. for the random variable Y having space S₂, and suppose the joint p.d.f. for (X, Y) is f(x,y) = f₁(x)f₂(y), so that X and Y are independent. Show that E[u₁(X) u₂(Y)] = E[u₁(X)] E[u₂(Y)], if all expectations exist.

E[u₁(X) u₂(Y)] = ∫_{S₂} ∫_{S₁} u₁(x) u₂(y) f₁(x) f₂(y) dx dy = ∫_{S₂} u₂(y) f₂(y) [∫_{S₁} u₁(x) f₁(x) dx] dy = ∫_{S₂} u₂(y) f₂(y) E[u₁(X)] dy = E[u₁(X)] ∫_{S₂} u₂(y) f₂(y) dy = E[u₁(X)] E[u₂(Y)].

  29. (b) Show that Cov(X,Y) = ρ_XY = 0, if they exist. Cov(X,Y) = E(XY) − E(X) E(Y) = E(X) E(Y) − E(X) E(Y) = 0, which implies that ρ_XY = 0. (c) Explain why the results in parts (a) and (b) are still true when the p.d.f.s are replaced by p.m.f.s (that is, if X and Y are random variables of the discrete type instead of random variables of the continuous type). The integral signs in part (a) can be replaced with summation signs.

  30. 11. Suppose X has p.m.f. f₁(x) = 1/3 if x = −1, 0, 1. Let Y = X² have p.m.f. f₂(y), and let f(x,y) be the joint p.m.f. for (X, Y). (a) Find the marginal p.m.f. of Y. The space of Y is {0, 1}, and f₂(y) = (2/3)^y (1/3)^(1−y) if y = 0, 1. We recognize that Y has a Bernoulli(2/3) distribution. (b) Find and graphically display the space for (X, Y); then find the joint p.m.f. for (X, Y). The space of (X, Y) is {(−1,1), (0,0), (1,1)}, and the joint p.m.f. of (X, Y) is f(x,y) = 1/3 if (x,y) = (−1,1), (0,0), (1,1). [Figure: the three points plotted in the xy-plane, each labeled 1/3.]

  31. (c) Find the correlation between X and Y. E(X) = 0, Var(X) = 2/3, E(Y) = 2/3, Var(Y) = 2/9, and E(XY) = 0, so Cov(X,Y) = E(XY) − E(X) E(Y) = 0 and ρ = 0. (d) Are X and Y independent? Does this answer depend on the value of ρ (found in part (c))? X and Y cannot be independent, since the joint space is not "rectangular"; in fact, Y is totally determined by X. If X and Y were independent, then their correlation would have to be zero; however, when the correlation between two random variables is zero, the random variables may or may not be independent.
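A simulation makes part (d) concrete: the sample covariance of X and Y = X² hovers near zero even though Y is a deterministic function of X. A minimal sketch (not from the slides), using only the standard library:

```python
import random

random.seed(1)
xs = [random.choice([-1, 0, 1]) for _ in range(100_000)]  # X uniform on {-1, 0, 1}
ys = [x * x for x in xs]                                  # Y = X^2, fully dependent on X

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
print(round(cov, 4))   # near 0, yet P(Y = 1 | X = 1) = 1
```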

  32. 12. Suppose (X, Y) have joint p.d.f. f(x,y) = 1 if 0 < y < 1, −y < x < y. (a) Graphically display the space for (X, Y). [Figure: the triangle with vertices (0,0), (−1,1), and (1,1).] (b) Complete the following alternative description for the space for (X, Y): {(x,y) | −1 < x < 0, −x < y < 1} ∪ {(x,y) | 0 < x < 1, x < y < 1}.

  33. (c) Find each of the following (completing this exercise is homework!): The marginal p.d.f. of X is f₁(x) = ∫ f(x,y) dy, and two separate formulas are needed: f₁(x) = 1 + x if −1 < x < 0, and f₁(x) = 1 − x if 0 < x < 1. E(X) = ∫_{−1}^{0} x(1 + x) dx + ∫_{0}^{1} x(1 − x) dx.

  34. 12. - continued The marginal p.d.f. of Y is f₂(y) = 2y if 0 < y < 1, and E(Y) = ∫₀¹ 2y² dy = 2/3.

  35. E(XY) = ∫₀¹ ∫_{−y}^{y} xy dx dy = 0, so Cov(X,Y) = E(XY) − E(X) E(Y) = 0 and ρ = 0. (d) Are X and Y independent? Does this answer depend on the value of ρ (found in part (c))? X and Y cannot be independent, since the joint space is not "rectangular". If X and Y were independent, then their correlation would have to be zero; however, when the correlation between two random variables is zero, the random variables may or may not be independent.

  36. Name the distribution of a random variable which
(a) is one if a specified event occurs and zero otherwise.
(b) is the number of times a specified event occurs in a set number of independent trials.
(c) is the number of times a specified event occurs in a set number of trials which are not independent, due to selection without replacement.
(d) is the number of times a specified event occurs in an interval.
(e) is the number of independent trials until a specified event occurs.
(f) is the number of independent trials until a specified event occurs a set number of times.
(g) is the length until a specified event occurs.
(h) is the length until a specified event occurs a set number of times.
(i) is a random number selected from an interval.
(j) is the square of a standard normal random variable.
(k) is a random number selected from the positive integers which are less than or equal to a set integer.

  37. (l) can be changed into a standard normal random variable when the mean of the distribution is subtracted and this difference is divided by the standard deviation of the distribution.
The answers are:
(a) Bernoulli distribution
(b) binomial distribution
(c) hypergeometric distribution
(d) Poisson distribution
(e) geometric distribution
(f) negative binomial distribution
(g) exponential distribution
(h) gamma distribution
(i) uniform distribution on the interval
(j) chi-square distribution with one degree of freedom
(k) uniform distribution on the first set number of positive integers
(l) any normal distribution
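For reference, each answer has a direct counterpart in scipy.stats. This sketch is mine, not the slides'; the parameter values (p = 0.3, n = 10, and so on) are placeholders chosen only so that each object can be constructed.

```python
from scipy import stats

answers = {
    "a": stats.bernoulli(p=0.3),
    "b": stats.binom(n=10, p=0.3),
    "c": stats.hypergeom(M=50, n=5, N=10),  # population 50, 5 marked, draw 10
    "d": stats.poisson(mu=2.0),
    "e": stats.geom(p=0.3),
    "f": stats.nbinom(n=3, p=0.3),
    "g": stats.expon(scale=2.0),
    "h": stats.gamma(a=3, scale=2.0),
    "i": stats.uniform(loc=0, scale=1),     # uniform on (0, 1)
    "j": stats.chi2(df=1),
    "k": stats.randint(low=1, high=7),      # uniform on {1, ..., 6}
    "l": stats.norm(loc=0, scale=1),
}
for letter, dist in answers.items():
    print(letter, dist.mean())
```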

  38. 13. (a) Let X₁ and X₂ be any two random variables having finite means and variances, with E(X₁) = μ₁, E(X₂) = μ₂, Var(X₁) = σ₁², and Var(X₂) = σ₂². Let a₁ and a₂ be constants. If X₁ and X₂ are independent, find each of the following:

E(X₁ + X₂) = E(X₁) + E(X₂) = μ₁ + μ₂.

E(a₁X₁ + a₂X₂) = E(a₁X₁) + E(a₂X₂) = a₁E(X₁) + a₂E(X₂) = a₁μ₁ + a₂μ₂.

Var(X₁ + X₂) = E{[(X₁ + X₂) − (μ₁ + μ₂)]²} = E{[(X₁ − μ₁) + (X₂ − μ₂)]²} = E{(X₁ − μ₁)²} + 2E{(X₁ − μ₁)(X₂ − μ₂)} + E{(X₂ − μ₂)²} = σ₁² + 2(0) + σ₂² = σ₁² + σ₂².

  39. Var(a₁X₁ + a₂X₂) = E{[(a₁X₁ + a₂X₂) − (a₁μ₁ + a₂μ₂)]²} = E{[a₁(X₁ − μ₁) + a₂(X₂ − μ₂)]²} = a₁²E{(X₁ − μ₁)²} + 2a₁a₂E{(X₁ − μ₁)(X₂ − μ₂)} + a₂²E{(X₂ − μ₂)²} = a₁²σ₁² + 2a₁a₂(0) + a₂²σ₂² = a₁²σ₁² + a₂²σ₂².

  40. 13. - continued (b) If the correlation between X₁ and X₂ is ρ₁₂, find each of the following:

E(X₁ + X₂) = E(X₁) + E(X₂) = μ₁ + μ₂, and E(a₁X₁ + a₂X₂) = a₁E(X₁) + a₂E(X₂) = a₁μ₁ + a₂μ₂, as in part (a).

Var(X₁ + X₂) = (using work done in part (a)) E{(X₁ − μ₁)²} + 2E{(X₁ − μ₁)(X₂ − μ₂)} + E{(X₂ − μ₂)²} = σ₁² + 2ρ₁₂σ₁σ₂ + σ₂² = σ₁² + σ₂² + 2ρ₁₂σ₁σ₂.

  41. Var(a₁X₁ + a₂X₂) = (using work done in part (a)) a₁²E{(X₁ − μ₁)²} + 2a₁a₂E{(X₁ − μ₁)(X₂ − μ₂)} + a₂²E{(X₂ − μ₂)²} = a₁²σ₁² + 2a₁a₂ρ₁₂σ₁σ₂ + a₂²σ₂² = a₁²σ₁² + a₂²σ₂² + 2a₁a₂ρ₁₂σ₁σ₂.

(c) If the correlation between X₁ and X₂ is zero (0) but X₁ and X₂ are not independent, how are the formulas in part (a) affected? The formulas in part (a) remain the same whenever the correlation between X₁ and X₂ is zero, regardless of whether X₁ and X₂ are independent.
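These part (b) formulas are easy to wrap in a helper; the sketch below (the function name is mine) checks them against the Exercise 2 moments that Exercise 14(d) uses later. Note the inputs are standard deviations, not variances.

```python
def lin_combo_moments(a1, a2, mu1, mu2, sig1, sig2, rho12):
    """Mean and variance of a1*X1 + a2*X2 when corr(X1, X2) = rho12."""
    mean = a1 * mu1 + a2 * mu2
    var = (a1 ** 2 * sig1 ** 2 + a2 ** 2 * sig2 ** 2
           + 2 * a1 * a2 * rho12 * sig1 * sig2)
    return mean, var

s = (5 / 9) ** 0.5   # sigma for Exercise 2 (mean 7/3, variance 5/9, rho = -1/5)
print(lin_combo_moments(1, 1, 7/3, 7/3, s, s, -0.2))    # (14/3, 8/9)
print(lin_combo_moments(2, -5, 7/3, 7/3, s, s, -0.2))   # (-7, 55/3)
```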

  42. 14. In doing each part of this exercise, note that the formulas derived in the previous exercise will be useful. (a) Let the random variables X and Y be as in Class Exercise #11. From calculations done in that exercise, complete the following: E(X) = 0, Var(X) = 2/3, E(Y) = 2/3, Var(Y) = 2/9, ρ_XY = 0. Use this information to find each of the following:

E(X + Y) = 0 + 2/3 = 2/3.
Var(X + Y) = 2/3 + 2/9 = 8/9.
E(X − Y) = 0 − 2/3 = −2/3.

  43. Var(X − Y) = 2/3 + 2/9 = 8/9.
E(2X − 5Y) = 2(0) − 5(2/3) = −10/3.
Var(2X − 5Y) = 2²(2/3) + (−5)²(2/9) = 74/9.

  44. 14. - continued (b) Let the random variables X and Y be as in Class Exercise #9. From calculations done in that exercise, complete the following: E(X) = 10/9, Var(X) = 110/567, E(Y) = 5/6, Var(Y) = 5/252, ρ_XY = √(2/11). Use this information to find each of the following:

E(X + Y) = 10/9 + 5/6 = 35/18.
Var(X + Y) = 110/567 + 5/252 + 2√(2/11) · √(110/567) · √(5/252) = 485/2268 + 10/189 = 605/2268.
E(X − Y) = 10/9 − 5/6 = 5/18.

  45. Var(X − Y) = 110/567 + 5/252 − 2√(2/11) · √(110/567) · √(5/252) = 485/2268 − 20/378 = 365/2268.
E(2X − 5Y) = 2(10/9) − 5(5/6) = −35/18.
Var(2X − 5Y) = 2²(110/567) + (−5)²(5/252) + 2(2)(−5)√(2/11) · √(110/567) · √(5/252) = 2885/2268 − 200/378 = 1685/2268.

  46. 14. - continued (c) Let the random variables X and Y be as in Class Exercise #3. From calculations done in that exercise, complete the following: E(X) = E(Y) = 7/3, Var(X) = Var(Y) = 5/9, ρ_XY = 0. Use this information to find each of the following:

E(X + Y) = 7/3 + 7/3 = 14/3.
Var(X + Y) = 5/9 + 5/9 = 10/9.
E(X − Y) = 7/3 − 7/3 = 0.
Var(X − Y) = 5/9 + 5/9 = 10/9.
E(2X − 5Y) = 2(7/3) − 5(7/3) = −7.
Var(2X − 5Y) = 2²(5/9) + (−5)²(5/9) = 145/9.

  47. (d) Let the random variables X and Y be as in Class Exercise #2. From calculations done in that exercise, complete the following: E(X) = E(Y) = 7/3, Var(X) = Var(Y) = 5/9, ρ_XY = −1/5. Use this information to find each of the following (completing this exercise is homework!):

E(X + Y) = 7/3 + 7/3 = 14/3.
Var(X + Y) = 5/9 + 5/9 + 2(−1/5) · √(5/9) · √(5/9) = 8/9.
E(X − Y) = 7/3 − 7/3 = 0.
Var(X − Y) = 5/9 + 5/9 − 2(−1/5) · √(5/9) · √(5/9) = 4/3.
E(2X − 5Y) = 2(7/3) − 5(7/3) = −7.
Var(2X − 5Y) = 2²(5/9) + (−5)²(5/9) + (2)(2)(−5)(−1/5) · √(5/9) · √(5/9) = 165/9 = 55/3.

  48. 15. (a) Let X₁, X₂, …, Xₙ be any n random variables having finite means and variances, with E(X₁) = μ₁, E(X₂) = μ₂, …, E(Xₙ) = μₙ, Var(X₁) = σ₁², Var(X₂) = σ₂², …, Var(Xₙ) = σₙ². Let a₁, a₂, …, aₙ be constants and define the random variable Y = a₁X₁ + a₂X₂ + … + aₙXₙ = Σᵢ₌₁ⁿ aᵢXᵢ. If X₁, X₂, …, Xₙ are (mutually) independent, find each of the following:

E(Y) = E(a₁X₁ + a₂X₂ + … + aₙXₙ) = a₁E(X₁) + a₂E(X₂) + … + aₙE(Xₙ) = a₁μ₁ + a₂μ₂ + … + aₙμₙ = Σᵢ₌₁ⁿ aᵢμᵢ.

Var(Y) = Var(a₁X₁ + a₂X₂ + … + aₙXₙ) = E{[(a₁X₁ + a₂X₂ + … + aₙXₙ) − (a₁μ₁ + a₂μ₂ + … + aₙμₙ)]²} =

  49. E{[a₁(X₁ − μ₁) + a₂(X₂ − μ₂) + … + aₙ(Xₙ − μₙ)]²}
= a₁²E{(X₁ − μ₁)²} + … + aₙ²E{(Xₙ − μₙ)²} + 2a₁a₂E{(X₁ − μ₁)(X₂ − μ₂)} + … + 2a₁aₙE{(X₁ − μ₁)(Xₙ − μₙ)} + 2a₂a₃E{(X₂ − μ₂)(X₃ − μ₃)} + … + 2a₂aₙE{(X₂ − μ₂)(Xₙ − μₙ)} + … + 2aₙ₋₁aₙE{(Xₙ₋₁ − μₙ₋₁)(Xₙ − μₙ)}
= a₁²σ₁² + a₂²σ₂² + … + aₙ²σₙ² + 0 = Σᵢ₌₁ⁿ aᵢ²σᵢ².

  50. 15. - continued (b) Suppose that for any pair of distinct random variables Xᵢ and Xⱼ the correlation is ρᵢⱼ; then find each of the following:

E(Y) = E(a₁X₁ + a₂X₂ + … + aₙXₙ) = a₁μ₁ + a₂μ₂ + … + aₙμₙ = Σᵢ₌₁ⁿ aᵢμᵢ (same as in part (a)).

Var(Y) = Var(a₁X₁ + a₂X₂ + … + aₙXₙ) = (same expansion as in part (a)) = a₁²E{(X₁ − μ₁)²} + … + aₙ²E{(Xₙ − μₙ)²} + 2a₁a₂E{(X₁ − μ₁)(X₂ − μ₂)} + … + 2aₙ₋₁aₙE{(Xₙ₋₁ − μₙ₋₁)(Xₙ − μₙ)} = Σᵢ₌₁ⁿ aᵢ²σᵢ² + 2 Σ_{i<j} aᵢaⱼρᵢⱼσᵢσⱼ.
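The part (b) result is exactly the quadratic form Var(Y) = aᵀΣa, where Σ is the covariance matrix with entries σᵢσⱼρᵢⱼ. A short numpy sketch, with made-up numbers purely for illustration:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])           # the constants a_i
sig = np.array([1.0, 0.5, 2.0])          # standard deviations sigma_i
rho = np.array([[1.0, 0.3, 0.0],
                [0.3, 1.0, -0.2],
                [0.0, -0.2, 1.0]])       # correlations rho_ij
Sigma = np.outer(sig, sig) * rho         # covariance matrix: Sigma_ij = s_i s_j rho_ij
var_Y = a @ Sigma @ a                    # sum a_i^2 s_i^2 + 2 sum_{i<j} a_i a_j rho_ij s_i s_j
print(var_Y)
```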
