
Binary Variables (1)


Presentation Transcript


  1. Binary Variables (1) Coin flipping: heads=1, tails=0 Bernoulli Distribution
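
The equations on these slides were images and did not survive the transcript. The deck tracks Bishop's PRML Ch. 2, so the standard forms are a safe reconstruction:

    Bern(x|µ) = µ^x (1 − µ)^(1−x),  x ∈ {0, 1}
    E[x] = µ,  var[x] = µ(1 − µ)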

  2. Binary Variables (2) N coin flips: Binomial Distribution
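
Reconstructed binomial formula (standard form): the number m of heads in N flips follows

    Bin(m|N, µ) = (N choose m) µ^m (1 − µ)^(N−m)
    E[m] = Nµ,  var[m] = Nµ(1 − µ)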

  3. Binomial Distribution

  4. The Multinomial Distribution The multinomial distribution is a generalization of the binomial distribution. Unlike the binomial distribution, where the RV assumes two outcomes, the RV for the multinomial distribution can assume k (k > 2) possible outcomes. Let N be the total number of independent trials and mi, i = 1, 2, …, k, be the number of times outcome i appears. Then, performing N independent trials, the probability that outcome 1 appears m1 times, outcome 2 appears m2 times, …, outcome k appears mk times is given below.
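
Reconstructed multinomial pmf (standard form):

    P(m1, …, mk | N, µ1, …, µk) = N! / (m1! m2! ⋯ mk!) · µ1^m1 µ2^m2 ⋯ µk^mk
    with Σ_i mi = N and Σ_i µi = 1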

  5. The Gaussian Distribution
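
Reconstructed densities (standard forms):

    univariate:   N(x|µ, σ²) = (2πσ²)^(−1/2) exp( −(x − µ)² / (2σ²) )
    multivariate: N(x|µ, Σ) = (2π)^(−D/2) |Σ|^(−1/2) exp( −(1/2)(x − µ)ᵀ Σ⁻¹ (x − µ) ),  x ∈ R^D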

  6. Moments of the Multivariate Gaussian (1) thanks to anti-symmetry of z
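
The step the slide alludes to, spelled out: substituting z = x − µ gives

    E[x] = ∫ N(x|µ, Σ) x dx = ∫ N(z|0, Σ) (z + µ) dz = µ

because the term in z is an odd (anti-symmetric) function of z and integrates to zero.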

  7. Moments of the Multivariate Gaussian (2)
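
The standard second-order results (the slide's equations are missing):

    E[x xᵀ] = µµᵀ + Σ,  cov[x] = E[ (x − E[x])(x − E[x])ᵀ ] = Σ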

  8. Central Limit Theorem The distribution of the sum of N i.i.d. random variables becomes increasingly Gaussian as N grows. Example: N uniform [0,1] random variables.
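
A quick numerical check of this claim; a minimal NumPy sketch (trial counts and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    for N in (1, 2, 10):
        # Average of N uniform[0,1] draws, repeated over 100000 trials.
        means = rng.uniform(0.0, 1.0, size=(100000, N)).mean(axis=1)
        # The average of N uniforms has mean 0.5 and variance 1/(12 N).
        print("N=%2d  sample mean=%.4f  sample var=%.5f  theory var=%.5f"
              % (N, means.mean(), means.var(), 1.0 / (12 * N)))

As N grows, a histogram of `means` looks increasingly Gaussian around 0.5.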

  9. Beta Distribution Beta is a continuous distribution defined on the interval [0, 1], parameterized by two positive parameters a and b; its normalizer involves the gamma function Γ(·). Beta is conjugate to the binomial and Bernoulli distributions.
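
Reconstructed beta density:

    Beta(µ|a, b) = Γ(a + b) / ( Γ(a) Γ(b) ) · µ^(a−1) (1 − µ)^(b−1),  µ ∈ [0, 1]
    E[µ] = a / (a + b)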

  10. Beta Distribution

  11. The Dirichlet Distribution The Dirichlet distribution is a continuous multivariate probability distribution parametrized by a vector α of positive reals. It is the multivariate generalization of the beta distribution and the conjugate prior for the multinomial distribution.
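
Reconstructed Dirichlet density (α0 here is shorthand for Σ_k αk):

    Dir(µ|α) = Γ(α0) / ( Γ(α1) ⋯ Γ(αK) ) · ∏_k µk^(αk − 1),  with µk ≥ 0 and Σ_k µk = 1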

  12. Mixtures of Gaussians (1) Old Faithful data set (figure): a single Gaussian vs. a mixture of two Gaussians.

  13. Mixtures of Gaussians (2) Combine simple models into a complex model (the figure shows K = 3 components, each weighted by a mixing coefficient):
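
Reconstructed mixture density:

    p(x) = Σ_{k=1..K} πk N(x|µk, Σk),  with 0 ≤ πk ≤ 1 and Σ_k πk = 1

Each N(x|µk, Σk) is a component; πk is its mixing coefficient.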

  14. Mixtures of Gaussians (3)
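
The body of this slide did not survive extraction. In the PRML material this deck follows, the mixture discussion continues with the log likelihood maximized when fitting the parameters, which would read:

    ln p(X|π, µ, Σ) = Σ_{n=1..N} ln [ Σ_{k=1..K} πk N(xn|µk, Σk) ]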

  15. The Exponential Family (1) The general form is given below, where η is the natural parameter and g(η) can be interpreted as a normalization coefficient.
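
Reconstructed general form:

    p(x|η) = h(x) g(η) exp( ηᵀ u(x) )
    with g(η) satisfying  g(η) ∫ h(x) exp( ηᵀ u(x) ) dx = 1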

  16. The Exponential Family (2.1) The Bernoulli Distribution. Comparing with the general form, we can read off η (see below), and inverting it gives the logistic sigmoid.
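
The identification, reconstructed:

    Bern(x|µ) = µ^x (1 − µ)^(1−x) = (1 − µ) exp( x ln( µ/(1 − µ) ) )
    η = ln( µ/(1 − µ) ),  and so  µ = σ(η) = 1 / (1 + exp(−η))   (logistic sigmoid)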

  17. The Exponential Family (2.2) The Bernoulli distribution can hence be written in the standard exponential-family form below.
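
Reconstructed form:

    p(x|η) = σ(−η) exp(η x),  where  u(x) = x,  h(x) = 1,  g(η) = σ(−η)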

  18. The Exponential Family (3.1) The Multinomial Distribution, in exponential-family form below. NOTE: the ηk parameters are not independent, since the corresponding µk must satisfy Σ_k µk = 1.
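
Reconstructed form:

    p(x|µ) = ∏_k µk^(xk) = exp( Σ_k xk ln µk ) = exp( ηᵀ x )
    where ηk = ln µk,  u(x) = x,  h(x) = 1,  g(η) = 1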

  19. The Exponential Family (3.2) Let ηk = ln( µk / (1 − Σ_{j<M} µj) ). This leads to the softmax expression for µk (below), and here the ηk parameters are independent. Note that 0 ≤ µk ≤ 1 and Σ_{k<M} µk ≤ 1.
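
Inverting the relation above gives the softmax function:

    µk = exp(ηk) / ( 1 + Σ_{j=1..M−1} exp(ηj) )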

  20. The Exponential Family (3.3) The Multinomial distribution can then be written in the form below.
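
Reconstructed form (single observation):

    p(x|η) = ( 1 + Σ_{k=1..M−1} exp(ηk) )^(−1) exp( ηᵀ x )
    where u(x) = x,  h(x) = 1,  g(η) = ( 1 + Σ_k exp(ηk) )^(−1)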

  21. The Exponential Family (4) The Gaussian Distribution also fits the general form; the identification is given below.
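
Reconstructed identification for the univariate case:

    u(x) = (x, x²)ᵀ,  η = ( µ/σ², −1/(2σ²) )ᵀ
    h(x) = (2π)^(−1/2),  g(η) = (−2η2)^(1/2) exp( η1² / (4η2) )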

  22. Conjugate priors For any member of the exponential family there exists a conjugate prior (see below). Combining it with the likelihood function, we get the posterior. The likelihood and the prior are conjugate if the prior and the posterior have the same functional form.
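
The prior and posterior, reconstructed:

    prior:      p(η|χ, ν) = f(χ, ν) g(η)^ν exp( ν ηᵀ χ )
    posterior:  p(η|X, χ, ν) ∝ g(η)^(ν+N) exp( ηᵀ ( Σ_n u(xn) + ν χ ) )

The posterior has the same functional form as the prior, with ν → ν + N and νχ → νχ + Σ_n u(xn).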

  23. Conjugate priors (cont’d) The beta prior is conjugate to the binomial and Bernoulli distributions. The Dirichlet prior is conjugate to the multinomial distribution. A Gaussian prior over the mean is conjugate to the Gaussian distribution.

  24. Noninformative Priors (1) With little or no information available a priori, we might choose a uniform prior. Three cases (formulas below): λ discrete, K-nomial; λ ∈ [a, b], real and bounded; λ real and unbounded, where a uniform prior is improper!
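
The three cases, with the missing formulas filled in:

    λ discrete with K states:  p(λ = k) = 1/K
    λ ∈ [a, b], real:          p(λ) = 1/(b − a)
    λ real and unbounded:      a constant prior cannot be normalized, hence improper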

  25. Nonparametric Methods (1) Parametric distribution models are restricted to specific forms, which may not always be suitable; for example, consider modelling a multimodal distribution with a single, unimodal model. Nonparametric approaches make few assumptions about the overall shape of the distribution being modelled.

  26. Nonparametric Methods (2) Histogram methods partition the data space into distinct bins with widths Δi and count the number of observations, ni, in each bin (density formula below). Often, the same width is used for all bins, Δi = Δ. Δ acts as a smoothing parameter. In a D-dimensional space, using M bins in each dimension will require M^D bins!
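
The bin-density formula the slide refers to, reconstructed:

    pi = ni / (N Δi)   (density in bin i, given N observations in total)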

  27. Nonparametric Methods (3) Kernel Density Estimation is a non-parametric way of estimating the probability density function of a random variable. Let (x1, x2, …, xn) be an i.i.d. sample drawn from some distribution with an unknown density p(x). The Parzen-window estimate follows below, where k(·) is the kernel function and h is the bandwidth, serving as a smoothing parameter. The only parameter is h.
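
The Parzen-window estimate, reconstructed for the 1-D case:

    p̂(x) = 1/(n h) Σ_{i=1..n} k( (x − xi) / h )   (in D dimensions the factor is 1/(n h^D))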

  28. Nonparametric Methods (4) To avoid discontinuities in p(x), use a smooth kernel, e.g. a Gaussian. Any kernel k(u) that is non-negative and integrates to one will work. h acts as a smoother.
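
A sketch of a Parzen-window estimator with a Gaussian kernel (NumPy; the function and variable names are mine, not the slides'):

    import numpy as np

    def gaussian_kde(x_query, samples, h):
        """Estimate p(x) at x_query from 1-D samples with bandwidth h."""
        x = np.asarray(x_query, dtype=float)[:, None]    # shape (m, 1)
        xi = np.asarray(samples, dtype=float)[None, :]   # shape (1, n)
        u = (x - xi) / h
        k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel k(u)
        return k.mean(axis=1) / h                        # (1/(n h)) * sum_i k((x - x_i)/h)

    # Example: a bimodal density that a single Gaussian would model badly.
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
    grid = np.linspace(-4.0, 4.0, 9)
    print(np.round(gaussian_kde(grid, data, h=0.3), 3))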

  29. Nonparametric Methods (5) Nonparametric models (not histograms) require storing, and computing with, the entire data set. Parametric models, once fitted, are much more efficient in terms of storage and computation.

  30. K-Nearest-Neighbours for Classification The k-nearest-neighbours algorithm (k-NN) is a method for classifying objects based on the closest training examples in the feature space; see the sketch below. (Figure: decision regions for K = 1 and K = 3.)
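
A minimal sketch of the classifier just described (NumPy; names are illustrative, and ties are not handled):

    import numpy as np

    def knn_classify(x, train_X, train_y, k):
        """Majority vote among the k training points closest to x (Euclidean)."""
        d = np.linalg.norm(train_X - x, axis=1)   # distances to all training points
        nearest = np.argsort(d)[:k]               # indices of the k closest
        labels, counts = np.unique(train_y[nearest], return_counts=True)
        return labels[np.argmax(counts)]          # most common label among them

    # Toy example with two well-separated 2-D classes.
    train_X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
    train_y = np.array([0, 0, 0, 1, 1, 1])
    print(knn_classify(np.array([0.5, 0.5]), train_X, train_y, k=3))  # -> 0
    print(knn_classify(np.array([5.5, 5.5]), train_X, train_y, k=1))  # -> 1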

  31. K-Nearest-Neighbours for Classification The best choice of k depends upon the data; larger values of k reduce the effect of noise on the classification, but make boundaries between classes less distinct. A good k can be selected by cross-validation.

  32. K-Nearest-Neighbours for Classification (3) • K acts as a smoother • For N → ∞, the error rate of the 1-nearest-neighbour classifier is never more than twice the optimal error (obtained from the true conditional class distributions).

  33. Parametric Estimation Basic building blocks: parametric densities p(x|θ). Need to determine θ given observed samples x1, …, xn. Two approaches: Maximum Likelihood (ML) and Maximum Posterior Probability (MAP).

  34. ML Parameter Estimation Since samples x1, x2, …, xn are IID, the likelihood factorizes, and the log likelihood becomes a sum (see below). θ can be obtained by taking the derivative of the log likelihood with respect to θ and setting it to zero.
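
The factorization and log likelihood, reconstructed:

    L(θ) = p(x1, …, xn | θ) = ∏_{i=1..n} p(xi|θ)
    ℓ(θ) = ln L(θ) = Σ_{i=1..n} ln p(xi|θ),   θ_ML solves ∂ℓ/∂θ = 0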

  35. Parameter Estimation (1) ML for Bernoulli. Given a sample x1, …, xN of binary outcomes, the estimate is derived below.
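
The derivation, reconstructed:

    ℓ(µ) = Σ_n [ xn ln µ + (1 − xn) ln(1 − µ) ]
    setting dℓ/dµ = 0 gives  µ_ML = (1/N) Σ_n xn = m/N   (m = number of heads)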

  36. Maximum Likelihood for the Gaussian Given i.i.d. data x1, …, xN, the log likelihood function is given below; it depends on the data only through the sufficient statistics Σ_n xn and Σ_n xn xnᵀ.
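
The log likelihood, reconstructed for the multivariate case:

    ln p(X|µ, Σ) = −(N D / 2) ln(2π) − (N/2) ln|Σ| − (1/2) Σ_{n=1..N} (xn − µ)ᵀ Σ⁻¹ (xn − µ)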

  37. Maximum Likelihood for the Gaussian Set the derivative of the log likelihood function to zero and solve to obtain the ML estimate of the mean; similarly for the covariance (see below).
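
The resulting estimates:

    µ_ML = (1/N) Σ_n xn
    Σ_ML = (1/N) Σ_n (xn − µ_ML)(xn − µ_ML)ᵀ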

  38. MAP Parameter Estimation Since samples x1, x2, …, xn are IID, the posterior factorizes as shown below. Taking the log yields the log posterior, and θ can be solved for by maximizing it. p(θ) is typically chosen to be the conjugate of the likelihood.
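
Reconstructed:

    p(θ | x1, …, xn) ∝ p(θ) ∏_{i=1..n} p(xi|θ)
    θ_MAP = argmax_θ [ Σ_{i=1..n} ln p(xi|θ) + ln p(θ) ]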

  39. Bayesian Inference for the Gaussian (1) Assume σ² is known. Given i.i.d. data x1, …, xN, the likelihood function for µ is given below. It has a Gaussian shape as a function of µ (but it is not a distribution over µ).
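
The likelihood, reconstructed:

    p(X|µ) = ∏_{n=1..N} N(xn|µ, σ²)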

  40. Bayesian Inference for the Gaussian (2) Combined with a Gaussian prior over µ, this gives the posterior. Completing the square over µ, we see that the posterior is again Gaussian…

  41. Bayesian Inference for the Gaussian (3) … with posterior mean µN and variance σN² given below. Note the limiting behaviour as N → 0 and N → ∞.
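
The posterior parameters, reconstructed (µ0 and σ0² denote the prior mean and variance):

    p(µ|X) = N(µ|µN, σN²)
    µN = ( σ² / (N σ0² + σ²) ) µ0 + ( N σ0² / (N σ0² + σ²) ) µ_ML
    1/σN² = 1/σ0² + N/σ²

As N → 0, µN → µ0 and σN² → σ0²; as N → ∞, µN → µ_ML and σN² → 0.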

  42. Bayesian Inference for the Gaussian (4) Example: the posterior over µ for N = 0, 1, 2 and 10 observations.
