
Linear Model (III)


Presentation Transcript


  1. Linear Model (III) Rong Jin

  2. Announcement • Homework 2 is out and is due 02/05/2004 (next Tuesday) • Homework 1 is handed out

  3. Recap: Logistic Regression Model • Assume the inputs and outputs are related through a log-linear function • Estimate weights: MLE approach
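The model and likelihood on this slide appear as equation images in the original deck and are not preserved in the transcript. A standard form consistent with the recap (weights w and a bias/threshold c, as referenced on slide 7) is sketched below; the exact notation is an assumption.

```latex
% Logistic regression (assumed form): probability of label y given input x
p(y \mid \mathbf{x}) = \frac{1}{1 + \exp\!\bigl(-y\,(\mathbf{w}^{\top}\mathbf{x} + c)\bigr)},
\qquad y \in \{+1, -1\}

% MLE: choose w and c to maximize the log-likelihood of the training data
l(D_{\text{train}}) = \sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i)
= -\sum_{i=1}^{n} \log\!\bigl(1 + \exp\!\bigl(-y_i(\mathbf{w}^{\top}\mathbf{x}_i + c)\bigr)\bigr)
```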

  4. Example: Text Classification • Input x: a binary vector • Each word is a different dimension • xi = 0 if the ith word does not appear in the document; xi = 1 if it appears in the document • Output y: interesting document or not • +1: interesting • -1: uninteresting
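As a concrete illustration of the binary encoding described above, here is a minimal sketch with a made-up five-word vocabulary (not part of the original slides):

```python
# Sketch: binary bag-of-words encoding of a document (hypothetical vocabulary).
vocabulary = ["bird", "wildflower", "irrigation", "world", "people"]

def encode(document: str) -> list:
    """Return x where x[i] = 1 if the i-th vocabulary word appears, else 0."""
    words = set(document.lower().split())
    return [1 if term in words else 0 for term in vocabulary]

x = encode("Rain Bird is one of the leading irrigation manufacturers in the world")
print(x)  # [1, 0, 1, 1, 0]
```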

  5. Example: Text Classification Doc 1: The purpose of the Lady Bird Johnson Wildflower Center is to educate people around the world, … Doc 2: Rain Bird is one of the leading irrigation manufacturers in the world, providing complete irrigation solutions for people…

  6. Example 2: Text Classification • Logistic regression model • Every term ti is assigned a weight wi • Learning parameters: MLE approach • Numerical solutions are needed

  7. Example 2: Text Classification • Weight wi • wi > 0: term ti is positive evidence • wi < 0: term ti is negative evidence • wi = 0: term ti is irrelevant to whether or not the document is interesting • The larger |wi| is, the more important the term ti is in determining whether the document is interesting • Threshold c
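To make the role of the weights and the threshold concrete, here is a minimal decision-rule sketch. The exact way the deck uses the threshold c is not preserved, so c is treated as the model's bias and a 0.5 probability cut-off is assumed:

```python
import math

def predict_interesting(x, w, c):
    """Classify a binary word vector x using per-term weights w and bias c.

    Sketch only: assumes the sigmoid model from slide 3 and a 0.5 cut-off,
    which is equivalent to checking the sign of (c + w . x).
    """
    score = c + sum(wi * xi for wi, xi in zip(w, x))
    p_interesting = 1.0 / (1.0 + math.exp(-score))  # p(y = +1 | x)
    return +1 if p_interesting >= 0.5 else -1
```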

  8. Example 2: Text Classification • Dataset: Reuters-21578 • Classification accuracy • Naïve Bayes: 77% • Logistic regression: 88%

  9. Why Does Logistic Regression Work Better for Text Classification? • Common words • Small weights in logistic regression • Large weights in naïve Bayes • Weight ~ p(w|+) – p(w|-) • Independence assumption • Naïve Bayes assumes that each word is generated independently • Logistic regression is able to take the correlation between words into account

  10. Comparison • Discriminative Model • Models P(y|x) directly • Models the decision boundary • Usually good performance • But • Slow convergence • Expensive computation • Sensitive to noisy data • Generative Model • Models P(x|y) • Models the input patterns • Usually converges fast • Cheap computation • Robust to noisy data • But • Usually performs worse

  11. Problems with Logistic Regression? How about words that only appear in one class?

  12. Overfitting Problem with Logistic Regression • Consider a word t that appears in only one document d, and d is a positive document. Let w be its associated weight • Consider the derivative of l(Dtrain) with respect to w • w will go to infinity!
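The derivative on this slide is an image in the original deck. Reconstructing the argument from the log-likelihood assumed on slide 3 (the notation is an assumption): for a word t that occurs only in the positive document d,

```latex
\frac{\partial\, l(D_{\text{train}})}{\partial w_t}
  = \sum_{i=1}^{n} y_i\, x_{i,t}\,\bigl(1 - p(y_i \mid \mathbf{x}_i)\bigr)
  = 1 - p(+1 \mid \mathbf{x}_d) \;>\; 0
```

The derivative stays strictly positive no matter how large w_t already is, so maximizing the likelihood keeps increasing w_t without bound.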

  13. Solution: Regularization • Regularized log-likelihood • Large weights → small weights • Prevent weights from being too large • Small weights → zero • Sparse weights
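The regularized log-likelihood on this slide is also an equation image. A common L2 (Gaussian-prior) form consistent with "prevent weights from being too large" is sketched below, with an assumed regularization constant λ; the original deck may use a different penalty or symbol.

```latex
l_{\text{reg}}(D_{\text{train}})
  = \sum_{i=1}^{n} \log p(y_i \mid \mathbf{x}_i)
    \;-\; \frac{\lambda}{2} \sum_{j} w_j^{2},
  \qquad \lambda > 0
```

The larger λ is, the more strongly the second term pulls the weights toward zero.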

  14. Why Do We Need a Sparse Solution? • Two types of solutions • 1. Many non-zero weights, but many of them are small • 2. Only a small number of non-zero weights, and many of them are large • Occam's Razor: the simpler, the better • A simpler model that fits the data is unlikely to be a coincidence • A complicated model that fits the data might be a coincidence • Smaller number of non-zero weights → less evidence to consider → simpler model → case 2 is preferred

  15. Occam's Razor

  16. Occam's Razor: Power = 1

  17. Occam's Razor: Power = 3

  18. Occam’s Razor: Power = 10

  19. Finding Optimal Solutions • Concave objective function • No local maximum • Many standard optimization algorithms work

  20. Gradient Ascent (figure: regularized objective = prediction errors + a term preventing weights from being too large) • Maximize the log-likelihood by iteratively adjusting the parameters in small increments • In each iteration, we adjust w in the direction that increases the log-likelihood (toward the gradient)
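A minimal sketch of the update described on this slide, assuming the L2-regularized objective from slide 13 and a fixed step size (both assumptions; the deck's own pseudo-code is not preserved):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_ascent(X, y, lam=0.1, step=0.1, iters=1000):
    """Maximize the L2-regularized log-likelihood of logistic regression.

    X: list of binary feature vectors, y: labels in {+1, -1}.
    Returns the weight vector w and bias c.
    """
    d = len(X[0])
    w, c = [0.0] * d, 0.0
    for _ in range(iters):
        grad_w = [-lam * wj for wj in w]  # gradient of the -(lam/2)*||w||^2 penalty
        grad_c = 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + c)
            factor = (1.0 - sigmoid(margin)) * yi  # = yi * (1 - p(yi | xi))
            grad_w = [g + factor * xj for g, xj in zip(grad_w, xi)]
            grad_c += factor
        # move a small step in the gradient direction (toward higher log-likelihood)
        w = [wj + step * gj for wj, gj in zip(w, grad_w)]
        c += step * grad_c
    return w, c
```

For example, gradient_ascent([[1, 0, 1], [0, 1, 0]], [+1, -1]) fits weights for two toy "documents", giving positive weights to the terms of the positive document and a negative weight to the term of the negative one.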

  21. Graphical Illustration (figure: no regularization case)

  22. Graphical Illustration (figure: training curves with and without regularization, plotted against iteration)

  23. When Should We Stop? • The gradient ascent learning method converges when there is no incentive to move the parameters in any particular direction, i.e. when the gradient is (close to) zero (see the condition below) • In many cases, deciding this can be very tricky • A small first-order derivative does not necessarily mean we are close to the maximum point
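The condition referred to above is an equation image in the original slide; under the regularized objective assumed earlier, it is the usual zero-gradient condition:

```latex
\frac{\partial\, l_{\text{reg}}(D_{\text{train}})}{\partial w_j} = 0
\quad \text{for every weight } w_j
\qquad \text{(in practice, stop when } \lVert \nabla l_{\text{reg}} \rVert \le \epsilon \text{)}
```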
