A Study of Smoothing Methods for Language Models Applied to Ad Hoc Information Retrieval

Presentation Transcript


  1. A Study of Smoothing Methods for Language Models Applied to Ad Hoc Information Retrieval Chengxiang Zhai, John Lafferty School of Computer Science Carnegie Mellon University

  2. Research Questions • General: What role is smoothing playing in the language modeling approach? • Specific: • Is the good performance due to smoothing? • How sensitive is retrieval performance to smoothing? • Which smoothing method is the best? • How do we set smoothing parameters?

  3. Outline • A General Smoothing Scheme and TF-IDF weighting • Three Smoothing Methods • Experiments and Results

  4. Retrieval as Language Model Estimation • Document ranking based on query likelihood (Ponte & Croft 98, Miller et al. 99, Berger & Lafferty 99, Hiemstra 2000, etc.): log p(q|d) = Σ_i log p(w_i|d), where p(w_i|d) is the document language model • Retrieval problem → estimation of p(w_i|d)
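
  A minimal Python sketch of this ranking criterion (my illustration, not the paper's code); `doc_lm` is an assumed callable returning the smoothed document-model probability p(w|d):

    import math

    def query_likelihood_score(query_terms, doc_lm):
        """Score a document by log p(q|d) = sum_i log p(w_i|d).

        `doc_lm` is a hypothetical callable mapping a word w to the (smoothed)
        document language model probability p(w|d); it must never return 0,
        which is exactly why smoothing is needed (next slide).
        """
        return sum(math.log(doc_lm(w)) for w in query_terms)

    # Usage sketch: rank documents by descending query-likelihood score, e.g.
    # ranked = sorted(docs, key=lambda d: query_likelihood_score(query, d.lm), reverse=True)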

  5. Why Smoothing? • Zero probability • If w does not occur in d, then p(w|d) = 0, and any query containing w will have zero probability. • Estimation inaccuracy • A document is a very small sample of words, so the maximum likelihood estimate will be inaccurate.
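
  A tiny made-up example of the zero-probability problem, using the unsmoothed maximum-likelihood estimate p_ml(w|d) = c(w;d)/|d| (document, query, and numbers are purely illustrative):

    import math
    from collections import Counter

    doc = "smoothing methods for language models".split()    # toy document
    query = "language model smoothing".split()                # toy query
    counts, dlen = Counter(doc), len(doc)

    def p_ml(w):
        return counts[w] / dlen   # maximum-likelihood estimate; 0 for unseen words

    # "model" never occurs in the toy document (only "models" does), so
    # p_ml("model") == 0 and the whole query likelihood collapses to zero.
    print([p_ml(w) for w in query])             # [0.2, 0.0, 0.2]
    print(math.prod(p_ml(w) for w in query))    # 0.0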

  6. Language Model Smoothing (Illustration) [Figure: P(w) over words w, comparing the maximum likelihood estimate with a smoothed LM (linear interpolation)]

  7. A General Smoothing Scheme • All smoothing methods try to • discount the probability of words seen in a document • re-allocate the extra probability mass so that unseen words get a non-zero probability • Most use a reference model (the collection language model) to discriminate unseen words • General form: p(w|d) = p_s(w|d) if w is seen in d (discounted ML estimate); = α_d p(w|C) otherwise (collection language model)
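
  A minimal sketch of this general scheme (not from the paper or the Lemur toolkit); `p_seen`, `alpha_d`, and `p_coll` are assumed placeholders that a concrete smoothing method would define:

    def smoothed_prob(w, doc_counts, doc_len, p_seen, alpha_d, p_coll):
        """General smoothing scheme:
           p(w|d) = p_seen(w, d)        if w occurs in d   (discounted ML estimate)
                  = alpha_d * p(w|C)    otherwise          (collection language model)
        """
        if doc_counts.get(w, 0) > 0:
            return p_seen(w, doc_counts, doc_len)   # discounted ML estimate
        return alpha_d * p_coll(w)                  # mass re-allocated to unseen words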

  8. Smoothing & TF-IDF Weighting • Plugging the general smoothing scheme into the query likelihood retrieval formula, we obtain log p(q|d) = Σ_{w∈q: c(w;d)>0} log[ p_s(w|d) / (α_d p(w|C)) ] + n log α_d + Σ_i log p(q_i|C) • The first sum combines TF weighting (through p_s(w|d)) and IDF weighting (through 1/p(w|C)); the n log α_d term acts as document length normalization (a long document is expected to have a smaller α_d); the last sum is document-independent and can be ignored for ranking • Smoothing with p(w|C) ⇒ TF-IDF + length normalization
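
  A sketch of that rank-equivalent form, reusing the assumed `p_seen`, `alpha_d`, and `p_coll` ingredients from the previous sketch; only query terms that occur in the document enter the TF-IDF-like sum, and the document-independent term Σ_i log p(q_i|C) is dropped:

    import math

    def rank_score(query_terms, doc_counts, doc_len, p_seen, alpha_d, p_coll):
        """log p(q|d) up to a document-independent constant:
           sum over matched terms of log[p_seen / (alpha_d * p(w|C))]   (TF + IDF weighting)
           + |q| * log(alpha_d)                                         (doc length normalization)
        """
        matched = sum(
            math.log(p_seen(w, doc_counts, doc_len) / (alpha_d * p_coll(w)))
            for w in query_terms
            if doc_counts.get(w, 0) > 0
        )
        return matched + len(query_terms) * math.log(alpha_d)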

  9. Three Smoothing Methods • Simplified Jelinek-Mercer: shrink uniformly toward p(w|C): p_λ(w|d) = (1 − λ) p_ml(w|d) + λ p(w|C) • Dirichlet prior (Bayesian): assume μ p(w|C) pseudo counts: p_μ(w|d) = (c(w;d) + μ p(w|C)) / (|d| + μ) • Absolute discounting: subtract a constant δ from each seen count: p_δ(w|d) = max(c(w;d) − δ, 0)/|d| + (δ |d|_u / |d|) p(w|C), where |d|_u is the number of unique terms in d
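
  Minimal sketches of the three estimators (my own illustration; parameter defaults are illustrative only; `p_coll` is an assumed collection model p(w|C), `counts` a word-count dict for the document, `dlen` the document length):

    def jelinek_mercer(w, counts, dlen, p_coll, lam=0.7):
        """p(w|d) = (1 - lambda) * c(w;d)/|d| + lambda * p(w|C)"""
        return (1.0 - lam) * counts.get(w, 0) / dlen + lam * p_coll(w)

    def dirichlet_prior(w, counts, dlen, p_coll, mu=2000.0):
        """p(w|d) = (c(w;d) + mu * p(w|C)) / (|d| + mu)"""
        return (counts.get(w, 0) + mu * p_coll(w)) / (dlen + mu)

    def absolute_discounting(w, counts, dlen, p_coll, delta=0.7):
        """p(w|d) = max(c(w;d) - delta, 0)/|d| + (delta * |d|_u / |d|) * p(w|C),
           where |d|_u is the number of unique terms in d."""
        unique = len(counts)
        return max(counts.get(w, 0) - delta, 0.0) / dlen + delta * unique / dlen * p_coll(w)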

  10. Experiments • Collections: Disk 4 & 5 − CR (~2GB), FBIS, FT, LA, TREC8 Small Web (~2GB) • Queries: TREC 351–400 and TREC 401–450 (Title + Long versions) • 18 query–collection combinations

  11. Results • Performance is sensitive to smoothing • Type of queries makes a difference! • More smoothing is needed for long queries than title queries • Precision is more sensitive to smoothing for long queries • Dirichlet prior is the best for title queries • Jelinek-Mercer is most sensitive to the length/type of queries

  12. Figure Explanation [Figure: average precision plotted against the smoothing parameter (e.g., λ, μ, or δ); moving right along the axis means more smoothing (up to 1.0 for λ); the optimal range marks the parameter settings achieving near-optimal average precision]

  13. Title Queries vs. Long Queries (Jelinek-Mercer on FBIS, FT, and LA) [Figure: precision curves over λ; the optimal λ for title queries is smaller than the optimal λ for long queries]

  14. Per-query Optimal Range of λ (JM on TREC8) [Figure: per-query optimal ranges of λ for title vs. long queries; title queries show a wide optimal range and a flat curve (less sensitive), long queries a narrow optimal range and a peaked curve (more sensitive)]

  15. More on Precision Sensitivity [Figure: precision vs. smoothing parameter for Absolute Discounting and Dirichlet Prior on a small DB and a large DB; annotations mark the optimal setting, the direction of more smoothing, and flatter (less sensitive) regions of the curves]

  16. Comparison of Three Methods

  17. A Possible Explanation of Observations • The dual role of smoothing • Estimation role: accurate estimation of p(w|d) • Query modeling role: generation of the common/non-informative words in a query • Title queries have few (if any) non-informative words, so • Performance is affected primarily by the estimation role of smoothing • They need less smoothing

  18. A Possible Explanation (cont.) • Long queries have more non-informative words, so • Performance is affected by both roles of smoothing • They need more smoothing (the extra smoothing serves query modeling) • Dirichlet is best for title queries because it is good at the estimation role • JM performs less well on title queries but much better on long queries, because it is good at the query modeling role but not as good at the estimation role

  19. The Lemur Toolkit • A language modeling and information retrieval toolkit • Under development at CMU and UMass • All experiments reported here were run using Lemur • http://www.cs.cmu.edu/~lemur • Contact us if you are interested in using it

  20. Conclusions and Future Work • Smoothing ⇒ TF-IDF + doc length normalization • Retrieval performance is sensitive to smoothing • Sensitivity depends on query type • More sensitive for long queries than for title queries • More smoothing is needed for long queries • All three methods can perform well when optimized • Dirichlet prior is especially good for title queries • Both Dirichlet prior and JM are good for long queries • Absolute discounting has a relatively stable optimal setting

  21. Conclusions and Future Work (cont.) • Smoothing plays two different roles • Better estimation of p(w|d) • Generation of common/non-informative words in query • Future work • More evaluation (types of queries, smoothing methods) • De-couple the dual role of smoothing (e.g., two-stage smoothing strategy) • Train query-specific smoothing parameters with past relevance judgments and other data (e.g., position selection translation model)
