
Disambiguation of Biomedical Text



  1. Disambiguation of Biomedical Text Mark Stevenson Natural Language Processing Group, University of Sheffield, UK http://www.dcs.shef.ac.uk/~marks Joint work with: Yikun Guo and Robert Gaizauskas (University of Sheffield) and David Martinez (University of Melbourne)

  2. Outline • Ambiguity in biomedical documents • Disambiguation • Knowledge sources • Evaluation • Semi-supervised acquisition of additional training data

  3. Text in Biomedical Domain • The literature on biomedicine and the life sciences is vast and growing rapidly • Promising domain for text processing • Search engines necessary • Opportunities for knowledge discovery

  4. Ambiguity • Lexical ambiguity makes text processing more difficult • Generally believed that ambiguities do not occur within domains • One Sense per Discourse (Gale, Church and Yarowsky, 1992) • “there is a very strong tendency (98%) for multiple uses of a word to share the same sense in a well-written discourse”

  5. cell

  6. culture • “In peripheral blood mononuclear cell culture streptococcal erythrogenic toxins are able to stimulate tryptophan degradation in humans.” (International Allergy Immunology) • “The aim of this paper is to describe the origins, initial steps and strategy, current progress and main accomplishments of introducing a quality management culture within the healthcare system in Poland.” (International Journal of Qualitative Health Care)

  7. Extent of Ambiguity Problem • Weeber et al. (2001) estimated that 11.7% of the phrases in abstracts added to MEDLINE in 1998 were ambiguous • Ambiguity is the biggest challenge in automating the indexing of MEDLINE and a hindrance to automated knowledge discovery (Weeber et al. 2001) (Nadkarni et al. 2001) (Aronson 2001)

  8. WSD System • Supervised learning approach • Extension of the University of the Basque Country's Senseval-3 system (Agirre and Martinez, 2004) • Combines a range of knowledge sources • Previous work has shown that combining knowledge sources is an effective approach to WSD

  9. Features • General • Wide range of features which are commonly used by WSD systems • Domain specific • Two knowledge sources specific to biomedical domain

  10. Example • “Body surface area adjustments of initial heparin dosing …” • Individual Adjustment: “By the fast (2.5mph) ambulation trial, both groups were performing equally, suggesting a rapid rate of adjustment to the device.” • Adjustment Action: “Clinically, these four patients had mild symptoms which improved with dietary adjustment.” • Psychological adjustment: “Predictors of patients' mental adjustment to cancer: patient characteristics and social support.”

  11. General Features (1) • Local collocations • Bigrams and trigrams containing the ambiguous word, constructed from lemmas, word forms and PoS tags • left-content-word-lemma “area adjustment” • right-function-word-lemma “adjustment of” • left-POS “NN NNS” • right-POS “NNS IN” • left-content-word-form “area adjustments” • right-function-word-form “adjustment of” • First noun, verb, adjective and adverb preceding and following the ambiguous word (lemma and word form)
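A minimal sketch of how the local collocation features might be extracted, assuming the abstract has already been tokenised, lemmatised and PoS-tagged; the Token structure and feature names are illustrative, and the trigram and first-noun/verb/adjective/adverb features are omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class Token:
    form: str   # surface word form, e.g. "adjustments"
    lemma: str  # lemmatised form, e.g. "adjustment"
    pos: str    # part-of-speech tag, e.g. "NNS"

def collocation_features(tokens, i):
    """Local collocation features for the ambiguous token at position i:
    bigrams of lemmas, word forms and PoS tags with the immediate neighbours."""
    target = tokens[i]
    feats = {}
    if i > 0:
        left = tokens[i - 1]
        feats["left_lemma"] = f"{left.lemma} {target.lemma}"
        feats["left_form"] = f"{left.form} {target.form}"
        feats["left_pos"] = f"{left.pos} {target.pos}"
    if i + 1 < len(tokens):
        right = tokens[i + 1]
        feats["right_lemma"] = f"{target.lemma} {right.lemma}"
        feats["right_form"] = f"{target.form} {right.form}"
        feats["right_pos"] = f"{target.pos} {right.pos}"
    return feats

# "... body surface area adjustments of initial heparin dosing ..."
sentence = [Token("area", "area", "NN"),
            Token("adjustments", "adjustment", "NNS"),
            Token("of", "of", "IN")]
print(collocation_features(sentence, 1))
```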

  12. General Features (2) • Syntactic dependencies • Five relations: subject, object, noun-modifier, preposition and sibling • Salient bigrams • Salient bigrams in abstract • Unigrams • Lemmas of all content words in the abstract and 8 word window around target word • Lemmas of unigrams which appear frequently in entire corpus
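The window-based unigram features can be sketched in the same style; the stop-word list and the exact handling of the 8-word window below are assumptions:

```python
def window_unigrams(lemmas, i, window=4,
                    stopwords=frozenset({"the", "of", "a", "in", "to"})):
    """Lemmas of content words in a window of +/- `window` tokens (8 words in total)
    around the ambiguous word at index i, used as binary bag-of-words features."""
    lo, hi = max(0, i - window), min(len(lemmas), i + window + 1)
    return {f"win_{lem}": 1 for j, lem in enumerate(lemmas[lo:hi], start=lo)
            if j != i and lem not in stopwords}

lemmas = "body surface area adjustment of initial heparin dosing".split()
print(window_unigrams(lemmas, lemmas.index("adjustment")))
```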

  13. Concept Unique Identifiers (CUIs) • CUIs refer to UMLS concepts • MetaMap segments text and identifies possible CUIs for each phrase • "Body surface area adjustments": C0005902 Body Surface Area [Diagnostic Procedure], C1261466 Body surface area [Organism Attribute], C0456081 Adjustments (Adjustment Action) [Health Care Activity], C0376209 Adjustments (Individual Adjustment) [Individual Behavior] • "of initial heparin dosing": C0205265 Initial (Initially) [Temporal Concept], C1555582 initial [Idea or Concept], C0019134 Heparin [Biologically Active Substance, Carbohydrate]
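A sketch of how the candidate CUIs might be turned into binary features. Running MetaMap and parsing its output is assumed to have happened already; the phrase-to-CUI mapping below simply reuses the example from this slide:

```python
def cui_features(phrase_cuis):
    """Binary features from candidate Concept Unique Identifiers (CUIs).
    `phrase_cuis` maps each phrase in the abstract to the CUIs MetaMap proposed
    for it; producing that mapping is assumed to be done separately."""
    feats = {}
    for phrase, cuis in phrase_cuis.items():
        for cui in cuis:
            feats[f"cui_{cui}"] = 1
    return feats

# Candidate CUIs for the two example phrases on this slide
example = {
    "Body surface area adjustments": ["C0005902", "C1261466", "C0456081", "C0376209"],
    "of initial heparin dosing": ["C0205265", "C1555582", "C0019134"],
}
print(cui_features(example))
```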

  14. Medical Subject Headings (MeSH) • Controlled vocabulary for indexing life science publications • Contains over 24,000 headings organised into an 11-level hierarchy • Use MeSH terms assigned to the abstract containing the ambiguous term, e.g. M01.060.116.100 “Aged”, M01.060.116.100.080 “Aged, 80 and over”, D27.505.954.502.119 “Anticoagulants”, G09.188.261.560.150 “Blood Coagulation”
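One way the MeSH hierarchy could be exploited is to expand each tree number assigned to the abstract into all of its prefixes, so that ancestor headings also become features; treating the hierarchy this way is an assumption rather than the documented setup:

```python
def mesh_features(tree_numbers):
    """Features from the MeSH headings assigned to the abstract containing the
    ambiguous term. Each tree number (e.g. M01.060.116.100) is expanded to all of
    its prefixes so that more general ancestor headings also fire as features
    (using the hierarchy this way is an assumption)."""
    feats = {}
    for tn in tree_numbers:
        parts = tn.split(".")
        for depth in range(1, len(parts) + 1):
            feats["mesh_" + ".".join(parts[:depth])] = 1
    return feats

print(mesh_features(["M01.060.116.100", "D27.505.954.502.119", "G09.188.261.560.150"]))
```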

  15. Learning Algorithms • Vector Space Model • Simple memory-based learning algorithm • Naïve Bayes • Support Vector Machine • Weka implementations
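The slides describe the Vector Space Model as a simple memory-based learner. Below is a sketch of one common formulation, under the assumption that each sense is represented by the sum of the feature vectors of its training examples and a test instance is assigned the sense with the highest cosine similarity; it is not necessarily identical to the system evaluated here:

```python
import math
from collections import Counter, defaultdict

class VectorSpaceWSD:
    """Memory-based vector space classifier: one aggregate feature vector per sense,
    cosine similarity at test time (a common formulation, assumed here)."""

    def fit(self, feature_dicts, senses):
        self.sense_vectors = defaultdict(Counter)
        for feats, sense in zip(feature_dicts, senses):
            self.sense_vectors[sense].update(feats)
        return self

    @staticmethod
    def _cosine(a, b):
        dot = sum(a[f] * b[f] for f in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def predict(self, feats):
        test = Counter(feats)
        return max(self.sense_vectors,
                   key=lambda sense: self._cosine(test, self.sense_vectors[sense]))

# Toy usage with binary feature dictionaries like those built on the previous slides
train_feats = [{"win_cell": 1, "win_blood": 1}, {"win_quality": 1, "win_healthcare": 1}]
train_senses = ["laboratory culture", "anthropological culture"]
clf = VectorSpaceWSD().fit(train_feats, train_senses)
print(clf.predict({"win_blood": 1, "win_protein": 1}))   # -> "laboratory culture"
```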

  16. NLM-WSD data set • Standard evaluation corpus for WSD in the biomedical domain (“Biomedical SemEval”) • Contains 50 highly ambiguous terms frequently found in Medline • 100 instances of each term manually disambiguated with UMLS concepts by a team of annotators • Baseline (MFS) accuracy of 78% • Average of 2.64 possible meanings per term

  17. Results • Combination of linguistic features with MeSH terms significantly better than any features used alone • VSM significantly better than other learning algorithms

  18. Subsets of the 50 NLM-WSD terms used in previous evaluations: Joshi et al. (2005), Leroy and Rindflesch (2005) and Liu et al. (2004), selected with criteria such as dominant sense < 90%, removal of terms with low inter-annotator agreement, and dominant sense < 65% • Common: cold, depression, discharge, extraction, fat, implantation, japanese, lead, mole, pathology, reduction, sex, ultrasound • adjustment, blood pressure, evaluation, immunosuppression, radiation, sensitivity • degree, growth, man, mosaic, nutrition, repair, scale, weight, white • association, condition, culture, determination, energy, failure, fit, fluid, frequency, ganglion, glucose, inhibition, pressure, resistance, secretion, single, strains, support, surgery, transient, transport, variation

  19. Automatic Example Generation • Various approaches to generating sense tagged examples without the need for manual annotation • Monosemous relatives (Leacock et al., 1998) • Translations as sense definitions (Ng et al., 2003) • All unsupervised but require external knowledge sources (e.g. WordNet or parallel text) • Alternative: a semi-supervised approach

  20. Relevance Feedback • Method for improving search results based on analysis of retrieved documents • [Diagram: the original query retrieves a set of documents; relevance judgements on those documents are used to construct a modified query]

  21. Common approach to relevance feedback for the vector space model (Rocchio, 1971): $q_m = \alpha q + \frac{\beta}{|D_q^+|} \sum_{d \in D_q^+} d - \frac{\gamma}{|D_q^-|} \sum_{d \in D_q^-} d$ where $q_m$ = modified query vector, $q$ = original query vector, $D_q^+$ = set of vectors representing known relevant documents, $D_q^-$ = set of vectors representing known irrelevant documents, $\alpha, \beta, \gamma$ = weights
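A sketch of the Rocchio update over sparse term-weight vectors; the alpha, beta, gamma values used here are common textbook defaults rather than values taken from the slides:

```python
from collections import defaultdict

def rocchio(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback:
    q_m = alpha*q + beta*mean(relevant docs) - gamma*mean(irrelevant docs).
    Vectors are dicts mapping terms to weights."""
    qm = defaultdict(float)
    for term, w in query.items():
        qm[term] += alpha * w
    for docs, coeff in ((relevant, beta), (irrelevant, -gamma)):
        if not docs:
            continue
        for doc in docs:
            for term, w in doc.items():
                qm[term] += coeff * w / len(docs)
    # Negative weights are usually dropped from the modified query
    return {t: w for t, w in qm.items() if w > 0}

q = {"culture": 1.0}
rel = [{"culture": 1.0, "cell": 2.0}, {"culture": 1.0, "blood": 1.0}]
irr = [{"culture": 1.0, "management": 3.0}]
print(rocchio(q, rel, irr))
```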

  22. Acquiring Sense Tagged Examples • Treat set of sense tagged examples as retrieved documents • Examples tagged with sense considered relevant, all other examples considered irrelevant • For each sense, identify additional query terms which tend to discriminate examples tagged with that sense from those tagged with other senses • Search for documents matching this extended query

  23. Identifying Query Terms • Compute a score for each term in the sense-tagged documents against each sense: $score(t, s) = idf(t) \left( \frac{\alpha}{|D_s^+|} \sum_{d \in D_s^+} count(t, d) - \frac{\beta}{|D_s^-|} \sum_{d \in D_s^-} count(t, d) \right)$ where count(t, d) = frequency of term t in document d, $D_s^+$ = set of examples of the target sense, $D_s^-$ = set of examples of other senses, $\alpha, \beta$ = weights, idf(t) = inverse document frequency of t
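A sketch of the term-scoring step, following the variable definitions on the slide; the normalisation by the size of each example set is an assumption:

```python
import math
from collections import Counter

def term_scores(tagged_examples, target_sense, alpha=1.0, beta=1.0):
    """score(t, s): contrast the frequency of each term in examples tagged with the
    target sense against its frequency in examples of other senses, scaled by idf.
    `tagged_examples` is a list of (token list, sense) pairs."""
    pos = [doc for doc, sense in tagged_examples if sense == target_sense]
    neg = [doc for doc, sense in tagged_examples if sense != target_sense]
    n_docs = len(tagged_examples)
    df = Counter(t for doc, _ in tagged_examples for t in set(doc))
    scores = {}
    for t in df:
        idf = math.log(n_docs / df[t])
        pos_freq = sum(doc.count(t) for doc in pos) / max(len(pos), 1)
        neg_freq = sum(doc.count(t) for doc in neg) / max(len(neg), 1)
        scores[t] = idf * (alpha * pos_freq - beta * neg_freq)
    return scores

examples = [("blood cell culture protein".split(), "laboratory culture"),
            ("quality management culture healthcare".split(), "anthropological culture")]
print(sorted(term_scores(examples, "laboratory culture").items(),
             key=lambda x: -x[1])[:3])
```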

  24. Terms for two senses of “culture”

  25. Example Collection • Identify examples by querying Medline via an online interface • Preserve the bias in the original sense distribution • For example, if 75% of usages are ‘laboratory culture’ and 25% ‘anthropological culture’ then ensure the same 75:25 split in the retrieved examples • Use the eight highest-scoring terms (score(t,s)) for each sense • Relax queries until enough examples can be retrieved (see the sketch below): culture AND (suggest AND protein AND presence) → culture AND ((suggest AND protein) OR (suggest AND presence) OR (protein AND presence)) → culture AND (suggest OR protein OR presence)
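A sketch that reproduces the query-relaxation scheme shown above: first require all of the extra terms, then any pair, then any single term. The boolean syntax here is plain text; the actual Medline interface syntax may differ:

```python
from itertools import combinations

def relaxed_queries(term, query_terms):
    """Progressively relaxed boolean queries for retrieving extra examples of one
    sense of `term`, built from its highest-scoring query terms."""
    all_and = " AND ".join(query_terms)
    pairs = " OR ".join(f"({a} AND {b})" for a, b in combinations(query_terms, 2))
    any_or = " OR ".join(query_terms)
    return [f"{term} AND ({all_and})",
            f"{term} AND ({pairs})",
            f"{term} AND ({any_or})"]

for q in relaxed_queries("culture", ["suggest", "protein", "presence"]):
    print(q)
```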

  26. Experiments • 10-fold cross validation • Training portion (90 examples) analysed to generate additional examples • Generated four sets of additional examples for each term: 90, 180, 270 and 360 examples • Combine automatically generated examples with the training portion (+90, +180, +270, +360) • Automatically generated examples alone (90, 180, 270, 360)
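A sketch of the evaluation protocol, with placeholder callables standing in for the relevance-feedback example generator and the WSD classifier; their interfaces are assumptions, not the actual experimental code:

```python
import random

def cross_validate(instances, generate_examples, train_and_test,
                   n_folds=10, n_extra=90, seed=0):
    """10-fold cross-validation in which each training portion (90 of the 100
    manually tagged instances of a term) is augmented with n_extra automatically
    generated examples before training."""
    rng = random.Random(seed)
    shuffled = list(instances)
    rng.shuffle(shuffled)
    folds = [shuffled[i::n_folds] for i in range(n_folds)]
    scores = []
    for i, test in enumerate(folds):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        train = train + generate_examples(train, n_extra)  # the +90/+180/... conditions
        scores.append(train_and_test(train, test))
    return sum(scores) / len(scores)

# Dummy stand-ins just to show the call shape
accuracy = cross_validate(list(range(100)),
                          generate_examples=lambda train, k: train[:k],
                          train_and_test=lambda train, test: 0.8)
print(accuracy)
```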

  27. Performance

  28. Individual Terms

  29. Conclusion • Ambiguity is a real problem in the biomedical domain • Domain-specific knowledge improves WSD performance • Relevance feedback can be used to acquire additional training examples and further improve performance

  30. More Information • http://nlp.shef.ac.uk/BioWSD/ • This work has been funded by the EPSRC grants BioWSD and CASTLE
