
Identifying Publication Types Using Machine Learning BioASQ Challenge Workshop






Presentation Transcript


  1. Identifying Publication Types Using Machine Learning
  BioASQ Challenge Workshop
  A. Jimeno Yepes, J.G. Mork, A. R. Aronson

  2. Publication Types
  • Define the genre of the article, e.g. Review
  • A special type of MeSH Heading used to indicate what an article is rather than what it is about
  • Citations can be indexed with more than one PT
  • There are 61 PTs identified in the four MeSH Publication Characteristics (V) Tree top-level sub-trees that the indexers typically use

  3. Publication Type: Review Example
  This review attempts to highlight ...
  PMID: 24024204 (September 12, 2013)

  4. Publication Types
  • PubMed allows for queries including publication type fields, e.g. Review[pt]
  • PTs are available in the MEDLINE citation XML and ASCII formats
  <PublicationTypeList>
    <PublicationType>Clinical Trial, Phase II</PublicationType>
    <PublicationType>Journal Article</PublicationType>
    <PublicationType>Randomized Controlled Trial</PublicationType>
    <PublicationType>Research Support, N.I.H., Extramural</PublicationType>
    <PublicationType>Research Support, Non-U.S. Gov't</PublicationType>
  </PublicationTypeList>
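As a minimal sketch of how the PublicationTypeList element from the MEDLINE XML format can be read, using only Python's standard library (the XML fragment is the one shown on the slide; the function name is ours):

```python
import xml.etree.ElementTree as ET

# XML fragment taken from the slide's example citation.
xml_fragment = """
<PublicationTypeList>
  <PublicationType>Clinical Trial, Phase II</PublicationType>
  <PublicationType>Journal Article</PublicationType>
  <PublicationType>Randomized Controlled Trial</PublicationType>
  <PublicationType>Research Support, N.I.H., Extramural</PublicationType>
  <PublicationType>Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
"""

def extract_publication_types(xml_text):
    """Return the list of PT strings from a PublicationTypeList element."""
    root = ET.fromstring(xml_text)
    return [pt.text for pt in root.findall("PublicationType")]

pts = extract_publication_types(xml_fragment)
```

In a full MEDLINE citation record the same element sits inside the Article element, so the lookup path would be longer, but the idea is identical.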

  5. Motivation
  • Indexing of citations with Publication Type (PT) as part of the Indexing Initiative at the US NLM
  • Recommend PTs as part of the MTI (Medical Text Indexer) support tool
  • MTI performed poorly on PTs in previous attempts and stopped suggesting PTs altogether on November 10, 2004

  6. MTI in a nutshell

  7. Machine learning motivation
  • MTI showed poor results in Publication Type (PT) indexing in previous work
  • Indexing of PTs can be seen as a text categorization task
  • We have treated it as a binary case: for a given PT, the citations indexed with it are considered positives and the rest negatives
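The binary framing above can be sketched in a few lines; the citation data structure here (a list of dicts with a `pts` field) is hypothetical, standing in for however the MEDLINE records are actually stored:

```python
def label_for_pt(citations, target_pt):
    """Binary labels: 1 if the citation is indexed with target_pt, else 0.

    `citations` is assumed to be a list of dicts with a 'pts' field
    holding that citation's Publication Types (hypothetical structure).
    """
    return [1 if target_pt in c["pts"] else 0 for c in citations]

citations = [
    {"pmid": "1", "pts": {"Review", "Journal Article"}},
    {"pmid": "2", "pts": {"Journal Article"}},
    {"pmid": "3", "pts": {"Review"}},
]
labels = label_for_pt(citations, "Review")
```

One such labeling is produced per PT, so ten PTs mean ten independent binary classifiers.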

  8. Data set development
  • Indexing policy changes over time, so we consider the most recent indexing
  • Selected citations with Date Completed (the date indexing was applied to the citation) ranging from January 1, 2009 to December 31, 2011

  9. Data set development
  • Data set obtained from the 2012 MEDLINE Baseline Repository (MBR) Query Tool
  • http://mbr.nlm.nih.gov
  • MBR allows us to randomly divide the list of PMIDs into Training (2/3) and Testing (1/3) sets
  • 1,784,061 randomly selected PMIDs for Training and 878,718 for Testing
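The 2/3 / 1/3 split the MBR tool performs can be mimicked with a seeded shuffle; this is a sketch of the splitting logic only, not a reimplementation of the MBR Query Tool:

```python
import random

def split_pmids(pmids, train_fraction=2 / 3, seed=0):
    """Shuffle PMIDs reproducibly and split into training and testing lists."""
    rng = random.Random(seed)
    shuffled = list(pmids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = split_pmids([str(i) for i in range(9)])
```

Fixing the seed keeps the split reproducible, which matters when the same training/testing partition must be reused across all of the classifiers being compared.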

  10. Data set development
  • Filter out articles requiring special handling: OLDMEDLINE, PubMed-not-MEDLINE, articles with no indexing, CommentOn, RetractionOf, PartialRetractionOf, UpdateIn, RepublishedIn, ErratumFor, and ReprintOf
  • Final data set: 1,321,512 articles for Training and 651,617 articles for Testing

  11. Test set statistics
  • Citations in test set: 651,617
  • Imbalance between positives and negatives

  12. Machine learning algorithms
  • MTI ML:
  • Support Vector Machine
  • Stochastic Gradient Descent based on Hinge Loss (Sgd)
  • Modified Huber Loss (Yeganova et al, 2011) (Mhl)
  • AdaBoostM1 (C4.5 as base method) (Ada)
  • Mallet:
  • Naïve Bayes (NB) and Logistic Regression (LR)
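To illustrate the Sgd variant named above, here is a toy stochastic gradient descent learner for a linear classifier under the hinge loss, in plain Python on a tiny separable data set. This is a teaching sketch, not the MTI ML implementation:

```python
import random

def sgd_hinge(X, y, epochs=20, lr=0.1, seed=0):
    """Train a linear classifier with SGD on the hinge loss.

    X: list of feature vectors; y: labels in {-1, +1}.
    When the margin y * (w.x + b) falls below 1, the hinge loss is
    active and the weights are pushed toward the example.
    """
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # hinge loss is active: take a gradient step
                w = [wj + lr * y[i] * xj for wj, xj in zip(w, X[i])]
                b += lr * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Tiny separable toy data: positives have a large first feature.
X = [[2.0, 0.1], [1.5, 0.2], [0.1, 1.8], [0.2, 2.1]]
y = [1, 1, -1, -1]
w, b = sgd_hinge(X, y)
preds = [predict(w, b, x) for x in X]
```

Swapping the hinge for the modified Huber loss changes only the per-example update rule; the SGD loop itself stays the same, which is why the two appear as sibling options in MTI ML.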

  13. Features
  • Title and abstract text (Base)
  • Base + Journal Unique Identifier, Author affiliations, Author Names, and Grant Agencies (additional features) (F)
  • Base + bigrams (B)
  • Base + additional features + bigrams (BF)
  • AdaBoostM1 was not trained with bigrams due to time constraints
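As a rough sketch of the "Base + bigrams" (B) feature set, assuming simple whitespace tokenization (the paper's exact tokenizer is not reproduced here):

```python
def base_plus_bigrams(text):
    """Return unigram plus adjacent-bigram features from title/abstract text.

    Simplified sketch of the 'Base + bigrams' (B) configuration.
    """
    tokens = text.lower().split()
    bigrams = [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]
    return tokens + bigrams

features = base_plus_bigrams("Randomized controlled trial of aspirin")
```

Bigrams such as `randomized_controlled` capture genre-revealing phrases that single tokens miss, which is consistent with the comparison results reported later.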

  14. Results (F1 measure)
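For reference, the F1 measure used throughout these results is the harmonic mean of precision and recall, computed per Publication Type from true positive, false positive, and false negative counts:

```python
def f1_measure(tp, fp, fn):
    """F1 = harmonic mean of precision and recall for one Publication Type."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only, not figures from the study.
score = f1_measure(tp=80, fp=20, fn=20)
```

Because F1 ignores true negatives, it is a sensible choice here, where negatives vastly outnumber positives for every PT.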

  15. Methods/features comparison
  • No clear winning method that works best for all of the Publication Types, echoing the findings for MeSH indexing
  • Logistic Regression provides the highest F1 measures for six of the ten PTs in our study
  • Bigrams and additional features tend to perform better than using just title and abstract tokens

  16. Naïve Bayes performance
  • Naïve Bayes is far behind all of the other methods
  • This effect, already known (Rennie et al. 2003), is more dramatic when there is an imbalance between the classes
  • It is also more dramatic with a larger set of dependent features

  17. ML performance indexing PTs
  • Case Reports, Congresses, English Abstract, Meta-Analysis, Randomized Controlled Trial, and Review all have F1 measures above 0.7, making them promising candidates for future integration into the indexing process
  • The remaining PTs (Clinical Trial, Controlled Clinical Trial, Editorial, and In Vitro) all have F1 measures too low for consideration at this time but provide the kernel for further research into improving their performance

  18. English Abstract PT
  • ML already achieves high performance (F1: 0.8359)
  • Indexing rule already in place: if an article has a title in brackets (meaning it was translated into English) and contains an abstract, it should receive the English Abstract Publication Type
  • Since this PT is already assigned automatically using this rule, ML algorithms would need such features added explicitly to compete with it
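The bracketed-title rule described above can be sketched as a simple predicate; the NLM's production rule may handle more edge cases than this, so treat it as an illustration only:

```python
def english_abstract_rule(title, abstract):
    """Rule from the slide: a bracketed title (translated into English)
    plus a non-empty abstract implies the English Abstract PT.
    """
    t = title.strip()
    bracketed = t.startswith("[") and t.rstrip(".").endswith("]")
    return bracketed and bool(abstract and abstract.strip())

hit = english_abstract_rule("[A study of aspirin].", "We report ...")
miss = english_abstract_rule("A study of aspirin.", "We report ...")
```

A deterministic rule like this explains why the ML classifiers cannot easily improve on this PT: the decisive signal is a formatting convention, not the vocabulary of the abstract.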

  19. In Vitro PT
  • In Vitro is one of the low performing terms
  • In our error analysis, we find that in almost all of the false negatives that we manually reviewed, the information for designating the article as In Vitro was located in the Methods section of the full text of the article

  20. Conclusions
  • Evaluated the automatic assignment of PTs to MEDLINE articles based on machine learning
  • For the majority (6 of 10) of PTs the performance is quite good, with F1 measures above 0.7

  21. Conclusions
  • In addition to the title and abstract text, further information from fields in the MEDLINE article results in improved performance
  • Extend current work to include most of the remaining frequently used PTs, and explore the use of openly available full text from PubMed Central to see the impact on terms like In Vitro

  22. Questions?
  • MTI ML package: http://ii.nlm.nih.gov/MTI_ML/index.shtml
  • Publication Types data set: http://ii.nlm.nih.gov/DataSets/index.shtml#2013_BioASQ
