Ontology ranking
• What is ontology ranking?
  • Selecting ontologies
  • Evaluating ontologies
• When a ranking algorithm is developed:
  • Evaluation of the algorithm
• Our specific case: OntoFinder/Factory
  • Input: a set of relevant terms
  • Output: a set of ontologies, ranked according to the input terms (a minimal interface sketch follows below)
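For concreteness, here is a minimal Python sketch of that input/output contract. The Ontology class and the coverage_score and rank_ontologies helpers are hypothetical names, and exact label matching stands in for whatever matching strategy OntoFinder/Factory actually uses.

```python
from dataclasses import dataclass

@dataclass
class Ontology:
    name: str
    concepts: set[str]  # lower-cased concept labels

def coverage_score(ontology: Ontology, terms: list[str]) -> float:
    """Fraction of input terms that exactly match a concept label."""
    if not terms:
        return 0.0
    hits = sum(1 for t in terms if t.lower() in ontology.concepts)
    return hits / len(terms)

def rank_ontologies(ontologies: list[Ontology], terms: list[str]) -> list[Ontology]:
    """Return the ontologies sorted by coverage of the input terms, best first."""
    return sorted(ontologies, key=lambda o: coverage_score(o, terms), reverse=True)
```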
Park
• Relation Match Measure (RMM), defined as a combination of:
  • Concept match (exact match, partial match, synonymous match)
  • Relation label match: degree of correspondence between the relation between the search terms and the relation between the concepts matched by those terms
  • Distance: minimum path length between the matched concepts (direct match = directly connected)
  • Neighbour match: can the domain and range concepts be connected via their neighbour nodes, in addition to their original links?
• A hedged sketch of such a combined score is given below.
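A sketch of how the four RMM components might be combined. The weights, the [0, 1] normalisation, and the conversion of path length into a closeness score are illustrative assumptions, not Park's actual definition.

```python
def rmm_score(concept_match: float, relation_label_match: float,
              distance: int, neighbour_match: float,
              weights: tuple[float, float, float, float] = (0.4, 0.3, 0.2, 0.1)) -> float:
    """Weighted combination of the four RMM components.

    Component scores are assumed to lie in [0, 1]. Distance is a path
    length (>= 1 when the concepts are connected), converted here into
    a closeness score so that a direct connection scores highest.
    """
    closeness = 1.0 / distance if distance > 0 else 0.0  # 0 = unconnected
    components = (concept_match, relation_label_match, closeness, neighbour_match)
    return sum(w * c for w, c in zip(weights, components))
```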
Martinez-Romero
• Martinez-Romero et al.:
  • Expansion of input terms with WordNet and UMLS
  • Weights for each metric were recommended by experts
• Previous approaches have four main drawbacks:
  1. Not completely automatic
  2. Input is restricted to a single word
  3. Popularity is not considered, or is not correctly assessed
  4. Semantics from relations in ontologies are ignored
• Three metrics again (coverage, richness, popularity); see the sketch below
• No word-sense disambiguation (compared to AKTiveRank)
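In the same spirit, a sketch of a three-metric score. The linear combination and the example weights are assumptions for illustration; the slides only say that the weights were recommended by experts.

```python
def combined_score(coverage: float, richness: float, popularity: float,
                   weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted sum of the three normalised metrics (coverage, richness,
    popularity); the example weights stand in for expert-recommended ones."""
    w_cov, w_rich, w_pop = weights
    return w_cov * coverage + w_rich * richness + w_pop * popularity
```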
OntoFinder Challenges
• Fact: there is no “perfect algorithm” for evaluation/ranking.
• Which metrics to use, and how to weight them?
  • Metrics and weights from previous works?
  • Define new metrics / improve old ones:
    • Coverage: improve string similarity?
      • Until now: exact match, partial match, synonyms, longest only, edit distance, n-grams (?)
      • We: head-word matching and other string-similarity measures (see the sketch after this list)
    • Add different metrics for popularity?
      • Number of views on BioPortal?
      • PubMed references?
• How to evaluate the results and the algorithm?
  • Human evaluators?
  • Automatic evaluation method? (Brank et al. 2006)
  • Ontology-based annotation: select ontologies -> annotate -> compare the result with a “gold standard” (CRAFT?); but how?
  • ML approach (?)
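Two of the string-similarity candidates named above, sketched in Python: normalised edit (Levenshtein) similarity and character n-gram overlap via the Dice coefficient. The normalisation and the default n = 3 are illustrative choices.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance, computed row by row with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def edit_similarity(a: str, b: str) -> float:
    """Edit distance normalised to a similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a.lower(), b.lower()) / max(len(a), len(b))

def ngram_similarity(a: str, b: str, n: int = 3) -> float:
    """Dice coefficient over character n-grams."""
    grams = lambda s: {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}
    ga, gb = grams(a.lower()), grams(b.lower())
    return 2 * len(ga & gb) / (len(ga) + len(gb))
```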