Learning to Rank using High-Order Information
Puneet K. Dokania¹, A. Behl², C. V. Jawahar², M. Pawan Kumar¹
¹ Ecole Centrale Paris and INRIA Saclay, France; ² IIIT Hyderabad, India

Aim
• Optimize Average Precision (AP), a ranking measure
• Incorporate high-order information

Motivations and Challenges
• High-order information: for example, persons in the same image are likely to perform the same action. Does context help ranking?
• AP optimization: AP is the most commonly used evaluation metric; the AP loss depends on the ranking of the samples; optimizing the 0-1 loss may lead to suboptimal AP (two rankings can share the same accuracy while one has AP = 1 and the other AP = 0.55); the AP loss does not decompose over samples (an AP computation is sketched below).
• Parameter learning: no existing method handles high-order information and ranking together.

Notation
• Samples and labels; set of positive samples P; set of negative samples N; ranking matrix R.

SVM
• No high-order information
• Ranking: sort sample scores
• Optimization: convex (standard SVM training)

AP-SVM
• Optimizes AP (a measure of ranking)
• Key idea: uses a structural SVM (SSVM) to encode the ranking through a joint score (see the joint-score sketch below)
• Ranking: sort sample scores
• Optimization: convex; cutting plane with a greedy most-violated-constraint search, O(|P||N|)

HOB-SVM
• Incorporates high-order information; optimizes a decomposable loss
• Encodes high-order information through a joint feature map over connected bounding boxes
• Action inside the bounding box? Use max-marginals to obtain a single score per box
• Ranking: sort the differences of max-marginal scores; max-marginals capture high-order information (see the max-marginal sketch below)
• Optimization: convex; dynamic graph cuts for fast computation of max-marginals

HOAP-SVM
• Optimizes an AP-based loss and incorporates high-order information
• Encodes both ranking and high-order information (AP-SVM + HOB-SVM): sample scores are defined as in HOB-SVM (differences of max-marginals), and the joint score is defined as in AP-SVM
• Ranking: sort sample scores
• Optimization: non-convex; rewritten as a difference of convex functions and solved with CCCP (see the CCCP sketch below); dynamic graph cuts give a fast upper bound

Results
• Task: action classification. Given an image and a bounding box in the image, predict the action being performed in the bounding box.
• Dataset: PASCAL VOC 2011, 10 action classes, 4846 images (2424 'trainval' + 2422 'test').
• Features: POSELET + GIST.
• High-order information: "persons in the same image are likely to perform the same action"; bounding boxes belonging to the same image are connected.
• Paired t-test: HOB-SVM is better than SVM on 6 action classes but not better than AP-SVM; HOAP-SVM is better than SVM on 6 action classes and better than AP-SVM on 4 action classes.

Conclusions
• Optimizing AP while incorporating high-order information (HOAP-SVM) improves over both SVM and AP-SVM on action classification; adding high-order information with a decomposable loss (HOB-SVM) improves over SVM but not over AP-SVM.

Code and Data: http://cvn.ecp.fr/projects/ranking-highorder/
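The motivation section contrasts two rankings with the same accuracy but very different AP (1 vs. 0.55). The minimal sketch below (hypothetical label orderings, not the poster's exact example) shows how AP is computed from a ranked list of binary labels and why it depends on where the positives sit in the ranking.

```python
import numpy as np

def average_precision(ranked_labels):
    """AP of a ranked list of binary relevance labels (1 = positive, 0 = negative)."""
    labels = np.asarray(ranked_labels, dtype=float)
    num_pos = labels.sum()
    if num_pos == 0:
        return 0.0
    # Precision at every rank, averaged over the ranks where a positive appears.
    precision_at_k = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return float((precision_at_k * labels).sum() / num_pos)

# All positives ranked first: perfect AP.
print(average_precision([1, 1, 1, 0, 0, 0]))   # 1.0
# Same label set, positives pushed down the ranking: much lower AP.
print(average_precision([0, 1, 0, 1, 0, 1]))   # 0.5
```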
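AP-SVM's key idea is an SSVM whose output is a ranking scored by a joint feature map over positive-negative pairs, with prediction reducing to sorting individual sample scores. The sketch below assumes the standard ranking joint feature map from the SSVM ranking literature, Ψ(X, R) = (1/|P||N|) Σ_{i∈P, j∈N} R_ij (φ(x_i) − φ(x_j)) (my notation, not copied from the poster), and checks by brute force that sorting the sample scores maximizes the joint score.

```python
import numpy as np
from itertools import permutations

def ranking_matrix(order, P, N):
    """R[i, j] = +1 if positive i is ranked above negative j, else -1."""
    rank = {s: r for r, s in enumerate(order)}
    return np.array([[1.0 if rank[i] < rank[j] else -1.0 for j in N] for i in P])

def joint_score(w, feats, R, P, N):
    """w . Psi(X, R) with Psi(X, R) = (1/|P||N|) sum_ij R_ij (phi_i - phi_j)."""
    diffs = np.array([[w @ (feats[i] - feats[j]) for j in N] for i in P])
    return float((R * diffs).sum()) / (len(P) * len(N))

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 3))      # hypothetical features for 5 samples
w = rng.normal(size=3)               # hypothetical learned weights
P, N = [0, 1], [2, 3, 4]             # indices of positive / negative samples

# Prediction: sort samples by their individual scores w . phi(x).
predicted = list(np.argsort(-(feats @ w)))

# Brute-force check: no ordering achieves a higher joint score.
best = max(permutations(range(5)),
           key=lambda o: joint_score(w, feats, ranking_matrix(o, P, N), P, N))
assert joint_score(w, feats, ranking_matrix(predicted, P, N), P, N) >= \
       joint_score(w, feats, ranking_matrix(best, P, N), P, N) - 1e-9
```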
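HOB-SVM scores each bounding box by a difference of max-marginals of a joint model connecting boxes from the same image, and ranks boxes by sorting these differences; the poster notes that the max-marginals are computed with dynamic graph cuts. The sketch below is only an illustration of that idea: it uses hypothetical unary scores, a simple same-label pairwise reward, and brute-force enumeration in place of graph cuts.

```python
import itertools
import numpy as np

def joint_score(unary, pairwise, labels):
    """Joint score of one image: unary terms plus a reward for each pair of
    boxes (all boxes in an image are connected) that takes the same label."""
    score = sum(unary[i, labels[i]] for i in range(len(labels)))
    for i, j in itertools.combinations(range(len(labels)), 2):
        if labels[i] == labels[j]:
            score += pairwise
    return score

def ranking_scores(unary, pairwise):
    """Per-box score mm(i, action) - mm(i, background), where mm(i, c) is the
    best joint score with box i clamped to label c (brute force here; the
    paper uses dynamic graph cuts for this step)."""
    n = len(unary)
    scores = np.empty(n)
    for i in range(n):
        mm = {0: -np.inf, 1: -np.inf}
        for labels in itertools.product((0, 1), repeat=n):
            mm[labels[i]] = max(mm[labels[i]], joint_score(unary, pairwise, labels))
        scores[i] = mm[1] - mm[0]
    return scores

# Hypothetical unary scores for 3 boxes in one image: columns = (background, action).
unary = np.array([[0.2, 0.9], [0.5, 0.4], [0.8, 0.1]])
scores = ranking_scores(unary, pairwise=0.3)
print(np.argsort(-scores))  # ranking: boxes sorted by difference of max-marginals
```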
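HOAP-SVM's learning objective is non-convex; the poster rewrites it as a difference of convex functions and optimizes with CCCP. The toy sketch below shows only the shape of a CCCP iteration on a made-up one-dimensional DC objective (not the HOAP-SVM objective): linearize the subtracted convex part at the current iterate and minimize the resulting convex surrogate.

```python
import numpy as np

# Toy DC objective f(w) = u(w) - v(w) with u, v convex.
u = lambda w: 0.5 * (w - 3.0) ** 2   # convex part kept exactly
v = lambda w: abs(w)                 # convex part that is subtracted

w = 0.1
for _ in range(50):
    g = np.sign(w)                   # (sub)gradient of v at the current iterate
    # Minimize the convex surrogate u(w) - g * w. For this toy u the minimizer
    # is w = 3 + g in closed form; in HOAP-SVM this step would be a structured
    # learning problem, with dynamic graph cuts giving a fast upper bound.
    w_new = 3.0 + g
    if abs(w_new - w) < 1e-9:
        break                        # CCCP iterates never increase f
    w = w_new

print(w, u(w) - v(w))                # 4.0, -3.5 for this toy objective
```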