Affordance Prediction via Learned Object Attributes
Tucker Hermans, James M. Rehg, Aaron Bobick
Computational Perception Lab, School of Interactive Computing, Georgia Institute of Technology
Motivation • Determine applicable actions for an object of interest • Learn this ability for previously unseen objects
Affordances • Latent actions available in the environment • Joint function of the agent and the object • Proposed by Gibson (1977)
Direct Perception Model • Affordances are perceived directly from the environment • Gibson’s original model of affordance perception
Object Models (Moore, Sun, Bobick, & Rehg, IJRR 2010) • Category Affordance Full • Category Affordance Chain
Attribute-Affordance Model: Benefits of Attributes • Attributes determine affordances • Scale to novel object categories • Give a supervisory signal not present in feature selection
Attribute-Affordance Model • Based on Lampert et al., CVPR 2009
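As a rough sketch of the two-stage structure described above (attribute classifiers feeding affordance classifiers), the Python outline below uses scikit-learn with default kernels as stand-ins; the class name and array layouts are illustrative, not the authors' implementation, and the multichannel kernels from the later slides are omitted here for brevity.

import numpy as np
from sklearn.svm import SVC, SVR

class AttributeAffordanceModel:
    # Hypothetical two-stage pipeline: visual features -> attributes -> affordances.
    def __init__(self, n_binary_attrs, n_real_attrs, n_affordances):
        self.attr_clfs = [SVC() for _ in range(n_binary_attrs)]   # discrete attributes
        self.attr_regs = [SVR() for _ in range(n_real_attrs)]     # size and weight
        self.aff_clfs = [SVC() for _ in range(n_affordances)]     # one SVM per affordance

    def fit(self, X_visual, A_binary, A_real, Y_affordance):
        for j, clf in enumerate(self.attr_clfs):
            clf.fit(X_visual, A_binary[:, j])
        for j, reg in enumerate(self.attr_regs):
            reg.fit(X_visual, A_real[:, j])
        # Affordance classifiers are trained on ground-truth attribute values.
        A_true = np.hstack([A_binary, A_real])
        for k, clf in enumerate(self.aff_clfs):
            clf.fit(A_true, Y_affordance[:, k])

    def predict(self, X_visual):
        # At test time, affordances are inferred from predicted attribute values.
        A_hat = np.column_stack(
            [c.predict(X_visual) for c in self.attr_clfs] +
            [r.predict(X_visual) for r in self.attr_regs])
        return np.column_stack([c.predict(A_hat) for c in self.aff_clfs])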
Visual Features • SIFT codewords extracted densely • Texton filter bank • LAB color histogram • …
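A hedged sketch of these feature channels using OpenCV: the grid spacing, histogram bin counts, and the Gabor-based stand-in for the texton filter bank are assumptions, since the slides do not specify the extraction details.

import cv2
import numpy as np

def dense_sift(gray, step=8):
    # SIFT descriptors computed on a regular grid (dense sampling); these would
    # later be quantized into codewords with, e.g., k-means.
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), float(step))
           for y in range(step, gray.shape[0], step)
           for x in range(step, gray.shape[1], step)]
    _, desc = sift.compute(gray, kps)
    return desc

def lab_color_histogram(bgr, bins=8):
    # Joint histogram over the L, a, b channels.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    hist = cv2.calcHist([lab], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, None).flatten()

def texton_filter_responses(gray):
    # Stand-in filter bank (Gabor filters at several orientations); responses
    # would normally be quantized into texton codewords.
    responses = []
    for theta in np.linspace(0, np.pi, 6, endpoint=False):
        kernel = cv2.getGaborKernel((15, 15), 4.0, theta, 10.0, 0.5)
        responses.append(cv2.filter2D(gray.astype(np.float32), -1, kernel))
    return np.stack(responses, axis=-1)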
Attributes • Shape: 2D-boxy, 3D-boxy, cylindrical, spherical • Color: blue, red, yellow, purple, green, orange, black, white, and gray • Material: cloth, ceramic, metal, paper, plastic, rubber, and wood • Size: height and width (cm) • Weight (kg) • Total attribute feature length: 23 elements
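The 23 elements are consistent with the counts above: 4 shape + 9 color + 7 material indicators plus height, width, and weight. A small sketch of one possible layout follows; the ordering of the entries is an assumption.

import numpy as np

SHAPES    = ["2D-boxy", "3D-boxy", "cylindrical", "spherical"]   # 4 binary
COLORS    = ["blue", "red", "yellow", "purple", "green",
             "orange", "black", "white", "gray"]                 # 9 binary
MATERIALS = ["cloth", "ceramic", "metal", "paper",
             "plastic", "rubber", "wood"]                        # 7 binary

def attribute_vector(shapes, colors, materials, height_cm, width_cm, weight_kg):
    # Binary indicators for shape, color, and material, followed by the
    # real-valued size (height, width) and weight entries: 4 + 9 + 7 + 3 = 23.
    binary = ([float(s in shapes) for s in SHAPES]
              + [float(c in colors) for c in COLORS]
              + [float(m in materials) for m in MATERIALS])
    return np.array(binary + [height_cm, width_cm, weight_kg])

# Example: a red rubber ball, 10 cm tall and wide, weighing 0.2 kg.
x = attribute_vector({"spherical"}, {"red"}, {"rubber"}, 10.0, 10.0, 0.2)
assert x.shape == (23,)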
Attribute Classifiers • Learn attribute classifiers using binary SVMs (discrete attributes) and SVM regression (real-valued attributes) • Use a multichannel χ² kernel (sketch below)
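One way such a multichannel χ² kernel could be realized with scikit-learn's precomputed-kernel SVMs is sketched below; the equal channel weighting and the γ value are assumptions, not the authors' settings.

import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

def multichannel_chi2(X_channels, Y_channels, gamma=1.0):
    # Average of per-channel exponential chi-squared kernels; the channels are
    # the histogram features (SIFT codewords, textons, LAB color, ...).
    K = sum(chi2_kernel(Xc, Yc, gamma=gamma)
            for Xc, Yc in zip(X_channels, Y_channels))
    return K / len(X_channels)

# Binary attribute (e.g. "is metal"): SVC with a precomputed kernel.
# train_channels / test_channels are lists of nonnegative histogram matrices.
def train_binary_attribute(train_channels, labels):
    K_train = multichannel_chi2(train_channels, train_channels)
    return SVC(kernel="precomputed").fit(K_train, labels)

def predict_binary_attribute(clf, test_channels, train_channels):
    K_test = multichannel_chi2(test_channels, train_channels)
    return clf.predict(K_test)

# Real-valued attributes (height, width, weight) use SVR the same way:
#   SVR(kernel="precomputed").fit(K_train, heights)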
Affordance Classifiers • Binary SVM with a multichannel Euclidean and Hamming distance kernel (sketch below) • Train on ground-truth attribute values • Infer affordances using predicted attribute values
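A minimal sketch of this affordance stage, assuming the Hamming distance on the binary attributes and the Euclidean distance on the real-valued ones are each mapped through exp(-γ·d) and summed; the combination rule and γ values are assumptions.

import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.svm import SVC

BINARY_DIMS = slice(0, 20)   # shape + color + material indicators
REAL_DIMS   = slice(20, 23)  # height, width, weight

def attribute_kernel(A, B, gamma_h=1.0, gamma_e=0.1):
    # Normalized Hamming distance on the binary block, Euclidean distance on
    # the real-valued block, each mapped through exp(-gamma * d) and summed.
    D_ham = np.abs(A[:, BINARY_DIMS][:, None, :] - B[None, :, BINARY_DIMS]).mean(axis=2)
    D_euc = euclidean_distances(A[:, REAL_DIMS], B[:, REAL_DIMS])
    return np.exp(-gamma_h * D_ham) + np.exp(-gamma_e * D_euc)

# Train on ground-truth attribute vectors, one binary SVM per affordance...
def train_affordance(A_true, y_affordance):
    return SVC(kernel="precomputed").fit(attribute_kernel(A_true, A_true), y_affordance)

# ...but predict from attribute vectors inferred by the attribute classifiers.
def predict_affordance(clf, A_pred, A_true):
    return clf.predict(attribute_kernel(A_pred, A_true))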
Experimental Data • Six object categories: balls, books, boxes, containers, shoes, and towels • Seven affordances: rollable, pushable, grippable, liftable, traversable, carryable, draggable • 375 total images
Results: Affordance Prediction • [Figures: percent of affordances correctly classified, comparing the Attribute-Affordance model against the Category Affordance Chain, Category Affordance Full, and Direct Perception models]
Results: Attribute Prediction • [Figures: prediction results for color, material, and shape attributes, and for object category]
Results: Novel Object Class • [Figures: Attribute-Affordance vs. Direct Perception with the object classes "book" and "box" held out; percent of correctly classified affordances across all novel object categories]
Future Work • Train attribute classifiers on a larger auxiliary dataset • Incorporate depth sensing • Combine attribute and object models (an attribute-category model) • Use parts as well as attributes • Model affordances of elements other than individual objects
Conclusion • The current dataset does not provide a diverse enough set of object classes for attributes to yield significant information transfer • The attribute model restricts which visual features reach the affordance classifier, unlike direct perception, which has all visual features available • The attribute model outperformed the object models • The direct perception and attribute models perform comparably with small amounts of training data