User-Centric Design of a Vision System for Interactive Applications • Stanislaw Borkowski, Julien Letessier, François Bérard, and James L. Crowley • ICVS’06, New York, NY, USA • January 5, 2006
Academic context • PRIMA group (GRAVIR lab, INRIA) • "Perception, Recognition and Integration for Interactive Environments" • IIHM group (CLIPS lab, Univ. Grenoble) • "Engineering in Human-Computer Interactions"
Outline • Context: augmented surfaces • User-centric approach in vision systems • User-centric requirements • Implementation • SPODs, VEIL, Support services • Conclusions & Future Work
Context: augmented surfaces • Interacting with projected images... • direct manipulation • user collaboration • mobility • ... is not realistic today • limited, controlled conditions • operator requirement • software integration issues credit: F. Bérard, J. Letessier
Objectives • Propose a client-centric approach • design of perceptive input systems • two classes of clients: • end users realize an interaction task • developers create an interactive application • Application: design an input system • address simple augmented surfaces • feature vision-based, WIMP-like widgets (e.g. press-buttons) • achieve the usability of a physical input device
Approach Overview • Top-down design • Determine client requirements • consequences of HCI and SOA requirements • user-centric / developer-centric • functional / non-functional • Service-oriented • definition: a service adds value to information • an SOA is a collection of communicating services
Developer requirements • Abstraction: be relevant • make computer vision invisible • generalize the input • Isolation: allow integration • permit service distribution • support remote access to services • offer code reuse • Contract: offer quality of service • specify usage conditions • determine service latency, precision, etc.
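To make the contract requirement concrete, a service could publish its quality-of-service guarantees in a small machine-readable descriptor. The sketch below is illustrative only; the field names and values are assumptions, not part of any specification in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceContract:
    """Hypothetical quality-of-service descriptor published by a service."""
    name: str
    max_latency_ms: float  # worst-case event latency the service guarantees
    precision_px: float    # spatial precision of reported positions
    conditions: str        # usage conditions, e.g. lighting assumptions

# Example: a detector suitable for coupled interaction (latency < 50 ms).
touch_contract = ServiceContract(
    name="striplets-engine",
    max_latency_ms=50.0,
    precision_px=2.0,
    conditions="fixed projector/camera geometry, diffuse ambient light",
)
```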
End-user requirements • Typical for "real time" interaction • Latency limits • upper bound: 50 ms for coupled interaction • lower bound: 1 s for monitoring applications • Autonomy • ideally, no setup or maintenance • in practice, minimize task disruption • Reliability / predictability • either real-time or unusable • reproducible user experience
Pragmatic approach • Black-box services • BIP (Basic Interconnection Protocol) • BIP implementation ≈ SOA middleware • service-to-service and service-to-application communication • goal 1: performance • connection-oriented (TCP-based) • low latency (UDP extensions) • goal 2: easy integration • service discovery (standards-based) • implementations provided (C++, Java, Tcl) • interoperability in ≤ 100 lines of code
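The slides do not show BIP's wire format, so the following is only a sketch of what a BIP-style connection-oriented client might look like, assuming simple length-prefixed framing over TCP; the message strings, host, and port are hypothetical, not part of BIP.

```python
import socket
import struct

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def send_message(sock: socket.socket, payload: bytes) -> None:
    """Send one message framed with a 4-byte big-endian length prefix."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_message(sock: socket.socket) -> bytes:
    """Receive one length-prefixed message."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Hypothetical usage: subscribe to occlusion events from a detector service.
with socket.create_connection(("localhost", 7777)) as sock:
    send_message(sock, b"SUBSCRIBE occlusion-events")
    event = recv_message(sock)
```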
Our approach • Abstraction, Isolation: use BIP • advice to service developers • Contract: nothing enforced • recommend evaluation of HCI-centric criteria • Common ground • makes it possible to create SOA-based prototypes
Luminance-based button widget • S. Borkowski, J. Letessier, and J. L. Crowley. Spatial Control of Interactive Surfaces in an Augmented Environment. In Proceedings of EHCI’04. Springer, 2004.
Touch detection • Locate the widget in the camera image • Calculate the mean luminance over the widget • Update the widget state
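A minimal sketch of this luminance test, assuming a grayscale camera frame as a NumPy array; the threshold and hysteresis margin are illustrative values, not figures from the paper.

```python
import numpy as np

def update_button_state(frame: np.ndarray, bbox: tuple[int, int, int, int],
                        threshold: float, pressed: bool) -> bool:
    """Luminance-based touch test for one projected button.

    frame: grayscale camera image; bbox: (x, y, w, h) of the widget located
    in camera coordinates. Threshold and margin values are illustrative.
    """
    x, y, w, h = bbox
    mean_lum = frame[y:y + h, x:x + w].mean()
    # A hand over the projected button darkens it: low luminance => pressed.
    # A small hysteresis margin avoids flicker near the threshold.
    margin = 5.0
    if pressed:
        return mean_lum < threshold + margin
    return mean_lum < threshold - margin
```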
Striplet – the occlusion detector (figure: gain profile of a striplet along its x axis in the camera image)
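One plausible reading of the striplet detector, sketched below: sample the luminance profile along the striplet's segment, weight it by the gain profile from the figure, and flag occlusion when the deviation from a calibrated reference exceeds a threshold. The exact formula is not given in the slides, so this is an assumption.

```python
import numpy as np

def striplet_occluded(frame: np.ndarray, p0, p1, reference: np.ndarray,
                      gain: np.ndarray, threshold: float = 20.0) -> bool:
    """Occlusion test along one striplet (the segment p0 -> p1).

    reference: unoccluded luminance profile recorded at calibration time;
    gain: per-sample weights (same length as reference), i.e. the gain
    profile from the figure. The threshold value is an assumption.
    """
    n = len(reference)
    xs = np.linspace(p0[0], p1[0], n).round().astype(int)
    ys = np.linspace(p0[1], p1[1], n).round().astype(int)
    profile = frame[ys, xs].astype(float)  # sample luminance along the segment
    # Gain-weighted mean deviation from the calibrated profile.
    deviation = np.abs(profile - reference) * gain
    return deviation.mean() > threshold
```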
Striplet-based SPOD • SPOD – Simple-Pattern Occlusion Detector
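A SPOD could then combine the states of several striplets into a widget-level event. The pattern rule below (inner striplets occluded, outer guard striplets clear) is one plausible design for rejecting large accidental occlusions; the paper's exact logic may differ.

```python
from typing import Optional

def spod_event(striplet_states: list[bool]) -> Optional[str]:
    """Map a pattern of striplet occlusion states to a widget event.

    Assumed rule: report a press only when all inner striplets are occluded
    while the outer 'guard' striplets stay clear, rejecting occlusions that
    cover the whole widget area (e.g. a passing arm).
    """
    inner = striplet_states[1:-1]
    guards = (striplet_states[0], striplet_states[-1])
    if all(inner) and not any(guards):
        return "press"
    return None
```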
SPOD software components (architecture diagram: Client Application ↔ VEIL ↔ Striplets Engine, with supporting GUI rendering, Calibration, Camera, and GUI blocks)
VEIL – Vision Events Interpretation Layer • Inputs • widget coordinates • scale and UI-to-camera mapping matrix • striplet occlusion events • Outputs • interaction events • striplet coordinates
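The "UI to camera mapping matrix" suggests a planar homography between the rendered interface and the camera image. A minimal sketch of applying such a mapping, assuming a 3×3 matrix H obtained from calibration:

```python
import numpy as np

def ui_to_camera(points_ui: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map (N, 2) UI-space points into camera space with a 3x3 homography H."""
    pts = np.hstack([points_ui, np.ones((len(points_ui), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the projective scale
```

With this, VEIL can hand striplet coordinates in camera space to the Striplets Engine while clients keep reasoning purely in UI coordinates.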
Striplets Engine Service • Inputs • striplet UI coordinates • UI-to-camera mapping matrix • images from the camera service • Outputs • occlusion events
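Putting the pieces together, the engine's per-frame loop might look like the sketch below, which reuses ui_to_camera and striplet_occluded from the earlier sketches; the camera iterator and emit callback are stand-ins for the camera service and BIP event delivery, not names from the paper.

```python
import numpy as np

def striplets_engine(camera, striplets_ui, H, reference_profiles, gains, emit):
    """Per-frame loop of a striplets-engine-style service (illustrative).

    camera: iterable of grayscale frames (the camera service stand-in);
    striplets_ui: list of (p0, p1) segments in UI coordinates;
    emit: callback delivering occlusion events to subscribers.
    """
    # Map each striplet once from UI space into camera space.
    striplets_cam = [ui_to_camera(np.array([p0, p1]), H)
                     for p0, p1 in striplets_ui]
    occluded = [False] * len(striplets_cam)
    for frame in camera:
        for i, seg in enumerate(striplets_cam):
            now = striplet_occluded(frame, seg[0], seg[1],
                                    reference_profiles[i], gains[i])
            if now != occluded[i]:  # report state transitions only
                occluded[i] = now
                emit({"striplet": i, "occluded": now})
```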
SPOD-based calculator • Video available at: http://www-prima.inrialpes.fr
Conclusions • We have presented • a service-oriented approach • an implementation • Future work • Different detector types • More intelligent VEIL • Integration with GML