Visualization for Human-Machine Interaction (Hiwi)

Prof. Dr.-Ing. Rainer Stiefelhagen
Institut für Anthropomatik
Forschungsbereich Maschinensehen für Mensch-Maschine Interaktion
Fakultät für Informatik, Universität Karlsruhe (TH)

Current situation: Multiple computer vision components extract information about people working in our SmartControlRoom:
- Tracking and identification
- Body model and gestures
- Head pose and focus of attention
Each component currently has its own visualization.

Goal: Build a framework that uses all of the provided information to create one integrated visual representation of the whole scene (see the sketch below).

Requirements:
- Experience with visualization, for example using OpenGL or VTK (we are open to suggestions)
- Very good C++ or Python skills under Linux

Contact: Alexander Schick, 0721-6091-348, alexander.schick@iitb.fraunhofer.de
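For illustration only, a minimal sketch of what such an integrated scene could look like using VTK's Python bindings. The component outputs shown here (a tracked person's position and a head-pose direction) are hypothetical placeholder values; in the actual framework they would come from the SmartControlRoom components listed above, and the choice of VTK is just one of the suggested options.

# Minimal sketch (assuming VTK Python bindings are installed) of rendering
# outputs from several components in one integrated 3D scene.
import vtk

renderer = vtk.vtkRenderer()
renderer.SetBackground(0.1, 0.1, 0.15)

# Hypothetical output of the tracking component: a person's position (metres).
person_position = (1.0, 0.0, 2.0)
person = vtk.vtkSphereSource()
person.SetCenter(*person_position)
person.SetRadius(0.15)
person_mapper = vtk.vtkPolyDataMapper()
person_mapper.SetInputConnection(person.GetOutputPort())
person_actor = vtk.vtkActor()
person_actor.SetMapper(person_mapper)
renderer.AddActor(person_actor)

# Hypothetical output of the head-pose component: a focus-of-attention
# direction, drawn as an arrow anchored at the person's position.
arrow = vtk.vtkArrowSource()
arrow_mapper = vtk.vtkPolyDataMapper()
arrow_mapper.SetInputConnection(arrow.GetOutputPort())
arrow_actor = vtk.vtkActor()
arrow_actor.SetMapper(arrow_mapper)
arrow_actor.SetPosition(*person_position)
arrow_actor.RotateY(45)  # placeholder head-pose yaw in degrees
renderer.AddActor(arrow_actor)

# One window showing the combined scene instead of one window per component.
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()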