Integrating Active Tangible Devices with a Synthetic Environment for Collaborative Engineering Sandy Ressler, Brian Antonishek, Qiming Wang, Afzal Godil National Institute of Standards and Technology Jared Freeland, DAS FA CIS 4930
Abstract of the Abstract • This paper describes the creation of an environment for collaborative engineering whose goal is to improve the user interface by combining haptic manipulation with a synthetic environment. • The system outlined here combines some of what Dr. Fishwick discussed on Wednesday with ideas from Scott’s presentation on “Real Reality”.
Introduction • The immediate goal: to determine the feasibility of using a tangible interface with a multiuser VRML environment, as applied to collaborative engineering. • By “tangible” we refer to the ability to pick up and interact with actual physical objects that are represented in the virtual environment.
Introduction • A secondary goal of the project was to use as much off-the-shelf software and hardware as possible, to facilitate transfer of the technology into the commercial world. • The mediation hub is written in Java • The VE uses a commercial system, the blaxxun Community Platform • The tangible devices are off-the-shelf configurable LEGO Mindstorms robots
System Overview • The overall environment is conceptually simple. • Two collaborating engineers in geographically separate areas wish to manipulate and discuss a construction project • Recent work at NIST has demonstrated that VRML can be used to represent rich construction environments, but manipulation of elements such as a virtual excavator is awkward.
System Overview • Control panels with many buttons and sliders are functional but can be difficult to manipulate. • The answer: direct haptic manipulation should be more intuitive for interaction. • Users move the tangible excavator and adjust the rotatable arm, causing the virtual “mirror” to update.
System Overview • The core of the system is the Java-based Virtual Environment Device Integration server, or JVEDI, which acts as a hub between all the system’s components. • The server runs as a stand-alone Java application on the host computer.
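The slides do not show JVEDI's internals; purely as an illustration, a hub of this kind could be a small Java socket server that relays each component's messages to all the others. Everything below (class name, port number, line-per-message protocol) is an assumption, not the paper's actual code.

import java.io.*;
import java.net.*;
import java.util.*;

// Hypothetical sketch of a JVEDI-style hub: each component (vision,
// speech, VRML browser, robot controller) connects over a socket, and
// every line a client sends is relayed to all other clients.
public class HubSketch {
    private static final List<PrintWriter> clients =
        Collections.synchronizedList(new ArrayList<>());

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(4000)) { // port is an assumption
            while (true) {
                Socket socket = server.accept();
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                clients.add(out);
                new Thread(() -> relay(socket, out)).start();
            }
        }
    }

    // Read lines from one client and forward them to everyone else.
    private static void relay(Socket socket, PrintWriter self) {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                synchronized (clients) {
                    for (PrintWriter c : clients) {
                        if (c != self) c.println(line);
                    }
                }
            }
        } catch (IOException e) {
            // client disconnected; fall through to cleanup
        } finally {
            clients.remove(self);
        }
    }
}

In this picture the vision tracker, speech input, VRML browser, and robot controller would each connect as a client, sending pose updates and commands and receiving everyone else's.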
System Overview The Real Environment • Two work surfaces (A.K.A. tables) • A LEGO Mindstorms robot on each surface • Above each surface is a video camera looking down at the surface, providing 2D position/orientation
System Overview The Virtual Environment • A multi-user blaxxun environment displays the sum of both (or all) physical environments • A simplified user interface consisting of buttons and arrows is included for collaborators without access to an actual robot
Interesting Points of the System • Unlike “graspable” interfaces, this system does not use a haptic glove or data glove of any kind. • Instead, by using a camera to track the movement of the robots, the user is given complete, unrestricted control of the robots
Interesting Points of the System • The virtual and real environments are kept synchronized. They always mirror each other. • This is accomplished by always using position values reported by the video system.
Integration Issues • The most challenging aspect in creating the work environment was integrating all the processing elements. • Functionality for controlling robots and reading and writing data from a position tracking device had to be built for VRML’s External Authoring Interface. • A fully configured version of the environment requires up to six separate computers.
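As an illustration of the EAI work just mentioned: driving a robot's Transform node from Java through the classic vrml.external EAI classes might look like the sketch below. The DEF name RED_ROBOT, the idea that the robot is a single Transform, and the mapping of camera coordinates onto the VRML ground plane are all assumptions.

import java.applet.Applet;
import vrml.external.Browser;
import vrml.external.Node;
import vrml.external.field.EventInSFVec3f;
import vrml.external.field.EventInSFRotation;

// Sketch: pushing a tracked robot pose into the VRML scene through the
// External Authoring Interface. Node and field names are assumptions;
// the paper's actual scene graph is not shown in the slides.
public class ExcavatorUpdater {
    private final EventInSFVec3f setTranslation;
    private final EventInSFRotation setRotation;

    public ExcavatorUpdater(Applet applet) {
        Browser browser = Browser.getBrowser(applet); // attach to the embedded browser
        Node robot = browser.getNode("RED_ROBOT");    // DEF name is hypothetical
        setTranslation = (EventInSFVec3f) robot.getEventIn("set_translation");
        setRotation = (EventInSFRotation) robot.getEventIn("set_rotation");
    }

    // Called whenever the vision system reports a new 2D pose.
    public void update(float x, float z, float heading) {
        setTranslation.setValue(new float[] { x, 0.0f, z });
        setRotation.setValue(new float[] { 0.0f, 1.0f, 0.0f, heading }); // rotate about Y
    }
}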
Major Components Vision Processing • The position and orientation of the LEGO robots are computed in real time using a computer vision method based on color tracking. • The vision program uses an inexpensive camera and can track multiple robots at 10 frames/sec
Major Components Vision Processing • To track the LEGO robots, two differently colored cards were attached to each robot. • The computer vision program uses a probability distribution to find the centers of the two cards; the robot’s position is the mean of those centers. • The orientation is the arctangent of the vector between the two centers
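Concretely, with card centers (x1, y1) and (x2, y2) reported by the tracker, the pose computation amounts to a mean and an arctangent; a minimal Java sketch:

// Minimal sketch of the pose computation above: position is the mean of
// the two card centers, orientation the angle of the vector between them
// (Math.atan2 gives the correct quadrant).
public class Pose {
    public static double[] fromCardCenters(double x1, double y1,
                                           double x2, double y2) {
        double cx = (x1 + x2) / 2.0;                   // position, x
        double cy = (y1 + y2) / 2.0;                   // position, y
        double heading = Math.atan2(y2 - y1, x2 - x1); // orientation in radians
        return new double[] { cx, cy, heading };
    }
}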
Major Components Speech Input • A user who is moving robots cannot easily access a keyboard. • It can also be necessary to move robots on two surfaces simultaneously. • Voice commands were built in such as “forward”, “backward”, “left”, “right”, “select red”, “select blue”, etc.
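Treating the speech recognizer as a black box that delivers one recognized phrase at a time, the command handling could be a simple dispatch; the Robot interface and the drive semantics below are assumptions, not the paper's code.

import java.util.Map;

// Sketch of voice-command dispatch. The Robot interface and the notion
// of a currently "selected" robot are assumptions.
public class VoiceCommands {
    interface Robot { void drive(int left, int right); } // -1, 0, or 1 per track

    private final Map<String, Robot> robots; // keyed "red" / "blue"
    private Robot selected;

    public VoiceCommands(Map<String, Robot> robots) { this.robots = robots; }

    public void onPhrase(String phrase) {
        if (phrase.equals("select red"))  { selected = robots.get("red");  return; }
        if (phrase.equals("select blue")) { selected = robots.get("blue"); return; }
        if (selected == null) return; // no robot selected yet
        switch (phrase) {
            case "forward":  selected.drive( 1,  1); break;
            case "backward": selected.drive(-1, -1); break;
            case "left":     selected.drive(-1,  1); break;
            case "right":    selected.drive( 1, -1); break;
            default: break; // ignore unrecognized phrases
        }
    }
}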
Major Components Multiuser VRML • The multiuser aspect of the VRML world was accomplished with commercially available software, the blaxxun Community Platform. • The world has two parts: the first is the virtual scene, which includes the two LEGO robots (red and blue), the floor, and a cylinder • The second part is the control panel
Auxiliary Processing Collision Detection • Suppose the robots on two separate work surfaces collide virtually • The collision detection is performed by the VRML world, and knowledge of it exists only in the VE. • The robots light up and beep when they hit something in the virtual world.
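The slides leave the test itself to the VRML browser; one plausible form of such a test, a 2D bounding-circle overlap in the shared virtual coordinates, would be:

// Illustration only; in the paper the VRML world itself detects the
// collision. A bounding-circle overlap over the shared 2D coordinates,
// with a single assumed radius per robot:
public class CollisionCheck {
    public static boolean colliding(double x1, double y1,
                                    double x2, double y2, double radius) {
        double dx = x2 - x1, dy = y2 - y1;
        return dx * dx + dy * dy < (2 * radius) * (2 * radius);
    }
}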
Auxiliary Processing Task Commands and Recording • Small programmatic tasks were created for the robots, such as movement patterns. • Additionally, functionality was added that allows users to record the movements of the robots, to be played back and repeated later.
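A minimal sketch of the record-and-replay idea, assuming each command can be timestamped on its way to the robot and re-sent later with the same spacing; the Robot interface is an assumption.

import java.util.ArrayList;
import java.util.List;

// Commands are stored with the time at which they were issued, then
// replayed with the same relative timing.
public class MovementRecorder {
    interface Robot { void send(String command); }

    private static final class Entry {
        final long atMillis; final String command;
        Entry(long atMillis, String command) { this.atMillis = atMillis; this.command = command; }
    }

    private final List<Entry> tape = new ArrayList<>();
    private final long start = System.currentTimeMillis();

    public void record(String command) {
        tape.add(new Entry(System.currentTimeMillis() - start, command));
    }

    public void replay(Robot robot) throws InterruptedException {
        long t0 = System.currentTimeMillis();
        for (Entry e : tape) {
            long wait = e.atMillis - (System.currentTimeMillis() - t0);
            if (wait > 0) Thread.sleep(wait); // preserve the original spacing
            robot.send(e.command);
        }
    }
}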
Links • The JVEDI code is publicly available at http://ovrt.nist.gov/jvedi • More on the robots at legomindstorms.com • More on the blaxxun Community at blaxxun.com
Discussion • How does this project relate or compare to what Scott discussed on Wednesday? • What are the advantages and disadvantages? • What can be improved, and what needs to be?