3D Indoor Positioning System
Design Review Presentation
SD May 11-17
Faculty Advisor: Dr. Daji Qiao
Members: Nicholas Allendorf – CprE, Christopher Daly – CprE, Daniel Guilliams – CprE, Andrew Joseph – EE, Adam Schuster – CprE
Client: Dr. Stephen Gilbert, Virtual Reality Application Center
Problem Statement
• Currently, there is no inexpensive system that can accurately localize and track fingertips
• Unlike GPS, we are concerned with small-scale, centimeter-level accuracy
• Such a system could serve as an input device/controller for a computer system
• It could be used for virtual reality systems, touch tables, or a “Minority Report”-style user interface
Project Goal
• Create a system capable of accurately tracking fingertips in three dimensions
• Incorporate the ability to support many users simultaneously
• Design the system so that it is easily reproducible
Functional Requirements
• The system shall provide a 3D position for all tracked fingertips within a 2 m x 2 m x 2 m indoor region with 1 cm accuracy
• The system shall update positions 15 times per second (15 Hz)
• The system shall be capable of tracking as many as 60 object positions simultaneously
• Positions shall be displayed in a graphical interface so they may be viewed in real time
Non-Functional Requirements
• The device worn by the user shall be small, lightweight, and durable
• The device shall go three weeks without requiring any recharging
• The surrounding tracking infrastructure shall be easy to set up
• The system shall be reproducible with consistent quality, so the tracking system may be moved to different locations when required
Constraints
• Small device size limits the choice of technology
• The need for long battery life forces much of the work onto the infrastructure
• The system should be as non-intrusive as possible
• Some part of the device must be uniquely identifiable to software
Risks & Mitigations
• Requirements are challenging
  • Design the system with “room to give,” allowing for quick changes
• Lack of relevant experience
  • Perform research on potential technologies
  • Make informed, reasonable decisions based on market and technology research
  • Utilize VRAC faculty and graduate students; many have experience with OpenCV and tracking
• Potential to fall behind schedule
  • Addressed through hard and intelligent work
Market Survey
• Several available systems perform gesture/pose recognition, but not localization
• Several others provide localization, but are not wireless, do not track finger movement, require holding a device, etc.
  • e.g., PlayStation Move, Nintendo Wii
• Our project is unique in that it will be wireless, with accurate absolute fingertip localization and no handheld device
Considered Technologies
Range and Accuracy of Current Positioning Technologies
Source: IPIN website, http://www.geometh.ethz.ch/ipin/index/IPIN_Opening_Session.pdf
Choice of Technology
• Optical/infrared tracking
  • Most practical solution
  • Accuracy is a function of camera resolution
  • No need to develop custom hardware
• Existing IR tracking systems are highly accurate, but very expensive ($5,000+)
• Our system aims to implement accurate tracking at low cost, ideally less than $1,000
Major Functional Modules
• Glove
  • Contains IR LEDs/markers on the fingertips, which are tracked by the cameras/infrastructure
• Infrastructure
  • Provides mounting points for the cameras
• Cameras
  • Mounted in stereo pairs around the periphery of the infrastructure
  • Detect IR LEDs and pass images to the server for processing
• Server/Computer
  • Performs image processing, calculates positions, and runs the GUI
System Block Diagram
Hardware Platforms
• Cameras: Logitech QuickCam Pro 9000
  • Varying resolution, as high as 1280 x 720 @ 15 fps
  • 75-degree field of view
  • USB 2.0 connection
• Computer: Dell XPS
  • Dual-core Intel processor @ 2.93 GHz
  • 4 GB RAM
• Gloves/LEDs: 950 nm low-profile infrared LEDs
  • Wide viewing angle
• Batteries: 3 V lithium coin cell or 2 x AAA
• Infrastructure: 80/20 aluminum framing
Software Platforms
• Operating system: Windows 7
  • Best driver availability for the cameras
  • Pre-built, standalone binaries for OpenCV available
• Image processing/stereo calibration: OpenCV
  • Open-source computer vision library in widespread use
  • Hundreds of easy-to-use, highly optimized algorithms
  • Bindings for C, C++, and Python
• Graphical user interface: VTK & wxWidgets
  • VTK supports 3D scenes, allowing us to plot positions in 3D space
  • wxWidgets handles GUI events and uses native look-and-feel schemes
• Interface capability with AQUA-G
  • To handle gesture recognition
Cost Breakdown
System Detail: Gloves
• Responsible for taking the user's individual finger movements as input to the system
• IR LEDs/markers will be captured by the cameras
• Pattern of LEDs/tape on the tip of each finger
• Battery pack and wiring connect the LEDs to the battery and power switch
• The battery pack will be placed on the back side of the glove
System Detail: Infrastructure
• Enables communication between the many cameras and the server
• Provides a stable platform on which to mount the cameras
• Allows for flexibility in camera mounting position and orientation
System Detail: IR Cameras
• Cameras will be mounted in pairs around the infrastructure
• Each pair of cameras will be able to resolve the 3D location of any point that both cameras can see
• Photo negatives are used as filters to remove the visible light spectrum
  • This reduces the amount of image processing required and makes identification of LEDs and markers easier
• Cameras are mounted so that each pair's fields of view intersect over as large an area as possible
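As a sanity check on the stereo-pair approach, depth for a rectified pair follows Z = f·B/d. The sketch below is not from our implementation; the focal length, baseline, and disparity values are illustrative assumptions, not measurements from our rig.

```python
# Minimal sketch of stereo depth recovery for one rectified camera pair.
# Z = f * B / d, where f is the focal length in pixels, B the baseline
# between the two cameras, and d the horizontal disparity of the LED
# between the two images. All numbers below are illustrative only.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point seen by a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def disparity_for_accuracy(focal_px, baseline_m, depth_m, depth_err_m):
    """Approximate disparity change (pixels) corresponding to a depth
    error, useful for judging whether 1 cm accuracy is reachable:
    |dd| ~= f * B * dZ / Z^2."""
    return focal_px * baseline_m * depth_err_m / depth_m ** 2
```

For example, with an assumed 800 px focal length and 0.2 m baseline, 1 cm of depth error at 2 m corresponds to about 0.4 px of disparity, which indicates how precisely LED centers must be located.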
Camera Film Filter Response to Light
Image Processing/Tracking
• Images are processed and analyzed using OpenCV, and fingertip LEDs and markers are identified
• Currently, the most difficult part of the project is discriminating between fingertips
  • We address this by placing unique markings on each fingertip
• Once identified, locations will be remembered so they can be tracked
  • Temporal persistence will speed up identification in subsequent frames
• OpenCV incorporates tracking functionality
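The LED-finding step can be sketched without OpenCV as a simple threshold plus connected-component pass; the real pipeline would use OpenCV's optimized routines, and the threshold value here is a placeholder, not a tuned parameter.

```python
import numpy as np

# Sketch of fingertip-LED detection: threshold the (IR-filtered) grayscale
# frame, group bright pixels into 4-connected blobs, and return each blob's
# centroid. OpenCV's optimized functions would replace this in practice;
# the threshold of 200 is a placeholder.

def led_centroids(gray, thresh=200):
    """Return (row, col) centroids of bright blobs in a 2D uint8 image."""
    mask = gray >= thresh
    labels = np.zeros(mask.shape, dtype=int)
    n_blobs = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                n_blobs += 1
                stack = [(r, c)]          # flood-fill this blob
                labels[r, c] = n_blobs
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n_blobs
                            stack.append((ny, nx))
    return [tuple(np.mean(np.nonzero(labels == b), axis=1))
            for b in range(1, n_blobs + 1)]
```

The sub-pixel centroid is what feeds the stereo disparity calculation, which is why blob centroids are used rather than single brightest pixels.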
Camera Calibration
• Camera lenses introduce distortion, particularly around the edges of the image
  • This causes inaccuracies in localization
  • Distortion must be removed so that localization remains accurate
• Accurate localization also requires each stereo pair of cameras to have parallel viewing rays
  • Exact alignment is extremely difficult, if not impossible
  • OpenCV has a calibration routine that introduces error correction to compensate for this
• Once calibrated, cameras do not need recalibration unless they are moved relative to each other or to the infrastructure
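What calibration actually estimates can be illustrated with the standard radial (Brown) distortion model; the coefficients k1 and k2 below are made up for illustration, while OpenCV fits the real values from calibration images.

```python
# Sketch of the radial lens-distortion model that calibration estimates
# and undistortion inverts. k1 and k2 are illustrative coefficients, not
# values fitted from our cameras.

def distort(x, y, k1=-0.2, k2=0.05):
    """Apply radial distortion to normalized image coordinates."""
    r2 = x * x + y * y
    s = 1 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def undistort(xd, yd, k1=-0.2, k2=0.05, iters=25):
    """Invert the model by fixed-point iteration, roughly as
    undistortion routines do internally."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        s = 1 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / s, yd / s
    return xu, yu
```

Because the distortion factor grows with r², points near the image edges move the most, which matches the observation above that edge regions are the least accurate before correction.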
Localization Process
• Synchronize camera image captures
• Remove distortion from the images
• Find fingertip LEDs and markers
• Match corresponding LEDs/markers between the images of each stereo camera pair
• Resolve the 3D location of each fingertip relative to the camera pair
• Transform each fingertip location from camera space to infrastructure space
• Corroborate the 3D location of each LED between camera pairs to give a final position estimate
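The matching and 3D-resolution steps above amount to triangulation; a minimal linear (DLT) version looks like the sketch below, with toy projection matrices standing in for the calibrated ones.

```python
import numpy as np

# Sketch of resolving a 3D point from one stereo pair via linear (DLT)
# triangulation. P1 and P2 are 3x4 projection matrices that would come
# from camera calibration; any matrices used with this sketch are toy
# examples, not our calibrated values.

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel coordinates in two views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector for the smallest
    # singular value of A
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With noisy pixel measurements the SVD gives a least-squares answer rather than an exact one, which is why the final step corroborates estimates across multiple camera pairs.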
Graphical User Interface
• Simple 3D position viewer
• Allows selection of a tracked device to give its ID and position
• Used during testing to show how the system is working
• Moving forward, the user interface will be provided through other applications using the AQUA-G system for gesture processing
Testing Plan
• Testing must be thorough in order to discover flaws and determine system capabilities and limitations
• Individual components will be tested, as well as the system as a whole
• Component testing
  • Infrastructure stability, software stability, glove battery life, glove durability, camera synchronous capture, marker recognition
• System integration tests
  • Accuracy, frame rate, position update rate, occlusion, light interference, rapid movement
Intellectual Challenges
• While OpenCV solves many of our problems, there are still several difficult issues to overcome
• Marker/fingertip recognition
  • Currently our biggest problem
  • Unsure of the exact solution
• Software efficiency
  • A large volume of data must be processed very efficiently
  • As many as 120 total images per second
• Location corroboration
  • Getting different camera pairs to agree on the location of a given fingertip
  • Which camera pair has the best view / is most accurate?
Preliminary Results
• Two cameras have been ordered and recently delivered
• IR filters have been made; they block visible light well but negatively affect image quality
  • New filters will be created for the new cameras
• A work area has been set up for us in the Haptics lab at the VRAC, and some development has begun
• The system computer has been set up with the required software
• Rudimentary tracking (2D, no localization) has been implemented
• LEDs have arrived, and testing with them has begun as well
Preliminary Results
IR LED with no filter on camera
IR LED with filter on camera
Team Member Duties
• Daniel Guilliams – CprE
  • Team leader, client communication, image processing, marker tracking
• Andrew Joseph – EE
  • Weekly status reports, glove implementation, IR marker identification
• Nicholas Allendorf – CprE
  • 3D localization and position corroboration, GUI integration
• Chris Daly – CprE
  • Cameras, IR filters, synchronous image capture
• Adam Schuster – CprE
  • Infrastructure, camera calibration
Project Schedule
Next Semester Plan
• Iterative prototype development
  • Start by tracking a single fingertip in three dimensions with 1 cm accuracy and 15 updates per second, using one pair of cameras
  • Add another pair of cameras, and track a single fingertip with both sets
  • Track multiple fingertips
  • Track all fingertips on one hand
  • Track all fingertips on two hands
• Gradually build up the system while finding out what works and what doesn't
  • As a result, our final design may change from its current form
Questions?
Thanks for your time!