Verification of specifications and aptitude for short-range applications of the Kinect v2 depth sensor Cecilia Chen, Cornell University Lewis’ Educational and Research Collaborative Internship Project (LERCIP) NASA Glenn Research Center, Graphics & Visualization/Dr. Herb Schilling 7/22/2014
Purpose of this study • Validate the published specifications for the Microsoft Kinect v2 depth sensor • Resolution and x-y position accuracy • Depth-sensing accuracy • Near-range sensing limit • Determine the sensor’s potential for use in short-range applications • Feasibility of repurposing the Kinect for functions requiring an operating distance of approximately 0.5 m from the sensor
Overview of topics discussed • Introduction • Microsoft Kinect v2 • Infrared and Depth streams • Time-of-flight • Preliminary Calculations • Is the depth camera’s error range acceptable? • Possibility of errors in position due to low resolution • Possibility of errors in angle due to noise
Overview of topics discussed • Calibration and Experimental Verification • Software preparation • Calibration • Experimental verification • Conclusions • Findings • Sponsors
Microsoft Kinect v2 • Primarily used for gaming and natural user interface • Color, infrared, and depth streams • 512 × 424 depth resolution • 0.5–4.5 m depth sensing range • 30 frames per second
Time-of-flight • TOF is a form of LIDAR • Emitter sends pulses of infrared light • Detector senses returning light • Software calculates the distance between a point and the sensor based on round-trip time and speed of light
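The distance calculation itself is simple arithmetic on the round-trip time. A minimal sketch of that arithmetic (illustrative only; the Kinect performs this internally in hardware and the SDK):

```csharp
using System;

class TofDistance
{
    const double SpeedOfLight = 299792458.0; // meters per second

    // One-way distance: the pulse covers the sensor-to-surface gap twice.
    static double DistanceFromRoundTrip(double roundTripSeconds)
    {
        return SpeedOfLight * roundTripSeconds / 2.0;
    }

    static void Main()
    {
        // Example: a surface 0.5 m away returns the pulse after roughly 3.3 nanoseconds.
        double roundTripSeconds = 2.0 * 0.5 / SpeedOfLight;
        Console.WriteLine($"{DistanceFromRoundTrip(roundTripSeconds):F3} m");
    }
}
```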
Is the depth camera’s error range acceptable? • Microsoft claims depth measurements are accurate to within 1 mm • Guidelines for short-range use of the depth sensor • Defined by an operating distance (between the Kinect and object) of approximately 0.5 m • Near-range of the Kinect v2 supposedly starts at 0.5 m • Given a surface oriented perpendicular to the depth axis: • Position – be able to locate a point on the surface to within 2 mm • Angle – be able to measure inclinations on the surface to within 10° • Both to be confirmed through calculations using an upper error bound
Possibility of errors in position due to low resolution • 512 × 424 depth resolution • Field of view: • 70° horizontal • 60° vertical [Axis diagram: x and y in the image plane, z along the depth axis]
Possibility of errors in position due to low resolution Assuming a distance of 0.7 m between the surface and Kinect: [Figure: horizontal slice of the FOV, 35° half-angle on each side of the 0.7 m depth axis] Horizontal extent: x = 2 × 0.7 m × tan(35°) ≈ 0.98 m
Possibility of errors in position due to low resolution Assuming a distance of 0.7 m between the surface and Kinect: [Figure: vertical slice of the FOV, 30° half-angle on each side of the 0.7 m depth axis] Vertical extent: y = 2 × 0.7 m × tan(30°) ≈ 0.81 m
Possibility of errors in position due to low resolution Assuming a distance of 0.7 m between the surface and Kinect: Pixel density: (512 × 424 pixels) / (0.98 m × 0.81 m) ≈ 274,000 pixels per square meter
Possibility of errors in position due to low resolution Given a circular section of the surface 50 mm in diameter: This works out to a ratio of 3.65 square millimeters (mm²) per pixel, so the 50 mm circle (≈ 1,960 mm²) is covered by roughly 540 pixels, and the pixel pitch of about 1.9 mm is within the 2 mm position requirement.
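The coverage figures above can be reproduced with a few lines of arithmetic. This sketch only restates the published 70° × 60° field of view, the 512 × 424 resolution, and the 0.7 m working distance used in the slides:

```csharp
using System;

class DepthPixelDensity
{
    static void Main()
    {
        double distance = 0.7;                                // m, surface-to-sensor distance
        double halfHorizontalFov = 35.0 * Math.PI / 180.0;    // half of 70°
        double halfVerticalFov = 30.0 * Math.PI / 180.0;      // half of 60°

        // Width and height of the area imaged at 0.7 m.
        double width = 2.0 * distance * Math.Tan(halfHorizontalFov);   // ≈ 0.98 m
        double height = 2.0 * distance * Math.Tan(halfVerticalFov);    // ≈ 0.81 m

        // Area covered per pixel (mm²) and the corresponding pixel pitch (mm).
        double mm2PerPixel = width * height * 1e6 / (512.0 * 424.0);   // ≈ 3.65 mm²
        double pixelPitchMm = Math.Sqrt(mm2PerPixel);                  // ≈ 1.9 mm

        Console.WriteLine($"width = {width:F2} m, height = {height:F2} m");
        Console.WriteLine($"{mm2PerPixel:F2} mm²/pixel, pitch ≈ {pixelPitchMm:F1} mm");
    }
}
```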
Possibility of errors in angle due to noise For simplicity, take a plane defined by two points 50 mm apart, representing the same circular surface: [Figure: 50 mm diameter circle with axis orientation: x and y in the image plane, z along the depth axis]
Possibility of errors in angle due to noise [Figure: the Kinect viewing the 50 mm baseline along the z axis; a depth error Δz at one endpoint tilts the line by angle θ]
Possibility of errors in angle due to noise How large can Δz get before θ falls outside the allowed angle range? Let θ = 10°: tan(θ) = Δz / 50 mm, so Δz = 50 mm × tan(10°) ≈ 8.8 mm. The claimed 1 mm depth accuracy sits well within this bound; a 1 mm error over 50 mm corresponds to only about 1.1° of tilt.
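The same bound can be checked numerically; the 50 mm baseline, 10° limit, and 1 mm noise figure below come straight from the preceding slides:

```csharp
using System;

class AngleErrorBound
{
    static void Main()
    {
        double baselineMm = 50.0;   // separation of the two sample points
        double maxAngleDeg = 10.0;  // allowed angular error

        // Largest depth difference that still stays within a 10° tilt across 50 mm.
        double maxDeltaZMm = baselineMm * Math.Tan(maxAngleDeg * Math.PI / 180.0); // ≈ 8.8 mm

        // Conversely, the tilt produced by the claimed ±1 mm depth accuracy.
        double angleFromNoiseDeg = Math.Atan(1.0 / baselineMm) * 180.0 / Math.PI;  // ≈ 1.1°

        Console.WriteLine($"Δz limit: {maxDeltaZMm:F1} mm");
        Console.WriteLine($"Tilt from 1 mm noise: {angleFromNoiseDeg:F1}°");
    }
}
```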
Software preparation • Use C# with Microsoft Visual Studio and the Kinect for Windows SDK • Modify and expand the Depth Basics code included in the SDK (Preview 1404): • Narrowing the grayscale gradient range so that nearby depths are easier to distinguish visually • Making the window recognize a hovering cursor • Writing the (x, y, z) coordinates of the cursor to the image window in real time • Averaging depth readings at a given point to smooth out jumpy data • Capturing depth arrays over multiple frames • Outputting collected data to CSV files for further analysis (a sketch of the averaging and CSV output follows below)
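A minimal sketch of the averaging and CSV-output modifications, assuming the Kinect for Windows SDK 2.0 (Microsoft.Kinect namespace). The frame count, target pixel, and output file name are illustrative and not the values used in the actual modified Depth Basics code:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.Kinect;

class DepthAverager
{
    static readonly List<ushort> samples = new List<ushort>();
    const int FramesToAverage = 30;          // assumption: ~1 s of data at 30 fps
    const int TargetX = 256, TargetY = 212;  // pixel of interest (center of the 512 × 424 frame)

    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
        reader.FrameArrived += OnFrameArrived;
        sensor.Open();
        Console.ReadLine();                  // collect frames until Enter is pressed
    }

    static void OnFrameArrived(object sender, DepthFrameArrivedEventArgs e)
    {
        using (DepthFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;

            int width = frame.FrameDescription.Width;
            ushort[] data = new ushort[width * frame.FrameDescription.Height];
            frame.CopyFrameDataToArray(data);   // depth values are in millimeters

            samples.Add(data[TargetY * width + TargetX]);
            if (samples.Count == FramesToAverage)
            {
                double averageMm = samples.Average(v => (double)v);
                File.AppendAllText("depth_log.csv", $"{TargetX},{TargetY},{averageMm:F1}\n");
                samples.Clear();
            }
        }
    }
}
```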
[Screenshots: depth view before and after the modifications]
Calibration • Step 0: Design and assemble a rig to hold the Kinect steady • Step 1: Level the sensor • Step 2: Set up a base surface for measurements at a height similar to the intended distance between the Kinect and interaction area
Calibration • Step 3: Check the base surface for irregularities in the depth readings (see the sketch below)
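One way to quantify that check is to load a captured depth frame (exported to CSV by the modified code) and report how far the base surface deviates from its mean depth. The file name and CSV layout here are assumptions:

```csharp
using System;
using System.IO;
using System.Linq;

class FlatnessCheck
{
    static void Main()
    {
        // Assumed layout: each CSV line holds one row of depth values in millimeters.
        double[] depths = File.ReadLines("base_surface_frame.csv")
                              .SelectMany(line => line.Split(','))
                              .Where(cell => cell.Trim().Length > 0)
                              .Select(double.Parse)
                              .Where(d => d > 0)   // 0 means "no reading" for that pixel
                              .ToArray();

        double mean = depths.Average();
        double maxDeviation = depths.Max(d => Math.Abs(d - mean));

        Console.WriteLine($"mean depth: {mean:F1} mm, worst deviation: {maxDeviation:F1} mm");
    }
}
```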
Experimental verification • Step 4: Measure a sloped calibration block of known height (50 mm)
Experimental verification • Step 5: Lower the base height • Step 6: Measure a sloped calibration block of known height (50 mm)
Experimental verification • Step 7: Repeat with calibration block(s) of different heights (e.g., a 64 mm block); see the sketch below
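The verification arithmetic itself is a subtraction: the measured block height is the averaged base-surface depth minus the averaged depth on top of the block, compared against the known height. The readings below are placeholder values, not recorded data:

```csharp
using System;
using System.Linq;

class BlockHeightCheck
{
    static void Main()
    {
        // Placeholder cursor readings (mm) on the base surface and on top of the block.
        double[] baseDepthsMm = { 701, 700, 699, 701, 700 };
        double[] blockDepthsMm = { 651, 650, 650, 649, 650 };

        double measuredHeightMm = baseDepthsMm.Average() - blockDepthsMm.Average();
        double nominalHeightMm = 50.0;   // known calibration block height

        Console.WriteLine($"measured: {measuredHeightMm:F1} mm");
        Console.WriteLine($"error vs. nominal: {measuredHeightMm - nominalHeightMm:F1} mm");
    }
}
```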
Findings • Successful verification of the published specs • Kinect v2 appears to be a promising depth sensor for short-range applications