David Gitz, EE
Obstacle Avoidance on a Quadrotor UAV
Focus: Obstacle Detection
Why? • Quadrotor UAVs have fast-moving propellers; if they contact a solid object, damage will ensue. • Indoor autonomous navigation by an airborne platform is a new environment for quadrotors, one that is full of solid objects (i.e. walls, people, clutter, etc.). • A system that can find and avoid these objects prolongs the health of the UAV and maintains a safer environment for the other people in the area.
Quadrotor UAV: CAD Design and Current Build
Topics • Objectives • Constraints • Visible or Depth? • Theory & Methodology • How It Works • Operation & Demo • References
Objectives • Acquire images with the Kinect into a laptop running Ubuntu/ROS. • Determine the "mask" of the quadrotor that would be in the Kinect's FOV and adjust the acquired image accordingly, i.e. find how it changes when the propellers are spinning. • Implement an obstacle-finding algorithm and show found obstacles in a GUI window. • Transmit obstacle locations/trajectories to the Motion Controller.
Constraints • All acquisition/processing/command generation must be performed onboard the UAV. • The heavier the system is, the less time the UAV can stay airborne, if at all. • Computer boards are limited by their processing power, RAM, etc. • Communications link max speed: 115200 bps. • Cost, space, etc.
Visible or Depth? • The Kinect has an RGB camera and an IR laser array/IR camera (here, "Depth Camera"). • The visible camera is more "familiar" to people, but the visible camera alone is not capable of performing distance measurements. • With significant image processing techniques this is somewhat possible… • The Depth Camera determines the actual distance to every point, automatically (see the sketch below).
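As an illustration, here is a minimal sketch of reading the Depth Camera under ROS. It assumes a driver such as openni_launch is publishing a 32FC1 (float32, meters) image on /camera/depth/image; the topic name and encoding are assumptions and may differ on a given setup.

```python
#!/usr/bin/env python
# Minimal sketch: read per-pixel depth in meters from the Kinect under ROS.
# Assumed: a driver publishes 32FC1 (float32 meters) on /camera/depth/image.
import rospy
import numpy as np
from sensor_msgs.msg import Image

def depth_callback(msg):
    # Interpret the raw buffer as a height x width array of float32 meters.
    depth = np.frombuffer(msg.data, dtype=np.float32).reshape(msg.height, msg.width)
    # Pixels the sensor could not resolve arrive as NaN.
    valid = depth[np.isfinite(depth)]
    if valid.size > 0:
        rospy.loginfo("Closest point: %.2f m" % valid.min())

if __name__ == '__main__':
    rospy.init_node('depth_reader')
    rospy.Subscriber('/camera/depth/image', Image, depth_callback)
    rospy.spin()
```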
Visible or Depth? No obstacles here, just the environment. A small wooden rod. A larger wooden board. The same wooden board again.
How It Works Project Focus
Operation • Using an IR laser array and an IR-sensitive camera, the Kinect creates a depth_image, where each pixel's value is the distance the Kinect calculates to that point, in meters. • A Linux computer running ROS Fuerte with a Python algorithm calculates the mask_image and the sector_image. • The relevant values from the sector_image are transmitted over a serial link to the Motion Controller board, which adjusts the UAV's flight trajectory to avoid these obstacles. • Using a threshold of 0.5 meters, the MC adjusts the motor outputs (a hedged sketch of this rule follows).
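The 0.5 m rule can be illustrated with a short sketch. This is a hedged Python rendering of logic that actually lives in the Motion Controller firmware; the 3x3 sector indexing and the command names are assumptions for illustration, not the project's real interface.

```python
# Hedged sketch of the 0.5 m avoidance rule described above.
THRESHOLD_M = 0.5

def avoidance_command(sector_min_dist):
    """sector_min_dist: 3x3 nested list of sector minimum distances in meters."""
    # Flag any sector whose closest point is inside the safety threshold.
    blocked = [[d < THRESHOLD_M for d in row] for row in sector_min_dist]
    if not any(any(row) for row in blocked):
        return 'HOLD_COURSE'
    # Steer away from the more blocked side: compare left vs. right columns.
    left = sum(row[0] for row in blocked)
    right = sum(row[2] for row in blocked)
    return 'YAW_RIGHT' if left >= right else 'YAW_LEFT'
```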
Data • depth_image: • The original image created from the Kinect depth sensor. • A 640x480 pixel image with each pixel's value encoding the distance to that point, in meters. • Since this image contains actual physical data, it is not appropriate to apply many image processing techniques to it, such as filtering, equalization, etc. • mask_image: • Since the Kinect FOV overlaps the UAV's structure, a mask image is created with NaNs wherever the structure interferes with the depth_image and 1s wherever it doesn't. • sector_image: • The depth_image is split into a 3x3 grid, and every pixel in a sector takes the minimum distance found in the corresponding sector of the depth_image. • A 640x480 pixel image containing only 9 relevant values (see the NumPy sketch below).
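A minimal NumPy sketch of the sector_image computation described above, assuming depth_image is a 480x640 float32 array in meters and mask_image uses 1.0/NaN as described; the function and variable names are illustrative only.

```python
import numpy as np

def make_sector_image(depth_image, mask_image, rows=3, cols=3):
    """depth_image: 480x640 float32 meters; mask_image: 1.0 where valid,
    NaN where the UAV structure overlaps the FOV. Returns a same-size image
    whose pixels hold their sector's minimum distance, plus the 9 raw values."""
    masked = depth_image * mask_image          # NaN knocks out masked pixels
    h, w = masked.shape
    sector_image = np.empty_like(masked)
    minima = np.empty((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            block = masked[ys, xs]
            # nanmin ignores masked/unresolved pixels within the sector.
            minima[r, c] = np.nanmin(block) if np.isfinite(block).any() else np.nan
            sector_image[ys, xs] = minima[r, c]
    return sector_image, minima
```

Using np.nanmin lets the masked (NaN) pixels fall out of each sector's minimum without any extra bookkeeping.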
Initial Development The depth image is made up of a 640x480 grid of pixels, with each pixel holding the distance in meters to its IR laser's endpoint. The depth image is split into a 3x3 grid, with the closest depth measurement in each sector defining that sector's gray level in the OA_Kinect grid. The distance measurement for each of these sectors is displayed on the grid as well. The closer an object is to the camera, the darker it appears in this view of the OA_Kinect image. When every point in a sector is closer than the minimum distance that the Kinect can measure, the sector turns red. Of course, we are not limited to a 3x3 grid; here a 25x25 grid is generated, with a significant performance hit. (A sketch of this rendering follows.)
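The gray-level and red-sector rendering might look like the following sketch, assuming OpenCV for display and an arbitrary MAX_RANGE_M far limit for the gray mapping; neither is taken from the project source.

```python
import numpy as np
import cv2

MAX_RANGE_M = 5.0   # assumed far limit for the gray mapping

def render_sectors(sector_image):
    # Nearer sectors map to smaller values, i.e. darker gray.
    gray = np.clip(sector_image / MAX_RANGE_M, 0.0, 1.0)
    img = cv2.cvtColor((gray * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
    # Sectors that are entirely unresolved (all NaN: every point closer than
    # the Kinect's minimum distance) get painted red.
    img[np.isnan(sector_image)] = (0, 0, 255)   # BGR red
    return img
```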
Results A mask is overlaid on the depth image. Although the mask is quite large, it could be reduced to just the objects that are interfering without much trouble. The Kinect FOV mounted on the UAV. Notice that although the UAV structure and propeller are very near the Kinect, the image on the right ignores them because they are within the Kinect's minimum distance, so masking these obstructions is unnecessary. However, if the Kinect's minimum distance were much smaller, this would be an issue. The UAV's propellers spinning. Notice that the Depth Camera essentially ignores the propeller now. The propeller is removed, and the depth image does not show a difference. We can proceed with creating an image mask while disregarding the propeller.
Results The Linux Computer (Primary Controller) transmits the $CAM,DIST packet to the Motion Controller.
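For illustration, here is a hedged sketch of sending such a packet with pySerial at the link's 115200 bps. The exact $CAM,DIST field layout is an assumption; consult the project source for the real framing.

```python
import serial

def send_cam_dist(port, minima):
    """minima: 3x3 nested list of sector minimum distances in meters."""
    fields = ','.join('%.2f' % d for row in minima for d in row)
    packet = '$CAM,DIST,%s\r\n' % fields       # assumed NMEA-style framing
    port.write(packet.encode('ascii'))

if __name__ == '__main__':
    mc = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)  # assumed device
    send_cam_dist(mc, [[1.2, 0.8, 2.0], [0.4, 3.1, 1.7], [2.2, 0.9, 5.0]])
```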
Performance Memory used by the entire program: 740 MB.
Future Work • Develop an AI algorithm to perform the most appropriate Motion Controller action. • Current development was on a laptop running Ubuntu 12.04; in the future this would be implemented on an SBC (Odroid-X2) running ROS Groovy and Lubuntu. • Test in an airborne environment. • Speed up the software and/or lower memory use.
Conclusions • Due to the size of the UAV built and the minimum distance of the Kinect’s depth camera, an image mask is not needed to block out the effects of the UAV’s structure. • The rotating propeller does not interfere with the Depth Camera at all. • The Kinect is a capable sensor to use in an Obstacle Detection system.
More Info • Project Website https://code.google.com/p/icarus-uav-system/wiki/ICARUS_OBSTACLE_AVOIDANCE • Source Code https://bitbucket.org/uicrobotics/icarus_oa/overview • ICARUS Project https://code.google.com/p/icarus-uav-system/wiki/Home
References • http://wiki.ros.org/fuerte • http://wiki.ros.org/cv_bridge