A Distributed Processing Architecture for Vision Based Domestic Robot Navigation
Marcel-Titus Marginean and Chao Lu
Computer & Information Sciences, Towson University
ICCCS
Highlights:
- Distributed architecture for indoor robot navigation
- On-board and external computer vision
- Communication protocol for cooperative localization and mapping
- Distributed processing and decision making
Rationale:
- The aging population is growing and requires assistance
- Robots can help with domestic tasks
- Enable independent living instead of institutionalization
- Lets older adults live in their own homes while their health status is monitored and assistance is provided
Rationale (cont.):
- Houses already have networks and surveillance cameras
- Vision processing is very CPU- and memory-intensive
- Energy-efficient embedded computers on robots are still low in resources
- Redundancy provides fault/error tolerance
Rationale (cont.):
- Computer vision is the most promising technology for robot navigation
- We employ helper technologies to ease the load
- Innate/prior knowledge about the environment should be used to reduce the scope of the problem
Previous work:
- Mehdi et al. aided navigation with ultrasonic sensors and RFID tags
- Souza and Gonçalves used stereo vision for mapping
- Fernandez et al. placed artificial landmarks on the ceiling and used an upward-looking camera on the robot
- At Cluj-Napoca, a laser beam was used to detect dynamic obstacles
Previous work (cont.):
- Pizarro et al. used a rig of calibrated and synchronized cameras
- Chakravarty and Jarvis also aided mobile robot navigation with external cameras
Overview:
- Base Station: one or more general-purpose computers
- Robot: embedded system + camera + inertial unit
- Network: typical house Wi-Fi + wired network
- Fixed cameras: IP cameras, wired or Wi-Fi
- Engineering console: laptop used for development and testing
Epipolar Geometry:
- Two or more cameras view the same scene from different positions and orientations
- The projections of a 3D point onto the two image planes are related by an equation involving the Essential Matrix
- At least 8 pairs of matching points must be identified to compute the Essential Matrix (the 8-point algorithm)
- The relative pose and position of the cameras can be recovered from the Essential Matrix using Singular Value Decomposition (SVD)
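For reference, the relation mentioned above is the standard epipolar constraint (usual textbook notation, not reproduced from the paper): with R and t the rotation and translation between the two cameras,

```latex
% Epipolar constraint for matched normalized image points x_1, x_2:
x_2^{\top} E \, x_1 = 0, \qquad E = [t]_{\times} R
% where [t]_x is the skew-symmetric cross-product matrix of t.
```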
Epipolar Geometry (cont.):
- One camera can be the robot's on-board camera and the other a wall-mounted camera
- Can be used either to calculate the robot's position with respect to a fixed camera or to accurately map objects in the environment
- Susceptible to failure if the difference in pose/position is too large, or if similar patterns located in different places are encountered
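A minimal sketch of the pipeline described above, using the OpenCV 2.4-era API the project already relies on. The intrinsic matrices K1, K2 and the candidate selection are assumptions; a real implementation must also run the cheirality check to pick the correct (R, t) among the four SVD candidates:

```cpp
// Sketch: relative camera pose from >= 8 matched points (calibrated cameras).
#include <opencv2/opencv.hpp>
#include <vector>

void relativePoseFromMatches(const std::vector<cv::Point2f>& pts1,
                             const std::vector<cv::Point2f>& pts2,
                             const cv::Mat& K1, const cv::Mat& K2,
                             cv::Mat& R, cv::Mat& t)
{
    // Fundamental matrix via the 8-point algorithm inside RANSAC.
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 1.0, 0.99);

    // Essential matrix from intrinsics: E = K2^T * F * K1.
    cv::Mat E = K2.t() * F * K1;

    // Decompose E = U * diag(1,1,0) * V^T; this yields one of four (R, t)
    // candidates -- the valid one keeps points in front of both cameras.
    cv::SVD svd(E);
    cv::Mat W = (cv::Mat_<double>(3, 3) << 0, -1, 0,
                                           1,  0, 0,
                                           0,  0, 1);
    R = svd.u * W * svd.vt;   // one rotation candidate
    t = svd.u.col(2);         // translation direction (up to scale)
    if (cv::determinant(R) < 0) { R = -R; t = -t; }  // enforce proper rotation
}
```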
Object Tracking:
- A fixed camera overlooks the scene and can map the robot's movement
- A Gaussian Mixture Model is used for background subtraction to detect moving (or moved) objects against the fixed background
- Each moving object is described by a status vector containing its id, position, velocity, and the confidence in the measurement
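A sketch of GMM background subtraction with the OpenCV 2.4 API; the BlobStatus struct and the blob-size threshold are illustrative, not the paper's exact format:

```cpp
// Sketch: GMM background subtraction -> foreground blobs -> status vectors.
#include <opencv2/opencv.hpp>
#include <vector>

struct BlobStatus {
    int id;
    cv::Point2f position;   // blob centroid in image coordinates
    cv::Point2f velocity;   // displacement per frame vs. previous centroid
    float confidence;       // e.g. derived from blob area / match quality
};

int main() {
    cv::VideoCapture cap(0);             // fixed camera stream
    cv::BackgroundSubtractorMOG2 bg;     // per-pixel Gaussian Mixture Model
    cv::Mat frame, fgmask;

    while (cap.read(frame)) {
        bg(frame, fgmask);               // update model, get foreground mask
        cv::erode(fgmask, fgmask, cv::Mat());   // suppress speckle noise
        cv::dilate(fgmask, fgmask, cv::Mat());

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(fgmask.clone(), contours, CV_RETR_EXTERNAL,
                         CV_CHAIN_APPROX_SIMPLE);
        for (size_t i = 0; i < contours.size(); ++i) {
            cv::Moments m = cv::moments(contours[i]);
            if (m.m00 < 100) continue;   // ignore tiny blobs
            cv::Point2f c(m.m10 / m.m00, m.m01 / m.m00);
            // ... associate c with a tracked BlobStatus, update velocity
        }
    }
    return 0;
}
```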
Optical Flow Navigation:
- Used by the on-board (robot) computer to detect potential collisions and for reactive navigation when outside the view of the fixed cameras
- The optical flow field is a velocity field representing the projection onto the image plane of the motion of objects in 3D space
- Can be used to estimate the distance from the moving robot to obstacles ahead, or to maintain distance from walls when navigating a hallway
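A sketch of one common reactive rule built on dense optical flow, the "flow balance" strategy: larger flow on one side of the image suggests a nearer surface there, so the robot steers away. This is an illustrative controller, not necessarily the paper's exact one:

```cpp
// Sketch: dense optical flow (Farneback) driving a left/right steering signal.
#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat prev, gray, flow, frame;
    cap.read(prev);
    cv::cvtColor(prev, prev, CV_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::calcOpticalFlowFarneback(prev, gray, flow,
                                     0.5, 3, 15, 3, 5, 1.2, 0);
        // Mean flow magnitude in the left vs. right half of the image.
        double left = 0, right = 0;
        for (int y = 0; y < flow.rows; ++y)
            for (int x = 0; x < flow.cols; ++x) {
                const cv::Point2f& f = flow.at<cv::Point2f>(y, x);
                double mag = std::sqrt(f.x * f.x + f.y * f.y);
                (x < flow.cols / 2 ? left : right) += mag;
            }
        double steer = (right - left) / (right + left + 1e-9);  // in [-1, 1]
        // steer > 0: surface looms on the right -> turn left, and vice versa.
        gray.copyTo(prev);
    }
    return 0;
}
```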
Fixed Infrastructure:
- Wall-mounted DCS-930L and DCS-932L IP cameras located near the ceiling, overlooking the room
- Typical house network with an 802.11n Wi-Fi router providing 10/100 wired Ethernet
- A pair of computers running Mageia Linux, connected to the router by wired Ethernet; called the Base Station, they are used for video processing, model/map building, object tracking, and mission planning
- Object recognition is left for future research and will also take place on the Base Station
Mobile Infrastructure:
- Mobile platform with Ackermann steering
- BeagleBoard-xM embedded computer running Debian ARM Linux, connected to the network via a USB Wi-Fi dongle
- LI-5M03 camera board connected directly to the BeagleBoard bus
- Inertial measurement unit with an ADXL345 accelerometer and an L3G4200D MEMS gyroscope
- Additional circuitry
- For future research, we plan to explore Adapteva's Parallella board to add extra processing power
Video Capture Hardware:
- Wall-mounted DCS-932L Wi-Fi IP camera
- BeagleBoard-xM with LI-5M03 camera on the test bench
Software Development Environment:
- OpenCV 2.4.5 for image processing
- Qt 4.8.5 and OpenGL libraries for GUI development
- C++ programming language, gcc version 4.7.2
- Eclipse CDT and Qt Creator as IDEs
- Mageia Linux desktop
Software Architecture:
- Modular architecture using Active Objects
- Asynchronous message-passing protocol
- Message structure designed to minimize network bandwidth use
- Most image processing is localized on each module; large data are sent between modules only "AS NEEDED", upon request
- The Communication Infrastructure API abstracts the location of modules
- Each module is an Active Object with at least two threads (communication, main processing)
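A minimal sketch of the Active Object pattern as described above: each module owns a message queue drained by its own threads, so senders never block. Names (Message, Module) are illustrative, and C++11 threads are used for brevity; the actual system may use Qt threading instead:

```cpp
// Sketch: an Active Object module with communication and processing threads.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <iostream>

struct Message {
    std::string type;     // e.g. "BLOB_VECTOR", "IMAGE_REQUEST"
    std::string payload;  // kept small; large images sent only on request
};

class Module {
public:
    Module() : running_(true),
               comm_(&Module::commLoop, this),
               proc_(&Module::procLoop, this) {}
    ~Module() {
        { std::lock_guard<std::mutex> lk(m_); running_ = false; }
        cv_.notify_all();
        comm_.join(); proc_.join();
    }
    // Called by the communication infrastructure; never blocks the sender.
    void post(const Message& msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(msg); }
        cv_.notify_one();
    }
private:
    void commLoop() { /* receive from the network, then post() to self */ }
    void procLoop() {                       // main processing thread
        while (true) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this]{ return !q_.empty() || !running_; });
            if (!running_ && q_.empty()) return;
            Message msg = q_.front(); q_.pop();
            lk.unlock();
            handle(msg);                    // module-specific work
        }
    }
    void handle(const Message& msg) { std::cout << msg.type << "\n"; }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Message> q_;
    bool running_;
    std::thread comm_, proc_;
};
```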
Software Modules:
- CM: Camera Module
- SAM: Situation Awareness Module
- RM: Robot Module
- ARM: Autonomous Robot Module
Camera Module:
- One CM for each fixed camera
- Capture, pre-processing, and blob tracking
- Sends periodic Blob Tracking Vectors to the SAM
- Upon request, sends whole images or sub-images for analysis by other modules
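A sketch of what such a compact periodic message could look like on the wire. The field layout is entirely an assumption; the paper only specifies that the periodic messages stay small and that images travel separately, on request:

```cpp
// Sketch: hypothetical wire format for a CM -> SAM Blob Tracking Vector.
#include <stdint.h>

#pragma pack(push, 1)
struct BlobRecord {
    uint16_t blob_id;
    int16_t  x, y;        // centroid in image pixels
    int16_t  vx, vy;      // velocity in pixels per frame
    uint8_t  confidence;  // 0-255 measurement confidence
};

struct BlobTrackingVector {
    uint8_t  camera_id;
    uint32_t timestamp_ms;
    uint8_t  blob_count;  // followed by blob_count BlobRecords on the wire
};
#pragma pack(pop)
```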
Situation Awareness Module:
- Maintains a live map of the environment, keeping track of people, objects, and robots
- Receives periodic tracking vectors from CMs and RMs, matches blobs with robots, and provides robot tracking information
- Future developments may include object recognition and maintaining a database for the recognition task
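One simple way the SAM could match blobs with robots is gated nearest-neighbor association; the paper does not specify the matching algorithm, and the gating threshold here is illustrative:

```cpp
// Sketch: nearest-neighbor association between camera blobs and robot tracks.
#include <cmath>
#include <limits>
#include <vector>

struct Track { int id; double x, y; };   // position in the global map frame

// Returns, for each blob, the index of the matched robot track, or -1.
std::vector<int> associate(const std::vector<Track>& blobs,
                           const std::vector<Track>& robots,
                           double gate = 0.5 /* metres */) {
    std::vector<int> match(blobs.size(), -1);
    for (size_t b = 0; b < blobs.size(); ++b) {
        double best = std::numeric_limits<double>::max();
        for (size_t r = 0; r < robots.size(); ++r) {
            double d = std::hypot(blobs[b].x - robots[r].x,
                                  blobs[b].y - robots[r].y);
            if (d < best && d < gate) { best = d; match[b] = (int)r; }
        }
    }
    return match;
}
```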
Robot Module:
- Robot path planning and mission control
- Uses epipolar geometry to map objects or estimate robot pose by requesting images from both the ARM and a CM
- Translates tracking information from the SAM's global coordinate system into the robot's local coordinate system
- A future research direction may include landmark tracking
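The global-to-local translation is a standard 2D rigid transform, which the slides do not spell out; a small sketch, assuming the robot's pose is (x, y, heading theta) in the global map frame:

```cpp
// Sketch: convert a point from the SAM's global frame to the robot's frame.
#include <cmath>

struct Pose2D  { double x, y, theta; };  // robot pose in the global frame
struct Point2D { double x, y; };

Point2D globalToLocal(const Point2D& g, const Pose2D& robot) {
    double dx = g.x - robot.x;           // translate to the robot's origin
    double dy = g.y - robot.y;
    double c = std::cos(-robot.theta), s = std::sin(-robot.theta);
    Point2D local = { c * dx - s * dy,   // along the robot's forward axis
                      s * dx + c * dy }; // along the robot's lateral axis
    return local;
}
```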
Autonomous Robot Module:
- Optical Flow (OF) processing and reactive OF navigation
- PID controller to maintain the required trajectory
- Honors requests from the RM for (sub-)images
- Able to temporarily overrule RM commands if optical flow detects a high potential for collision
- Future research directions include more processing power to enable true autonomy, plus more sensors and actuators for "eye-hand coordination"
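A minimal sketch of the discrete PID controller the ARM could use to hold the planned trajectory; the gains and the cross-track-error input are assumptions:

```cpp
// Sketch: discrete PID controller for trajectory keeping.
struct PID {
    double kp, ki, kd;
    double integral, prev_error;

    PID(double p, double i, double d)
        : kp(p), ki(i), kd(d), integral(0), prev_error(0) {}

    // error: e.g. signed cross-track distance from the planned path.
    // dt: control period in seconds. Returns a steering correction.
    double update(double error, double dt) {
        integral += error * dt;
        double derivative = (error - prev_error) / dt;
        prev_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```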
Typical navigation scenario:
- The RM interrogates the SAM for a map
- The RM uses Dijkstra's algorithm to find a path
- The RM downloads navigation instructions to the ARM
- CMs keep broadcasting blob positions to the SAM
- The SAM provides real-time tracking information to the RM
- The ARM uses a PID controller to navigate the path, using tracking info from the SAM as feedback
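A sketch of Dijkstra's algorithm on a 4-connected occupancy grid, matching the planner named above; the grid representation and unit edge costs are assumptions:

```cpp
// Sketch: Dijkstra shortest distances over an occupancy grid.
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// grid[y][x] == 0 means free, 1 means occupied. Returns distances (in cells)
// from the start; a path is recovered by walking decreasing distances back.
std::vector<std::vector<int> > dijkstra(const std::vector<std::vector<int> >& grid,
                                        int sx, int sy) {
    const int INF = 1 << 30;
    int H = grid.size(), W = grid[0].size();
    std::vector<std::vector<int> > dist(H, std::vector<int>(W, INF));
    typedef std::pair<int, std::pair<int, int> > Node;  // (dist, (x, y))
    std::priority_queue<Node, std::vector<Node>, std::greater<Node> > pq;
    dist[sy][sx] = 0;
    pq.push(Node(0, std::make_pair(sx, sy)));
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!pq.empty()) {
        Node top = pq.top(); pq.pop();
        int d = top.first, x = top.second.first, y = top.second.second;
        if (d > dist[y][x]) continue;            // stale queue entry
        for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            if (grid[ny][nx]) continue;          // occupied cell
            if (d + 1 < dist[ny][nx]) {
                dist[ny][nx] = d + 1;
                pq.push(Node(d + 1, std::make_pair(nx, ny)));
            }
        }
    }
    return dist;
}
```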
Typical navigation scenario (cont.):
- The robot encounters an obstacle unknown to the SAM
- Optical flow on the ARM detects it as an obstacle
- The ARM sends the obstacle info to the RM
- The RM requests an image from a CM and from the ARM
- The RM maps the object using epipolar geometry
- The RM sends the information to the SAM to update the occupancy grid
- Navigation restarts with new path planning
Ideas for future research directions:
- Explore landmark-based navigation and object recognition
- Increase the processing power of the mobile unit, e.g., with Adapteva's Parallella board
- Build a Visual-Aspect-Indexed Database for object recognition covering a large subset of object classes