This paper presents a robust method for segmenting freight containers in train monitoring videos. The proposed method combines frequency and spatial domain information and is robust to various background conditions. It can be used in an intelligent train monitoring system.
Robust Segmentation of Freight Containers in Train Monitoring Videos
Qing-Jie Kong*, Avinash Kumar**, Narendra Ahuja**, Yuncai Liu*
*Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
**Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
WACV 2009
Input • Video of an intermodal freight train captured from a fixed camera, with the background visible before the train arrives.
Video Capture of an Intermodal Freight Train (diagram: camera, viewing volume, inter-modal train)
Output • Video with the background removed, so the foreground consists of only the intermodal train. • Main application: fast and automatic calculation of the gap lengths between consecutive containers.
Major Difficulties: Varied Outdoor Imaging Conditions
Major Difficulties: Different Types of Containers
Four-Stage Coarse-to-Fine Framework • Stage 1: Detecting Train Region • Stage 2: Removing Background Gap • Stage 3: Detecting Single Stack • Stage 4: Refining Segmentation Result
Stage 1: Detecting Train Region (figures: partition of the region; a pixel's signal in the temporal domain; power spectrum of the signal)
Stage 1: Detecting Train Region (figures: a frame from a video; frequency image of the video; histogram of the frequency image)
Stage 1: Detecting Train Region (figures: thresholded result; result after the morphological operations; final result of the first stage)
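A minimal sketch of the Stage 1 idea, assuming a buffer of grayscale frames and using NumPy/OpenCV: each pixel's temporal signal is transformed to the frequency domain, its non-DC power is accumulated into a frequency image, and that image is thresholded and cleaned with morphological operations. The bin range, threshold, and kernel size are illustrative assumptions, not the authors' values.

import numpy as np
import cv2

def frequency_image(frames, low_bin=1):
    """Per-pixel temporal frequency energy for a stack of grayscale frames.

    frames: float32 array of shape (T, H, W). Pixels swept by the moving
    train show strong temporal variation, so their non-DC power is much
    larger than that of static background pixels.
    """
    spectrum = np.abs(np.fft.rfft(frames, axis=0)) ** 2   # power spectrum per pixel
    energy = spectrum[low_bin:].sum(axis=0)               # drop the DC component
    return cv2.normalize(energy, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def train_region_mask(freq_img, thresh=128, ksize=15):
    """Threshold the frequency image and clean the mask with morphology."""
    _, mask = cv2.threshold(freq_img, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove isolated noise
    return mask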
Stage 2: Removing Background Gap • Background Model • Background Removal • Background Update
Background Model (figures: a background image; a sub-region of the background image; histogram of the sub-region)
Background Removal (figures: a frame from a video; result of the recognition; segmentation result)
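One way the histogram-based background model and removal could look in code, as a sketch only: one normalized histogram per sub-region of the background image, and a frame sub-region is labelled foreground when its histogram no longer matches the model. The grid size, bin count, similarity metric, and threshold below are assumptions.

import numpy as np
import cv2

def region_histograms(gray_img, grid=(8, 8), bins=32):
    """Background model: one normalized grayscale histogram per sub-region."""
    h, w = gray_img.shape[:2]
    hs, ws = h // grid[0], w // grid[1]
    hists = {}
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = gray_img[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            hist = cv2.calcHist([patch], [0], None, [bins], [0, 256])
            hists[(i, j)] = cv2.normalize(hist, None).flatten()
    return hists

def background_mask(gray_frame, bg_hists, grid=(8, 8), bins=32, sim_thresh=0.9):
    """Label a sub-region as foreground when its histogram stops matching the model."""
    h, w = gray_frame.shape[:2]
    hs, ws = h // grid[0], w // grid[1]
    mask = np.zeros((h, w), np.uint8)
    for (i, j), bg_hist in bg_hists.items():
        patch = gray_frame[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
        hist = cv2.normalize(cv2.calcHist([patch], [0], None, [bins], [0, 256]), None).flatten()
        if cv2.compareHist(bg_hist, hist, cv2.HISTCMP_CORREL) < sim_thresh:
            mask[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws] = 255   # container pixels
    return mask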
Background Update • As the train moves, the background visible in the gap between two containers sweeps across the complete background over a span of frames. • The detected middle-background strips are spliced together to rebuild a new background image. • The update is applied as soon as the middle background region completes its scan (see the sketch below).
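A sketch of the splicing-based update, assuming the background strip visible in each gap can be located as a column range; the column-wise bookkeeping here is illustrative, not the authors' exact procedure.

import numpy as np

class BackgroundUpdater:
    """Rebuild the background from the strips seen in the gaps between containers."""

    def __init__(self, shape):
        self.new_bg = np.zeros(shape, np.uint8)
        self.covered = np.zeros(shape[1], bool)   # columns already re-observed

    def add_strip(self, frame, col_start, col_end):
        """Splice in the background strip visible in the current gap."""
        self.new_bg[:, col_start:col_end] = frame[:, col_start:col_end]
        self.covered[col_start:col_end] = True

    def maybe_update(self, background):
        """Swap in the spliced image once the strips have scanned the full width."""
        if self.covered.all():
            background[:] = self.new_bg
            self.covered[:] = False
        return background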
Stage 3: Detecting Single Stack (figures: a frame from a video; result after the first two stages, with a remaining background blob; segmentation result)
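Stage 3 isolates individual stacks from the remaining foreground; a plausible sketch uses connected-component (blob) analysis with an area filter, where the minimum area is an assumed parameter.

import cv2

def detect_stacks(mask, min_area=5000):
    """Split the foreground mask into stacks via connected-component analysis."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:                  # drop small leftover blobs
            boxes.append((x, y, w, h))
    return boxes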
Stage 4: Refinement of Segmentation Results (figures: background image; a window of the background; result before refinement; result after refinement)
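A sketch of how refinement against a background window might be done: pixels in a narrow band around the current mask boundary are compared with the corresponding background pixels, and only those that differ are kept as container. The band width and difference threshold are assumptions, not the authors' values.

import cv2

def refine_boundary(gray_frame, background, mask, band=10, diff_thresh=25):
    """Re-label the uncertain band around the mask edge by comparing with the background."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (band, band))
    edge_band = cv2.dilate(mask, kernel) - cv2.erode(mask, kernel)   # uncertain region
    diff = cv2.absdiff(gray_frame, background)
    refined = mask.copy()
    refined[(edge_band > 0) & (diff <= diff_thresh)] = 0     # matches background
    refined[(edge_band > 0) & (diff > diff_thresh)] = 255    # differs: keep as container
    return refined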
Combination of Color Information • Apply all of the Stage 2 processing to each of the R, G, and B channels separately. • Combine the results of the three channels with a logical AND operation (see the sketch below).
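Reusing the single-channel background_mask() sketched above, the per-channel results can be combined with a bitwise AND; bg_hists_per_channel (one histogram model per B, G, R channel) is a hypothetical name for illustration.

import cv2

def color_foreground_mask(frame_bgr, bg_hists_per_channel):
    """Run the Stage 2 test on each colour channel, then AND the three masks."""
    channels = cv2.split(frame_bgr)
    masks = [background_mask(ch, bg_hists_per_channel[k])
             for k, ch in enumerate(channels)]
    return cv2.bitwise_and(cv2.bitwise_and(masks[0], masks[1]), masks[2])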
Experiments • Video data: 150 videos containing 1222 containers and a wide range of background conditions:
– clear blue sky
– bright sunlight
– static heavy clouds in the day and evening
– moving heavy clouds in the day and evening
– rainy day (water on the lens)
Experiments • Success ratio of Stage 1: 96% • Success ratio of the last three stages: (results table in the original slides)
Experiments • Operation speed:
– computer: Intel(R) Core(TM)2 Duo CPU, 2.53-GHz processor, 3.2 GB of RAM
– average processing speed: 4 frames per second (fps)
Conclusion The proposed method:
– combines information in the frequency and spatial domains
– is robust to a wide variety of background conditions
– can use videos from uncalibrated cameras
– is being integrated into a real-time vision system for intelligent train monitoring