MPR Intersection Experiment
Beijing, 2011 Spring
Jointly by PKU and HEUDIASYC
The MPR Project
• Task 0: Coordination + Advisory Board
• Task 1: Multimodal Online Perception
• Task 2: Reasoning, Scene Understanding
• Task 3: Cross-experiments and comparative studies
Experiment Objective
• Objectives
  • Collect multi-modal sensing data for on-vehicle perception algorithm development
    • On-vehicle multi-modal sensing
  • Collect data for ground truth generation
    • Roadside multi-modal sensing
• Scenario
  • Intersection and/or approaching an intersection
Design: scenario
[Figure: intersection scenario diagram showing roadside laser and camera positions, the host vehicle, and N/S orientation]
Design: on-vehicle sensing
• GPS/IMU: localization
• Two Flea2 cameras composing a stereo pair:
  • Moving object detection
  • Road plane detection
• Horizontal laser: SLAM / moving object detection and tracking
• Downward laser: drivable space detection
Design: roadside sensing
[Figure: roadside sensor layout around the intersection, with the Haidian Gymnasium, Peking Univ., an overhead bridge, and the ChangChun Dorm. of Peking Univ. as surrounding landmarks; laser, camera, and host vehicle positions and N/S orientation marked]
• Laser1: SW, on PC NPC4
• Laser2: NW, on PC NPC1
• Laser3: NN, on PC NPC5
• Server: NPC7, with wireless AP
Design: host vehicle's driving trajectory
[Figure: host vehicle trajectories through the intersection near the overhead bridge, N/S orientation marked]
Conduct Intersection Experiment
• Time and place:
  • 05/23/2011, 13:00-17:00
  • Three-way intersection near the west gate of PKU
Experimental Facilities
• Sensors and PCs
  • Intersection
    • 3 roadside lasers
    • 1 roadside video camera (records to tape; converted to AVI later)
    • 4 laptops: three for the roadside lasers, one as server
  • Vehicle
    • 2 lasers: one horizontal, one obliquely downward at the front of the car
    • Stereo camera: 2 mono cameras
    • GPS
    • 2 on-vehicle PCs: one for laser and GPS data collection, one for the stereo cameras
• Others
  • Batteries, cables, patch boards, wireless AP (for time synchronization and data transmission), box (for calibration), etc.
Beyond data acquisition
• Calibration
  • On-vehicle sensor calibration (before data acquisition)
    • Stereo camera pair
    • Camera with horizontal laser
  • Roadside sensor calibration (after data acquisition)
    • Multi-laser scanners
    • Laser with camera
• Synchronization
  • Synchronize all PCs by broadcasting time over the network (see the sketch after this list)
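A minimal sketch of what the broadcast synchronization could look like, assuming a simple UDP scheme: the server periodically broadcasts its clock and each client logs the offset to its local clock. The actual protocol, port, and message format used on the experiment PCs are not documented here, so all of those are illustrative assumptions.

```python
# Hypothetical sketch of broadcast time synchronization; the protocol
# actually used in the experiment is not documented.
import socket
import struct
import time

PORT = 5005  # assumed port

def broadcast_server(period=1.0):
    """Periodically broadcast the server's clock to all PCs on the LAN."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        # Pack the current server time as a double (seconds since epoch).
        sock.sendto(struct.pack("!d", time.time()), ("<broadcast>", PORT))
        time.sleep(period)

def client_log_offset():
    """Receive broadcasts and log the server-vs-local clock offset."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, _ = sock.recvfrom(8)
        local = time.time()
        (server,) = struct.unpack("!d", data)
        # Network delay is ignored here; it is exactly the residual error
        # that the offline "refine time log" step (see Data Check) corrects.
        print(f"clock offset (server - local): {server - local:+.4f} s")
```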
On-vehicle sensor calibration
• Online calibration after sensor setup, just before data acquisition
• Calibration 1:
  • Stereo camera (Philippe Xu)
• Calibration 2:
  • Horizontal laser with each camera (Chao Wang)
  • Point correspondences <(u, v), (x, y, z=1.0)> between image pixels and laser points, the laser point's height being unknown (x, y, ?)
  • Problem: the calibration parameters were not saved, by mistake
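Calibration 2 pairs image pixels (u, v) with laser points on an assumed z = 1.0 plane. Below is a hedged sketch of how such correspondences can yield the camera-laser extrinsics with OpenCV's PnP solver; the point values and intrinsics are placeholders rather than the experiment's data, and this is a standard method, not necessarily the one actually used.

```python
# Hedged sketch: estimate the camera pose relative to the horizontal laser
# from matched correspondences <(u, v), (x, y, z=1.0)>. All numbers below
# are placeholders.
import cv2
import numpy as np

# Laser points in the laser frame; the scan plane is taken as z = 1.0 m
# (assumption taken from the slide's annotation).
laser_pts = np.array([[2.1,  0.4, 1.0],
                      [3.7, -1.2, 1.0],
                      [5.0,  0.9, 1.0],
                      [6.3, -0.5, 1.0]], dtype=np.float64)

# Corresponding pixel coordinates (u, v) marked in the camera image.
image_pts = np.array([[412.0, 300.5],
                      [250.3, 335.1],
                      [468.9, 280.2],
                      [298.4, 310.7]], dtype=np.float64)

# Intrinsics K and distortion, e.g. from the earlier stereo calibration.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(laser_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation: laser frame -> camera frame
print("R =\n", R, "\nt =", tvec.ravel())
# Writing R, t to disk immediately would avoid the "forgot to save the
# calibration parameters" problem noted in the slide.
np.savez("laser_camera_extrinsics.npz", R=R, t=tvec)
```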
Roadside sensor calibration
• Requirement on sensor setup
  • Roadside lasers must be set for horizontal scanning
• Offline calibration after data acquisition
  • Could be conducted online, but that would need network capacity and time
• Calibration 1:
  • Multi-laser: use background objects / temporarily static objects as landmarks (see the sketch after this list)
• Calibration 2:
  • Video with laser:
    • Put a box at different locations of the intersection
    • Use the box and static natural objects as landmarks for coarse calibration
    • Use moving objects as landmarks after time synchronization
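For Calibration 1, aligning two lasers from matched landmark points is a classical least-squares rigid registration. A minimal sketch of the SVD-based (Kabsch) solution, with placeholder landmark coordinates standing in for the static background objects:

```python
# Minimal sketch: estimate the 2D rigid transform (R, t) that maps one
# laser's landmark points onto another's.
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares R, t with dst ~ R @ src + t; src, dst are (N, 2)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example with placeholder landmarks seen from Laser2 and Laser1.
pts_l2 = np.array([[1.0, 2.0], [4.5, 0.3], [3.2, 5.1], [6.0, 4.4]])
theta, shift = np.deg2rad(30.0), np.array([10.0, -2.0])
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts_l1 = pts_l2 @ Rot.T + shift              # same landmarks, Laser1 frame

R, t = rigid_transform_2d(pts_l2, pts_l1)
print("rotation (deg):", np.rad2deg(np.arctan2(R[1, 0], R[0, 0])), "t:", t)
```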
Roadside sensor calibration 2: laser with video
• Locations of the landmark box
  • The larger the area covered by the box locations, the better the calibration result
[Figure: six numbered box locations (1-6) spread over the intersection]
Roadside sensor calibration 2: laser with video
• Manually mark the corresponding points using in-house software (a sketch of the underlying estimation follows)
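Since the roadside lasers scan a horizontal plane, the manually marked pairs relate ground-plane points to image pixels, which a 3x3 homography captures. A sketch of that estimation, assuming this plane-to-image model (the in-house tool's actual method is not documented); the coordinates below are placeholders for the marked box/landmark points:

```python
# Sketch: fit a ground-plane-to-image homography from manually marked
# laser/image point pairs. All coordinate values are placeholders.
import cv2
import numpy as np

# Laser-frame ground points (x, y) in meters and image pixels (u, v).
laser_xy = np.array([[ 3.0,  1.0], [ 8.0,  1.5], [ 7.5, -4.0],
                     [ 2.0, -3.5], [10.0, -1.0], [ 5.0,  3.0]], np.float32)
image_uv = np.array([[120.0, 410.0], [340.0, 390.0], [360.0, 470.0],
                     [140.0, 480.0], [450.0, 430.0], [230.0, 370.0]],
                    np.float32)

# RANSAC makes the fit robust to a badly clicked correspondence.
H, inliers = cv2.findHomography(laser_xy, image_uv, cv2.RANSAC, 5.0)

def laser_to_pixel(x, y):
    """Project a laser ground point into the video image."""
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

print("box at (6.0, 0.0) m appears at pixel", laser_to_pixel(6.0, 0.0))
```

This also explains the previous slide's note: point pairs spread over a larger area constrain the homography better, so the projection error away from the marked points stays small.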
Data Acquisition
• Procedures
  1. Start the measurements of the roadside lasers and video camera.
  2. Start the measurements of the on-vehicle lasers and cameras.
  3. Drive the host vehicle across the intersection.
Experimental Data
• Roadside data acquisition, for ground truth generation
  • Multi-laser data
    • LMS data files
      • In LMS format (user-defined)
      • Data from different laser scanners are saved in separate files, distinguished by extension, e.g. *.lms1, *.lms2, *.lms3
    • A calibration file
      • Geometrical parameters and time differences
  • Video
    • Originally recorded on tape due to the limited number of sensors; converted to AVI after the experiment
    • AVI
      • Videos are saved in AVI format
    • LOG
      • Absolute time of the start point of each AVI
    • A calibration file
      • Intrinsic and extrinsic parameters of the camera with respect to the laser coordinate frame
Experimental Data
• On-vehicle data acquisition, for multi-modal perception
  • Horizontal and downward lasers
    • LMS data files
      • In LMS format (user-defined)
      • Data from different laser scanners are saved in separate files, distinguished by extension, e.g. *.lms1, *.lms2
    • A calibration file
      • Geometrical parameters
    • Time
      • Laser data are acquired by the same computer, so there is almost no time delay
  • Two cameras
    • *.avi: stereo video
    • *.txt: start time of the video
  • GPS data
    • *.pos (vehicle pose at 10 Hz: time, x, y, z, roll, pitch, yaw); a hedged reader follows
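A hedged reader for the *.pos pose stream, assuming one whitespace-separated text record per line in the field order given above; the real file layout may differ:

```python
# Assumed *.pos layout: one text record per line,
# "time x y z roll pitch yaw", at 10 Hz.
import numpy as np

def load_pos(path):
    """Return an (N, 7) array: [time, x, y, z, roll, pitch, yaw]."""
    return np.loadtxt(path)

def pose_at(poses, t):
    """Linearly interpolate the pose at time t (angles handled naively,
    which is fine away from the +/-pi wrap-around)."""
    return np.array([np.interp(t, poses[:, 0], poses[:, k])
                     for k in range(1, 7)])

# Usage: align a camera frame's timestamp with the 10 Hz pose stream.
# poses = load_pos("run1.pos")            # hypothetical file name
# print(pose_at(poses, t=1306130400.25))  # hypothetical timestamp
```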
Data Check
• On-vehicle data
  • Video quality (Wang Chao, how to check?)
    • Image quality (for pedestrian, vehicle, and road infrastructure detection)
    • Frame rate
  • Stereo video processing (Philippe Xu)
    • Calibration
    • Road plane extraction
  • Laser data (Wang Chao)
    • Horizontal laser with GPS (-> OGM, to be finished this week)
    • Downward laser with GPS (-> 3D points, to be finished this week)
  • Laser-video fusion (Wang Chao)
    • Calibration (failure)
    • Fusion-based visualization (large error)
Data Check
• OGM (occupancy grid map)
[Figure: OGM built from the horizontal laser and GPS]
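For reference, a toy version of the horizontal-laser-plus-GPS OGM check: transform each scan into the world frame using the vehicle pose and count hits per cell. Grid size, resolution, and the plain counting scheme are illustrative choices, not the project's actual mapper.

```python
# Toy occupancy grid: accumulate laser hit counts per world-frame cell.
import numpy as np

RES, SIZE = 0.2, 500                     # 0.2 m cells, 100 m x 100 m grid
grid = np.zeros((SIZE, SIZE), np.int32)
origin = np.array([-50.0, -50.0])        # world coords of grid cell (0, 0)

def integrate_scan(ranges, angles, pose):
    """pose = (x, y, yaw) of the laser in the world frame."""
    x, y, yaw = pose
    # Scan endpoints in the laser frame, then rotated/translated to world.
    px = ranges * np.cos(angles)
    py = ranges * np.sin(angles)
    wx = x + px * np.cos(yaw) - py * np.sin(yaw)
    wy = y + px * np.sin(yaw) + py * np.cos(yaw)
    ij = ((np.stack([wx, wy], 1) - origin) / RES).astype(int)
    ok = (ij >= 0).all(1) & (ij < SIZE).all(1)
    np.add.at(grid, (ij[ok, 0], ij[ok, 1]), 1)  # count hits per cell

# Example: one synthetic 180-degree scan taken from pose (0, 0, 0).
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
integrate_scan(np.full(181, 8.0), angles, (0.0, 0.0, 0.0))
print("occupied cells:", int((grid > 0).sum()))
```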
Data Check
• Roadside data
  • Time synchronization
    • Multi-laser
      • There are also time differences between laser scan data, due to delay in the wireless time-log broadcast
      • Refine the time log using in-house software (a sketch follows)
    • Laser with video
      • Manually find the time point of the same motion in the different sensors' data
      • Taking the laser time as reference, calculate the starting time of each AVI and modify the LOG file
  • Laser quality
    • Offline multi-laser calibration / refine the online calibration result
    • Visualize the multi-laser data
  • Laser and video data fusion
    • Offline laser-video calibration (in-house software)
    • Visualize the laser-camera overlay result
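One standard way to refine the residual inter-laser time offset is to cross-correlate a motion signal that both scanners observe (e.g. the range to the same moving vehicle). A sketch with synthetic signals; the ~37.5 Hz scan rate is an assumption, not a documented figure:

```python
# Sketch: estimate the residual time offset between two lasers by
# cross-correlating a shared motion signal. Signals are synthetic.
import numpy as np

def estimate_offset(sig_a, sig_b, dt):
    """Return the delay of sig_b relative to sig_a in seconds (positive
    means sig_b lags), assuming uniform sampling at interval dt."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xc = np.correlate(b, a, mode="full")
    lag = np.argmax(xc) - (len(a) - 1)   # lag in samples
    return lag * dt

# Two copies of the same smoothed random motion, the second delayed by
# 3 samples (0.08 s at an assumed ~37.5 Hz scan rate).
dt = 1.0 / 37.5
rng = np.random.default_rng(0)
sig1 = np.convolve(rng.standard_normal(400), np.ones(10) / 10, mode="same")
sig2 = np.roll(sig1, 3)
print(f"estimated offset: {estimate_offset(sig1, sig2, dt):+.3f} s")
```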
Next Step
• What perceptual algorithms do we study? TASK 1: Multimodal perception
  • Multimodal sensor fusion-based object detection, recognition and tracking
    • Navigable space detection (road geometry, boundaries, lanes)
    • Static object detection: signs, trees, facades
    • Moving/movable object detection and tracking: cars, cycles, pedestrians
  • Multimodal data constrained SLAM (GPS, GIS)
  • Data representation and vicinity dynamic maps
• What knowledge do we need for reasoning? TASK 2: Reasoning and scene understanding
  • An open comparative dataset for testing cross-cultural robustness in traffic scene understanding
  • Learning of scene semantics and moving-object behaviors
  • Traffic situation awareness with uncertainty, scene semantics and information fusion