
A Road Sign Recognition System Based on a Dynamic Visual Model



1. A Road Sign Recognition System Based on a Dynamic Visual Model • C. Y. Fang, Department of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan, R. O. C. • C. S. Fuh, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, R. O. C. • S. W. Chen, Department of Computer Science and Information Engineering, National Taiwan Normal University, Taipei, Taiwan, R. O. C. • P. S. Yen, Department of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan, R. O. C. • Contact: violet@ice.ntnu.edu.tw

2. Outline • Introduction • Dynamic visual model (DVM) • Neural modules • Road sign recognition system • Experimental results • Conclusions

3. Introduction -- DAS • Driver assistance systems (DAS) • Methods to improve driving safety: • Passive methods: seat belts, airbags, anti-lock braking systems, and so on • Active methods: DAS • Driving is a sophisticated process • The better the environmental information a driver receives, the more appropriate his/her expectations will be

4. Introduction -- VDAS • Vision-based driver assistance systems (VDAS) • Advantages: • High resolution • Rich information • Road border detection or lane marking detection • Road sign recognition • Difficulties of VDAS • Weather and illumination • Daytime and nighttime • Vehicle motion and camera vibration

5. Subsystems of VDAS • Road sign recognition system • System to detect changes in driving environments • System to detect motion of nearby vehicles • Lane marking detection • Obstacle recognition • Drowsy driver detection • …

6. Introduction -- DVM • DVM: dynamic visual model • A computational model for visual analysis that takes video sequences as input data • Two ways to develop a visual model: • Biological principles • Engineering principles • Artificial neural networks

7. Dynamic Visual Model [flow diagram] • Sensory component: video images → data transduction → information acquisition (supported by an episodic memory) → spatial-temporal information • Perceptual component: STA neural module → focuses of attention (if none are found, acquisition continues) → feature detection → categorical features • Conceptual component: CART neural module → category → pattern extraction → patterns → CHAM neural module → action

8. Human Visual Process [flow diagram] • Physical stimuli → transducer (data compression) → sensory analyzer (low-level feature extraction) → perceptual analyzer (high-level feature extraction) → conceptual analyzer (classification and recognition) → class of input stimuli

9. Neural Modules • Spatial-temporal attention (STA) neural module • Configurable adaptive resonance theory (CART) neural module • Configurable heteroassociative memory (CHAM) neural module

10. STA Neural Network (1) [network diagram] • Two layers: an input layer of neurons n_j carrying stimuli x_j, fully connected through excitatory connections w_ij to an output (attention) layer of neurons n_i, n_k with activations a_i, a_k • Attention neurons are linked to one another by inhibitory connections

11. STA Neural Network (2) [diagram: Gaussian function G] • The linking strengths w_kj between the input and the attention layers follow a Gaussian function G of the distance between corresponding neurons • The input to attention neuron n_i due to input stimuli x is the sum of the input activations weighted by these linking strengths
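The equation on this slide is an image; as a rough sketch (not the authors' code), the Gaussian linking strengths could be computed as follows, where the neuron positions, the function name, and sigma are illustrative assumptions:

```python
import numpy as np

def gaussian_linking_weights(attn_pos, input_pos, sigma=2.0):
    """Linking strengths w_kj between the input and attention layers, modelled
    as a Gaussian G of the distance between corresponding neurons.
    attn_pos: (K, 2) positions of attention neurons; input_pos: (J, 2) positions
    of input neurons; sigma is an illustrative value, not the paper's setting."""
    dist = np.linalg.norm(attn_pos[:, None, :] - input_pos[None, :, :], axis=-1)
    return np.exp(-dist**2 / (2.0 * sigma**2))
```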

12. STA Neural Network (3) [plot: “Mexican-hat” function of lateral interaction vs. lateral distance] • The input to attention neuron n_i due to lateral interaction follows a “Mexican-hat” profile: excitatory at short lateral distances and inhibitory farther away
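A minimal sketch of a Mexican-hat lateral-interaction profile, built as a difference of Gaussians; the coefficients and widths are illustrative, not values from the paper:

```python
import numpy as np

def mexican_hat(lateral_dist, sigma_e=1.0, sigma_i=3.0, c_e=1.5, c_i=0.75):
    """Difference-of-Gaussians approximation of the "Mexican-hat" lateral
    interaction: positive (excitatory) at short lateral distances, negative
    (inhibitory) farther away. All parameter values are illustrative."""
    excite  = c_e * np.exp(-lateral_dist**2 / (2.0 * sigma_e**2))
    inhibit = c_i * np.exp(-lateral_dist**2 / (2.0 * sigma_i**2))
    return excite - inhibit
```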

13. STA Neural Network (4) • The net input to attention neuron n_i combines its stimulus input and its lateral-interaction input [equation] • A threshold limits the effects of noise • d is a constant with -1 < d < 0

14. STA Neural Network (5) [plot: stimulus and activation vs. time t — the activation of an attention neuron in response to a stimulus]
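Putting slides 11-14 together, one plausible update of the attention layer is sketched below; since the slides show the equations only as images, the exact combination of terms here is an assumption rather than the authors' rule:

```python
import numpy as np

def sta_step(a, x, W, L, theta=0.1, d=-0.5):
    """One illustrative update of the STA attention layer.
    a: current attention activations; x: input stimuli; W: Gaussian linking
    strengths (attention x input); L: Mexican-hat lateral-interaction weights
    (attention x attention); theta: noise threshold; d: constant, -1 < d < 0."""
    stimulus = W @ x                     # input due to the input stimuli (slide 11)
    lateral  = L @ a                     # input due to lateral interaction (slide 12)
    net = stimulus + lateral
    net[np.abs(net) < theta] = 0.0       # threshold limits the effects of noise (slide 13)
    a_new = a + net + d * a              # accumulate net input, let old activation decay
    return np.clip(a_new, 0.0, 1.0)      # activation saturates at 1 (slide 14)
```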

15. ART2 Neural Network (1) (CART) [architecture diagram] • Attentional subsystem: input representation field F1 (sublayers w, x, u, v, p, q with gain-control units G) and category representation field F2 (activities y), driven by the input vector i • Orienting subsystem: monitors the match (via r) and sends a reset signal S to F2 through a signal generator when the input does not match the active category

16. ART2 Neural Network (2) • The activities on each of the six sublayers of F1 [equations], where I is an input pattern and the J-th node on F2 is the winner
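The F1 activities are shown only as equation images; because the module is based on standard ART2, a sketch of one pass over the six sublayers using the usual ART2 equations (with illustrative parameter values a, b, d, e, theta) might look like this:

```python
import numpy as np

def f(s, theta=0.1):
    """Piecewise-linear signal function used in ART2 to suppress noise."""
    return np.where(s >= theta, s, 0.0)

def art2_f1_pass(I, u, q, z_J=None, a=10.0, b=10.0, d=0.9, e=1e-9, theta=0.1):
    """One pass over the six F1 sublayers (w, x, v, u, p, q). z_J is the
    top-down weight vector of the winning F2 node J (None if no winner yet).
    On the very first pass, u and q are typically zero vectors."""
    w = I + a * u                            # w combines the input pattern I with feedback u
    x = w / (e + np.linalg.norm(w))          # x: normalized w
    v = f(x, theta) + b * f(q, theta)        # v: noise-suppressed x plus feedback from q
    u = v / (e + np.linalg.norm(v))          # u: normalized v
    p = u if z_J is None else u + d * z_J    # p adds the winner's top-down expectation
    q = p / (e + np.linalg.norm(p))          # q: normalized p
    return w, x, v, u, p, q
```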

17. ART2 Neural Network (3) • Initial weights [equations]: • Top-down weights • Bottom-up weights • Parameters
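The initialization values are likewise equation images; the sketch below follows the common ART2 prescription (top-down weights zero, bottom-up weights small and uniform) rather than the authors' exact numbers:

```python
import numpy as np

def art2_initial_weights(m, n_categories, d=0.9):
    """Standard ART2-style initialization. Top-down (F2 -> F1) weights start
    at zero so a new category does not distort its first input; bottom-up
    (F1 -> F2) weights start uniformly at 1 / ((1 - d) * sqrt(m)).
    m: input dimension; d is an illustrative parameter value."""
    top_down  = np.zeros((n_categories, m))
    bottom_up = np.full((n_categories, m), 1.0 / ((1.0 - d) * np.sqrt(m)))
    return top_down, bottom_up
```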

18. HAM Neural Network (1) (CHAM) [network diagram] • An input layer of neurons n_j carrying stimuli x_j, fully connected through excitatory connections w_ij to an output (competitive) layer with activities v_1, v_2, …, v_i, …, v_n

19. HAM Neural Network (2) • The input to neuron n_i due to input stimuli x [equation] • n_c: the winner after the competition
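As a hedged sketch of the competition described here, winner-take-all recall in the heteroassociative memory could look like the following; the function name and the one-hot output encoding are assumptions:

```python
import numpy as np

def cham_recall(x, W):
    """Winner-take-all recall in a heteroassociative memory: each output
    (competitive-layer) neuron receives the weighted sum of the input stimuli
    x, and n_c, the neuron with the largest input, wins the competition."""
    v = W @ x                  # input to every competitive-layer neuron
    c = int(np.argmax(v))      # index of the winner n_c
    y = np.zeros_like(v)
    y[c] = 1.0                 # only the winner stays active
    return c, y
```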

20. Road Sign Recognition System • Objectives: • Get information about the road • Warn drivers • Enhance traffic safety • Support other subsystems

21. Problems • Backlighting (contrary light) • Road signs placed side by side • Camera shaking • Occlusion

22. Information Acquisition • Color information (example: red color) • Shape information (example: red-color edges)
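A minimal sketch of how the red-color and red-edge cues could be extracted with OpenCV; the HSV thresholds and Canny parameters are assumptions, not the values used in the paper:

```python
import cv2

def red_color_and_edges(bgr):
    """Extract the two cues on this slide: red-color pixels (color
    information) and red-color edges (shape information)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    low  = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))     # red hue near 0
    high = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))  # red hue near 180 (wrap-around)
    red_mask = cv2.bitwise_or(low, high)                     # color information
    red_edges = cv2.Canny(red_mask, 50, 150)                 # shape information: red-color edges
    return red_mask, red_edges
```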

23. Results of the STA Neural Module — Adding Pre-attention [result images]

24. Locate Road Signs — Connected Components [result images]
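A short sketch of locating sign candidates by connected-component analysis of the attended (e.g. red) pixel mask; the minimum-area filter is an illustrative assumption:

```python
import cv2

def locate_sign_candidates(mask, min_area=100):
    """Group attended pixels into connected components and keep the bounding
    boxes of components large enough to be road-sign candidates.
    mask: 8-bit binary image; min_area is an illustrative threshold."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```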

25. Categorical Feature Extraction • Normalization: 50×50 pixels • Remove the background pixels • Features: • Red color horizontal projection: 50 elements • Green color horizontal projection: 50 elements • Blue color horizontal projection: 50 elements • Orange color horizontal projection: 50 elements • White and black color horizontal projection: 50 elements • Total: 250 elements in a feature vector
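A sketch of assembling the 250-element feature vector from the list above; `color_masks` is a hypothetical helper mapping each color name to a function that returns a 50×50 binary mask, standing in for the color segmentation described earlier:

```python
import cv2
import numpy as np

def categorical_feature_vector(sign_bgr, color_masks):
    """Five per-color horizontal projections of a 50x50 normalized sign,
    concatenated into one 250-element feature vector. `color_masks` is a
    hypothetical helper: name -> function returning a 50x50 binary mask
    (background pixels already removed)."""
    sign = cv2.resize(sign_bgr, (50, 50))       # normalization to 50x50 pixels
    parts = []
    for name in ("red", "green", "blue", "orange", "white_black"):
        mask = color_masks[name](sign)          # binary mask of this color
        parts.append(mask.sum(axis=1))          # horizontal projection: one value per row
    return np.concatenate(parts).astype(np.float32)   # 5 x 50 = 250 elements
```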

26. Conceptual Component — Classification Results of the CART [tables: training set and test set]

27. Conceptual Component — Training and Test Patterns for the CHAM

28. Conceptual Component — Training and Test Patterns for the CHAM

29. Conceptual Component — Another Set of Training Patterns for the CHAM

30. Experimental Results of the CHAM

31. Experimental Results

32. Other Examples

33. Discussion • Vehicle and camcorder vibration • Incorrect recognitions [figure: input patterns, recognition results, and correct patterns]

34. Conclusions (1) • Test data: 21 sequences • Detection rate (CART): 99% • Misdetection rate: 1% (11 frames) • Recognition rate (CHAM): 85% of detected road signs • Since the system outputs only one result per input sequence, this rate is sufficient for it to recognize road signs correctly

35. Conclusions (2) • A neural-based dynamic visual model • Three major components: sensory, perceptual, and conceptual • Future research: • Potential applications • Improvement of the DVM structure • DVM implementation
