Biologically Inspired Intelligent Systems
Lecture 08
Dr. Roger S. Gaborski
Biologically Inspired Object Categorization in Cluttered Scenes
• Classification System
• Preprocessor
• Feature Extraction Neural Network (FENN)
• Neural Network Classifier
• Training and Testing
• Cat – Dog Category Problem
• Car – Background Category Problem
Approaches
• The visual system has a hierarchical architecture
• Many approaches to implement this idea
• Layers of competitive neurons
The What and Where Pathways
• The human visual pathway can be divided into two pathways
• Ventral: the ‘what’ pathway
• Dorsal: the ‘where’ pathway
Ventral Pathway (red arrows)
• V1–V4: contours, color, texture
• Lateral occipital area and ventral occipito-temporal cortex (VOT): integrate local information to detect surfaces, objects, faces and places (a specific area responds to buildings, houses and vistas)
• Parahippocampal cortex and rhinal cortex: active when the brain interprets the stimulus in the context of stored memories
• Visual areas also contain mu-opioid receptors, which are involved in the modulation of pain and pleasure in other parts of the brain
http://www.condition.org/as65-6.htm (“Perceptual Pleasure and the Brain”, Irving Biederman, Edward A. Vessel)
The Ventral Pathway
• V1, V2, V4 and IT (inferotemporal lobe)
• Each area contains receptive fields of various sizes
• V1 and V2 extract low-level features from the visual field
• V4 extracts more complicated features
• Higher-level regions recognize objects regardless of size, rotation or location
Classification System (block diagram): Preprocessor (V1 and V2) → Feature Extraction Neural Network (FENN; V4) → Classifier Network (Higher Level Regions)
Preprocessor (block diagram): Color Image → Resize Image (128×128) → Convert to Grayscale → Gabor Filter Bank → Gabor Features (128×128×16) → Normalize Features → Input to FENN
Gabor Models of Directional Receptive Fields (filters shown at 0, 45, 90 and 135 degrees)
4 orientations × 4 frequencies = 16 filters
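A minimal NumPy/SciPy sketch of this preprocessing stage, assuming the input has already been resized to 128×128 and converted to grayscale. The specific frequencies, filter size, sigma, and the max-based normalization below are illustrative assumptions; the slides only specify 4 orientations × 4 frequencies and the 128×128×16 output.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=4.0, size=15):
    """Real-valued Gabor kernel at the given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr)
    return envelope * carrier

# 4 orientations (0, 45, 90, 135 degrees) x 4 frequencies = 16 filters
orientations = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
frequencies = [0.05, 0.10, 0.20, 0.40]                  # cycles/pixel (assumed values)
bank = [gabor_kernel(f, t) for f in frequencies for t in orientations]

def preprocess(gray128):
    """gray128: 128x128 grayscale image (already resized and converted).
    Returns a 128x128x16 normalized Gabor feature array, as in the slides."""
    feats = np.stack([convolve(gray128, k, mode='reflect') for k in bank], axis=-1)
    feats = np.abs(feats)
    return feats / (feats.max() + 1e-12)                # simple max normalization (assumed)
```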
FENN Architecture (diagram): Features from Gabor processing → First Layer (32×32 neurons, 256 connections) → Second Layer (32×32 neurons, 150 connections) → Third Layer (32×32 neurons, 150 connections) → Output Layer (32×32 neurons, 150 connections) → Feature Matrix
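One possible way to set up this connectivity in NumPy, assuming the connection counts on the slide are per neuron and that each neuron samples a random subset of the flattened previous stage (the slides give the counts but not how connections are drawn):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_neurons, n_inputs, n_connections):
    """Sparse, randomly connected layer: each neuron receives `n_connections`
    inputs chosen at random from the `n_inputs` available (an assumption)."""
    idx = np.stack([rng.choice(n_inputs, n_connections, replace=False)
                    for _ in range(n_neurons)])         # (n_neurons, n_connections)
    w = rng.random((n_neurons, n_connections))
    w /= np.linalg.norm(w, axis=1, keepdims=True)       # unit-length weight vectors
    return idx, w

n = 32 * 32                                             # 1024 neurons per layer
layer1 = make_layer(n, 128 * 128 * 16, 256)             # 256 connections from Gabor features
layer2 = make_layer(n, n, 150)                          # 150 connections from previous layer
layer3 = make_layer(n, n, 150)
layer4 = make_layer(n, n, 150)                          # output layer -> feature matrix
```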
Training
• After preprocessing
• All weights randomly initialized
• First, propagate the inputs forward and calculate each neuron’s output value
• Adjust the weights:
• Hebbian Learning – only consider the current output value
• Hebbian Trace Learning – consider the current and previous outputs
• Train one layer at a time, freezing weights layer by layer
• After training all four layers, freeze all weights in the FENN
Hebbian Training in Competitive Networks
• Neuron output: each neuron has an input vector x, an activation value h_i, an output firing rate y_i′, and a competitive interaction between neighboring neurons that produces the final firing rate y_i
h_i = ∑_j x_j w_ij
y_i′ = f(h_i), where f is a sigmoid
y_i = g(y_i′), where g is a nonlinear function that produces a contrast-enhanced result among neighboring neurons
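A small sketch of this three-stage computation. The sigmoid f follows the slide; the competitive function g is only described as nonlinear and contrast-enhancing, so the power-then-renormalize form below is an assumed stand-in:

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def neuron_outputs(x, W, p=4.0):
    """x: input vector; W: (n_neurons, n_inputs) weight matrix (dense here
    for simplicity). Returns the competitive firing rates y."""
    h = W @ x                          # h_i = sum_j x_j * w_ij
    y_prime = sigmoid(h)               # y'_i = f(h_i), f a sigmoid
    # g: contrast enhancement among neighboring neurons -- here a simple
    # "raise to a power and renormalize" nonlinearity (an assumed choice)
    y = y_prime ** p
    return y / (y.max() + 1e-12)
```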
Hebbian Training in Competitive Networks
• Weight adjustment: adjust the weights between connected neurons:
δw_ij = α y_i x_j
• If x_j and y_i are both large, the weight change is large (basic Hebbian learning)
• If either value is small, the weight change is small
• Normalize the resulting weight vector: ∑_j (w_ij)² = 1
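A compact sketch of this update together with the per-neuron weight normalization (the learning rate α below is illustrative):

```python
import numpy as np

def hebbian_update(W, x, y, alpha=0.01):
    """Basic Hebbian update: dw_ij = alpha * y_i * x_j, followed by
    renormalizing each neuron's weight vector to unit length
    (sum_j w_ij^2 = 1), as on the slide."""
    W = W + alpha * np.outer(y, x)                    # large x_j and y_i -> large change
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # per-neuron normalization
    return W
```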
Trace Hebbian Learning
Δw_j = α y*_τ x_j
y*_τ = (1 − η) y_τ + η y_(τ−1)
where:
y_τ – output of the neuron
y_(τ−1) – output of the neuron from the previous time step
η – trace value
Foldiak 1991, Wallis 1996, Rolls and Milward 2000
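A corresponding sketch of the trace rule; the trace value η = 0.8 is illustrative, not from the slides:

```python
import numpy as np

def trace_hebbian_update(W, x, y, y_prev, alpha=0.01, eta=0.8):
    """Trace Hebbian update: the post-synaptic term is a trace
    y* = (1 - eta) * y + eta * y_prev, so the response to the previous
    time step still influences the current weight change."""
    y_trace = (1.0 - eta) * y + eta * y_prev
    W = W + alpha * np.outer(y_trace, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # keep unit-length weight vectors
    return W, y_trace
```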
Classifier Network (Higher Level Regions, diagram): Input Layer (32×32 neurons, the Output Layer of the FENN) → Hidden Layer (32×32 neurons) → Output Layer (3 neurons); feed-forward, fully connected
Training the Classifier Network
• First, the FENN is trained as previously described
• A labeled image is processed by the preprocessor and the FENN
• The output of the FENN is applied to the Classifier Network
• The Classifier is trained using back error propagation (BEP)
• Train the Classifier for 100–200 epochs
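A minimal sketch of this classifier and its BEP training loop, assuming squared-error back-propagation with sigmoid units; the learning rate is illustrative, and the 3-element target codes follow the later slides ([1 0 1] for cat, [0 1 0] for dog):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_classifier(features, targets, lr=0.1, epochs=150, seed=0):
    """features: (N, 1024) FENN output feature vectors; targets: (N, 3)
    category codes, e.g. [1, 0, 1] for cat and [0, 1, 0] for dog.
    Plain back error propagation (BEP) with squared-error loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.01, (1024, 1024))      # input (32x32) -> hidden (32x32)
    W2 = rng.normal(0.0, 0.01, (3, 1024))         # hidden -> 3 output neurons
    for _ in range(epochs):                       # slides suggest 100-200 epochs
        for x, t in zip(features, targets):
            h = sigmoid(W1 @ x)                   # forward pass
            o = sigmoid(W2 @ h)
            d_out = (o - t) * o * (1.0 - o)       # backward pass (delta rule)
            d_hid = (W2.T @ d_out) * h * (1.0 - h)
            W2 -= lr * np.outer(d_out, h)         # weight updates
            W1 -= lr * np.outer(d_hid, x)
    return W1, W2
```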
CAT and DOG Categorization
Database Source: Bruce Draper, Colorado State University
Representative Images from the Databases (training and testing examples)
Feature Extraction using Gabor Filters (filter responses arranged by frequency and orientation)
Feature Extraction Neural Network (FENN): response of each of the four layers of the FENN
Output of FENN: Typical Trained CAT and DOG Feature Matrices (most active neurons for a cat and for a dog)
CAT Model: CAT and DOG test images compared with the feature matrix (panels: response to a dog image, response to a cat image)
Red pixel = correct classification; yellow pixel = false positive; green pixel = false negative (missing features)
DOG Model: CAT and DOG test images compared with the feature matrix (panels: response to a cat image, response to a dog image)
Red pixel = correct classification; yellow pixel = false positive; green pixel = false negative (missing features)
Cat Model Response to the Cat and Dog Databases (testing data)
The number of fired neurons when tested with 20 cat images: green line – the number of fired neurons responding to dog features; red line – the number of fired neurons responding to cat features.
Dog Model Response to the Cat and Dog Databases (testing data)
The number of fired neurons when tested with 20 dog images: green line – the number of fired neurons responding to dog features; red line – the number of fired neurons responding to cat features.
Classifier Trained on Training Data
Correct Classification for cat on testing data (Target: cat [1 0 1])
[ 0.96784691684662  0.029860456759128  0.967780953419655 ]
[ 0.808654146731156  0.193775062928928  0.810705977968617 ]
[ 0.862624569815469  0.114895121354981  0.88715233725468 ]
Classifier Results
• Correct Classification for cat (Target: cat [1 0 1])
[ 0.79969584530003  0.194233820328146  0.827135606046827 ]
[ 0.973197302169496  0.0306113760382465  0.974749260213711 ]
[ 0.800949464100721  0.189424785303809  0.798697594171034 ]
Classifier Results
• Correct Classification for dog (Target: dog [0 1 0])
[ 0.0075413424837778  0.989143744690816  0.0099910394357271 ]
[ 0.014179073633550  0.987664666208226  0.012902572374938 ]
[ 0.054152149985585  0.956211717101485  0.050471088281532 ]
Classifier Results
• Correct Classification for dog (Target: dog [0 1 0])
[ 0.004023583781089  0.994839767880561  0.004907675442155 ]
[ 0.005174706300875  0.994262086212736  0.0049275674213702 ]
[ 0.013550412236110  0.98434104269817  0.016463451997677 ]
Classifier Results
• Incorrect Classification
[ 0.978011606173325  0.0258301491858342  0.974828097071138 ]
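The slides do not state how these 3-element outputs are turned into a category decision; one plausible rule, sketched below, is to pick the target code nearest to the output vector:

```python
import numpy as np

TARGETS = {"cat": np.array([1.0, 0.0, 1.0]),     # category codes from the slides
           "dog": np.array([0.0, 1.0, 0.0])}

def classify(output):
    """Assign the category whose target code is closest to the classifier
    output (an assumed decision rule, not stated on the slides)."""
    return min(TARGETS, key=lambda c: np.linalg.norm(output - TARGETS[c]))

# Example: the first "correct cat" vector above
print(classify(np.array([0.96784691684662, 0.029860456759128, 0.967780953419655])))  # -> cat
```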
Results: response vs. truth confusion matrix
Testing: 20 cat images and 20 dog images
Car – No Car Category
• The goal of this research is to develop a system that can detect the presence of cars in a cluttered image
• The cars can be of any color or size and seen from any point of view
Car – No Car Category
• Preprocessor to extract Gabor features
• FENN Training
– Segmented cars (no background)
• Classifier Training
– Images with cars and background
– Images with background only (no cars)
• Testing Databases
– Images with cars and background
– Images with background only (no cars)
Feature Extraction Neural Network (FENN): response of each of the four layers of the FENN
Training Error for BEP (error curve over 100 epochs)
Targets: CAR = [0 1 0], NON-CAR = [1 0 1]
Limitations of Size Invariance (example test images: two classified correctly, one incorrectly)
Correct Classification [ 0.00990219614175935 0.990010344793649 0.0115420682481149 ] [ 0.0943568509852068 0.919607740407239 0.0858715025324481 ] [ 0.0041841578577629 0.995940735065233 0.00474821901452296 ]
Correct Classification [ 0.829762256975043 0.198917252447956 0.820634806010072 ] [ 0.941890493682531 0.0641537339115563 0.944403961815936 ] [ 0.822300973953372 0.168583446615338 0.748539715282723 ]
Results: response vs. truth confusion matrix
Testing: 25 background-only images and 50 car-and-background images