Evolutionary Feature Extraction for SAR Air to Ground Moving Target Recognition – a Statistical Approach Evolving Hardware Dr. Janusz Starzyk, Ohio University
Neural Network Data Classification • Concept of “Logic Brain” • Random learning data generation • Multiple space classification of data • Feature function extraction • Dynamic selectivity strategy • Training procedure for data identification • FPGA implementation for fast training process
Neural Network Data Classification Abdulqadir Alaqeeli and Jing Pang • Concept of “Logic Brain” • Threshold setup converts the analog world to the digital world • A “Logic Brain” is possible based on an artificial neural network • Random learning data generation • Gaussian-distributed random multi-dimensional data generation • Half of the data sets are prepared for the learning procedure • The other half is used later for the training procedure
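The Gaussian data generation and half/half split described above might look like the following sketch (illustrative names and parameters, not the authors' code):

```python
import numpy as np

# Illustrative sketch: draw multi-dimensional Gaussian samples for two
# classes, then split each class in half -- one half for the learning
# procedure, the other half held out for the later procedure.
rng = np.random.default_rng(0)

def generate_class_data(mean, cov, n_samples):
    """Draw n_samples points from a multivariate Gaussian distribution."""
    return rng.multivariate_normal(mean, cov, size=n_samples)

dim = 3  # assumed dimensionality, purely for illustration
class_a = generate_class_data(np.zeros(dim), np.eye(dim), 200)
class_b = generate_class_data(np.full(dim, 2.0), np.eye(dim), 200)

# Half of each data set for learning, the other half held out.
learn_a, held_a = class_a[:100], class_a[100:]
learn_b, held_b = class_b[:100], class_b[100:]
```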
Neural Network Data Classification • Multiple space classification of data • Each space can be represented by a set of minimum base vectors • Feature function extraction and dynamic selection strategy • Conditional entropy extracts the information in each subspace • Different combinations of base vectors compose the redundant sets of new subspaces (expansion strategy) • Minimum function selection (shrinking strategy)
Neural Network Data Classification • FPGA implementation for a fast training process • Learning results are saved on board • Testing data sets are generated on board and sent through the artificial neural network, also generated on board, to test the successful data classification rate • The results are displayed on board • Promising applications • Especially useful for feature extraction from large data sets • Catastrophic circuit fault detection
Information Index: Background • Figure: scatter of Class A samples (X) and Class B samples (O) • A priori class probabilities are known • Entropy measure based on conditional probabilities
Information Index: Background • P1 and P2 are a priori class probabilities • P1w and P2w are conditional probabilities of correct classification for each class • P12w and P21w are conditional probabilities of misclassification given a test signal • P1w, P2w, P12w and P21w are calculated using Bayesian estimates of their probability density functions
Information Index: Background • Probability density functions of P1w, P2w, P12w, and P21w
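A minimal sketch of how the four conditional probabilities could be estimated for two one-dimensional Gaussian classes under a Bayes decision rule (a stand-in illustration, not the paper's Bayesian density estimator; the class parameters and simulation approach are assumptions):

```python
import math
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Gaussian probability density, evaluated elementwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def conditional_probs(mu1, s1, mu2, s2, p1, p2, n=100_000):
    """Estimate P1w, P2w (correct classification) and P12w, P21w
    (misclassification) by simulating test signals from each class and
    applying the Bayes decision rule: choose class 1 where p1*pdf1 > p2*pdf2."""
    rng = np.random.default_rng(1)
    x1 = rng.normal(mu1, s1, n)  # test signals truly from class 1
    x2 = rng.normal(mu2, s2, n)  # test signals truly from class 2
    correct1 = p1 * gauss_pdf(x1, mu1, s1) > p2 * gauss_pdf(x1, mu2, s2)
    correct2 = p2 * gauss_pdf(x2, mu2, s2) > p1 * gauss_pdf(x2, mu1, s1)
    p1w, p21w = correct1.mean(), 1 - correct1.mean()
    p2w, p12w = correct2.mean(), 1 - correct2.mean()
    return p1w, p2w, p12w, p21w
```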
Direct Integration • Figure: nonuniform grid (spacing Si < Sk) vs. uniform grid (spacing Si = Sk) • For N dimensions, m^N grid points are needed to estimate the integral
Monte Carlo Integration • Figure: two overlapping densities pdf1 and pdf2; points xi are generated with pdf1 and weighted by W(xi)
Information Index: Monte Carlo Integration • To integrate the probability density function • generate random points xi with pdf1 • weight generated points according to • estimate the conditional probability P1w using
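The Monte Carlo integration idea above can be illustrated with the following importance-sampling sketch (the weight formula w(xi) = pdf2(xi)/pdf1(xi), the example densities, and the decision region are assumptions standing in for the slide's equations):

```python
import math
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Gaussian probability density, evaluated elementwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Generate random points x_i with pdf1, weight each point by
# w(x_i) = pdf2(x_i) / pdf1(x_i), then average the weighted indicator of
# the region of interest.  This estimates an integral of pdf2 while only
# drawing samples from pdf1 (importance sampling).
rng = np.random.default_rng(2)
n = 200_000
x = rng.normal(0.0, 1.0, n)                            # x_i generated with pdf1 = N(0, 1)
w = gauss_pdf(x, 1.0, 1.0) / gauss_pdf(x, 0.0, 1.0)    # importance weights
estimate = np.mean(w * (x > 0.5))                      # ~ integral of pdf2 over x > 0.5

# Exact value for comparison: P(N(1, 1) > 0.5) = Phi(0.5)
exact = 0.5 * (1 + math.erf(0.5 / math.sqrt(2)))
```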
Information Index: Status • MIIFS was generalized to continuous distributions • N-dimensional information index was developed • Efficient N-dimensional integration was used • Information error analysis was performed • Information index can be used with non-Gaussian distributions • For small training sets and a low information index, the information error is larger than the information
Optimum Transformation: Background • Principal Component Analysis (PCA) based on Mahalanobis distance suffers from scaling • PCA assumes Gaussian distributions and estimates covariance matrices and mean values • PCA is sensitive to outliers • Wavelets provide compact data representation and improve recognition • The improvement shows no statistically significant difference in recognition across different wavelets • Need for a specialized transformation
Optimum Transformation: Haar Wavelet • Example
Optimum Transformation: Haar Wavelet • Repeat the average-and-difference step log2(n) times
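The repeated average-and-difference step can be sketched as follows (a minimal Python sketch using the (a+b)/2 and (a−b)/2 convention; the coefficient ordering is an assumption):

```python
def haar(signal):
    """One full Haar decomposition: replace each pair (a, b) with its
    average (a+b)/2 and difference (a-b)/2, then repeat on the averages
    log2(n) times.  Returns [final average, coarse..fine differences]."""
    out = list(signal)
    n = len(out)
    coeffs = []
    while n > 1:
        avgs = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(n // 2)]
        diffs = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(n // 2)]
        coeffs = diffs + coeffs      # prepend so coarse levels come first
        out[: n // 2] = avgs         # next pass works on the averages
        n //= 2
    return [out[0]] + coeffs
```

For example, `haar([1, 3, 5, 7])` yields `[4, -2, -1, -1]`: first pass gives averages `[2, 6]` and differences `[-1, -1]`, second pass gives average `4` and difference `-2`.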
Optimum Transformation: Haar Wavelet • Waveform interpretation
Optimum Transformation: Haar Wavelet • Matrix interpretation • b = W*a, where W is the Haar transform matrix
Optimum Transformation: Haar Wavelet • Matrix interpretation for a class of signals: B = W*A, where A is the (n x m) input signal matrix • Selection of the n best coefficients is performed using the information index: Bs1 = S1*W*A, where S1 is an (n x n*log2(n)) selection matrix
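The matrix interpretation and coefficient selection can be sketched as follows. This is a one-level illustration only: the information index is replaced by a stand-in score (row variance), and the matrix sizes are small examples rather than the slide's (n x n*log2(n)) dimensions:

```python
import numpy as np

def haar_level_matrix(n):
    """One Haar average/difference step on a length-n signal as a linear
    operator: first n/2 rows compute averages, last n/2 rows differences."""
    w = np.zeros((n, n))
    for i in range(n // 2):
        w[i, 2 * i] = w[i, 2 * i + 1] = 0.5     # (a + b) / 2
        w[n // 2 + i, 2 * i] = 0.5              # (a - b) / 2
        w[n // 2 + i, 2 * i + 1] = -0.5
    return w

W = haar_level_matrix(4)
a = np.array([1.0, 3.0, 5.0, 7.0])
b = W @ a                                       # b = W*a

# Selection: B = W*A for an (n x m) signal matrix A, then keep the k rows
# with the largest score (variance here stands in for the information index).
A = np.array([[1, 3, 5, 7], [2, 2, 6, 6]], dtype=float).T   # two signals as columns
B = W @ A
score = B.var(axis=1)
S = np.eye(4)[np.argsort(score)[::-1][:2]]      # (k x n) selection matrix
Bs = S @ B                                      # Bs = S*W*A
```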
Optimum Transformation: Evolutionary Iterations • Iterating on the selected result: Bs2 = S2*W*Bs1, where S2 is a selection matrix, or Bs2 = S2*W*S1*W*A • After k iterations: Bsk = Sk*W* ... *S2*W*S1*W*A • So the optimized transformation matrix T = Sk*W* ... *S2*W*S1*W can be obtained from the Haar wavelet
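The iterated construction collapses into a single matrix that can be applied to new signals in one step. A small sketch (square selection matrices are used here for simplicity, so the dimension does not shrink between passes):

```python
import numpy as np

def evolve_transform(W, selections):
    """Compose T = S_k * W * ... * S_2 * W * S_1 * W, with S_1 applied
    first.  W is the wavelet operator; each S is a selection matrix."""
    T = np.eye(W.shape[1])
    for S in selections:
        T = S @ W @ T
    return T
```

With identity selections the composed transform reduces to repeated application of W, so `evolve_transform(W, [I, I]) @ a` equals `W @ (W @ a)`.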
Optimum Transformation: Evolutionary Iterations • Learning with the evolved features
Optimum Transformation: Evolutionary Iterations • Waveform interpretation of T rows
Optimum Transformation: Evolutionary Iterations • Figure: original signals, their mean values, and the evolved transformation (signal value vs. bin index)
Two Class Training • Training on HRR signals: 17° depression angle profiles of BMP2 and BTR60
Wavelet-Based Reconfigurable FPGA for Classification • Figure: m 8-bit samples in a time window pass through the Haar wavelet transform; k selected 8-bit coefficients (k < m) feed the neural network, which indicates whether the input signal is recognized
Block Diagram of the Parallel Architecture • Figure: eight inputs (0–7) feed parallel pairwise average stages, e.g. (0+1)/2
Simplified Block Diagram of the Serial Architecture • Figure: alternating registered average (A) and registered difference (D) stages, with R denoting a register implemented in IOBs or CLBs; the first pass (blue) computes averages such as (0+1)/2 and (2+3)/2 and differences such as (0-1) and (2-3); the second pass (green) operates on the averages
RAM-Based Wavelet • Figure: Data In feeds a chain of 16x8 RAM blocks and processing elements (PE), each with write/read addressing (WA/RA) and control logic; Start initiates the pipeline and Done signals completion
The Processing Element • Figure: processing-element datapath
Results: For One Iteration of the Haar Wavelet • For 8 samples: • Parallel arch.: 120 CLBs, 128 IOBs, 58 ns • Serial arch.: 98 CLBs*, 72 IOBs, 148 ns* • The parallel architecture wins for larger numbers of samples. • For 16 samples: • Parallel arch.: 320 CLBs, 256 IOBs, 233 ns • RAM-based arch.: 136 CLBs, 16 IOBs, ~1 µs • The RAM-based architecture wins, since ~1 µs is not so slow. ------------------------------------------------------------ * These values increase very quickly as the number of samples grows, and the delay becomes much higher.
Reconfigurable Haar-Wavelet-Based Architecture • Figure: a chain of processing elements (PE) fed by the data bus
Test Results • Testing on HRR signals: 15° depression angle profiles of BMP2 and BTR60 • With 15 features selected, correct classification for BMP2 data is 69.3% and for BTR60 is 82.6% • Comparable results in the SHARP confusion matrix: 56.7% for BMP2 data and 67% for BTR60
Problem Issues • BTR60 signals with 17° and 15° depression angles do not have compatible statistical distributions
Problem Issues • BMP2 and BTR60 signal distributions are not Gaussian
Work Completed • Information index and its properties • Multidimensional MC integration • Information as a measure of learning quality • Information error • Wavelets and their effect on pattern recognition • Haar wavelet as a linear matrix operator • Evolution of the Haar wavelet • Statistical support for classification
Recommendations and Future Work • Training data must represent a statistical sample of all signals, not a hand-picked subset • Probability density functions will be approximated using a parametric or NN approach • The information measure will be extended to k-class problems • Training and testing will be performed on 12-class data • Dynamic clustering will prepare a decision tree structure • A hybrid, evolutionary classifier will be developed