Bit-True Modeling of Neural Network • SILab presentation • Ali Ahmadi • June 2007
Outline • Introduction: structures of Neural Networks • Hopfield • LAM • BAM • Bit-True Arithmetic • Training modes for NN hardware • Bit-True model of networks • Simulation results
Hopfield Network • Single layer • Fully connected [1]
Weight calculation: Tji = Tij = Σp api·apj if i ≠ j; Tii = 0; where api is the ith element of the pth pattern • Updating a neuron: for the state vector u, Sj = Σi Tij·ui, then uj = +1 if Sj > 0, uj = −1 if Sj < 0, and uj is left unchanged if Sj = 0 [1]
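A minimal C++ sketch of the two steps above, assuming bipolar (−1/+1) pattern elements; the names HopfieldWeights, UpdateNeuron and the Pattern type are illustrative, not taken from the original code:

#include <vector>

using Pattern = std::vector<int>;                      // elements in {-1, +1}

// Build the symmetric weight matrix: Tij = sum_p a_pi * a_pj for i != j, Tii = 0.
std::vector<std::vector<int>> HopfieldWeights(const std::vector<Pattern>& a)
{
    const int N = a.empty() ? 0 : static_cast<int>(a[0].size());
    std::vector<std::vector<int>> T(N, std::vector<int>(N, 0));
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            if (i != j)
                for (const Pattern& p : a)
                    T[i][j] += p[i] * p[j];
    return T;
}

// One asynchronous update of neuron j: Sj = sum_i Tij * ui, uj follows the sign of Sj.
void UpdateNeuron(std::vector<int>& u, const std::vector<std::vector<int>>& T, int j)
{
    int S = 0;
    for (std::size_t i = 0; i < u.size(); ++i)
        S += T[j][i] * u[i];
    if (S > 0) u[j] = 1;
    else if (S < 0) u[j] = -1;                          // S == 0: uj is left unchanged
}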
LAM (Linear Associative Memory) Network • A single-layer feed-forward network • Recovers the output pattern from full or partial information in the input pattern [1]
Weight calculation: Wij = Σm (2·aim − 1)(2·bjm − 1), where aim is the ith element of the mth pattern; each output neuron i has a threshold value Ti • Output calculation: for input pattern b the output pattern is a: Ui = Σj Wij·bj, then ai = 1 if Ui > Ti, otherwise ai = 0
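A minimal C++ sketch of the LAM weight calculation and recall, assuming binary (0/1) patterns; because the slide does not reproduce the threshold formula, the thresholds T are taken here as an input, and the names LamWeights and LamRecall are illustrative:

#include <vector>

using Pattern = std::vector<int>;                      // elements in {0, 1}

// Wij = sum_m (2*a_im - 1) * (2*b_jm - 1)
std::vector<std::vector<int>> LamWeights(const std::vector<Pattern>& a,
                                         const std::vector<Pattern>& b)
{
    const int rows = static_cast<int>(a[0].size());    // output dimension
    const int cols = static_cast<int>(b[0].size());    // input dimension
    std::vector<std::vector<int>> W(rows, std::vector<int>(cols, 0));
    for (std::size_t m = 0; m < a.size(); ++m)
        for (int i = 0; i < rows; ++i)
            for (int j = 0; j < cols; ++j)
                W[i][j] += (2 * a[m][i] - 1) * (2 * b[m][j] - 1);
    return W;
}

// Recall: Ui = sum_j Wij * bj, then ai = 1 if Ui exceeds the threshold Ti, else 0.
Pattern LamRecall(const std::vector<std::vector<int>>& W,
                  const Pattern& b, const std::vector<int>& T)
{
    Pattern a(W.size(), 0);
    for (std::size_t i = 0; i < W.size(); ++i) {
        int U = 0;
        for (std::size_t j = 0; j < b.size(); ++j)
            U += W[i][j] * b[j];
        a[i] = (U > T[i]) ? 1 : 0;
    }
    return a;
}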
BAM (Bidirectional Associative Memory) Network • Bidirectional • Two layers with different dimensions • For each stored pattern we have a pair (a, b), one vector per layer
In the X-to-Y pass the weights are W; in the Y-to-X pass they are WT • Weight calculation: Wij = Σm (2·aim − 1)(2·bjm − 1) • Output calculation: in the forward pass the input of the jth neuron in layer Y is Net y(j) = Σi xi·Wij, then yj = 1 if Net y(j) > 0, yj = 0 if Net y(j) < 0, and yj is left unchanged if Net y(j) = 0; in the backward pass the input of the jth neuron in layer X is Net x(j) = Σi yi·Wji, and xj is obtained with the same rule [1]
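A minimal C++ sketch of one BAM recall iteration, again assuming binary (0/1) states; W is built exactly as in the LAM sketch above, and the function names are illustrative:

#include <vector>

// Forward pass X -> Y: Net_y(j) = sum_i x_i * W[i][j]; hard-limit, keeping the
// previous value of y[j] when the net input is exactly zero.
void BamForward(const std::vector<std::vector<int>>& W,
                const std::vector<int>& x, std::vector<int>& y)
{
    for (std::size_t j = 0; j < y.size(); ++j) {
        int net = 0;
        for (std::size_t i = 0; i < x.size(); ++i)
            net += x[i] * W[i][j];
        if (net > 0) y[j] = 1;
        else if (net < 0) y[j] = 0;
    }
}

// Backward pass Y -> X uses the transposed weights: Net_x(j) = sum_i y_i * W[j][i].
void BamBackward(const std::vector<std::vector<int>>& W,
                 const std::vector<int>& y, std::vector<int>& x)
{
    for (std::size_t j = 0; j < x.size(); ++j) {
        int net = 0;
        for (std::size_t i = 0; i < y.size(); ++i)
            net += y[i] * W[j][i];
        if (net > 0) x[j] = 1;
        else if (net < 0) x[j] = 0;
    }
}

The two passes are repeated until the pair (x, y) stops changing, which gives the recalled association.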
Bit-True Arithmetic • SUM • Inputs are 2's complement with length (WL − 1); the output is 2's complement with length WL • If the inputs do not have the same sign, the sign extension is determined from the carry
Bit-True Arithmetic • Multiply • Inputs are 2's complement with length WL; the output is 2's complement with length WL
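A minimal C++ sketch of the bit-true helpers used on the code slides that follow. Only the function names come from the original code; the LSB-first bit-vector representation and the exact argument conventions are assumptions:

#include <vector>

// Assumed representation: a 2's-complement word stored as an LSB-first vector of 0/1 bits.
using Word = std::vector<int>;

// Quantize an integer into a wordLength-bit 2's-complement word
// (values outside the representable range simply wrap, modelling overflow).
Word Decimal2TwosComplement(long long value, int wordLength)
{
    Word bits(wordLength, 0);
    const unsigned long long u = static_cast<unsigned long long>(value);
    for (int k = 0; k < wordLength; ++k)
        bits[k] = static_cast<int>((u >> k) & 1ull);
    return bits;
}

// Convert the low wordLength bits back to a decimal value (the sign bit has negative weight).
long long TwosComplement2Decimal(const Word& bits, int wordLength)
{
    long long value = 0;
    for (int k = 0; k < wordLength - 1; ++k)
        value += static_cast<long long>(bits[k]) << k;
    value -= static_cast<long long>(bits[wordLength - 1]) << (wordLength - 1);
    return value;
}

// Bit-true addition: both operands are sign-extended to the WL-bit output length
// and added with a ripple-carry adder; a carry out of the MSB wraps around.
Word SumBitTruePrecise(Word a, Word b, int inputLength)
{
    const int outLength = inputLength + 1;
    const int signA = a.back(), signB = b.back();
    a.resize(outLength, signA);                        // sign extension
    b.resize(outLength, signB);
    Word s(outLength, 0);
    int carry = 0;
    for (int k = 0; k < outLength; ++k) {
        const int t = a[k] + b[k] + carry;
        s[k] = t & 1;
        carry = t >> 1;
    }
    return s;
}

// Bit-true multiplication: the full product is wrapped into the WL-bit output;
// this truncation is where the finite-precision error of the model enters.
Word MulBitTruePrecise(const Word& a, const Word& b, int inputLength)
{
    const long long product =
        TwosComplement2Decimal(a, static_cast<int>(a.size())) *
        TwosComplement2Decimal(b, static_cast<int>(b.size()));
    return Decimal2TwosComplement(product, inputLength + 1);
}

With these helpers, each weighted-sum loop on the following slides can be evaluated once in full precision and once through the finite-word-length model, so the two results can be compared for every word length.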
Training modes for Neural Network hardware • Off-chip learning: the training process is performed off-chip with high precision; the forward propagation pass in the recall phase is performed on-chip. • Chip-in-the-loop learning: the chip is used during training, but only for forward propagation. • On-chip learning: training is done entirely on-chip; sensitive to the use of limited-precision weights.
Bit-true Model of Hopfield • Part of the code for updating a neuron:
sum += t[j][i] * neuron[i]; // high-precision arithmetic
// arithmetic with finite word length
b1 = t[j][i];
a1 = neuron[i];
a = Decimal2TwosComplement(a1, WordLength - 1);
b = Decimal2TwosComplement(b1, WordLength - 1);
c = MulBitTruePrecise(a, b, WordLength - 1);
s = Decimal2TwosComplement(sum, WordLength - 1);
s1 = SumBitTruePrecise(s, c, WordLength - 1);
sum = TwosComplement2Decimal(s1, WordLength);
Bit-true Model of LAM • Part of the code that calculates the value of an output neuron for an input pattern (propagation):
RawOutVect[i] += W[i][j] * inVect[j]; // high-precision arithmetic
// arithmetic with finite word length
b1 = W[i][j];
a1 = inVect[j];
b = Decimal2TwosComplement(b1, WordLength - 1);
a = Decimal2TwosComplement(a1, WordLength - 1);
c = MulBitTruePrecise(a, b, WordLength - 1);
s = Decimal2TwosComplement(RawOutVect[i], WordLength - 1);
s1 = SumBitTruePrecise(s, c, WordLength - 1);
RawOutVect[i] = TwosComplement2Decimal(s1, WordLength);
Bit-true Model of BAM
// high-precision arithmetic
Sum += To->Weight[i][j] * From->Output[j];
// arithmetic with finite word length
a1 = To->Weight[i][j];
b1 = From->Output[j];
a = Decimal2TwosComplement(a1, WordLength - 1);
b = Decimal2TwosComplement(b1, WordLength - 1);
c = MulBitTruePrecise(a, b, WordLength - 1);
s = Decimal2TwosComplement(Sum, WordLength - 1);
s1 = SumBitTruePrecise(s, c, WordLength - 1);
Sum = TwosComplement2Decimal(s1, WordLength);
Simulation result of the Hopfield network: input pattern used to train the network, input test pattern, and output patterns for different word lengths (4, 5, 6, 7, 8, and 32 bit).
Simulation result of the LAM network: input patterns used to train the network, input test patterns, and output patterns for WL = 5, 6, 7, and 32 bit.
Training pairs (input pattern for layer X | input pattern for layer Y):
"TINA " | "6843726"
"ANTJE" | "8034673"
" LISA " | "7260915"
Input test patterns: "TANE ", "ANTJE", "RISE "
Simulation result of the BAM network • Output for WL = 32 bit:
TINA -> | TINA -> 6843726
ANTJE -> | ANTJE -> 8034673
LISA -> | LISA -> 7260915
6843726 -> | 6843726 -> TINA
8034673 -> | 8034673 -> ANTJE
7260915 -> | 7260915 -> LISA
TANE -> | TINA -> 6843726
ANTJE -> | ANTJE -> 8034673
RISE -> | DIVA -> 6060737
Simulation result of the BAM network • Output for WL = 2 bit:
TINA @ -> | TINA @ -> FENHGKO?
ANTJE@ -> | &165:? -> _+87&9)@
LISA @ -> | LISA @ -> FENHGKO?
6843726@ -> | 6843726@ -> ^L^;GI
8034673@ -> | 8034673@ -> &165:?
7260915@ -> | 7260915@ -> &165:?
TANE @ -> | TANE @ -> FENHGKO?
ANTJE@ -> | YNIJE@ -> H0(@=^/5
RISE @ -> | RISE @ -> "#.$6Z7,
Simulation result of the BAM network • Output for WL = 3 bit:
TINA -> | TINA -> 8034673
ANTJE -> | TINA -> 8034673
LISA -> | TINA -> 8034673
6843726 -> | 6060737 -> DIVA
8034673 -> | 8034673 -> TINA
7260915 -> | 8034673 -> TINA
TANE -> | TINA -> 8034673
ANTJE -> | +61>_? -> GOLKIHL?
RISE -> | TINA -> 8034673
Simulation result of the BAM network • Output for WL = 8 bit:
TINA -> | TINA -> 6843726
ANTJE -> | ANTJE -> 8034673
LISA -> | LISA -> 7260915
6843726 -> | 6843726 -> TINA
8034673 -> | 8034673 -> ANTJE
7260915 -> | 7260915 -> LISA
TANE -> | TINA -> 6843726
ANTJE -> | ANTJE -> 8034673
RISE -> | DIVA -> 6060737
References
[1] A. S. Pandya, Pattern Recognition with Neural Networks in C++, New York: IEEE Press.
[2] P. Moerland and E. Fiesler, "Neural Network Adaptation for Hardware Implementation," in Handbook of Neural Computation, Jan. 1997.