Fixed-Point Model Speaker: Team 5 黃柏森 趙敏安 Mentor: 陳圓覺 Adviser: Prof. An-Yeu Wu Date: 200/12/13
Outline • Architecture • Matlab Code • Simulation Result • Analysis • RTL Code
Architecture • 64-point R2²SDF (radix-2² single-path delay feedback) • Quantization for fixed-point simulation • Input quantization • BF saturation • Twiddle factor quantization • Multiplication truncation • (Block diagram of the pipeline stages not reproduced here)
Floating-point C Model • Structure: 1. Twiddle factors 2. Connected circuit lines 3. Reused BF1/BF2 butterflies 4. Shift registers • Flip-flop registers cannot store floating-point data, so a quantization operation is necessary.
W3 = [exp(-i*4*pi/N) exp(-i*2*pi/N) exp(-i*6*pi/N) 1];
tmpw3(clk+1) = tmp3(1) * ( W3(mod(fix(clk/4),4)+1) )^(mod(clk,4));
tmp2 = BF1(reg2(mod(clk,2)+1), tmpw3(clk+1), count(2));
reg2(mod(clk,2)+1) = tmp2(2);
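The BF1 call above returns a pair: an output for the next stage and a value written back into the feedback register. A hedged Matlab sketch of what such a single-path delay-feedback butterfly typically does, inferred from the call site rather than taken from the team's code:
function out = BF1(reg_val, in_val, ctrl)
% Hedged sketch of an SDF butterfly (not the project's actual implementation).
    if ctrl                          % second half of the schedule: add/subtract
        out = [reg_val + in_val, reg_val - in_val];   % [to next stage, back into the shift register]
    else                             % first half: forward the register value, store the new input
        out = [reg_val, in_val];
    end
end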
Algorithm for Quantization Ex. sign_bit=1, bit_int=2, bit_fra=2 • in_adj := in * (2^bit_fra) • Shift "." right by bit_fra digits • If in_adj < lower boundary, → in_adj := lower boundary. If in_adj > upper boundary, → in_adj := upper boundary. Else, in_adj := [in_adj] • [x] means the largest integer ≤ x • out := in_adj / (2^bit_fra) • Shift "." left by bit_fra digits
(1) in = 3.66 = 011.101… ; *2^2 → 14.64 = 01110.1… ; [14.64] = 14 = 01110 ; /2^2 → 3.50 = 011.10
(2) in = -3.66 = 100.010… ; *2^2 → -14.64 = 10001.0… ; [-14.64] = -15 = 10001 ; /2^2 → -3.75 = 100.01
(3) in = 4.00 = 0100, overflow ; *2^2 → 16 = overflow → 15 = 01111 ; /2^2 → 3.75 = 011.11
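A minimal Matlab sketch of the quantization algorithm above; the function name quantize and its argument order are illustrative, not taken from the project code:
function out = quantize(in, bit_int, bit_fra)
% 1 sign bit + bit_int integer bits + bit_fra fraction bits.
    upper  =  2^(bit_int + bit_fra) - 1;   % largest value representable after the shift
    lower  = -2^(bit_int + bit_fra);       % most negative value representable after the shift
    in_adj = in * 2^bit_fra;               % shift "." right by bit_fra digits
    if in_adj < lower
        in_adj = lower;                    % saturate at the lower boundary
    elseif in_adj > upper
        in_adj = upper;                    % saturate at the upper boundary
    else
        in_adj = floor(in_adj);            % largest integer <= in_adj
    end
    out = in_adj / 2^bit_fra;              % shift "." back to the left
end
With sign_bit = 1, bit_int = 2, bit_fra = 2 this reproduces the three examples: quantize(3.66, 2, 2) = 3.50, quantize(-3.66, 2, 2) = -3.75, quantize(4.00, 2, 2) = 3.75.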
Quantization of Input & Twiddle Factors • Input generation • Twiddle factor arrays • Quantize the input signals • Quantize each element of the twiddle factors • Complex version of the fixed-point quantization function
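A hedged sketch of the complex-valued wrapper and the two quantization calls this slide refers to, reusing the quantize sketch above; the fraction widths 7 and 10 come from the conclusion slide, and all variable names are illustrative:
% Complex version of the fixed-point quantization function.
quantize_cplx = @(z, bit_int, bit_fra) ...
    quantize(real(z), bit_int, bit_fra) + 1i * quantize(imag(z), bit_int, bit_fra);
N      = 64;
in_fra = 7;   tw_fra = 10;                               % fraction widths from the conclusion slide
x      = 4*(2*rand(1,N) - 1) + 4i*(2*rand(1,N) - 1);     % example -4~4 input generation
x_q    = quantize_cplx(x, 2, in_fra);                    % quantize input signals (2 integer bits)
W      = exp(-1i*2*pi*(0:N-1)/N);                        % twiddle factor array
W_q    = quantize_cplx(W, 0, tw_fra);                    % quantize each twiddle factor (0 integer bits)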
Saturation and Truncation • Output saturation of the BF2i & BF2ii modules • The integer-part bit width grows by 1 after each addition • Each element is saturated as it is sent out • Truncation of multiplications • The result of each multiplication is truncated immediately • Slight saturation may occur
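Continuing the sketch above, the two effects could look roughly like this; out_int, out_fra, and the operands are placeholders, and in the real model this happens inside the BF2i/BF2ii modules and right after the complex multiplier:
out_int = 8;  out_fra = 7;                       % output format assumed from the conclusion slide
a = x_q(1);   b = x_q(33);                       % placeholder butterfly operands
% Saturation: the integer part grows by one bit per addition, so the sum is
% clamped back to the stage's output format before it is sent out.
sum_q  = quantize_cplx(a + b, out_int, out_fra);
% Truncation: the extra fraction bits of the product are dropped immediately
% (floor acts as truncation); a slight saturation may occur at the same time.
prod_q = quantize_cplx(sum_q * W_q(9), out_int, out_fra);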
Determine the Integer Part • Determining the fraction part is meaningless before the integer part is fixed. • in_int = 2 is enough for a -4~4 input. • out_int = 7 is enough for almost all possible outputs, but serious errors may occur if the output exceeds 64! • out_int = 8 is enough for the possible -128~128 output range.
Determine the Fraction Part (1) • Set the input fraction part; observe the trade-off between the twiddle-factor fraction and the output fraction. • Local optimal points, noted as fraction parts of (in, out, tw): • (7,7,10) • (8,7,12) • (8,8,10) • (9,7,11) • (9,8,10) • (10,8,10) • (11,8,11) • (11,10,10)
Determine the Fraction Part (2) • Set the twiddle-factor fraction part; observe the trade-off between the input fraction and the output fraction. • Local optimal points, noted as fraction parts of (in, out, tw): • (8,9,9) • (10,8,9) • (7,8,10) • (7,7,11)
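A hedged sketch of how one such (in, out, tw) point could be scored, assuming SQNR against a floating-point reference is the metric (as on the conclusion slide). Matlab's built-in fft is used here only as a crude stand-in for the fixed-point R2²SDF model, whose internal twiddle quantization and truncation the stand-in ignores:
N   = 64;
in_fra = 7;   out_fra = 7;                               % one candidate point (tw_fra acts inside the real model)
x   = 4*(2*rand(1,N) - 1) + 4i*(2*rand(1,N) - 1);        % -4~4 test input
ref = fft(x);                                            % floating-point reference
y   = quantize_cplx(fft(quantize_cplx(x, 2, in_fra)), 8, out_fra);   % stand-in for the fixed-point output
sqnr_dB = 10*log10( sum(abs(ref).^2) / sum(abs(ref - y).^2) );       % compare against SQNR >= 50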
Analysis • The integer part is fixed at int(in, out, tw) = (2,8,0). • The fraction part must be determined by further calculation. • Estimating from the architecture: • the twiddle factors contain repeated and complementary values, • the shift-register storage is about 3 times that of the twiddle factors, • so the output fraction part is the most critical.
Analysis (cont'd) • Optimal set using the fewest total bits (result table not reproduced here)
RTL Structure • (Block diagram: sequential part and combinational part)
RTL Structure (cont'd) • Put the integer and fractional parts in one register array • Parameterize the sizes • Generate the twiddle factors numerically with Matlab, then write them into the Verilog code as a read-only lookup table (see the sketch below)
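A hedged Matlab sketch of that generation step; the printed case-item format and the signal names tw_re/tw_im are illustrative, not the project's actual Verilog:
N = 64;   tw_fra = 10;                                   % twiddle fraction width from the conclusion slide
W  = exp(-1i*2*pi*(0:N-1)/N);
re = min(floor(real(W)*2^tw_fra), 2^tw_fra - 1);         % 0 integer bits; saturate the +1.0 entry
im = min(floor(imag(W)*2^tw_fra), 2^tw_fra - 1);
for k = 0:N-1
    fprintf('  %2d: begin tw_re = %5d; tw_im = %5d; end\n', k, re(k+1), im(k+1));
end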
Conclusions • Integer parts of (in, out, tw): (2,8,0) • Fraction parts of (in, out, tw): (7,7,10) • Total bit numbers of (in, out, tw): (10,16,11) • This set uses the fewest bits and almost always meets SQNR ≥ 50 • The RTL code is finished, and its parameters can be tuned • Some points may still be faulty
Future Work • Debug and test • Optimize the algorithm • Tune the bit-number set according to the RTL code and the synthesis results • Synthesize to gate level
Q & A • Thanks for your attention!