The Optimization of High-Performance Digital Circuits
Andrew Conn (with Michael Henderson and Chandu Visweswariah)
IBM Thomas J. Watson Research Center, Yorktown Heights, NY

Outline
• Circuit optimization
• Transistor and wire sizes
• Nonlinear optimizer
• Function and gradient values
Dynamic vs. static optimization
• Dynamic: the nonlinear optimizer passes transistor and wire sizes to a simulator, which returns function and gradient values
• Static: the nonlinear optimizer passes transistor and wire sizes to a static timing analyzer, which returns function and gradient values
EinsTuner: formal static optimizer
• The nonlinear optimizer (LANCELOT) passes transistor and wire sizes to the static transistor-level timer EinsTLT, whose embedded time-domain simulator SPECS returns function and gradient values
Components of EinsTuner
1. Read netlist; create timing graph (EinsTLT); formulate pruned optimization problem; feed problem to nonlinear optimizer (LANCELOT)
2. Fast simulation and incremental sensitivity computation (SPECS)
3. Solve optimization problem, calling the simulator for delays/slews and gradients thereof
4. Obtain converged solution; snap-to-grid; back-annotate; re-time
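The optimizer/simulator coupling described above can be sketched as a simple loop. This is a hypothetical shape only: the function names are invented, and the plain gradient step used in the usage example stands in for LANCELOT's actual trust-region augmented-Lagrangian machinery.

```python
# Illustrative tuner loop (hypothetical names, not EinsTuner code):
# the optimizer repeatedly asks a timing engine for function and
# gradient values at the current transistor/wire sizes.
def tune(sizes, optimizer_step, simulate, max_iters=50, tol=1e-6):
    for _ in range(max_iters):
        value, grad = simulate(sizes)                    # SPECS-style evaluation
        new_sizes = optimizer_step(sizes, value, grad)   # optimizer update
        if max(abs(a - b) for a, b in zip(new_sizes, sizes)) < tol:
            return new_sizes                             # converged
        sizes = new_sizes
    return sizes
```

For example, with a quadratic stand-in for the timer and a plain gradient step, the loop drives the sizes toward the unconstrained optimum.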
Algorithm animation: inv3
• One such frame per iteration
• Legend: red = critical, green = non-critical; curvature = sensitivity; thickness = transistor size
• (Figure: delay across logic stages; gates, wires, and PIs shown by criticality)
SPECS: fast simulation • Two orders of magnitude faster than SPICE • 5% typical stage delay and slew accuracy; 20% worst-case • Event-driven algorithm • Simplified device models • Specialized integration methods • Invoked via a programming interface • Accurate gradients indispensable
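A toy event-driven loop (illustrative only; SPECS's device models and integration methods are far richer, and all names here are hypothetical) shows the basic mechanism: a priority queue orders events by time, and each processed event schedules downstream events after a per-gate delay.

```python
import heapq

# Minimal event-driven propagation sketch (not SPECS code).
def run_events(initial_events, fanout, delay):
    """initial_events: list of (time, node); fanout: node -> downstream nodes;
    delay: node -> propagation delay. Returns earliest event time per node."""
    queue = list(initial_events)
    heapq.heapify(queue)
    arrival = {}
    while queue:
        t, node = heapq.heappop(queue)
        if node in arrival:
            continue  # already processed an earlier event for this node
        arrival[node] = t
        for nxt in fanout.get(node, []):
            heapq.heappush(queue, (t + delay[node], nxt))
    return arrival
```

The event queue is what makes such simulators fast: only nodes that actually switch are visited, rather than every node at every time step.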
LANCELOT algorithms
• Uses an augmented Lagrangian for nonlinear constraints:
  Φ(x, λ, μ) = f(x) + Σᵢ [λᵢ cᵢ(x) + cᵢ(x)² / (2μ)]
• Simple bounds handled explicitly
• Adds slacks to inequalities
• Trust-region method
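A minimal numeric sketch of this merit function (the toy problem and the multiplier/penalty values are hypothetical, not taken from the deck):

```python
# Sketch of an augmented-Lagrangian merit function for equality
# constraints c_i(x) = 0, with multiplier estimates lam and penalty
# parameter mu (illustrative values only).
def augmented_lagrangian(f, constraints, x, lam, mu):
    phi = f(x)
    for lam_i, c_i in zip(lam, constraints):
        ci = c_i(x)
        phi += lam_i * ci + ci * ci / (2.0 * mu)
    return phi

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: x[0] + x[1] - 1.0
```

At a feasible point the penalty and multiplier terms vanish and Φ reduces to f; infeasibility is charged quadratically, more heavily as μ shrinks.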
LANCELOT algorithms (continued)
• (Figure: trust region interacting with simple bounds)
Customization of LANCELOT • Cannot just use it as a black box • Non-standard options may be preferable, e.g., solving the BQP subproblem accurately • Magic steps • Noise considerations • Structured secant updates • Adjoint computations • Preprocessing (pruning) • Failure recovery in conjunction with SPECS
LANCELOT • State-of-the-art large-scale nonlinear optimization package • Group partial separability is heavily exploited in our formulation • Two-step updates applied to linear variables • Specialized criteria for initializations, updates, adjoint computations, stopping and dealing with numerical noise
Aids to convergence • Initialization of multipliers and variables • Scaling, choice of units • Choice of simple bounds on arrival times, z • Reduction of numerical noise • Reduction of dimensionality • Treating fanout capacitances as "internal variables" of the optimization • Tuning of LANCELOT to be aggressive • Accurate solution of the BQP
Demonstration of degeneracy (figure)
Pruning of the timing graph • The timing graph can be manipulated • to reduce the number of arrival time variables • to reduce the number of timing constraints • most of all, to reduce degeneracy • No loss in generality or accuracy • Bottom line: on average, 18.3× fewer AT variables, 33% fewer variables, 43% fewer timing constraints, 22% fewer constraints, and 1.7× to 4.1× speedups in run time on large problems
Pruning strategy • During pruning, number of fanins of any un-pruned node monotonically increases • During pruning, number of fanouts of any un-pruned node monotonically increases • Therefore, if a node is not pruned in the first pass, it will never be pruned • Therefore, a one-pass algorithm can be used for a given pruning criterion
Pruning strategy • Different pruning orders provably produce different (possibly sub-optimal) results • Greedy 3-pass pruning produces a "very good" (but perhaps non-optimal) result • We have not been able to demonstrate a better result than greedy 3-pass pruning • However, the quest for a provably optimal solution continues...
Pruning: an example (figure)
Block-based vs. path-based timing (figure)
Block-based & path-based timing • In the timing graph, if a node has n fanins and m fanouts, eliminating it creates 2mn constraints in place of 2(m+n) • Criterion: if 2mn ≤ 2(m+n)+2, prune!
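This criterion can drive a one-pass splice-out routine. A minimal sketch, assuming a hypothetical data layout in which the graph is stored as fanin/fanout adjacency sets and only internal nodes are pruning candidates:

```python
# One-pass pruning sketch using the slide's criterion: prune a node with
# n fanins and m fanouts when 2*m*n <= 2*(m+n) + 2 (data layout is
# hypothetical, not the EinsTuner implementation).
def prune_once(fanins, fanouts, internal):
    """fanins/fanouts: dict node -> set of neighbors; internal: prunable nodes.
    Mutates all three structures in place."""
    for node in list(internal):
        n, m = len(fanins[node]), len(fanouts[node])
        if 2 * m * n <= 2 * (m + n) + 2:
            # Splice the node out: connect each fanin directly to each fanout.
            for u in fanins[node]:
                fanouts[u].discard(node)
                fanouts[u] |= fanouts[node]
            for w in fanouts[node]:
                fanins[w].discard(node)
                fanins[w] |= fanins[node]
            internal.discard(node)
```

Because splicing only ever increases the fanin/fanout counts of surviving neighbors, a node rejected once stays rejected, which is the monotonicity argument behind the one-pass claim.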
Detailed pruning example (animation; one score card per frame)
• Edges per frame: 26 → 20 → 17 → 16 → 15 → 14 → 13
• Nodes per frame (plus source and sink): 16 → 10 → 7 → 6 → 5 → 4 → 3 → 2
• Overall: edges 26 to 13 (2×), nodes 16 to 2 (8×)
Adjoint Lagrangian mode • gradient computation is the bottleneck • if the problem has m measurements and n tunable transistor/wire sizes: • traditional direct method: n sensitivity simulations • traditional adjoint method: m adjoint simulations • adjoint Lagrangian method computes all gradients in a single adjoint simulation!
Adjoint Lagrangian mode • useful for large circuits • implication: additional timing/noise constraints at no extra cost! • is predicated on close software integration between the optimizer and the simulator • gradient computation is 8% of total run time on average
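The counting argument above can be made concrete on a toy linear "circuit" (a hypothetical stand-in, not SPECS code): if measurement j is f_j(x) = Σᵢ A[j][i]·xᵢ, per-measurement adjoints recover the m rows of A one simulation at a time, whereas differentiating the single scalar L(x) = Σⱼ λⱼ f_j(x) yields all n partials, ∇L = Aᵀλ, in one backward accumulation.

```python
# Toy illustration of the adjoint-Lagrangian idea (hypothetical setup):
# one pass over the weighted measurements accumulates the full gradient
# of L(x) = sum_j lam[j] * f_j(x), namely A^T lam.
def lagrangian_gradient(A, lam):
    n = len(A[0])
    grad = [0.0] * n
    for lam_j, row in zip(lam, A):       # single sweep over the "simulation"
        for i, a_ji in enumerate(row):
            grad[i] += lam_j * a_ji
    return grad
```

The cost of the sweep is independent of how many measurements are folded into L, which is why extra timing/noise constraints come almost for free in this mode.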
Noise considerations
• noise is important during tuning
• semi-infinite problem: v(x, t) ≤ NM_L for all t in [t₁, t₂]
• (Figure: waveform v(x, t) with violation area c(x) above the noise margin NM_L between t₁ and t₂)
Noise considerations
• Trick: remap the infinite number of constraints to a single integral constraint c(x) = 0
• In adjoint Lagrangian mode, any number of noise constraints almost for free!
• General (constraints, objectives, minimax)
• Tradeoff analysis for dynamic library cells
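A numeric sketch of the remapping: c(x) is taken as the area by which the waveform exceeds the noise margin over [t₁, t₂], so c(x) = 0 holds exactly when v(x, t) ≤ NM_L for all t in the window. The trapezoidal quadrature here is illustrative; in the tool the waveform and its sensitivities come from SPECS.

```python
# Integral form of the semi-infinite noise constraint (illustrative):
# c(x) = integral over [t1, t2] of max(v(x, t) - NM_L, 0) dt,
# enforced as the single equality constraint c(x) = 0.
def violation_area(v, nm_l, t1, t2, steps=1000):
    h = (t2 - t1) / steps
    total = 0.0
    for k in range(steps):
        a = max(v(t1 + k * h) - nm_l, 0.0)        # violation at left endpoint
        b = max(v(t1 + (k + 1) * h) - nm_l, 0.0)  # violation at right endpoint
        total += (a + b) * h / 2.0                # trapezoidal slice
    return total
```

Because c(x) is a single scalar, it slots directly into the augmented Lagrangian and into the adjoint-Lagrangian gradient computation like any other constraint.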
Initialization of λ's (figure: timing graph annotated with initial multiplier values 1/6, 1/6, 1/2, 1/6, 1/2, 1/4, 1/4)