
ECE260B – CSE241A Winter 2005 Logic Synthesis



Presentation Transcript


  1. ECE260B – CSE241A Winter 2005 Logic Synthesis Website: http://vlsicad.ucsd.edu/courses/ece260b-w05 Slides courtesy of Dr. Cho Moon

  2. Introduction • Why logic synthesis? • Ubiquitous – used almost everywhere VLSI is done • Body of useful and general techniques – same solutions can be used for different problems • Foundation for many applications such as • Formal verification • ATPG • Timing analysis • Sequential optimization

  3. RTL Design Flow [flow diagram] HDL → RTL Synthesis → netlist → Logic Synthesis → netlist → Physical Synthesis → layout; Manual Design and Module Generators also produce netlists, and synthesis draws gates from the Library. Slide courtesy of Devadas, et al.

  4. Logic Synthesis Problem • Given • Initial gate-level netlist • Design constraints • Input arrival times, output required times, power consumption, noise immunity, etc. • Target technology libraries • Produce • Smaller, faster, or cooler gate-level netlist that meets the constraints A very hard optimization problem!

  5. Combinational Logic Synthesis [flow diagram] Logic Synthesis takes a netlist through technology-independent steps (2-level and multilevel logic optimization) and a technology-dependent step (mapping onto the Library) to produce the output netlist. Slide courtesy of Devadas, et al.

  6. Outline • Introduction • Two-level Logic Synthesis • Multi-level Logic Synthesis • Timing Optimization in Synthesis

  7. Two-level Logic Synthesis Problem • Given an arbitrary logic function in two-level form, produce a smaller representation. • For sum-of-products (SOP) implementation on PLAs, fewer product terms and fewer inputs to each product term mean smaller area. • Example: F = A B + A B C simplifies to F = A B
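The simplification above can be checked exhaustively. This is an illustrative Python sketch (not part of the original slides); `equivalent` is a helper name of my own choosing.

```python
from itertools import product

def equivalent(f, g, n):
    """True iff f and g agree on every vertex of B^n."""
    return all(f(*x) == g(*x) for x in product([0, 1], repeat=n))

# The cube ABC is contained in the larger cube AB, so it can be dropped
before = lambda a, b, c: (a and b) or (a and b and c)
after = lambda a, b, c: a and b
assert equivalent(before, after, 3)
```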

  8. Boolean Functions f(x) : B^n → B, where B = {0, 1} and x = (x1, x2, …, xn) • x1, x2, … are variables • x1, x1’, x2, x2’, … are literals • each vertex of B^n is mapped to 0 or 1 • the onset of f is the set of input values for which f(x) = 1 • the offset of f is the set of input values for which f(x) = 0
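For small n, the onset and offset can be computed by enumerating all of B^n. A minimal Python sketch (the function names are illustrative, not from the slides):

```python
from itertools import product

def onset(f, n):
    """All vertices of B^n where f evaluates to 1."""
    return {x for x in product([0, 1], repeat=n) if f(*x)}

def offset(f, n):
    """All vertices of B^n where f evaluates to 0."""
    return {x for x in product([0, 1], repeat=n) if not f(*x)}

# f(a, b) = a + b: three vertices in the onset, one in the offset
f = lambda a, b: a or b
assert len(onset(f, 2)) == 3
assert offset(f, 2) == {(0, 0)}
```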

  9. Logic Functions: Slide courtesy of Devadas, et. al

  10. Cube Representation Slide courtesy of Devadas, et. al

  11. Sum-of-products (SOP) • A function can be represented by a sum of cubes (products): f = ab + ac + bc Since each cube is a product of literals, this is a “sum of products” representation • A SOP can be thought of as a set of cubes F F = {ab, ac, bc} = C • A set of cubes that represents f is called a cover of f. F={ab, ac, bc} is a cover of f = ab + ac + bc.
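A cover-as-set-of-cubes can be represented directly. In this sketch (illustrative representation, not from the slides), a cube is a dict mapping a variable to its required polarity, and a cover is a list of cubes:

```python
from itertools import product

def eval_cube(cube, point):
    """A cube such as {'a': 1, 'c': 0} means the product a c'."""
    return all(point[v] == pol for v, pol in cube.items())

def eval_cover(cover, point):
    """A cover is true wherever any of its cubes is true."""
    return any(eval_cube(c, point) for c in cover)

# F = {ab, ac, bc} is a cover of f = ab + ac + bc
F = [{'a': 1, 'b': 1}, {'a': 1, 'c': 1}, {'b': 1, 'c': 1}]
f = lambda a, b, c: (a and b) or (a and c) or (b and c)
for a, b, c in product([0, 1], repeat=3):
    assert eval_cover(F, {'a': a, 'b': b, 'c': c}) == f(a, b, c)
```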

  12. Prime Cover • A cube is prime if there is no other implicant of the function that contains it (for example, when b is an implicant, b c is not prime but b is) • A cover is prime iff all of its cubes are prime

  13. Irredundant Cube • A cube c of a cover C is irredundant if C fails to be a cover when c is dropped from C • A cover is irredundant iff all of its cubes are irredundant (for example, F = a b + a c + b c)

  14. Quine-McCluskey Method • We want to find a minimum prime and irredundant cover for a given function. • A prime cover leads to the minimum number of inputs to each product term. • A minimum irredundant cover leads to the minimum number of product terms. • The Quine-McCluskey (QM) method (1960s) finds a minimum prime and irredundant cover. • Step 1: List all minterms of the on-set: O(2^n), n = #inputs • Step 2: Find all primes: O(3^n), n = #inputs • Step 3: Construct the minterms vs. primes table • Step 4: Find a minimum set of primes that covers all the minterms: O(2^m), m = #primes

  15. QM Example (Step 1) • F = a’ b’ c’ + a b’ c’ + a b’ c + a b c + a’ b c • List all on-set minterms

  16. QM Example (Step 2) • F = a’ b’ c’ + a b’ c’ + a b’ c + a b c + a’ b c • Find all primes.

  17. QM Example (Step 3) • F = a’ b’ c’ + a b’ c’ + a b’ c + a b c + a’ b c • Construct the minterms vs. primes table (prime implicant table) by determining which cube is contained in which prime. An X at row i, column j means that the cube in row i is contained by the prime in column j.

  18. QM Example (Step 4) • F = a’ b’ c’ + a b’ c’ + a b’ c + a b c + a’ b c • Find a minimum set of primes that covers all the minterms “Minimum column covering problem” Essential primes
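The worked example above can be reproduced in a few lines. This is an illustrative Python sketch of QM steps 2–4 (cubes as strings over {'0', '1', '-'}; the function names and greedy handling of the remainder are my own, not from the slides):

```python
from itertools import combinations

def combine(c1, c2):
    """Merge two cubes that differ in exactly one specified position."""
    diff = [i for i, (x, y) in enumerate(zip(c1, c2)) if x != y]
    if len(diff) == 1 and '-' not in (c1[diff[0]], c2[diff[0]]):
        i = diff[0]
        return c1[:i] + '-' + c1[i + 1:]
    return None

def primes_of(minterms, n):
    """Step 2: repeatedly merge cubes; cubes that never merge are primes."""
    current = {format(m, f'0{n}b') for m in minterms}
    primes = set()
    while current:
        merged, nxt = set(), set()
        for c1, c2 in combinations(current, 2):
            c = combine(c1, c2)
            if c:
                nxt.add(c)
                merged.update({c1, c2})
        primes |= current - merged
        current = nxt
    return primes

def covers(cube, bits):
    return all(c in ('-', b) for c, b in zip(cube, bits))

# F = a'b'c' + ab'c' + ab'c + abc + a'bc  ->  minterms 0, 4, 5, 7, 3
minterms = [0b000, 0b100, 0b101, 0b111, 0b011]
primes = primes_of(minterms, 3)
assert primes == {'-00', '10-', '1-1', '-11'}   # b'c', ab', ac, bc

# Steps 3-4: a prime is essential if some minterm is covered by it alone
table = {m: [p for p in primes if covers(p, format(m, '03b'))] for m in minterms}
essential = {ps[0] for ps in table.values() if len(ps) == 1}
assert essential == {'-00', '-11'}              # b'c' and bc are essential
```

With the essential primes chosen, only minterm 5 remains, covered by either ab’ or ac, giving a minimum cover of three primes.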

  19. ESPRESSO – Heuristic Minimizer • Quine-McCluskey gives a minimum solution but is only practical for functions with a small number of inputs (< 10) • ESPRESSO is a heuristic two-level minimizer that finds a “minimal” solution ESPRESSO(F) { do { reduce(F); expand(F); irredundant(F); } while (fewer terms in F); verify(F); }

  20. ESPRESSO Illustrated [figure: the loop cycles through Reduce, Expand, and Irredundant]

  21. Outline • Introduction • Two-level Logic Synthesis • Multi-level Logic Synthesis • Timing optimization in Synthesis

  22. Multi-level Logic Synthesis • Two-level logic synthesis is effective and mature • Two-level logic synthesis is directly applicable to PLAs and PLDs But… • There are many functions that are too expensive to implement in two-level forms (too many product terms!) • Two-level implementation constrains layout (AND-plane, OR-plane) • Rule of thumb: • Two-level logic is good for control logic • Multi-level logic is good for datapath or random logic

  23. Two-Level (PLA) vs. Multi-Level
  Two-level (PLA)               | Multi-level
  control logic                 | all logic
  constrained layout            | general
  highly automatic              | automatic
  technology independent        | partially technology independent
  multi-valued logic            | coming
  slower?                       | can be high speed
  input, output, state encoding | some results

  24. Multi-level Logic Synthesis Problem • Given • Initial Boolean network • Design constraints • Arrival times, required times, power consumption, noise immunity, etc… • Target technology libraries • Produce • a minimum area netlist consisting of the gates from the target libraries such that design constraints are satisfied

  25. Modern Approach to Logic Optimization • Divide logic optimization into two subproblems: • Technology-independent optimization • determine overall logic structure • estimate costs (mostly) independent of technology • simplified cost modeling • Technology-dependent optimization (technology mapping) • binding onto the gates in the library • detailed technology-specific cost model • Orchestration of various optimization/transformation techniques for each subproblem Slide courtesy of Keutzer

  26. Optimization Cost Criteria The accepted optimization criteria for multi-level logic are to minimize some function of: • Area occupied by the logic gates and interconnect (approximated by literals = transistors in technology-independent optimization) • Critical path delay of the longest path through the logic • Degree of testability of the circuit, measured in terms of the percentage of faults covered by a specified set of test vectors for an approximate fault model (e.g. single or multiple stuck-at faults) • Power consumed by the logic gates • Noise immunity • Wireability while simultaneously satisfying upper or lower bound constraints placed on these physical quantities

  27. Representation: Boolean Network Boolean network: • directed acyclic graph (DAG) • node logic function representation fj(x, y) • node variable yj: yj = fj(x, y) • edge (i, j) if fj depends explicitly on yi Inputs x = (x1, x2, …, xn) Outputs z = (z1, z2, …, zp) Slide courtesy of Brayton
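A Boolean network of this kind is easy to prototype. This sketch (class and method names are illustrative, not from the slides) stores each node's fanins and local function, and evaluates nodes in the order they were added, which serves as a topological order when fanins are added first:

```python
class BooleanNetwork:
    """Sketch of a Boolean network: DAG of nodes y_j = f_j over their fanins."""

    def __init__(self):
        self.inputs, self.nodes = [], {}   # node name -> (fanins, function)

    def add_input(self, name):
        self.inputs.append(name)

    def add_node(self, name, fanins, fn):
        # an edge (i, j) exists exactly when name j lists i among its fanins
        self.nodes[name] = (fanins, fn)

    def evaluate(self, assignment):
        """Evaluate all nodes; assumes nodes were added fanins-first."""
        values = dict(assignment)
        for name, (fanins, fn) in self.nodes.items():
            values[name] = fn(*(values[f] for f in fanins))
        return values

net = BooleanNetwork()
for x in 'abc':
    net.add_input(x)
net.add_node('y1', ['a', 'b'], lambda a, b: a and b)    # y1 = ab
net.add_node('z', ['y1', 'c'], lambda y1, c: y1 or c)   # z = ab + c
assert net.evaluate({'a': 1, 'b': 1, 'c': 0})['z'] == 1
```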

  28. Network Representation Boolean network:

  29. Node Representation: Sum of Products (SOP) Example: abc’+a’bd+b’d’+b’e’f (sum of cubes) Advantages: • easy to manipulate and minimize • many algorithms available (e.g. AND, OR, TAUTOLOGY) • two-level theory applies Disadvantages: • Not representative of logic complexity. For example f=ad+ae+bd+be+cd+ce f’=a’b’c’+d’e’ These differ in their implementation by an inverter. • hence not easy to estimate logic; difficult to estimate progress during logic manipulation

  30. Factored Forms Example: (ad+b’c)(c+d’(e+ac’))+(d+e)fg Advantages • good representative of logic complexity: f = ad+ae+bd+be+cd+ce, f’ = a’b’c’+d’e’ => f = (a+b+c)(d+e) • in many designs (e.g. complex gate CMOS) the implementation of a function corresponds directly to its factored form • good estimator of logic implementation complexity • doesn’t blow up easily Disadvantages • not as many algorithms available for manipulation • hence usually just converted into SOP before manipulation
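The literal-count gap in the slide's example can be verified directly. An illustrative Python check (not from the slides): the SOP form has 12 literals, the factored form only 5, yet they are the same function:

```python
from itertools import product

def sop(a, b, c, d, e):
    # f = ad + ae + bd + be + cd + ce : 12 literals
    return (a and d) or (a and e) or (b and d) or \
           (b and e) or (c and d) or (c and e)

def factored(a, b, c, d, e):
    # f = (a + b + c)(d + e) : 5 literals
    return (a or b or c) and (d or e)

# same function everywhere, so the cheaper factored form suffices
assert all(sop(*p) == factored(*p) for p in product([0, 1], repeat=5))
```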

  31. Factored Forms Note: literal count ≈ transistor count ≈ area (however, area also depends on wiring)

  32. Factored Forms Definition: a factored form can be defined recursively by the following rules. A factored form is either a product or sum where: • a product is either a single literal or a product of factored forms • a sum is either a single literal or a sum of factored forms A factored form is a parenthesized algebraic expression. In effect a factored form is a product of sums of products … or a sum of products of sums … Any logic function can be represented by a factored form, and any factored form is a representation of some logic function.

  33. Factored Forms When measured in terms of number of inputs, there are functions whose size is exponential in sum-of-products representation but polynomial in factored form. Example: the Achilles’ heel function f = (x1 + x2)(x3 + x4) … (xn-1 + xn). There are n literals in the factored form and (n/2)2^(n/2) literals in the SOP form. Factored forms are useful in estimating area and delay in a multi-level synthesis and optimization system. In many design styles (e.g. complex gate CMOS design) the implementation of a function corresponds directly to its factored form.
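The exponential blow-up can be demonstrated by distributing the products over the sums. An illustrative Python sketch (the expansion helper is my own): each SOP cube picks one literal from each sum, giving 2^(n/2) cubes of n/2 literals each.

```python
from itertools import product

def achilles_sop(n):
    """Expand (x1+x2)(x3+x4)...(x_{n-1}+x_n) into its set of SOP cubes."""
    sums = [(f'x{2*i + 1}', f'x{2*i + 2}') for i in range(n // 2)]
    # one literal chosen per sum term -> one cube per combination
    return [frozenset(choice) for choice in product(*sums)]

n = 6
cubes = achilles_sop(n)
assert len(cubes) == 2 ** (n // 2)                             # 8 cubes
assert sum(len(c) for c in cubes) == (n // 2) * 2 ** (n // 2)  # 24 literals vs. 6
```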

  34. Factored Forms Factored forms can be graphically represented as labeled trees, called factoring trees, in which each internal node including the root is labeled with either + or •, and each leaf has a label of either a variable or its complement. Example: factoring tree of ((a’+b)cd+e)(a+b’)+e’

  35. Reduced Ordered BDDs • like factored form, represents both function and complement • like a network of muxes, but restricted since controlled by primary input variables • not really a good estimator for implementation complexity • given an ordering, the reduced BDD is canonical, hence a good replacement for truth tables • for a good ordering, BDDs remain reasonably small for complicated functions (e.g. not multipliers) • manipulations are well defined and efficient • true support (dependency) is displayed
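Canonicity comes from two reduction rules: drop a node whose two branches are identical, and share structurally identical nodes through a unique table. This Python sketch (my own naming; it rebuilds subfunctions exponentially, so it is only for tiny examples) makes both rules explicit:

```python
def build_robdd(f, names):
    """Reduced ordered BDD via Shannon expansion with a unique table.
    Internal nodes are (var_index, low_id, high_id); terminals are ids 0 and 1."""
    unique, nodes = {}, {}          # (var, lo, hi) -> id ; id -> triple
    def mk(var, lo, hi):
        if lo == hi:                # rule 1: redundant test, skip the node
            return lo
        key = (var, lo, hi)
        if key not in unique:       # rule 2: share isomorphic nodes
            unique[key] = len(nodes) + 2
            nodes[unique[key]] = key
        return unique[key]
    def expand(var, env):
        if var == len(names):
            return 1 if f(**env) else 0
        lo = expand(var + 1, {**env, names[var]: 0})
        hi = expand(var + 1, {**env, names[var]: 1})
        return mk(var, lo, hi)
    root = expand(0, {})
    return root, nodes

# odd parity of three variables: the ROBDD has 2n - 1 = 5 internal nodes
root, nodes = build_robdd(lambda a, b, c: a ^ b ^ c, ['a', 'b', 'c'])
assert len(nodes) == 5
```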

  36. Technology-Independent Optimization • Technology-independent optimization is a bag of tricks: • Two-level minimization (also called simplify) • Constant propagation (also called sweep): f = a b + c, b = 1 => f = a + c • Decomposition (single function): f = abc+abd+a’c’d’+b’c’d’ => f = xy + x’y’; x = ab; y = c+d • Extraction (multiple functions): f = (az+bz’)cd+e, g = (az+bz’)e’, h = cde => f = xy+e, g = xe’, h = ye, x = az+bz’, y = cd
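The decomposition example above can be sanity-checked by exhaustive simulation. An illustrative Python check (function names are my own, not from the slides):

```python
from itertools import product

def f(a, b, c, d):
    # original single function: f = abc + abd + a'c'd' + b'c'd'
    return (a and b and c) or (a and b and d) or \
           ((not a) and (not c) and (not d)) or \
           ((not b) and (not c) and (not d))

def f_decomposed(a, b, c, d):
    # after decomposition: f = xy + x'y' with x = ab, y = c + d
    x, y = a and b, c or d
    return (x and y) or ((not x) and (not y))

assert all(f(*p) == f_decomposed(*p) for p in product([0, 1], repeat=4))
```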

  37. More Technology-Independent Optimization • More technology-independent optimization tricks: • Substitution: g = a+b, f = a+bc => f = g(a+c) • Collapsing (also called elimination): f = ga+g’b, g = c+d => f = ac+ad+bc’d’, g = c+d • Factoring (series-parallel decomposition): f = ac+ad+bc+bd+e => f = (a+b)(c+d)+e

  38. Summary of Typical Recipe for TI Optimization • Propagate constants • Simplify: two-level minimization at Boolean network node • Decomposition • Local “Boolean” optimizations • Boolean techniques exploit Boolean identities (e.g., a a’ = 0) Consider f = a b’ + a c’ + b a’ + b c’ + c a’ + c b’ • Algebraic factorization procedures f = a (b’ + c’) + a’ (b + c) + b c’ + c b’ • Boolean factorization procedures f = (a + b + c) (a’ + b’ + c’) Slide courtesy of Keutzer
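The three factorizations on the slide describe the same function (f is 1 iff a, b, c are not all equal), which a truth-table sweep confirms. An illustrative Python check (names are my own):

```python
from itertools import product

def sop(a, b, c):
    # f = ab' + ac' + a'b + bc' + a'c + b'c
    return (a and not b) or (a and not c) or (b and not a) or \
           (b and not c) or (c and not a) or (c and not b)

def algebraic(a, b, c):
    # f = a(b' + c') + a'(b + c) + bc' + b'c
    return (a and (not b or not c)) or ((not a) and (b or c)) or \
           (b and not c) or ((not b) and c)

def boolean(a, b, c):
    # f = (a + b + c)(a' + b' + c')
    return (a or b or c) and (not a or not b or not c)

for p in product([False, True], repeat=3):
    assert sop(*p) == algebraic(*p) == boolean(*p)
```

Note that only the Boolean factorization, which exploits identities such as a a’ = 0, reaches the 6-literal form; algebraic methods alone cannot.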

  39. Technology-Dependent Optimization Technology-dependent optimization consists of • Technology mapping: maps Boolean network to a set of gates from technology libraries • Local transformations • Discrete resizing • Cloning • Fanout optimization (buffering) • Logic restructuring Slide courtesy of Keutzer

  40. Technology Mapping Input • Technology-independent, optimized logic network • Description of the gates in the library with their cost Output • Netlist of gates (from the library) which minimizes total cost General Approach • Construct a subject DAG for the network • Represent each gate in the target library by pattern DAGs • Find an optimal-cost covering of the subject DAG using the collection of pattern DAGs • Canonical form: 2-input NAND gates and inverters

  41. DAG Covering • DAG covering is an NP-hard problem • Solve the sub-problem optimally • Partition DAG into a forest of trees • Solve each tree optimally using tree covering • Stitch trees back together Slide courtesy of Keutzer

  42. Tree Covering Algorithm • Transform netlist and libraries into canonical forms • 2-input NANDs and inverters • Visit each node in BFS from inputs to outputs • Find all candidate matches at each node N • Match is found by comparing topology only (no need to compare functions) • Find the optimal match at N by computing the new cost • New cost = cost of match at node N + sum of costs for matches at children of N • Store the optimal match at node N with cost • Optimal solution is guaranteed if cost is area • Complexity = O(n) where n is the number of nodes in netlist
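The dynamic program above can be sketched in a few lines. This is an illustrative Python version (my own representation, not the slides' code): trees over the NAND2/INV normal form, library patterns with wildcard leaves, and a recursive `min_cost` that adds each matching cell's cost to the optimal costs of the subtrees left at the wildcards. Costs follow the example library later in the deck (INV = 2, NAND2 = 3, NAND3 = 4); other cells are omitted for brevity.

```python
# Trees are tuples: ('in', name), ('inv', child), ('nand2', left, right).
# Each library cell is a pattern tree; ('var', i) leaves match any subtree.
LIB = {
    'INV':   (2, ('inv', ('var', 0))),
    'NAND2': (3, ('nand2', ('var', 0), ('var', 1))),
    'NAND3': (4, ('nand2', ('inv', ('nand2', ('var', 0), ('var', 1))),
                  ('var', 2))),
}

def match(pattern, subject, leaves):
    """Structurally match pattern at subject; collect subtrees at wildcards."""
    if pattern[0] == 'var':
        leaves.append(subject)
        return True
    if pattern[0] != subject[0]:
        return False
    return all(match(p, s, leaves) for p, s in zip(pattern[1:], subject[1:]))

def min_cost(subject):
    """Best cell rooted at this node plus best covers of its input subtrees.
    (The NAND2/INV normal form guarantees some pattern always matches.)"""
    if subject[0] == 'in':
        return 0
    best = float('inf')
    for cost, pattern in LIB.values():
        leaves = []
        if match(pattern, subject, leaves):
            best = min(best, cost + sum(min_cost(l) for l in leaves))
    return best

# nand2(inv(nand2(a, b)), c) is NAND3 in normal form:
# trivial cover = 2 NAND2 + 1 INV = 8, but a single NAND3 cell costs 4
t = ('nand2', ('inv', ('nand2', ('in', 'a'), ('in', 'b'))), ('in', 'c'))
assert min_cost(t) == 4
```

The recursion is linear in practice once node costs are memoized, which matches the O(n) complexity claimed above.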

  43. Tree Covering Example Find an ``optimal’’ (in area, delay, power) mapping of this circuit into the technology library (simple example below) Slide courtesy of Keutzer

  44. Elements of a library - 1 (tree representations in normal form) Element / Area Cost: INVERTER / 2, NAND2 / 3, NAND3 / 4, NAND4 / 5 Slide courtesy of Keutzer

  45. Trivial Covering [subject DAG] 7 NAND2 × 3 = 21, 5 INV × 2 = 10, total area cost 31. Can we do better with tree covering? Slide courtesy of Keutzer

  46. Optimal tree covering - 1 [subject tree, nodes labeled with costs 3, 2, 2, 3] Slide courtesy of Keutzer

  47. Optimal tree covering - 2 [subject tree, nodes labeled with costs 3, 8, 2, 2, 5, 3] Slide courtesy of Keutzer

  48. Optimal tree covering - 3 [subject tree, nodes labeled with costs 3, 8, 13, 2, 2, 5, 3] Cover with ND2 or ND3? 1 NAND2 (3) + subtree (5) = area cost 8; 1 NAND3 = area cost 4 Slide courtesy of Keutzer

  49. Optimal tree covering – 3b [subject tree, nodes labeled with costs 3, 8, 13, 2, 2, 4, 5, 3] Label the root of the sub-tree with the optimal match and cost Slide courtesy of Keutzer

  50. Optimal tree covering - 4 [subject tree, nodes labeled with costs 3, 8, 13, 2, 2, 2, 5, 4] Cover with INV or AO21? 1 AO21 (4) + subtree 1 (3) + subtree 2 (2) = area cost 9; 1 inverter (2) + subtree (13) = area cost 15 Slide courtesy of Keutzer
