
Constructive Benchmarking for Placement


Presentation Transcript


  1. Constructive Benchmarking for Placement
  Igor L. Markov (imarkov@eecs.umich.edu), Saurabh N. Adya (sadya@eecs.umich.edu), David A. Papa (iamyou@eecs.umich.edu)
  EECS Department, University of Michigan, Ann Arbor, MI 48109
  Advanced Computer Architecture Laboratory at The University of Michigan

  2. Need New Benchmarks
  • Drawbacks of existing benchmarks
    • Placers are tuned to individual benchmarks (S. N. Adya et al., “Benchmarking for Large-Scale VLSI Placement and Beyond,” to appear in IEEE Trans. on CAD, April 2004)
      • Dragon – IBM-DRAGON benchmarks
      • Capo – Cadence benchmarks
      • FengShui – MCNC benchmarks
      • mPL – PEKO benchmarks
    • Current benchmarks are large and difficult to interpret
    • No clear way to improve placers
  • Desirable features for new benchmarks
    • Want scalable artificial benchmarks with realistic features
      • In addition to current benchmarks, not instead of them
    • Tailored tests for specific features
    • Abstraction of features in real netlists
    • Want results that can be visually interpreted

  3. Isolation of Key Features
  • Cluster tightly connected components
  • Ignore intra-cluster nets
  • Merge inter-cluster nets
    • Model with edge weights
    • Remove nets with negligible weight
  • Remove disconnected components
  • Benchmarks identify features that placers will encounter
    • Features form a necessary but not sufficient set
  [Figure: the clustering abstraction illustrated as Steps 1–3]
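  The steps above amount to a small graph-abstraction procedure. Below is a minimal Python sketch of that flow; the netlist representation (nets as lists of cell ids) and the precomputed clustering are illustrative assumptions, since the slides do not prescribe a data structure.

    from collections import defaultdict

    def abstract_netlist(nets, cluster_of, weight_threshold=0.01):
        """Collapse a clustered netlist into a weighted cluster-level graph.

        nets:        list of nets, each a list of cell ids
        cluster_of:  dict cell id -> cluster id (Step 1, assumed given)
        """
        edge_weight = defaultdict(float)
        for net in nets:
            clusters = {cluster_of[c] for c in net}
            if len(clusters) < 2:
                continue  # ignore intra-cluster nets
            # Merge inter-cluster nets: model each net as a clique over its
            # clusters and spread unit weight over the clique edges.
            pairs = [(a, b) for a in clusters for b in clusters if a < b]
            for pair in pairs:
                edge_weight[pair] += 1.0 / len(pairs)
        # Remove nets (edges) with negligible weight.
        edges = {e: w for e, w in edge_weight.items() if w >= weight_threshold}
        # Remove disconnected components: keep only clusters on a kept edge.
        connected = {u for e in edges for u in e}
        return edges, connected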

  4. Basic PIO Example

  5. Placers
  • Capo8.7
    • S. N. Adya et al., “On Whitespace and Stability in Mixed-Size Placement and Physical Synthesis,” ICCAD 2003, pp. 311-318.
  • mPL2
    • C.-C. Chang, J. Cong, D. Pan, X. Yuan, “Physical Hierarchy Generation with Routing Congestion Control,” ISPD 2002.
  • mPL3
    • T. F. Chan, J. Cong, T. Kong, J. R. Shinnerl and K. Sze, “An Enhanced Multilevel Algorithm for Circuit Placement,” ICCAD 2003.
  • Dragon2.23
    • M. Wang, X. Yang and M. Sarrafzadeh, “Dragon2000: Standard-cell Placement Tool for Large Industry Circuits,” ICCAD 2000, pp. 260-263.
  • Dragon3.01
    • X. Yang, B.-K. Choi and M. Sarrafzadeh, “Routability Driven White Space Allocation for Fixed-Die Standard-Cell Placement,” ISPD 2002, pp. 42-50.
  • FengShui2.1
    • A. Agnihotri, M. C. Yildiz, A. Khatkhate, A. Mathur, S. Ono, P. H. Madden, “Fractional Cut: Improved Recursive Bisection Placement,” ICCAD 2003, pp. 307-310.

  6. Solutions of PIO

  7. Observations for PIO
  • Only Capo finds optimal solutions
  • Nice test cases for a detail placer
    • Detail placers may mask problems with global placers (turn off detail placers when debugging global placers)
  • All placements shown are legal
  • Capo was made optimal on PIO since publication
  • mPL3 forms columns
  • FengShui2.1 packs to the left
  • Dragon3.01 packs somewhat to the right

  8. Other Benchmark Types

  9. Effects of Left Packing

  10. Effects of Building Columns

  11. BlobObstacle Effects

  12. Improvements to Capo
  • Better whitespace distribution
    • When whitespace is large, there is no need for uniform distribution
  • Optimal placement of single cells
    • Performed during end-case placement (see the sketch after this list)
  • Partitioning bugs found and removed
    • Affected wirelength optimization
  • Turned on legalizer by default
    • Was accidentally off by default, and overlooked
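  The slides do not show Capo's end-case code; the following is a hypothetical sketch of what optimal placement of a single cell can look like: exhaustively try each free site and keep the one minimizing total half-perimeter wirelength (HPWL) of the cell's incident nets. Because only one cell moves, exhaustion over legal sites is trivially optimal. All names and data structures below are illustrative assumptions, not Capo's actual API.

    def hpwl(points):
        """Half-perimeter wirelength of a set of (x, y) pin locations."""
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def place_single_cell(free_sites, incident_nets, fixed_pins):
        """Exhaustive end-case placement of one cell (illustrative only).

        free_sites:    candidate (x, y) locations for the cell
        incident_nets: ids of nets connected to the cell
        fixed_pins:    dict net id -> list of already-placed (x, y) pins
        """
        best_site, best_cost = None, float("inf")
        for site in free_sites:
            # Total HPWL of the cell's nets if it were placed at this site.
            cost = sum(hpwl(fixed_pins[n] + [site]) for n in incident_nets)
            if cost < best_cost:
                best_site, best_cost = site, cost
        return best_site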

  13. Bugs in Other Placers
  • Some placers were found to place cells very far from the core area
  • mPL2 was unable to read certain inputs
  • Several placers halt in the presence of obstacles
  • The fixed-die option of Dragon2.23 did not run
  • A number of off-by-one errors in Domino and FengShui
  • Several bug reports were made to the authors of these tools
    • In some cases quick fixes were made

  14. Asymptotic Suboptimality

  15. Asymptotic Suboptimality (cont.)

  16. Conclusions
  • Proposed new benchmarks
    • To be used in addition to, not instead of, existing benchmarks
    • Unexpected new benchmarks expose unseen problems in algorithms & tools
    • Emphasize individual features of realistic designs
    • Can easily visualize these benchmarks
  • All optimal solutions known in most cases (global effect in ASICs?)
    • Difference between optimal and actual can be seen and studied
    • Know what goes wrong (e.g., don't pack to the left)
    • Having a unique solution seems to kill annealing (true for datapaths?)
    • Often can see ways to improve placers (non-uniform whitespace distribution)
  • Asymptotically suboptimal placements are a serious problem
    • Is your placer asymptotically suboptimal? (one way to check is sketched below)
  • Benchmarks are available at http://vlsicad.eecs.umich.edu/BK/FEATURE/
    • Bookshelf, LEF/DEF, Cpin and Spc formats available
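  One way to probe the question above, given constructed benchmarks whose optimal wirelength is known: run the placer on a scaled family of instances and track the ratio of achieved to optimal wirelength. A ratio that grows with instance size indicates asymptotic suboptimality. The sketch below is illustrative only; run_placer and the benchmark tuples are hypothetical placeholders.

    def suboptimality_trend(benchmarks, run_placer):
        """benchmarks: list of (name, size, optimal_hpwl) for one scaled family.
        run_placer:  callable mapping a benchmark name to the placer's HPWL.
        """
        for name, size, optimal in sorted(benchmarks, key=lambda b: b[1]):
            ratio = run_placer(name) / optimal
            # A ratio trending upward with size suggests asymptotic suboptimality.
            print(f"{name}: n={size}, HPWL ratio to optimal = {ratio:.3f}")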
