
A Semi-Persistent Clustering Technique for VLSI Circuit Placement


Presentation Transcript


  1. A Semi-Persistent Clustering Technique for VLSI Circuit Placement Charles J. Alpert¹, Andrew Kahng², Gi-Joon Nam¹, Sherief Reda² and Paul G. Villarrubia¹ (¹IBM Corp., ²Department of CSE, UCSD)

  2. bigblue4 design from ISPD2005 Suite

  3. Implications in Placement • Scalability • Tractability • Runtime vs. quality trade-off • SoC (System-on-Chip) designs • Mixed-size objects • White space

  4. Problem Statement • What is the most effective and efficient clustering strategy for analytic placement? • Quality of solution • CPU time

  5. Clustering Concept • Cluster A with its “closest neighbor” to form the merged object AC • Update the circuit netlist • Clustering score function: d(u, v) = conn(u, v) / [ size(u) + size(v) ]^k, where conn(u, v) = Σ wij is the total weight of the nets connecting u and v • [Figure: example netlist with cells A–F before and after the merge]
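
Interpreted literally, this score can be sketched in a few lines of Python. The netlist layout (a list of nets, each a list of cell names) and the helper names below are assumptions made only for illustration; they are not the placer's actual data structures.

```python
# Minimal sketch of the clustering score d(u, v) = conn(u, v) / (size(u) + size(v))^k.
# The netlist representation and function names are illustrative assumptions.

def net_weight(net):
    """Weight of an n-pin net; the example slides assume 1/(n-1)."""
    n = len(net)
    return 1.0 / (n - 1) if n > 1 else 1.0

def conn(u, v, nets):
    """conn(u, v) = sum of w_ij over all nets that connect cells u and v."""
    return sum(net_weight(net) for net in nets if u in net and v in net)

def clustering_score(u, v, nets, size, k=1.0):
    """d(u, v) = conn(u, v) / (size(u) + size(v))^k."""
    return conn(u, v, nets) / (size[u] + size[v]) ** k
```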

  6. Clustering Literature • Tremendous amounts of research here • Edge-Coarsening (EC) • First-Choice (FC) • Edge-Separability (ESC) • Peak-Clustering • Etc… • General drawbacks • Clique transformation • Edge weight discrepancy • Pass-based iteration • Lack of global clustering view

  7. Best-Choice Clustering • Avoid clique transformation • Avoid pass-based iterations • More global view of clustering sequence • Priority-queue management • Lazy-update speed-up technique • Area-controlled balanced clustering

  8. Best-Choice Clustering • Initialize the priority-queue PQ: • - For each cell u: calculate its clustering score c with its closest neighbor v. • - Insert the pair (u, v) into PQ with cost c. • Until the target cell number is reached: • - Pick the top of the heap (m, n) • - Cluster (m, n) into a new object mn; update the netlist • - Calculate mn's closest neighbor k; insert (mn, k) into PQ • - Recalculate the clustering cost of all the neighbors of m and n
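
A hedged Python sketch of this loop is given below. The closest-neighbor, neighbor-set and merge routines are passed in as callables because the slides do not specify them, and the version counters approximate the in-place priority-queue updates described above; this is a sketch of the slide's pseudocode, not the CPLACE implementation.

```python
import heapq

def best_choice_clustering(cells, closest_neighbor, neighbors_of, merge, target_count):
    """Sketch of the Best-Choice loop above (helper callables are assumed, not from the paper).

    cells            : set of clusterable object ids
    closest_neighbor : callable(u) -> (v, score), u's best-scoring partner and its score
    neighbors_of     : callable(u) -> set of objects sharing a net with u
    merge            : callable(u, v) -> id of the new clustered object; updates the netlist
    target_count     : stop once this many objects remain
    """
    version = {u: 0 for u in cells}   # bumped whenever an object's queued score becomes stale
    pq = []                           # min-heap of (-score, version, u, v); best score pops first

    def push(u):
        v, score = closest_neighbor(u)
        heapq.heappush(pq, (-score, version[u], u, v))

    for u in cells:                   # initialize PQ with each cell and its closest neighbor
        push(u)

    while len(cells) > target_count and pq:
        _, ver, m, n = heapq.heappop(pq)          # pick the top of the heap
        if m not in cells or n not in cells or ver != version[m]:
            continue                              # stale entry; a fresh one is already queued
        affected = (neighbors_of(m) | neighbors_of(n)) - {m, n}
        mn = merge(m, n)                          # cluster (m, n); netlist updated inside merge()
        cells.discard(m); cells.discard(n); cells.add(mn)
        version[mn] = 0
        push(mn)                                  # calculate mn's closest neighbor and queue it
        for w in affected & cells:                # recalculate the cost of m's and n's neighbors
            version[w] += 1
            push(w)
    return cells
```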

  9. Best-Choice Example • Assume n-pin net weight = 1/(n-1) • Each object size = 1 • Timing criticality is 1 for all nets • [Figure: example netlist with cells A–F]

  10. Best-Choice Example • [Figure: first clustering step; C and D are merged into cluster CD, and the closest-neighbor scores of the affected cells are recalculated]

  11. Best-Choice Example • [Figure: later steps; B merges with CD to form BCD, and E and F are merged into EF]

  12. Best-Choice Example • [Figure: final steps; A merges with BCD to form ABCD, then ABCD and EF are merged into ABCDEF] • Σ clustering_score = 2.875

  13. Best-Choice Clustering Summary • Globally optimal clustering sequence via priority-queue data structure • Produces better quality of results • Clustering framework • An arbitrary clustering score function can be plugged in

  14. Best-Choice Clustering • Clustering score distribution [Figures (1) and (2)] • First-Choice (FC): Σ clustering_score = 5612.83 • Best-Choice (BC): Σ clustering_score = 6671.53

  15. Lazy Update Speed-up Technique • [Figure: priority queue PQ, with node A below the top of the PQ] • Observations: • Node A might be updated a number of times before making it to the top of the PQ (if ever), but only the last update determines its final position in the PQ • Statistics indicate that in 96% of our updating steps, updating node A's score pushes A down in the PQ

  16. Lazy Update Speed-up Technique Main Idea: Wait until A gets to the top of the priority queue and then update its score if necessary Until the target cell number is reached: - Pick the top of the heap (m, n) - If (m, n) is invalid then - recalculate m's closest neighbor n' and insert (m, n') in the heap else - Cluster (m, n) into a new object mn; update the netlist - Calculate mn's closest neighbor k; insert (mn, k) in the heap - Mark all neighbors of m and n invalid
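
Using the same assumed helpers as the earlier sketch, the lazy variant can be written roughly as below: neighbors of a merged pair are only marked invalid, and an object's score is recomputed only when its entry reaches the top of the queue. Again, this is an approximation of the slide's pseudocode, not the CPLACE code.

```python
import heapq

def best_choice_clustering_lazy(cells, closest_neighbor, neighbors_of, merge, target_count):
    """Sketch of the lazy-update loop above; the helper callables are assumed, as before."""
    invalid = set()                   # objects whose queued score may be out of date
    latest = {}                       # u -> (score, v) most recently queued for u
    pq = []                           # min-heap of (-score, u, v); best score pops first

    def push(u):
        v, score = closest_neighbor(u)
        latest[u] = (score, v)
        heapq.heappush(pq, (-score, u, v))

    for u in cells:
        push(u)

    while len(cells) > target_count and pq:
        neg_score, m, n = heapq.heappop(pq)       # pick the top of the heap
        if m not in cells or latest.get(m) != (-neg_score, n):
            continue                              # m is gone, or this entry was superseded
        if n not in cells or m in invalid:        # (m, n) is invalid
            invalid.discard(m)
            push(m)                               # recalculate m's closest neighbor n' and re-insert
            continue
        affected = (neighbors_of(m) | neighbors_of(n)) - {m, n}
        mn = merge(m, n)                          # cluster (m, n); netlist updated inside merge()
        cells.discard(m); cells.discard(n); cells.add(mn)
        push(mn)                                  # calculate mn's closest neighbor and insert it
        invalid |= affected & cells               # mark all neighbors of m and n invalid
    return cells
```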

  17. Lazy Update Runtime Characteristic Note: Practically no impact on solution quality

  18. Experiments • IBM CPLACE • Analytic placement algorithm • Semi-persistent clustering paradigm • Up-front clustering • Selective unclustering during main global placement • Full unclustering before detailed placement • Order-of-magnitude reduction in the number of placeable objects through clustering • Industrial ASIC designs • Size ranges from 56K to 880K placeable objects
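
The flow on this slide can be outlined roughly as follows. Every stage named here (clustering, global placement, selective and full unclustering, detailed placement) is a hypothetical callable standing in for the corresponding CPLACE phase; the sketch only shows the ordering of the semi-persistent paradigm, not the placer's actual interfaces.

```python
def semi_persistent_placement(netlist, cluster, global_place, selectively_uncluster,
                              fully_uncluster, detailed_place, reduction=10):
    """Rough outline of the semi-persistent flow; all stage callables are hypothetical stand-ins."""
    # 1. Up-front clustering: reduce the number of placeable objects
    #    (roughly an order of magnitude in the reported experiments).
    clustered = cluster(netlist, target_count=netlist.num_cells // reduction)

    # 2. Main global placement on the clustered netlist, with selective unclustering
    #    applied between global-placement passes.
    placement = global_place(clustered)
    clustered, placement = selectively_uncluster(clustered, placement)
    placement = global_place(clustered, start=placement)

    # 3. Full unclustering before detailed placement, which then runs on the flat netlist.
    flat_placement = fully_uncluster(clustered, placement)
    return detailed_place(netlist, flat_placement)
```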

  19. Placement Results w/ Clustering • Average 4.3% WL improvement over EC • BC is 8.76x slower than EC

  20. No Clustering vs. BC+Lazy Clustering

  21. Conclusions • Globally optimal clustering sequence framework • Independent of the clustering score function • Better clustering sequence • Allows significant placement speed-up • Almost no loss of solution quality • Size control via the clustering score function • Effective for dense designs

  22. Future Work • Handling fixed blocks during clustering • Ignoring nets connected to fixed objects • Ignoring pins connected to fixed objects • Including fixed blocks during clustering • Etc…. • No visible improvement at the moment

  23. Cluster Size Control Results • d(u, v) = conn(u, v) / [ size(u) + size(v) ]^k, where conn(u, v) = Σ wij • Standard: k = 1 • Automatic: k = [ size(u) + size(v) ] / μ, where μ = expected average cluster size
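
Assuming μ denotes the expected average cluster size (for example, total cell area divided by the target number of clustered objects), the two modes amount to choosing the exponent k as sketched below; the function names and the interpretation of μ are assumptions for illustration.

```python
def size_control_exponent(size_u, size_v, mu=None):
    """k in d(u, v) = conn(u, v) / (size(u) + size(v))^k.

    Standard mode: k = 1.  Automatic mode: k = (size(u) + size(v)) / mu, where mu
    is the expected average cluster size (assumed here to be, e.g., total cell
    area divided by the target number of clustered objects).
    """
    if mu is None:
        return 1.0                                   # standard: k = 1
    return (size_u + size_v) / mu                    # automatic: oversized merges are penalized harder

def clustering_score_with_size_control(conn_uv, size_u, size_v, mu=None):
    """d(u, v) under the chosen size-control mode."""
    k = size_control_exponent(size_u, size_v, mu)
    return conn_uv / (size_u + size_v) ** k
```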
