
Tool Benchmarking: Where are we?

This article discusses the methodology and importance of tool benchmarking: assessing productivity, predicting design time, monitoring throughput, calibrating and tuning flows, and estimating settings for future runs. It also highlights the shortcomings of current benchmarking practice and proposes a better approach based on experimental design.


Presentation Transcript


  1. Tool Benchmarking: Where are we? Justin E. Harlow III, Semiconductor Research Corporation, April 9, 2001

  2. Metrics and Benchmarks: A Proposed Taxonomy
     • Methodology Benchmarking
       • Assessment of productivity
       • Prediction of design time
       • Monitoring of throughput
     • Flow Calibration and Tuning
       • Monitor active tool and flow performance
       • Correlate performance with adjustable parameters
       • Estimate settings for future runs (see the sketch below)
     • Tool Benchmarking
       • Measure tool performance against a standard
       • Compare performance of tools against each other
       • Measure progress in algorithm development
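The "Flow Calibration and Tuning" branch is essentially a regression problem: fit observed tool performance against adjustable parameters, then use the fit to estimate settings for future runs. A minimal sketch in Python; the parameter name ("effort") and all the numbers are hypothetical, not from the talk:

```python
# Minimal sketch of flow calibration: correlate observed runtimes with an
# adjustable tool parameter, then estimate behavior at an untried setting.
import numpy as np

# Past runs: (effort level, observed runtime in seconds) -- invented data.
effort  = np.array([1, 2, 3, 4, 5], dtype=float)
runtime = np.array([620.0, 340.0, 250.0, 210.0, 190.0])

# Fit a simple diminishing-returns model: runtime ~ a/effort + b.
# np.polyfit on (1/effort, runtime) gives the least-squares coefficients.
a, b = np.polyfit(1.0 / effort, runtime, deg=1)

# Estimate the runtime at an untried setting, e.g. effort = 8.
predicted = a / 8.0 + b
print(f"model: runtime ~ {a:.1f}/effort + {b:.1f}")
print(f"predicted runtime at effort=8: {predicted:.0f} s")
```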

  3. How It's Typically Done... [Slide figure: "My Tool" and "Your Tool" each run on "The Job"]

  4. ICCAD 2000: Typical Results

  5. Predictive Value? Kind of...
     • It takes more time to detect more faults
     • But sometimes it doesn't...

  6. Bigger Benchmarks Take Longer... Sometimes
     • S526: 451 detects, 1740 sec
     • S641: 404 detects, 2 sec
     (A correlation check like the one sketched below makes this concrete.)
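The slide's two data points show why detect counts alone don't predict runtime: S526 yields more detects than S641 but takes nearly three orders of magnitude longer. One way to quantify predictiveness over a whole suite is a rank correlation between detects and runtime. A sketch in Python; the S526 and S641 figures are from the slide, while the other circuits' numbers are invented for illustration:

```python
# Sketch: quantify how well fault-detect counts predict ATPG runtime
# across a benchmark suite using Spearman rank correlation.
from scipy.stats import spearmanr

# S526 and S641 values are from the slide; the rest are hypothetical.
detects = {"s526": 451, "s641": 404, "s713": 476, "s832": 817, "s953": 1079}
runtime = {"s526": 1740.0, "s641": 2.0, "s713": 5.0, "s832": 30.0, "s953": 12.0}

names = sorted(detects)
rho, p = spearmanr([detects[n] for n in names], [runtime[n] for n in names])
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
# A rho near zero (or an insignificant p) says the benchmark set has
# little predictive value for runtime -- the point the slide is making.
```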

  7. What's Wrong with the Way We Do It Today?
     • Results are not predictive
     • Results are often not repeatable (a simple repeatability check is sketched below)
     • Benchmark sets have unknown properties
     • Comparisons are inconclusive
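"Not repeatable" is itself measurable: run the same tool on the same input several times and look at the spread. A minimal sketch, where "./mytool bench.v" is a placeholder invocation, not a real tool:

```python
# Sketch: quantify run-to-run repeatability by timing repeated runs on
# the same input and reporting the coefficient of variation (CV).
import statistics
import subprocess
import time

def timed_run(cmd: list[str]) -> float:
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# "./mytool bench.v" is a hypothetical command line.
times = [timed_run(["./mytool", "bench.v"]) for _ in range(10)]
mean = statistics.mean(times)
cv = statistics.stdev(times) / mean
print(f"mean {mean:.2f} s, CV {cv:.1%}")
# A large CV means single-run comparisons between tools are meaningless.
```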

  8. A Better Way? Design of Experiments
     • Critical properties of an equivalence class:
       • "sufficient" uniformity
       • "sufficient" size to allow a t-test or similar (see the sketch below)
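The point of the equivalence class is that once you have enough sufficiently-uniform instances, tool comparison reduces to a standard hypothesis test. A sketch using a two-sample t-test; the runtimes are invented for illustration:

```python
# Sketch: compare two tools over an equivalence class of benchmark
# instances with a two-sample t-test. All runtimes here are invented.
from scipy.stats import ttest_ind

# Runtimes (seconds) of each tool over the same equivalence class.
tool_a = [12.1, 11.8, 12.5, 11.9, 12.3, 12.0, 12.2, 11.7]
tool_b = [13.0, 12.8, 13.4, 12.9, 13.1, 13.3, 12.7, 13.2]

# Welch's t-test: does not assume equal variances across tools.
t, p = ttest_ind(tool_a, tool_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("difference is statistically significant at the 5% level")
else:
    print("no significant difference -- need a larger class or less noise")
```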

  9. Example: Tool Comparison
     • Scalable circuits with known complexity properties (see the generator sketch below)
     • Observed differences are statistically significant
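"Scalable circuits with known complexity properties" means you can generate instances whose size is a controlled experimental variable rather than an accident of whatever benchmarks exist. A minimal sketch that emits n-bit ripple-carry adders as flat gate lists; the output format is simplified for illustration, not real ISCAS syntax:

```python
# Sketch: generate a scalable benchmark family -- n-bit ripple-carry
# adders as flat gate lists -- so circuit size is a controlled variable.

def ripple_carry_adder(n: int) -> list[str]:
    """Return gate lines for an n-bit ripple-carry adder (5n gates)."""
    gates = []
    carry = "cin"
    for i in range(n):
        a, b = f"a{i}", f"b{i}"
        gates.append(f"x{i} = XOR({a}, {b})")        # propagate
        gates.append(f"s{i} = XOR(x{i}, {carry})")   # sum bit
        gates.append(f"g{i} = AND({a}, {b})")        # generate
        gates.append(f"p{i} = AND(x{i}, {carry})")   # carry propagate term
        gates.append(f"c{i} = OR(g{i}, p{i})")       # carry out
        carry = f"c{i}"
    return gates

# One instance per size; each tool's runtime can then be regressed
# against n, whose complexity properties are known by construction.
for n in (8, 16, 32, 64):
    print(f"adder_{n}: {len(ripple_carry_adder(n))} gates")
```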

  10. Canonical Reference on DoE Tool Benchmark Methodology
     • D. Ghosh, "Generation of Tightly Controlled Equivalence Classes for Experimental Design of Heuristics for Graph-Based NP-hard Problems," PhD thesis, Electrical and Computer Engineering, North Carolina State University, Raleigh, NC, May 2000. Also available at http://www.cbl.ncsu.edu/publications/#2000-Thesis-PhD-Ghosh.

  11. Tool Benchmark Sets
     • ISCAS 85 and 89, MCNC workshops, etc.
     • ISPD98 Circuit Partitioning Benchmarks
     • ITC Benchmarks
     • Texas Formal Verification Benchmarks
     • NCSU Collaborative Benchmarking Lab

  12. "Large Design Examples"
     • CMU DSP Vertical Benchmark project
     • The Manchester STEED Project
     • The Hamburg VHDL Archive
     • Wolfgang Mueller's VHDL collection
     • Sun Microsystems Community Source program
     • OpenCores.org
     • Free Model Foundry
     • ...

  13. Summary
     • There are a lot of different activities that we loosely call "benchmarking"
     • At the tool level, we don't do a very good job
     • Better methods are emerging, but:
       • Good experimental design is a LOT of work
       • You have to deeply understand the properties that matter and design the experimental data around them
     • Most of the design examples out there are not of much use for tool benchmarking

  14. To Find Out More...
     • Advanced Benchmark Web Site: http://www.eda.org/benchmrk (nope, there's no "a" in there)
     • Talk to Steve "8.3" Grout
