METRICS: A System Architecture for Design Process Optimization
Stephen Fenstermaker*, David George*, Andrew B. Kahng, Stefanus Mantik and Bart Thielges*
UCLA CS Dept., Los Angeles, CA
*OxSigen LLC, San Jose, CA
Motivations
• How do we improve design productivity?
• Does our design technology / capability yield better productivity than it did last year?
• How do we formally capture best known methods, and how do we identify them in the first place?
• Does our design environment support continuous improvement of the design process?
• Does our design environment support what-if / exploratory design? Does it have early predictors of success / failure?
• Currently, there are no standards or infrastructure for measuring and recording the semiconductor design process
Purpose of METRICS
• Standard infrastructure for the collection and storage of design process information
• Standard list of design metrics and process metrics
• Analyses and reports that are useful for design process optimization
METRICS allows: Collect, Data-Mine, Measure, Diagnose, then Improve
Related Work
• OxSigen LLC (Siemens, 97-99): enterprise- and project-level metrics ("normalized transistors")
• Numetrics Management Systems DPMS
• Other in-house data collection systems, e.g., TI (DAC 96 BOF)
• Web-based design support: IPSymphony, WELD, VELA, etc.
• E-commerce infrastructure: Toolwire, iAxess, etc.
• Continuous process improvement
• Data mining and visualization
Outline
• Data collection process and potential benefits
• METRICS system architecture
• METRICS standards
• Current implementation
• Issues and conclusions
Potential Data Collection / Diagnoses
• What happened within the tool as it ran? What were the CPU time, memory usage, and solution quality? What were the key attributes of the instance? What iterations / branches were made, and under what conditions?
• What else was occurring in the project? Spec revisions, constraint and netlist changes, ...
• User performs the same operation repeatedly with nearly identical inputs
  – the tool is not acting as expected
  – solution quality is poor, and knobs are being twiddled
Benefits
• Benefits for project management
  – accurate resource prediction at any point in the design cycle
  – up-front estimates for people, time, technology, EDA licenses, IP re-use, ...
  – accurate project post-mortems: everything tracked (tools, flows, users, notes); no "loose", random data left at project end
  – management console: web-based, status-at-a-glance view of tools, designs and systems at any point in the project
• Benefits for tool R&D
  – feedback on tool usage and the parameters used
  – improved benchmarking
METRICS System Architecture
[Architecture diagram: tools report metrics either through a wrapper or an embedded tool API; transmitters (including Java applets) send XML over the inter/intranet to a web server, which loads a metrics data warehouse (DB) feeding data mining and reporting.]
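To make the transmittal path concrete, here is a minimal sketch of how a wrapper or tool might hand one metric to a transmitter as XML. Everything in it (the Metric struct, toXml, the field names) is an illustrative assumption, not the actual METRICS API.

```cpp
// Hypothetical sketch only: encode one metric record as the kind of XML
// fragment a transmitter could ship to the METRICS web server.
#include <cstdio>
#include <string>

struct Metric {
    std::string tool;   // reporting tool, e.g. "placer" (assumed field)
    std::string name;   // standardized metric name, e.g. "CPU_TIME"
    std::string value;  // value serialized as text
};

// Build an XML fragment for one record (illustrative schema, not the standard).
std::string toXml(const Metric& m) {
    return "<metric tool=\"" + m.tool + "\" name=\"" + m.name + "\">" +
           m.value + "</metric>";
}

int main() {
    Metric m{"placer", "CPU_TIME", "12.4"};
    // A real transmitter would POST this over HTTP; here we just print it.
    std::printf("%s\n", toXml(m).c_str());
    return 0;
}
```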
METRICS Performance
• Transmitter
  – low CPU overhead: multi-threads / processes (non-blocking scheme); buffering reduces the number of transmissions
  – small memory footprint: limited buffer size
• Reporting
  – web-based: platform- and location-independent
  – dynamic report generation: always up-to-date
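The transmitter properties above (non-blocking enqueue, bounded buffer, batched transmissions) can be sketched in a few dozen lines. This is a stand-in written for illustration, assuming names like Transmitter and enqueue; it is not the actual C++ transmittal library.

```cpp
// Minimal sketch of a buffered, non-blocking transmitter: callers enqueue
// metrics without ever blocking, and a background thread batches them so
// many records cost one transmission.
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

class Transmitter {
public:
    explicit Transmitter(size_t cap)
        : cap_(cap), done_(false), worker_(&Transmitter::run, this) {}

    ~Transmitter() {
        { std::lock_guard<std::mutex> lk(mu_); done_ = true; }
        cv_.notify_one();
        worker_.join();   // drain remaining records before exit
    }

    // Non-blocking: if the bounded buffer is full, drop the record so the
    // instrumented tool never stalls on metrics collection.
    bool enqueue(const std::string& record) {
        std::lock_guard<std::mutex> lk(mu_);
        if (buf_.size() >= cap_) return false;
        buf_.push_back(record);
        cv_.notify_one();
        return true;
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(mu_);
        while (!done_ || !buf_.empty()) {
            cv_.wait(lk, [this] { return done_ || !buf_.empty(); });
            std::vector<std::string> batch(buf_.begin(), buf_.end());
            buf_.clear();
            lk.unlock();
            send(batch);   // one transmission per batch, not per record
            lk.lock();
        }
    }

    void send(const std::vector<std::string>& batch) {
        // Stand-in for an HTTP POST of the XML-encoded batch.
        for (const auto& r : batch) std::cout << "sent: " << r << "\n";
    }

    size_t cap_;
    bool done_;
    std::deque<std::string> buf_;
    std::mutex mu_;
    std::condition_variable cv_;
    std::thread worker_;
};

int main() {
    Transmitter t(128);
    t.enqueue("<metric name=\"CPU_TIME\">12.4</metric>");
    return 0;
}
```

Dropping records when the buffer is full is one way to guarantee the instrumented tool never blocks; the real system may make a different trade-off.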
Example Reports
[Example report charts: "% aborted per task" (synthesis 20%, ATPG 22%, postSyntTA 13%, BA 8%, funcSim 7%, placedTA 7%, physical 18%, LVS 5%); "% aborted per machine" (hen 95%, donkey 2%, bull 2%, rat 1%); and an "LVS convergence" plot over time.]
Current Results
• CPU_TIME = 12 + 0.027 × NUM_CELLS (corr = 0.93)
• More plots are accessible at http://xenon.cs.ucla.edu:8080/metrics
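For readers unfamiliar with how such a model is derived: it is an ordinary least-squares fit of CPU_TIME against NUM_CELLS. The sketch below uses made-up sample points, since the underlying measurements are not reproduced here; only the method is shown.

```cpp
// Illustrative least-squares fit of CPU_TIME against NUM_CELLS. The sample
// data is fabricated for the example; only the fitting method is real.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> x = {1000, 5000, 10000, 20000};  // NUM_CELLS (fake)
    std::vector<double> y = {40, 150, 280, 550};         // CPU_TIME s (fake)
    double n = x.size(), sx = 0, sy = 0, sxx = 0, sxy = 0, syy = 0;
    for (size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i]; syy += y[i] * y[i];
    }
    // Closed-form simple linear regression and Pearson correlation.
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double intercept = (sy - slope * sx) / n;
    double corr = (n * sxy - sx * sy) /
                  std::sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
    std::printf("CPU_TIME = %.1f + %.4f * NUM_CELLS (corr = %.2f)\n",
                intercept, slope, corr);
    return 0;
}
```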
METRICS Standards
• Standard metrics naming across tools
  – same name ⇔ same meaning, independent of tool supplier
  – generic metrics and tool-specific metrics
  – no more ad hoc, incomparable log files
• Standard schema for the metrics database
• Standard middleware for the database interface
• For the complete current lists, see: http://vlsicad.cs.ucla.edu/GSRC/METRICS
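As a purely hypothetical illustration of what one record under such a standardized schema might carry (the actual field list and naming standard live at the URL above, so every name here is an assumption):

```cpp
// Hypothetical sketch of a standardized metrics record; field names are
// assumptions made for illustration, not the published METRICS schema.
#include <iostream>
#include <string>

struct MetricRecord {
    std::string project;   // design project the run belongs to
    std::string flow_run;  // which invocation of the flow produced it
    std::string tool;      // tool name and version (supplier-independent)
    std::string name;      // standardized name: generic (e.g. "CPU_TIME")
                           //   or tool-specific (e.g. "PLACE.WIRELENGTH")
    std::string value;     // serialized value; typed columns in the real DB
};

int main() {
    MetricRecord r{"soc_demo", "run_042", "placer-1.0", "CPU_TIME", "12.4"};
    std::cout << r.name << " = " << r.value << "\n";
    return 0;
}
```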
Generic and Specific Tool Metrics
[Table: partial list of the metrics now being collected in Oracle8i; not reproduced here.]
Testbed I: Metricized P&R Flow
[Flow diagram: Capo Placer, QP ECO, WRoute, Incr. WRoute, and Congestion Analysis, carrying LEF/DEF through Placed DEF, Legal DEF, a Congestion Map, and Routed DEF to a Final DEF; each step reports to METRICS.]
Testbed II: Metricized Cadence SLC Flow
[Flow diagram: QP, CTGen, QP Opt, WRoute, and Pearl, taking DEF plus Incr. LEF, GCF/TLF, and constraints through Placed DEF, Clocked DEF, and Optimized DEF to a Routed DEF; each step reports to METRICS.]
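A "metricized" flow step can be pictured as the underlying tool run under a thin wrapper that times it and reports metrics. The sketch below is an assumption-laden illustration, not the actual Cadence/UCLA wrapper; in the real flows the printed records would be transmitted as XML to the METRICS server.

```cpp
// Hypothetical wrapper around one flow step: run the underlying tool,
// time it, and report metrics. Command line and metric names are
// placeholders, not the real wrapper interface.
#include <chrono>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: wrap <command>\n"); return 1; }
    auto t0 = std::chrono::steady_clock::now();
    int status = std::system(argv[1]);          // run the wrapped tool
    auto t1 = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(t1 - t0).count();
    // A real wrapper would transmit these to the METRICS server as XML.
    std::printf("<metric name=\"WALL_TIME\">%.1f</metric>\n", secs);
    std::printf("<metric name=\"EXIT_STATUS\">%d</metric>\n", status);
    return status == 0 ? 0 : 1;
}
```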
Conclusions
• Current status
  – complete prototype of the METRICS system with Oracle8i, Java Servlets, an XML parser, and a transmittal API library in C++
  – METRICS wrappers for the Cadence and Cadence-UCLA flows and for front-end tools (Ambit BuildGates and NCSim)
  – easiest proof of value: via use of regression suites
• Issues for METRICS constituencies to solve
  – security: proprietary and confidential information
  – standardization: flow, terminology, data management, etc.
  – social: "big brother", collection of social metrics, etc.
• Ongoing work with the EDA and designer communities to identify tool metrics of interest
  – users: metrics needed for design process insight and optimization
  – vendors: implementation of the requested metrics, with standardized naming / semantics
Thank You
http://vlsicad.cs.ucla.edu/GSRC/METRICS