Workshop Summary FMCAD 2006
Discussion Items
• Benchmarks
• HW vs. SW verification differences (to help sharpen our agenda)
• Use of abstractions in HW verification
  • explicit vs. implicit
  • impact on the ability to dial down (as Roope said)
• Whether we need specialists who focus on different functional block types
• HW verification is by no means a solved problem!
Benchmarks
• What makes a good set?
  • not too big
  • well documented
  • encourage papers on them?
• Who has what?
  • Ganesh:
    • hierarchical cache coherence protocols (two of them, in Murphi; see the sketch below)
    • (soon) VHDL-level models of the German protocol
  • Sudarshan Srinivasan's CPU benchmarks
• What do we do with these benchmarks?
  • Identify verification issues
  • End-to-end verification of properties is not the important thrust; the end-to-end verification experience of engineers is
  • Versions with and without bugs
    • inject high-quality bugs
    • inject industrial-like "ugliness" into them
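As a concrete illustration of what a Murphi-style benchmark exercises, here is a minimal explicit-state reachability check in Python. The toy two-cache protocol, its rules, and the invariant are invented for this sketch and are not one of the actual benchmarks: guarded rules generate successor states, and a coherence invariant is checked in every reachable state.

```python
# Illustrative sketch only: a toy Murphi-style explicit-state check,
# not one of the actual FMCAD benchmarks. Protocol, rules, and the
# invariant are simplified stand-ins.
from collections import deque

CACHES = (0, 1)
STATES = ("I", "S", "M")              # Invalid, Shared, Modified

def rules(state):
    """Yield successor states under simple guarded commands."""
    for c in CACHES:
        other = 1 - c
        # rule: acquire a shared copy (guard: cache c is Invalid)
        if state[c] == "I":
            s = list(state)
            s[c] = "S"
            if state[other] == "M":   # writer must downgrade
                s[other] = "S"
            yield tuple(s)
        # rule: upgrade to Modified (guard: cache c is not Modified)
        if state[c] != "M":
            s = list(state)
            s[c] = "M"
            s[other] = "I"            # invalidate the other copy
            yield tuple(s)
        # rule: evict (guard: cache c holds a copy)
        if state[c] != "I":
            s = list(state)
            s[c] = "I"
            yield tuple(s)

def coherent(state):
    """Invariant: at most one Modified copy, and M excludes S."""
    if state.count("M") > 1:
        return False
    return not ("M" in state and "S" in state)

def check():
    init = ("I", "I")
    seen, frontier = {init}, deque([init])
    while frontier:                   # breadth-first reachability
        cur = frontier.popleft()
        assert coherent(cur), f"invariant violated in {cur}"
        for nxt in rules(cur):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(f"explored {len(seen)} states, invariant holds")

if __name__ == "__main__":
    check()
```

Real benchmarks of this kind differ mainly in scale (hierarchical protocols, many caches, directory state), not in the shape of the check.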
Benchmarks
• Who has what?
  • TRIPS benchmark from UT (multicore benchmark)
  • VAMP processor
    • unit-by-unit verification
    • PVS theorems around each unit are available
    • Isabelle theorems also available
  • Jun's 9801 verification
Benchmarks (a)
• Power / clock gating
• Take non-trivial, complex designs that are already out there
  • What types? OpenCores?
  • Enlist IBM/Intel/other experts to help massage them and make them "real enough"
  • Put them out there for the FV community to attack
• Hold tool competitions
  • which tool can solve the problem at all, not performance (C. Jacobi)
  • the methodology will be interesting to study (J. Baum.)
• Reward work on benchmarks
  • recognize it through papers accepted at FMCAD
Benchmarks (b)
• John O.:
  • SRC should fund benchmarks, since this is such an important issue
  • verification methodology is THE issue; it helps turn "art" into engineering
• Pete:
  • can NSF work towards benchmark development?
  • can it be funded at 200K over 2 years?
The community interested in processor-like problems
• Clarke's
• Pete's
• Mark Aagaard's
• Ganesh's
• Karem
• W. Hunt's
• J Moore's
• Wolfgang
• Arvind: Joe Stoy, Nikhil, …
• Germany: 7 or 8 groups, plus Europe and the UK
  • W. Kunz
  • Gordon
  • Hans Eveking
  • Tom Melham
  • Sheeran
Topics of interest (and candidates for representation in benchmarks)
• Verification of microarchitecture
  • How do we describe it?
    • SystemC is the present choice in much current work…
    • BSV?
    • synchronous Murphi (transaction level…) under development…
    • common characteristic: guarded-command notations, it seems (see the sketch below)
  • Link with RTL
  • Links between functional and performance models
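To make the "guarded command" observation concrete, here is a minimal sketch in Python of what such a description looks like. This is a hypothetical mini-notation invented for the example, not SystemC, BSV, or synchronous Murphi themselves: a microarchitecture is a set of atomic rules, each a guard plus an action, and an execution fires enabled rules one at a time.

```python
# Illustrative sketch only (hypothetical mini-notation): guarded atomic
# rules in the spirit of BSV / synchronous Murphi, describing a tiny
# two-stage "fetch / execute" microarchitecture at transaction level.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class State:
    pc: int = 0
    fetched: Optional[int] = None     # one-entry buffer between stages
    acc: int = 0
    program: List[int] = field(default_factory=lambda: [1, 2, 3, 4])

@dataclass
class Rule:
    name: str
    guard: Callable[[State], bool]    # when may the rule fire?
    action: Callable[[State], None]   # atomic state update

def fetch_action(s: State) -> None:
    s.fetched = s.program[s.pc]
    s.pc += 1

def execute_action(s: State) -> None:
    s.acc += s.fetched                # "execute" = accumulate the operand
    s.fetched = None

RULES = [
    Rule("fetch",
         guard=lambda s: s.fetched is None and s.pc < len(s.program),
         action=fetch_action),
    Rule("execute",
         guard=lambda s: s.fetched is not None,
         action=execute_action),
]

def run(state: State) -> State:
    """Fire enabled rules one at a time until none is enabled."""
    while True:
        enabled = [r for r in RULES if r.guard(state)]
        if not enabled:
            return state
        enabled[0].action(state)      # a real scheduler may pick any enabled rule

if __name__ == "__main__":
    final = run(State())
    print(final.acc)                  # 10: all four operands accumulated
```

The point of the notations mentioned above is that this rule/guard structure is native to the language, with scheduling of enabled rules handled by the tool rather than written by hand.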
Topics of interest (contd.)
• Microcode verification
  • How to approach it?
  • Microarchitecture + microcode verification
  • Invite Eli Singerman to talk about it?
• (aside: SixthSense has modern reduction techniques such as interpolation; BDDs rendered usable...)
• (aside: RuleBase experience is also similar, callable)
• (STE: so far BDDs + elbow grease + theorem proving…)
Other topics of interest
• Post-silicon verification: how can formal models help?
• Trace arrays: how to
  • break and snapshot
  • backward bounded model checking (see the sketch below)
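As an illustration of the backward bounded step, here is a small Python sketch; the 3-bit "design", its transition function, and the snapshot value are all invented for the example. Starting from a state captured in a trace array, it works backwards over the inverted transition relation to enumerate the states that could have led to the snapshot within k steps. In practice this would be done symbolically with SAT or BDDs rather than by explicit enumeration.

```python
# Illustrative sketch only: backward bounded exploration from a
# post-silicon snapshot. The 3-bit counter "design" and the snapshot
# value are made-up stand-ins for a real trace-array dump.
from itertools import product
from typing import Dict, Set, Tuple

State = Tuple[int, int, int]          # 3 state bits

def step(s: State) -> State:
    """Forward transition relation: a 3-bit counter that skips 0b101."""
    v = (s[0] * 4 + s[1] * 2 + s[2] + 1) % 8
    if v == 0b101:
        v = (v + 1) % 8
    return ((v >> 2) & 1, (v >> 1) & 1, v & 1)

def invert(trans) -> Dict[State, Set[State]]:
    """Build the inverse image of the transition function by enumeration."""
    inv: Dict[State, Set[State]] = {}
    for s in product((0, 1), repeat=3):
        inv.setdefault(trans(s), set()).add(s)
    return inv

def backward_bounded(snapshot: State, k: int) -> list:
    """Layers of states that can reach `snapshot` in exactly 0..k steps."""
    inv = invert(step)
    layers = [{snapshot}]
    for _ in range(k):
        prev = set()
        for s in layers[-1]:
            prev |= inv.get(s, set())
        layers.append(prev)
    return layers

if __name__ == "__main__":
    snapshot = (1, 1, 0)              # value captured in the trace array
    for depth, states in enumerate(backward_bounded(snapshot, k=3)):
        print(f"{depth} steps back: {sorted(states)}")
```

The output lists, per depth, the candidate states the silicon may have passed through before the snapshot, which is the information needed to reconstruct how a failure was reached.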