Verification of Configurable Processor Cores
Marines Puig-Medina, Gulbin Ezer, Pavlos Konas
Design Automation Conference, 2000, pp. 426-431
Presenter: Peter, 2000/11/06
What’s the problem?
• A verification methodology for configurable processor cores.
• The simulation-based approach uses directed diagnostics and pseudo-random program generators.
• A configurable and extensible test bench supports SOC verification.
• Coverage analysis is provided.
Introduction
• The processor core should contain only the necessary functionality (defining and incorporating new instructions) so that it consumes little power, occupies a small area, and delivers high performance. (Tensilica)
• A robust and flexible methodology is needed for verifying such a processor (both architectural and micro-architectural testing).
Configurable processor
• Xtensa: enables configurability, minimizes code size, reduces power, and maximizes performance.
• The processor generator produces: the RTL code and a test bench; a C compiler, an assembler, a linker, a debugger, a code profiler, and an instruction-set simulator (ISS). (A configuration sketch follows below.)
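As a hedged illustration of what such a generator consumes, the sketch below expresses a processor configuration as a Perl hash. Every parameter name here (interrupt count, cache sizes, the optional MAC16 unit, the user-defined instruction list) is invented for illustration; the slides do not show the real Xtensa configuration format.

```perl
#!/usr/bin/perl
# Illustrative sketch only: a hypothetical processor configuration.
# The real Xtensa generator input format is not shown in the slides;
# all parameter names here are invented for illustration.
use strict;
use warnings;

my %config = (
    name        => 'proc_example',
    interrupts  => 4,               # hypothetical: number of interrupt lines
    icache_kb   => 8,               # hypothetical: instruction-cache size
    dcache_kb   => 8,               # hypothetical: data-cache size
    has_mac16   => 1,               # hypothetical: optional multiply-accumulate unit
    extra_insts => ['byteswap'],    # hypothetical user-defined instructions
);

# A generator would read such a description and emit matching RTL,
# a test bench, and a configured tool chain (compiler, assembler, ISS).
printf "Generating core '%s' with %d interrupt(s)\n",
       $config{name}, $config{interrupts};
```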
Test program generation
• Test programs are generated with Perl scripts.
• AVPs (architectural verification programs) test the execution of each instruction in the ISA.
• MVPs (micro-architectural verification programs) test features of the Xtensa implementation.
• Random test programs complement the directed diagnostics (a generator sketch follows below).
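As a rough illustration of directed AVP generation, the sketch below emits a tiny assembly diagnostic for each instruction in a small hand-listed ISA subset. The instruction list, operand choices, and output format are assumptions for illustration; the paper's actual generators are far more elaborate.

```perl
#!/usr/bin/perl
# Minimal sketch of an AVP-style generator: for each instruction in a
# small (assumed) ISA subset, emit a directed diagnostic that executes
# the instruction and stores its result for later checking.
use strict;
use warnings;

# Hypothetical ISA subset; a real generator would derive this list
# from the processor configuration.
my @isa = (
    { mnemonic => 'add',  operands => 'a3, a4, a5' },
    { mnemonic => 'sub',  operands => 'a3, a4, a5' },
    { mnemonic => 'slli', operands => 'a3, a4, 2'  },
);

foreach my $inst (@isa) {
    my $name = $inst->{mnemonic};
    open my $fh, '>', "avp_$name.s" or die "cannot write avp_$name.s: $!";
    print $fh "# AVP for '$name' (generated)\n";
    print $fh "    movi a4, 0x123\n";           # assumed operand setup
    print $fh "    movi a5, 0xff\n";
    print $fh "    $name $inst->{operands}\n";  # exercise the instruction
    print $fh "    s32i a3, a2, 0\n";           # store result for checking
    close $fh;
}
```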
Co-simulation (1)
• The comparison process is implemented in Vera HVL (from Synopsys, Inc.).
• There are three major advantages:
• It allows fine-grained checking of processor state during simulation.
• It avoids constructing comprehensive self-checking diagnostics, which is considerably more difficult.
• It can stop the simulation at, or near, the cycle where the problem appears.
Co-simulation (2)
• The biggest challenge: finding appropriate synchronization points between models at different abstraction levels.
• In Xtensa, the interrupt latency cannot be reproduced by the ISS model.
• Comparisons must be masked off while the processor state is architecturally undefined (see the comparison sketch below).
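To make the lock-step idea concrete, here is a minimal sketch in Perl, kept consistent with the other sketches (the paper's checker is written in Vera and compares live simulation state, not post-mortem traces). The trace format, the "UNDEF" marker for architecturally undefined state, and the set of compared fields are all assumptions.

```perl
#!/usr/bin/perl
# Sketch of lock-step state comparison between an RTL trace and an ISS
# trace. Assumed trace format: one line per synchronization point,
# "cycle pc reg0 reg1 ...", with "UNDEF" marking architecturally
# undefined state (e.g. immediately after reset).
use strict;
use warnings;

my ($rtl_file, $iss_file) = @ARGV;
open my $rtl, '<', $rtl_file or die "cannot open $rtl_file: $!";
open my $iss, '<', $iss_file or die "cannot open $iss_file: $!";

while (1) {
    my $r = <$rtl>;
    my $i = <$iss>;
    last unless defined $r and defined $i;

    my ($rcycle, $rpc, @rregs) = split ' ', $r;
    my ($icycle, $ipc, @iregs) = split ' ', $i;

    # Mask off the comparison while architectural state is undefined.
    next if $rpc eq 'UNDEF' or $ipc eq 'UNDEF';

    if ($rpc ne $ipc or "@rregs" ne "@iregs") {
        # Stop at (or near) the first divergence so the failure is
        # easy to localize.
        die "Mismatch at RTL cycle $rcycle: PC $rpc vs $ipc\n";
    }
}
print "Traces match at every compared synchronization point\n";
```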
Coverage
• ISS monitors (written in Perl) check architectural-level coverage.
• Vera monitors check RTL state and micro-architectural features.
• "HDLScore" (a program-based coverage tool) and Vera FSM monitors are also used (a small monitor sketch follows below).
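As a sketch of the ISS-monitor idea, the Perl script below tallies opcode occurrences in an execution trace and flags ISA instructions that were never exercised. The trace format and the reference opcode list are assumptions for illustration.

```perl
#!/usr/bin/perl
# Sketch of an architectural-coverage monitor: count how often each
# opcode appears in an ISS execution trace (assumed format:
# "cycle pc mnemonic ...") and flag instructions never exercised.
use strict;
use warnings;

my @isa  = qw(add sub slli movi s32i l32i);   # assumed ISA subset
my %hits = map { $_ => 0 } @isa;

while (my $line = <STDIN>) {
    my (undef, undef, $mnemonic) = split ' ', $line;
    $hits{$mnemonic}++ if defined $mnemonic and exists $hits{$mnemonic};
}

foreach my $op (@isa) {
    printf "%-6s executed %d time(s)%s\n",
           $op, $hits{$op}, $hits{$op} ? '' : '  <-- coverage hole';
}
```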
The examples (1)
• Proc1: uses only part of the available options.
• Proc2: represents a maximum configuration.
• Proc3: a randomly generated configuration.
Conclusion (1)
• Presents a methodology for generating AVPs and MVPs (Perl scripts).
• Outlines the coverage-analysis methodology (based on Vera).
• The authors are working on expanding the coverage-analysis framework and the random diagnostic test-program generator.
Conclusion (2)
• Measuring coverage is useful only if the results of the analysis are conveyed back to the verification and design teams and used to improve the verification process.
• The coverage tools: Perl, Vera (Synopsys), Verification Navigator (TransEDA).