Architecting Testbenches
• Reusable Verification Components
• Verilog Implementation
• VHDL Implementation
• Autonomous Generation and Monitoring
• Input and Output Paths
• Verifying Configurable Designs
Reusable Verification Components
• Maximize the amount of verification code reused across testbenches
• This minimizes the development effort
• A testbench is made up of two components:
  • A reusable test harness
  • Testcase-specific code
Procedural Interfaces for Reusable Components
• For components to be reusable, define procedural interfaces that are independent of the detailed implementation
• That way, as the implementation changes, the testcases do not have to
• Provide flexibility through thin layers
• Thin layers give the user fine control or greater abstraction as needed (byte access vs. packet access)
• Do not try to implement all functionality at one level
• Preserve the procedural interfaces
Development Process for Reusable Components
• Introduce flexibility as needed
• This way you don't spend time on unnecessary functions
• Use the verification plan to determine which functions are necessary
• When you need more complex functions, build upon already-verified functions
VHDL Implementation
• Refine the monolithic view into a client/server BFM, access and utility functions, and testcases
• The goal is a flexible implementation strategy that promotes reuse of verification components
• To change a testcase, just change the control block
VHDL Implementation – Packaging Bus-Functional Procedures
• BFMs can be located in a package to be reused (see the sketch below)
• Driven/monitored signals are passed as signal-class arguments
• BFM procedures are cumbersome to use:
  • All signals involved in the transaction must be passed to the procedure
• Duplication across testbenches; each testbench must:
  • Declare all interface signals
  • Instantiate the DUV
  • Properly connect everything
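The following is a minimal sketch of a packaged bus-functional write procedure, assuming a simple synchronous bus; the pin names (addr, data, we) and widths are hypothetical. It shows why the style is cumbersome: every signal the transaction touches must be a signal-class argument.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package cpu_bfm_pkg is
  -- write one word over a simple synchronous bus (hypothetical pins)
  procedure cpu_write(
    constant waddr : in  std_logic_vector(7 downto 0);
    constant wdata : in  std_logic_vector(31 downto 0);
    signal   clk   : in  std_logic;
    signal   addr  : out std_logic_vector(7 downto 0);
    signal   data  : out std_logic_vector(31 downto 0);
    signal   we    : out std_logic);
end package cpu_bfm_pkg;

package body cpu_bfm_pkg is
  procedure cpu_write(
    constant waddr : in  std_logic_vector(7 downto 0);
    constant wdata : in  std_logic_vector(31 downto 0);
    signal   clk   : in  std_logic;
    signal   addr  : out std_logic_vector(7 downto 0);
    signal   data  : out std_logic_vector(31 downto 0);
    signal   we    : out std_logic) is
  begin
    wait until rising_edge(clk);
    addr <= waddr;       -- drive the physical interface...
    data <= wdata;
    we   <= '1';
    wait until rising_edge(clk);
    we   <= '0';         -- ...and release it after one cycle
  end procedure cpu_write;
end package body cpu_bfm_pkg;
```

Every testbench calling cpu_write must declare all of these interface signals and connect them to the DUV itself; that duplication is exactly what the test harness removes.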
VHDL Implementation – Creating a Test Harness
• Instead, add some abstraction: use a test harness
• It contains declarations and functionality common to all testbenches
• This reduces the amount of duplicated information
• Determine the common elements:
  • Declaration of the component
  • Declaration of the interface signals
  • Instantiation of the DUV
  • Mapping of interface signals to the ports on the design
  • Mapping of interface signals to the signal-class arguments of the bus-functional procedures
VHDL Implementation – Creating a Test Harness (cont)
• Use an intermediate level of hierarchy to encapsulate the test harness
• Have the test harness own the procedures
• All the signals associated with the DUV are then local
• The testcase control process instructs the local (harness) process
• Use control signals to communicate between the testcase and the test harness
VHDL Implementation – Creating a Test Harness (cont)
• Control signals:
  • Server signals – used in the local (test harness) process
  • Client signals – used in the testcase
• These signals must be visible to both the client and the server, which live in different architectures; two options:
  • Pass them as ports of the test harness
  • Declare them as global signals in a package
VHDL Implementation – Creating a Test Harness (cont)
• Global signals eliminate the need for each testbench architecture to declare them
• Use 'transaction to synchronize operations (see the sketch below)
• Unlike 'event, 'transaction detects an assignment even when the value does not change, so two consecutive identical operations are not missed
• Make the control signals records:
  • This minimizes maintenance if the signals change
  • It minimizes the impact on the client/server processes
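A sketch of the control signals as records declared globally in a shared package; the operation names and record fields are illustrative assumptions, not a fixed interface.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package harness_ctl_pkg is
  type operation_t is (NOOP, READ_OP, WRITE_OP);

  type to_server_t is record        -- client (testcase) -> server (harness)
    kind : operation_t;
    addr : std_logic_vector(7 downto 0);
    data : std_logic_vector(31 downto 0);
  end record;

  type from_server_t is record      -- server (harness) -> client (testcase)
    data : std_logic_vector(31 downto 0);
    ack  : bit;                     -- toggled by the server per completion
  end record;

  -- global signals: visible to client and server architectures alike,
  -- with no ports to declare or map in each testbench
  signal to_server   : to_server_t;
  signal from_server : from_server_t;
end package harness_ctl_pkg;
```

The server process in the harness can then synchronize with "wait on to_server'transaction;", which wakes on every assignment to the record, even when the new value equals the old one; a plain 'event-based wait would sleep through the second of two identical operations.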
VHDL Implementation – Abstracting the Client/Server Protocol
• The client must properly operate the control signals
• It must wait for the server process to finish the transaction
• Use transactions, not event-based waits
• By itself, defining a communication protocol on signals between the client and server doesn't seem to accomplish much: instead of a physical interface, you now have an arbitrary one
• Encapsulate the protocol for ease of maintenance:
  • The client no longer has to know the details of the protocol
  • The protocol can change without changing the testcases
VHDL Implementation – Abstracting the Client/Server Protocol (cont)
• Encapsulate by placing server-access procedures into a package (see the sketch below)
• Client processes are then free from knowing the details of the client/server protocol
• To perform an operation, call the appropriate procedure
• Pass the control signals to the procedure
• Use the package to get access to the global signals and procedures
• Only two signals (records) are needed:
  • One for communication to the server process
  • One for communication from the server process
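A sketch of one access procedure that hides the protocol, building on the hypothetical harness_ctl_pkg records above. Note one VHDL subtlety: 'transaction may not be read on a formal signal parameter, so this sketch has the server toggle the ack field, guaranteeing an event the client can wait on.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use work.harness_ctl_pkg.all;

package access_pkg is
  procedure do_write(
    constant waddr : in  std_logic_vector(7 downto 0);
    constant wdata : in  std_logic_vector(31 downto 0);
    signal   to_srv   : out to_server_t;    -- only two signals cross
    signal   from_srv : in  from_server_t); -- the procedural interface
end package access_pkg;

package body access_pkg is
  procedure do_write(
    constant waddr : in  std_logic_vector(7 downto 0);
    constant wdata : in  std_logic_vector(31 downto 0);
    signal   to_srv   : out to_server_t;
    signal   from_srv : in  from_server_t) is
  begin
    -- encode the request; the server decodes it and runs the BFM
    to_srv <= (kind => WRITE_OP, addr => waddr, data => wdata);
    -- block until the server replies; the ack toggle guarantees an
    -- event on from_srv at every completion
    wait on from_srv;
  end procedure do_write;
end package body access_pkg;
```

A testcase now simply calls do_write(X"04", X"DEADBEEF", to_server, from_server) without ever naming a DUV pin.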
VHDL Implementation – Abstracting the Client/Server Protocol (cont)
• What has been gained? Signals are still passed to the bus-functional access procedures, but:
  • The testcase need not declare all the interface signals to the DUV
  • The testcase need not instantiate and connect the design
  • Duplication is reduced
  • Regardless of the number of signals in the physical interface, only two need to be passed
• The testcase is completely removed from the physical interface of the DUV
• Pins can be added or removed, polarities changed, etc. with no effect on existing testcases
VHDL Implementation – Managing Control Signals
• Using two control signals is the simplest method:
  • One sends control and synchronization information to the server
  • One sends control and synchronization information to the client
• Using one control signal requires additional development effort:
  • A resolution function is required
  • It must include a mechanism to differentiate values driven by the client and by the server
  • Proper interlocking is needed for parallel procedure calls
• There may be dozens of server processes
• Each has its own packages/procedures
• Use qualified names to prevent identifier collisions
VHDL Implementation – Multiple Server Instances
• Many designs have multiple instances of the same interface
• If one BFM is instantiated multiple times, you need a way to indicate which instance should perform an operation
• Use an array of control signals (see the sketch below):
  • One pair for each server
• A generate statement can instantiate the BFMs
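A sketch of per-instance control pairs, assuming the record types above and a hypothetical port_bfm entity that wraps the packaged server process.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use work.harness_ctl_pkg.all;

entity multi_port_harness is
  generic (N_PORTS : positive := 4);
end entity multi_port_harness;

architecture bench of multi_port_harness is
  -- one control pair per server instance
  type to_server_array_t   is array (0 to N_PORTS-1) of to_server_t;
  type from_server_array_t is array (0 to N_PORTS-1) of from_server_t;
  signal to_srv   : to_server_array_t;
  signal from_srv : from_server_array_t;
begin
  -- one BFM per interface; the testcase selects an instance by index
  g_bfm : for i in 0 to N_PORTS-1 generate
    u_bfm : entity work.port_bfm      -- hypothetical packaged BFM wrapper
      port map (to_srv   => to_srv(i),
                from_srv => from_srv(i));
  end generate g_bfm;
end architecture bench;
```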
VHDL Implementation – Utility Packages
• Utility routines can provide an additional level of abstraction
• They are composed of procedures
• Encapsulate them in separate packages built on the lower-level access packages
• Example: use the word-level read/write access procedures to build packet-level read/write functions (see the sketch below)
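A sketch of a utility procedure layered on the hypothetical do_write access procedure from earlier, turning word-level accesses into a packet-level one while preserving the same two-record procedural interface.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use work.harness_ctl_pkg.all;
use work.access_pkg.all;

package packet_pkg is
  type packet_t is array (natural range <>) of std_logic_vector(31 downto 0);

  procedure write_packet(
    constant base : in  std_logic_vector(7 downto 0);
    constant pkt  : in  packet_t;
    signal   to_srv   : out to_server_t;
    signal   from_srv : in  from_server_t);
end package packet_pkg;

package body packet_pkg is
  procedure write_packet(
    constant base : in  std_logic_vector(7 downto 0);
    constant pkt  : in  packet_t;
    signal   to_srv   : out to_server_t;
    signal   from_srv : in  from_server_t) is
  begin
    -- reuse the already-verified word-level access for every packet word
    for i in pkt'range loop
      do_write(base, pkt(i), to_srv, from_srv);
    end loop;
  end procedure write_packet;
end package body packet_pkg;
```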
Autonomous Generation and Monitoring
• You now have a properly packaged BFM
• BFMs can become active entities
• This removes the tedious task of generating background/random data
• It also removes the task of performing detailed response checking
• An autonomous BFM can include a variety of tasks:
  • Safety checks
  • Data generation
  • Data collection
  • etc.
• Constant attention from the testcase is not required
• Instead, the testcase just configures the BFM
Autonomous Stimulus
• Protocols may require more data than is relevant to the testcase
• ATM protocol: when no active cell is available, a "dummy" cell must be sent
• A procedure (send_cell) would inject the cell, but what about the other cycles?
• The testcase shouldn't have to worry about them
• Add autonomous stimulus to the BFM
• Now send_cell injects the cell at the appropriate time (after the current dummy cell is sent) and then returns to the testcase
Autonomous Stimulus (cont)
• Autonomous-stimulus BFMs offer two types of procedures (sketched below):
  • Blocking – the procedure does not return control to the testcase until the BFM is done with the function
  • Non-blocking – the procedure returns control to the testcase immediately
• Non-blocking operation requires actions to be queued
• Procedures are also used to adjust constraints on the fly
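A sketch contrasting the two styles, assuming a variant of the earlier control record that carries an operation kind and a cell payload; cell_t, SEND_CELL and QUEUE_CELL are hypothetical names, and these procedures would live in the same access package as the sketches above.

```vhdl
-- blocking: does not return until the BFM has driven the cell
procedure send_cell(
  constant c : in cell_t;
  signal   to_srv   : out to_server_t;
  signal   from_srv : in  from_server_t) is
begin
  to_srv <= (kind => SEND_CELL, cell => c);
  wait on from_srv;               -- hold the testcase until the ack toggles
end procedure send_cell;

-- non-blocking: the BFM queues the cell and drives it later, between
-- the dummy cells it generates autonomously
procedure queue_cell(
  constant c : in cell_t;
  signal   to_srv : out to_server_t) is
begin
  to_srv <= (kind => QUEUE_CELL, cell => c);
  -- no wait: control returns to the testcase immediately
end procedure queue_cell;
```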
Random Stimulus
• Generated data can be random
• The BFM can contain algorithms that generate random data, in a random sequence, at random intervals (see the sketch below)
• You may want control procedures on the BFM to change the random constraints
• Given enough information, the autonomous BFM can assist in verifying the output of the DUV
• Packet-router example: header and filler information can be randomly generated, but the expected output is a function of them; the BFM has that knowledge and the testcase doesn't, so use the BFM to generate the expected response
• Procedures are used to start/stop the BFM and fill in the routing table
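A sketch of an autonomous random generator process using the uniform procedure from ieee.math_real; the port names and the byte-wide payload are assumptions.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.math_real.all;

entity rand_stim_bfm is
  port (clk     : in  std_logic;
        enabled : in  std_logic;   -- start/stop control from the testcase
        data    : out std_logic_vector(7 downto 0));
end entity rand_stim_bfm;

architecture sketch of rand_stim_bfm is
begin
  gen : process
    variable seed1, seed2 : positive := 1;   -- see the seeds discussion later
    variable r            : real;
  begin
    wait until rising_edge(clk);
    if enabled = '1' then
      uniform(seed1, seed2, r);              -- r uniform in (0.0, 1.0)
      data <= std_logic_vector(to_unsigned(integer(trunc(r * 256.0)), 8));
    end if;
  end process gen;
end architecture sketch;
```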
Injecting Errors
• How do you ensure that your monitors are working?
• The BFM can also be configured to generate error conditions, to ensure they are detected
• Example: parity generation
• Example: CRC generation
Autonomous Monitoring
• We have differentiated between monitor and generator based on who controls the timing (who initiates)
• Monitors must be continuously listening
• If not, outputs can be missed
• If procedures are used to activate monitoring, there is potential for missed activity
• Back-to-back calls must have zero delay between them, or activity slips through
Autonomous Monitoring (cont)
• The role of the testbench is to detect as many errors as possible!
• Even errors from within the testbench (usage errors)
• If an unexpected output operation is not caught, a bad design can go unnoticed
• Solution: autonomous monitors
• Continuously calling monitor procedures is cumbersome; use an autonomous monitor instead (see the sketch below)
• It monitors the outputs continuously, all the time
• Use queues if needed
• The testcase can still use a procedure; the BFM just dequeues from its queues
• An error can be generated if items remain in a queue at the end of the test
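A sketch of an autonomous monitor with a small expect queue; the testcase pushes expected values through a procedure-driven interface, reduced here to an expect/expect_en pair for brevity, and all names and widths are assumptions.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity out_monitor is
  port (clk       : in std_logic;
        valid     : in std_logic;                     -- DUV output strobe
        data      : in std_logic_vector(7 downto 0);  -- DUV output data
        expect    : in std_logic_vector(7 downto 0);  -- pushed by testcase
        expect_en : in std_logic);
end entity out_monitor;

architecture sketch of out_monitor is
begin
  mon : process
    type queue_t is array (0 to 15) of std_logic_vector(7 downto 0);
    variable q          : queue_t;
    variable head, tail : natural := 0;  -- simple circular expect queue
  begin
    wait until rising_edge(clk);
    if expect_en = '1' then               -- testcase enqueues an expectation
      q(tail mod 16) := expect;
      tail := tail + 1;
    end if;
    if valid = '1' then                   -- outputs are checked all the time
      assert tail > head
        report "unexpected output operation" severity error;
      if tail > head then
        assert data = q(head mod 16)
          report "output data mismatch" severity error;
        head := head + 1;
      end if;
    end if;
    -- an end-of-test check (not shown) flags items left in the queue
  end process mon;
end architecture sketch;
```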
Autonomous Error Detection
• Go one step further and place all checking in the autonomous monitor
• Used in environments where the data contains a description of the expected response
• The stimulus BFM could also give the monitor the expected response
• Procedures can pass configuration options
Input and Output Paths
• Each testcase must provide different stimulus and expect different responses
• Create these differences by configuring the test harness in various ways and providing different data sequences
• To this point we have assumed that data was hard-coded in the testcase
Programmable Testbenches
• In most cases, stimulus data and expected responses are specified in the verification plan and hard-coded in the testcase
• The BFM, in the test harness, applies/receives the data via procedures called from the testcase
• The alternative is a programmable environment:
  • Stimulus and expected responses are read from a file
  • No recompile is needed for new testcases
• A common approach has an external C program (a reference model) generate the data and the expected responses
Programmable Testbenches (cont)
• VHDL can read any text file (via std.textio), but the facilities are primitive
• Verilog can only read binary/hex memory images ($readmemb/$readmemh)
• You may want to "massage" the data before placing it into these files
• A sketch of a file-driven stimulus process follows
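A sketch of a file-programmed stimulus process using std.textio; hread comes from the de facto standard ieee.std_logic_textio package (folded into std_logic_1164 in VHDL-2008). The file format, one hex address/data pair per line, is an assumption.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use std.textio.all;
use ieee.std_logic_textio.all;

entity file_stim is
  generic (STIM_FILE : string := "testcase1.dat");  -- passed in, not hardcoded
  port (clk  : in  std_logic;
        addr : out std_logic_vector(7 downto 0);
        data : out std_logic_vector(31 downto 0));
end entity file_stim;

architecture sketch of file_stim is
begin
  rd : process
    file     f : text open read_mode is STIM_FILE;
    variable l : line;
    variable a : std_logic_vector(7 downto 0);
    variable d : std_logic_vector(31 downto 0);
  begin
    while not endfile(f) loop
      readline(f, l);
      hread(l, a);                 -- hex address
      hread(l, d);                 -- hex data
      wait until rising_edge(clk);
      addr <= a;
      data <= d;
    end loop;
    wait;                          -- end of file: suspend forever
  end process rd;
end architecture sketch;
```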
Configuration Files
• Packaged BFMs create the possibility of making a testbench difficult to understand or manage
• To minimize that possibility, use self-contained BFMs controlled by procedures
• The only exception may be the seed for randomization
• The functionality of a testcase originates from the top-level testcase control description; non-default configuration settings should originate from that same point
• Avoid external configuration files:
  • They create a more complicated management task
  • The task grows exponentially with the number of files
Configuration Files (cont)
• If you must use files:
  • Do NOT hardcode filenames inside a BFM or procedure; pass the name as a parameter to the procedure
  • A hardcoded filename makes it unclear where the data is coming from
  • Passed as a parameter, the top-level testcase shows which file is used
• More about output file management later!
Concurrent Simulation
• Another problem with hardcoded filenames: concurrent simulation
• Hardcoded names cause collisions between multiple simulations
• Make filenames unique
Compile-Time Configuration
• Avoid compile-time configuration of BFMs
• Each configuration requires compiling the source code with a different header file
• This makes managing a testcase more complicated (extra files to maintain)
• It may make it impossible to run concurrent compiled simulations
• Because Verilog always compiles before simulation, this approach may be tempting
• If you must, minimize the number of files: use a single compile-time configuration file
Verifying Configurable Designs
• If different configurations must be verified, architect your environment for this
• Two types of configurable designs:
  • Soft – can be changed by a testcase
    • Example: randomization constraints
    • This type is implicitly covered by the verification process
  • Hard – cannot be changed once simulation starts
    • Example: clock frequency
    • The testbench must be properly designed to support this in a reproducible fashion
Configurable Testbenches
• Configure the testbench to match the DUV
• If a design can be made configurable, so can a testbench
• Use generics/parameters to achieve this (see the sketch below)
• This allows configuration details to be propagated down the hierarchy from the top level
• Using this approach, almost anything can be configured
• Generics/parameters can be used in signal declarations, generate statements, etc.
• If a component is configurable via generics or parameters, make that part of its interface
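A sketch of a harness that mirrors the DUV's generic and propagates it downward; work.design with a DATA_WIDTH generic and these port names is a hypothetical configurable DUV.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity cfg_harness is
  generic (DATA_WIDTH : positive := 32);   -- mirrors the DUV's generic
end entity cfg_harness;

architecture bench of cfg_harness is
  signal clk     : std_logic := '0';
  -- the generic sizes the interface signals...
  signal rx_data : std_logic_vector(DATA_WIDTH-1 downto 0);
begin
  clk <= not clk after 5 ns;

  -- ...and is passed straight down to the DUV
  duv : entity work.design(rtl)            -- hypothetical configurable DUV
    generic map (DATA_WIDTH => DATA_WIDTH)
    port map (clk => clk, rx_data => rx_data);
end architecture bench;
```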
Top-Level Generics and Parameters
• Top-level modules and entities can have generics and parameters
• Each configurable component has them
• You may want to control them from the top level
• The top level has no pins or ports, but it can have generics/parameters
• Configurations can be controlled from these values
• Some simulators allow setting them from the command line
  • Not portable among simulators
• You can wrap the top level with another layer of hierarchy
• Verilog can use the defparam statement
• VHDL can use configurations (see the sketch below)
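A sketch of the VHDL configuration approach, overriding the hypothetical harness generic from a configuration declaration so the variant is selected without editing the source; note this requires the DUV to be instantiated as a component rather than by direct entity instantiation.

```vhdl
-- assumes architecture bench of cfg_harness instantiates the DUV as a
-- component named "design" with instance label "duv"
configuration wide_cfg of cfg_harness is
  for bench
    for duv : design
      use entity work.design(rtl)
        generic map (DATA_WIDTH => 64);  -- variant selected here
    end for;
  end for;
end configuration wide_cfg;
```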
Random Environments
• "Random" is used to describe many environments
• Some teams call testcase generators "random"; they have randomness in the generation process and are one type
• Two types:
  • Pre-test randomization
  • During-test randomization
• Both utilize autonomous stimulus and autonomous checking
Pre-Test Randomization
• A technique used to randomize the controls to the BFMs
• Introduce another layer of hierarchy to the verification environment: a common testcase
• Example: if the DUV has four major functions, pre-test randomization could randomly pick which of the four to exercise
• An input to the common testcase could be how many to test
During-Test Randomization
• This utilizes the methods of autonomous stimulus and monitoring
• For a given interface, it is done entirely from within a BFM
• It is controlled via procedures called from the testcase and/or the common testcase
Random Drivers/Checkers
• The most robust random environments use autonomous stimulus and autonomous monitoring
• Autonomous stimulus:
  • Allows more flexibility and more control
  • Provides the capability to stress the DUV
• Autonomous monitoring flags errors as they occur; the testcase can be stopped as soon as a random input sequence causes an error
• Overall quality is determined by the verification engineer; it is limited by:
  • Scenarios missing from the verification plan
  • Checks not implemented correctly, or missing
Random Drivers/Checkers (cont)
• Costs of an optimal random environment:
  • Code intensive
  • Needs an experienced engineer to oversee the effort and ensure quality
• Benefits of an optimal random environment:
  • More stress on the logic than any other environment, including real hardware
  • Finds nearly all the bugs, from the easy ones to the most devious
Random Drivers/Checkers (cont)
• Too much randomness can prevent you from uncovering problems
• You want controlled and constrained randomness
• You may want "un-randomizing" controls built in, for example to deal with:
  • Hangs due to looping
  • Low-activity scenarios
  • Directed tests
• "Micro-modes" may be built into the drivers:
  • They allow the user to specify specific scenarios
  • They allow the scope of testing to be narrowed
Randomization Seeds
• The testcase seed is used to seed the pseudo-random number generator
• The pseudo-random number generator is then used for all decision making in the BFMs
• The seed is chosen at the beginning of the test
• It is usually passed in as a parameter or generic (see the sketch below)
• Caution: watch for seed synchronization across drivers, especially ones instantiated multiple times
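A sketch of seeding through generics, so a failing run can be reproduced from its seed; the per-instance offset is one hypothetical way to de-correlate drivers that are instantiated multiple times.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.math_real.all;

entity seeded_driver is
  generic (TESTCASE_SEED : positive := 1;
           INSTANCE_ID   : natural  := 0);  -- avoids identical streams
  port (clk  : in  std_logic;
        data : out std_logic_vector(7 downto 0));
end entity seeded_driver;

architecture sketch of seeded_driver is
begin
  drv : process
    -- derive per-instance seeds from the single testcase seed
    variable seed1 : positive := TESTCASE_SEED + 97 * INSTANCE_ID;
    variable seed2 : positive := TESTCASE_SEED + 13;
    variable r     : real;
  begin
    wait until rising_edge(clk);
    uniform(seed1, seed2, r);  -- all BFM decisions draw from this stream
    data <= std_logic_vector(to_unsigned(integer(trunc(r * 256.0)), 8));
  end process drv;
end architecture sketch;
```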