Using Cosimulation to Develop and Test Against Reference Systems
Wednesday, December 28th 2022, 13:53
-- Or: Lazy People’s Guide to Writing Test Benches
Writing good code is hard. Testing it is arguably even harder. And with the advance of modern technologies, the demand for good verification keeps growing. However, the FPGA world in particular still commonly relies on obsolete and inappropriate verification tools and methodologies. But it doesn’t have to be this way! This post outlines, guided by a real-world example, how to use modern verification languages and tools to create high-quality test benches in less time.
The slides used in the video presentation can be downloaded here: odp, pdf
Basic FPGA Development Flow
To understand the problem, we need to first give an overview of the way many of us currently work.
When developing FPGA designs, the basic development flow generally consists of the following steps:
1. Write code. Write the design to the best of your abilities. And here’s a friendly reminder: there will be bugs in it, no matter how hard you try.
2. Write a test bench. Write verification components and test scenarios. Verification components are the parts of test bench code that turn simple instructions such as uart_send(data) into an actual stream of data. The test scenarios are sequences of calls to these verification components, for example “send data on UART and observe that the IO pin goes high.”
3. Run the test bench. Run the prepared tests, iterate on both the code and the test bench, and finally reach a “fully working” design.
4. Compile and run on hardware. After building the design, program it into the FPGA and run test scenarios on the real board.
The Problem With Conventional Testing
There are a few problems with this workflow, however, and oftentimes these are just accepted as the status quo and ignored in industry. Here are a couple of the major ones:
First, writing a good test bench is hard. This itself has various causes. Many of the conventional languages, like VHDL or Verilog, are just not made for test bench code. There have been many improvements in the past few years starting with revisions such as VHDL-2008, but overall, these languages still struggle a lot when it comes to dynamic memory and complex data structures.
Second, writing a good test bench takes time. A design might have many interfaces. A design might have highly complex interfaces. Writing a verification component for each of these well requires a lot of time and therefore money.
Also, a lot of times, writing verification components means reimplementing systems that already exist, for example in software. Wouldn’t it be nice to reuse these?
Connect and Reuse
The solution to this scenario is to bridge the gap between VHDL and the rest of the world. This allows the use of existing implementations, which can either be proven-in-use / reference implementations, or even the actual target counterparts that are to be used in the project. Also, creating such a bridge allows the use of software programming languages to describe the test scenarios, greatly improving time efficiency and test quality.
One such framework is Cocotb, which lets you write the test bench in Python. That alone is already a great advantage, but Python’s rich library ecosystem also makes it easy to interface with other systems.
An example: Ethernet Protocols
A good example to illustrate the advantages of this method is an Internet Protocol (IP) application running on top of Ethernet.
To create a test bench for this, you’d have to implement all the different Ethernet layers: MAC, IP, ARP, TCP, and, for example, HTTP. This method allows you to skip all these steps and only implement the translation from software PHY-layer telegrams to the PHY component in the VHDL simulation. So, in the end, it would for example be possible to just issue a curl http://fpga/test and get a response back directly from the VHDL simulation.
In practice, a test bench in Python+Cocotb could make use of the following structures.
On test bench initialization, we bind to an Ethernet interface using the socket API:
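A minimal sketch of that step, using Linux’s AF_PACKET raw sockets (the helper name is an assumption, not the exact code from the archive; opening the socket requires CAP_NET_RAW, typically root):

```python
import socket

# ETH_P_ALL from <linux/if_ether.h>: receive frames of all protocols.
ETH_P_ALL = 3

def open_raw_socket(ifname: str) -> socket.socket:
    """Open a non-blocking raw packet socket bound to the given interface.

    Linux only; requires CAP_NET_RAW (typically root).
    """
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                         socket.htons(ETH_P_ALL))
    sock.bind((ifname, 0))   # 0 = all protocols on this interface
    sock.setblocking(False)  # polled from the simulation loop instead
    return sock

# In the test bench's __init__, e.g.: self.sock = open_raw_socket("eth0")
```

The socket is made non-blocking so the test bench can poll it between simulation steps instead of stalling the simulator.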
Then, we create a loop to forward everything received on this interface to the design:
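Sketched below with the simulator-specific pieces abstracted out: eth_rx stands for the design-specific coroutine that drives one frame into the simulated PHY, and idle_wait for a short simulation-time delay such as cocotb’s Timer; both names are assumptions for illustration.

```python
import socket

async def host_to_dut(sock: socket.socket, eth_rx, idle_wait) -> None:
    """Forward every frame received on the raw socket into the design.

    eth_rx:    coroutine driving one Ethernet frame into the simulated
               PHY (design-specific; assumed helper)
    idle_wait: no-arg callable returning an awaitable to wait on while
               no frame is pending, e.g. `lambda: Timer(1, "us")`
               under cocotb (assumed)
    """
    while True:
        try:
            frame = sock.recv(2048)
        except BlockingIOError:
            await idle_wait()  # nothing pending; let simulation time advance
            continue
        await eth_rx(frame)
```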
Similarly, each time the design transmits a frame, we forward it to the real network interface:
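And the reverse direction; eth_tx here stands in for the FIFO-style coroutine that completes with the next frame the design has transmitted (an assumed helper name):

```python
import socket

async def dut_to_host(sock: socket.socket, eth_tx) -> None:
    """Forward every frame transmitted by the design to the real interface.

    eth_tx: coroutine that completes with the next frame emitted by the
            simulated design (FIFO-style; design-specific, assumed)
    """
    while True:
        frame = await eth_tx()
        sock.send(frame)
```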
The design-side interface (self.eth_tx in this case) represents a FIFO-style interface to the design. Its implementation is out of the scope of this post, but can be reviewed in the source code archive.
The code above used a real hardware Ethernet interface of the host (eth0) as the tapped device. Depending on the type of test, this is often not desirable. If the network peer is supposed to be part of the local simulation and run on the same host, a virtual network can be created using Linux’s IP toolset:
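A sketch of such a setup, using the interface names from the text (the commands require root; the subnet is an example assumption):

```shell
# Create a connected pair of virtual Ethernet interfaces (requires root).
ip link add veth1 type veth peer name veth2
ip link set veth1 up
ip link set veth2 up

# Give the peer side an address so normal tools can talk through it.
# The subnet is an example assumption.
ip addr add 192.168.100.1/24 dev veth1
```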
This set of commands creates two network interfaces: veth2, which is used as the tap by the Python test bench, and veth1, which is used by all other test components. A ping, for example, can be sent to just the FPGA simulation using the veth1 interface.
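For example (the simulated design’s address is an assumption; use whatever the simulated IP stack is configured for):

```shell
# Send pings into the simulation through the virtual pair only.
ping -I veth1 192.168.100.2
```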
A working example of this kind of test bench can be found in the Trashernet Git repository . This method allows the developer to quickly test their code against a real implementation of the FPGA counterpart – live.
A verification test bench, on the other hand, would now use scripts (written in pretty much any language) to control and check telegrams sent to and from the FPGA. It might even use the official testing libraries that are offered as part of, for example, the network stack implementation.
The important thing, though, is that in neither of these use cases did the tester have to write a single line of code implementing the network stack. It could all be 100% reused.
This example shows the potential cosimulation has in modern verification. It can improve both time and cost efficiency without compromising test quality, and possibly even improving it.