Both UVM and PSS solutions deploy constraint solvers to create test cases, but that is where the similarities end. Thankfully, they can work together to make everyone’s life a little easier.
Note from Leigh Brady: Thus far, our “Inside Portable Stimulus” series of columns (see “Filling in the Blanks,” “Concurrency and Schedules,” and “The Exec Block”) has concentrated on fundamental concepts that surround the new Portable Stimulus Standard (PSS) language. While there are some verification tasks that will exclusively use PSS, most verification engineers come from a SystemVerilog and UVM background. I would like to introduce one of my colleagues, Aileen Honess, an FAE at Breker. Aileen has 20 years of experience teaching, mentoring, and leading hardware verification projects across a variety of disciplines, companies, and continents. She is an expert in UVM, and I know of no one better to explain some of the differences between UVM and PSS. I will be back in future articles to explain exactly how to connect these two environments.
This column looks at the differences and similarities between the previous generation of solutions, which drove the verification industry toward constrained random test pattern generation, and the new generation of solutions driven by PSS.
To avoid confusion, we will refer to the previous solutions based on SystemVerilog and UVM as UVM solutions, and to those based on Portable Stimulus as PSS solutions. The constraint solver in UVM deals only with combinatorial constraints: if the value on one wire or register at time T is X, then the value on this other wire or register must be Y. At time T+1, everything is solved again, but with new random values. PSS solutions add temporal constraints on top of the combinatorial ones: if a function called A happened at time T, then only functions B or C can happen at time T+1.
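As a sketch of this distinction, consider a small PSS fragment (the component and action names are invented for illustration, not taken from any real design). The constraint block is combinatorial and is solved each time the action runs; the activity block imposes a temporal ordering that a UVM-style solver has no way to express:

```pss
component dma_c {
    // Combinatorial constraint: re-solved on every traversal of the action
    action write_mem_a {
        rand bit[31:0] size;
        constraint size_c { size in [1..1024]; }
    }

    action read_mem_a { }

    // Temporal constraint: a read may only be scheduled after a write
    action test_a {
        activity {
            do write_mem_a;
            do read_mem_a;
        }
    }
}
```

A UVM solver could produce the same legal values for size, but nothing in its constraint language captures the ordering that the activity block states directly.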
The implications of this are fundamental. PSS understands the control and data flow of the design. If you want, for example, a test that displays an image on a screen, that is the test case that a PSS tool should generate. You may ask for tests that exercise all potential paths to do that, such as the image coming from a camera, from a memory card, or streamed through a wireless connection. These are fairly simple test cases for PSS. You may want to layer on top of that, performing one of those tests while receiving a text. This will result in a graphic overlay being created or any other concurrent activity that may disturb the primary objective.
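A test like the one just described could be captured in a PSS activity along the following lines (a hypothetical sketch; the action names are invented, and the sub-actions are assumed to be declared elsewhere in the model). The select statement lets the tool pick, and eventually cover, each path to the screen, while parallel layers the disturbing activity on top of the primary objective:

```pss
component soc_c {
    action display_image_a {
        activity {
            parallel {
                sequence {
                    // one of three possible image sources
                    select {
                        do camera_capture_a;
                        do sdcard_read_a;
                        do wifi_stream_a;
                    }
                    do render_to_screen_a;
                }
                // concurrent activity that may disturb the primary goal
                do receive_text_a;
            }
        }
    }
}
```

Asking the tool to cover every select branch is what yields the "all potential paths" tests mentioned above, without writing each test by hand.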
These concurrent tests are usually performed at the SoC level, where there are typically one or more embedded processors. A fundamental difference between UVM and PSS testing is that UVM requires the processors to be removed, whereas PSS can generate software to run on them. This enables a PSS tool to run the test from the inside out. This software, generated by the PSS tool, is not production software, although it can incorporate pieces of production code if required. SoC tests are not checking that a bus protocol is implemented correctly; they focus on finding out whether the right connections are in place, whether data paths work correctly, and whether functions interfere with each other.
Verifying bus protocols and the functionality of primitive blocks is much better done in isolation, so in this case, the removal of the processors and exposing the bus interfaces makes a lot of sense. Full control of everything is necessary for exhaustive simulation. The verification of these blocks has been the target of UVM, which often remains the best choice for addressing these tasks.
Along with the definition of UVM came a modeling paradigm for creating transactor models. Their role is to raise the level of abstraction from hardware signals up to the transaction level. This is the level at which the UVM constraint solver operates. It also corresponds to the abstraction of most of the primary inputs and outputs of an SoC, and PSS solutions need them as well. Even if the primary test runs on the embedded processors, it still must provide data to the primary inputs and collect data from the primary outputs.
The bridge between the two solutions is important. PSS solutions replace much of the UVM constraint solver, and that will improve the quality of the tests created. It will also increase the complexity of tests that can be generated. In addition, the PSS model is independent of the transactors, and it becomes much easier to generate tests that require coordination across multiple transactors. All of these result in shorter runtimes, which is especially important when valuable emulation resources are being consumed.
There is another, less obvious impact. Part 2 of this series discussed schedules. An example schedule is shown in Figure 1. Debug accounts for about 50% of the total time spent by the entire development team. Part of the reason is that, when a test fails, you first must understand what the test was doing. This is not obvious from a test created by a UVM solver, which is why functional coverage had to be created.
UVM randomizes inputs blindly, with constraints squeezed in to aim generation at the things that are useful. Functional coverage metrics are then required to indicate whether a test reached the intended goal. However, knowing the coverage alone does not tell you much about what the test was doing, what caused those things to happen, or what the outcome should have been. All of this is provided by a PSS tool.
PSS starts at the goal and works backwards. Each step along the way is more procedural and, thus, requires many small randomizations. Constraint solving is faster in PSS, it’s guaranteed to reach the goal, and — most important — the intent coverage is known before the test is run!
The PSS test cases, as previously discussed, know about complete paths through the design and what is scheduled to run concurrently. PSS tests are self-checking, so it becomes possible to see immediately where a test starts to fail. As a result, the debug engineer knows exactly where to start, rather than working backwards from a checker failure.
While this article is not as technical as the previous ones in this series, it was necessary to establish the concepts behind integrating UVM and PSS. In our next column, we will delve into more details of how this can be done.
As always, please reach out to us — me (Aileen Honess) or Leigh Brady — with any comments, questions, or requests for clarification.