Let me start by laying the cards on the table – the Portable Stimulus Standard (PSS) is a language, not a methodology. Tools are not methodologies either. Languages ensure a well-ordered transfer of information from which tools can be constructed. A methodology is a way of systematically breaking down and solving a problem in a manageable manner. Tools can enable methodologies, and, over time, tools may help to manage a methodology once it has become standardized. No standard methodologies exist today for PSS, nor does the language define the capabilities of tools.
Portable Stimulus is a language that provides some degree of commonality between vendors in graph-based verification technologies that are delivered in the form of tools. Those tools can be used within existing methodologies or enable new ones to be created that may not have been supported by previous tools. When vendors provide capabilities beyond those defined in the standard, it is for the user community to decide how useful they are. The good ones will get wrapped into a future version of the standard, the less useful ones will be ignored. This is how languages evolve, especially for a language defined before a de facto standard emerged.
As an example, the previous generation of verification solutions relied upon functional coverage to ascertain the worth of any particular testcase. Implementation coverage as measured by RTL functional coverage is used as a proxy for verification intent coverage. It looks at values seen in the design while running tests and equates that with intended behavior being executed. Verification engineers have a hard time creating good functional coverage models. They have even more difficulty modifying constraints to fill a coverage hole.
Portable Stimulus tools capture the intended behavior of a design. Tools target intent coverage and know exactly what design intent should be covered by a particular testcase. It may appear that functional coverage provides no additional information.
This is not true. What if the PSS model is missing part of the intended behavior? Intent coverage based on the graph will only convey how complete the test-generation tool believes the tests are. Conversely, what if the RTL is missing part of the required implementation? Implementation coverage on RTL will not find missing functionality. Cross-correlation between intent coverage and RTL functional coverage offers the best approach to catch anything that has been missed.
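To make the cross-correlation idea concrete, here is a minimal sketch in Python. It is not any vendor's actual flow: the coverage-point names and the assumption that intent points can be mapped one-to-one onto RTL covergroup bins are purely illustrative.

```python
# Hypothetical sketch: cross-correlating intent coverage (from a PSS tool)
# against RTL functional coverage (from simulation). All names and the
# one-to-one mapping between the two coverage spaces are assumptions.

def cross_correlate(intent_hits, rtl_hits, mapping):
    """Flag disagreements between intent coverage and RTL functional coverage.

    intent_hits: set of intent-coverage points the PSS tool reports as exercised
    rtl_hits:    set of RTL covergroup bins the simulator reports as hit
    mapping:     dict from intent point -> RTL bin expected to fire with it
    """
    suspicious = {}
    for intent_point, rtl_bin in mapping.items():
        intent_seen = intent_point in intent_hits
        rtl_seen = rtl_bin in rtl_hits
        if intent_seen and not rtl_seen:
            # Intent claims coverage the RTL never showed: possible missing
            # implementation, or a gap in the functional-coverage model.
            suspicious[intent_point] = "intent_only"
        elif rtl_seen and not intent_seen:
            # RTL exercised behavior the intent model never planned:
            # possible hole in the PSS model.
            suspicious[intent_point] = "rtl_only"
    return suspicious

# Example with one mismatch in each direction:
report = cross_correlate(
    intent_hits={"dma_burst", "irq_nested"},
    rtl_hits={"cov_dma_burst", "cov_pwr_gate"},
    mapping={
        "dma_burst": "cov_dma_burst",
        "irq_nested": "cov_irq_nested",
        "pwr_gate": "cov_pwr_gate",
    },
)
# report -> {"irq_nested": "intent_only", "pwr_gate": "rtl_only"}
```

Either kind of mismatch is exactly the "difference that may imply a bug" discussed below: one points at the design or the coverage model, the other at the intent model itself.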
It is this duality that is at the heart of design and verification. It takes two independent models, which are systematically compared to find differences that may imply bugs in the design, in the testbench or in the specification. The question is: how important will users of PSS find the old notions of RTL functional coverage compared to the coverage provided within PSS? Functional coverage can only be measured in simulation. Unit-level simulations are fast, but cannot test system behavior. System testbenches are slow and cannot run many tests.
Should intent coverage be correlated against functional coverage in simulation to gain confidence in the PSS model, and then emulation, FPGA and post-silicon applied to properly cover the remainder of the large intent coverage space? That is for users to decide. Continuing to use existing functional coverage mechanisms does provide a familiar mechanism for users migrating from one solution to the other. Users may find it useful, especially in the beginning, or they may decide that the costs exceed the value. Certainly, functional coverage is seen as a huge time and energy draw today.
Methodologies evolve over time and tools are created that support those methodologies – to help provide automation and tracking, for example. Existing verification managers act as a central cockpit where results and progress can be stored, and new verification campaigns launched. When constrained random was first defined, there were no verification managers; they did not start to appear until best practices became clear. It would be wrong to conclude that those managers are the right thing to use for PSS-based solutions. After all, users have not yet had a chance to determine how they want to use the solutions that exist and the changes they want to see in them.
At Breker, we encourage users to develop methodologies that optimize their time and resources. We provide capabilities that we believe will be useful in those endeavors, and expect some will see greater adoption than others. For example, TrekSoC can generate complete tests ahead of time, or it can be reactive. It can generate code intended to run on embedded processors or utilize transactional communication with the DUT, or both. Each enables different methodologies.
We have been, and will continue to be, responsive to users' needs. That may mean extending existing methodologies, such as those crafted around SystemVerilog and UVM, or creating new methodologies for tasks that until now relied on manual effort without any form of automation or tracking.
As a vendor, we see what multiple users are attempting to do and can improve the support for those functions. This is how methodologies evolve and grow. Together we can do this.