The industry is excited about RISC-V, and rightly so. It is enabling companies to take back control of their software execution environment without having to assume the huge responsibilities that come along with processor development and support of an ecosystem for it. Maybe a company wants to use a commercially developed core today, get the software developed and the processor integrated and then in a future generation, replace that with their own core. Perhaps they envision a range of products where the processor is tuned for each product in the family. There are so many possibilities that were out of reach in the past.
Alongside the ISA specification, the RISC-V Foundation is developing a compliance suite. Run against an implementation, it checks whether the instructions behave according to the specification. This is only the first stage of verification: passing compliance does not prove that your implementation is always correct, only that it works in the cases the suite happens to test.
Still, this is only a small fraction of the verification that needs to be performed. Back in my AMD days, my boss liked to say that in a microprocessor only about 20% of the logic implements the ISA (decoders, ALUs and so on), while the other 80% handles loads and stores. It is all about caches, paging, fabric and memories. Open ISA or not, the storage-access verification problem remains, especially as you get into bigger chips.
Many people are taking on the challenge of fully verifying an implemented core, and that is an important effort for the industry. However, far fewer are tackling the memory and key peripheral sub-system, a task that is highly suited to Portable Stimulus Standard (PSS) test synthesis. Software-Driven Verification (SDV) enables the processor itself to manipulate the cache and to quickly set up the conditions needed to test it fully.
In the past, this type of testing has been conducted by a team of engineers hand-writing code to run on the processor. Writing those directed tests takes a long time, but it can be done. Here is a graph showing tests written by Carbon Design Systems, a company acquired by Arm in 2015.
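To give a feel for what such a handwritten directed test looks like, here is a minimal sketch against a toy direct-mapped cache model. Everything in it (the cache parameters, the model itself, the stride pattern) is invented for illustration; a real directed test would run as bare-metal code on the processor, not in Python.

```python
# Hypothetical handwritten directed test: pick a stride that maps every
# access to the same cache index, forcing conflict evictions, then read
# everything back to check that write-back preserved the data.
# The toy cache model and its parameters are illustrative only.

CACHE_LINES = 4          # toy direct-mapped cache with 4 lines
LINE_BYTES = 16

class ToyCache:
    def __init__(self, memory):
        self.memory = memory            # backing store: addr -> value
        self.lines = {}                 # index -> (tag, {addr: value})
        self.evictions = 0

    def _index_tag(self, addr):
        line = addr // LINE_BYTES
        return line % CACHE_LINES, line // CACHE_LINES

    def write(self, addr, value):
        idx, tag = self._index_tag(addr)
        cached = self.lines.get(idx)
        if cached is None or cached[0] != tag:
            if cached is not None:
                self.evictions += 1
                self.memory.update(cached[1])   # write back the victim line
            self.lines[idx] = (tag, {})
        self.lines[idx][1][addr] = value

    def read(self, addr):
        idx, tag = self._index_tag(addr)
        cached = self.lines.get(idx)
        if cached is not None and cached[0] == tag and addr in cached[1]:
            return cached[1][addr]      # cache hit
        return self.memory[addr]        # miss: fetch from backing store

memory = {a: 0 for a in range(0, 1024)}
cache = ToyCache(memory)

# Stride of CACHE_LINES * LINE_BYTES aliases every access onto index 0.
stride = CACHE_LINES * LINE_BYTES
addrs = [i * stride for i in range(8)]
for a in addrs:
    cache.write(a, a ^ 0xAB)
for a in addrs:
    assert cache.read(a) == a ^ 0xAB, f"data lost at {a:#x}"
assert cache.evictions > 0, "test failed to provoke any evictions"
print("evictions:", cache.evictions)   # prints "evictions: 7"
```

Even this tiny test needed manual reasoning about index bits and strides, and it exercises exactly one scenario, which is why a full handwritten suite takes so long to build.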
It gets the job done, but each test is rather inefficient, and many such tests would be necessary to achieve good coverage. By modeling the memory sub-system for Arm, which is not that dissimilar to RISC-V's, and letting the synthesis engine generate the tests, we see the graph produced below.
As you can see, the test density is much higher. That translates into much higher coverage and far less time spent in simulation or emulation, which saves money. How do we generate these tests? Breker has produced an app for this very purpose for Arm processors and is now releasing one for RISC-V, meaning that it has probably done 90% or more of the work for you. The graph we ship is shown below. The “TrekApp” provides a complete test synthesis solution for the processor sub-system, configurable for any integration situation, without requiring you to learn the Portable Stimulus Standard (PSS) language.
It is possible that your memory sub-system does not exactly match what we provide in the app, or that you have added capabilities of your own. The PSS model can easily be updated to accommodate those changes. As it stands, this graph, if fully exercised, contains about 1.6 × 10^58 possible test paths. Without automation, there is no hope of covering more than a tiny fraction of those. With test synthesis, you can select the required coverage level and functional space, so you can plan how you want to achieve full coverage, something that has never before been contemplated in verification. By targeting virtual prototypes, simulation, emulation, rapid prototyping and even first silicon with the right number of tests, SoC coverage analysis becomes realistic and meaningful, with critical corner cases predicted and fully covered.
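Numbers on the order of 10^58 arise because each independent decision point in the graph multiplies the count of distinct paths. A toy calculation makes the point (the layer counts below are invented, not taken from the app's real graph):

```python
from math import prod

# Each entry is the number of alternatives at one decision point in a
# hypothetical scenario graph: which core issues the access, cacheable
# vs. non-cacheable, which cache level responds, hit/miss/evict, and so
# on. These counts are made up purely for illustration.
choices_per_step = [4, 2, 3, 5, 3, 4, 2]

paths = prod(choices_per_step)
print(paths)  # 2880 distinct test paths from just 7 small decisions
```

Seven modest decisions already yield thousands of paths; a real memory sub-system graph has hundreds of decision points, and the product runs away to astronomical figures.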
Of course, RISC-V adds the dimension of “rolling your own” additional instructions. Other configurable processors with this capability, such as Tensilica and ARC, have suffered from the verification problem that the entire processor sub-system must be retested with every new instruction. RISC-V faces this too, and without automation the burden will fall back on the processor developer. A PSS app may be extended in a modular fashion to add extended-instruction testing to the original test suite, a huge benefit to end users and processor developers alike.
Now this is where things start to get very interesting. If your processor is not exposed to the outside world and the only people writing software for it are your own internal development teams, you do not have to follow the normal verification paradigm, which would mean wasting a lot of time verifying things you don't care about. With PSS, you can take the graph that defines the whole memory sub-system, including the cache, and prune it using path constraints to restrict it to what you care about.
Let's assume that you do not want some of the capabilities of the exception handling. No problem: mark the graph, and those paths are removed from consideration when test synthesis happens. Maybe you don't want to run all the interrupt-testing capability contained in the app. No problem. Or maybe you only want the app configured for a level-two cache system. All of this is possible without even modifying the graph.
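Here is a minimal sketch of what pruning by path constraint means. The scenario graph below is a made-up miniature (the node names are invented, and real PSS constraints are declarative rather than a Python set), but it shows the essential behavior: marking nodes as "don't care" removes every path through them while leaving the graph itself untouched.

```python
# Toy scenario graph: each node is a verification activity, and edges
# give the legal orderings. Names are invented for illustration.
GRAPH = {
    "start":     ["l1_access", "interrupt"],
    "l1_access": ["l2_access", "exception"],
    "interrupt": ["l2_access"],
    "l2_access": ["writeback", "exception"],
    "exception": ["writeback"],
    "writeback": [],
}

def paths(graph, node, excluded=frozenset()):
    """Enumerate test paths, skipping any node marked as excluded."""
    if node in excluded:
        return []
    if not graph[node]:
        return [[node]]
    return [[node] + rest
            for nxt in graph[node]
            for rest in paths(graph, nxt, excluded)]

full = paths(GRAPH, "start")
pruned = paths(GRAPH, "start", excluded={"interrupt", "exception"})
print(len(full), len(pruned))  # prints "5 1"
```

Note that `GRAPH` is never modified: the constraint is applied at enumeration time, so restoring the excluded behaviors later is just a matter of dropping the constraint.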
I have made some important points here. A PSS model can define the complete memory sub-system of an Arm or RISC-V system, and Breker has both packaged up as apps. Using path constraints, it is easy to prune that graph to verify only the things you care about. It is also easy to add new capabilities to the model, a critical aspect given RISC-V's instruction-extension capability. The test synthesis process will generate a full suite of highly efficient, coverage-driven tests that can run across a range of platforms, giving you options that have not been available in the past. We worked with partners to extend our memory sub-system models, and we can work with you to meet your requirements. Together we can do this.