I found myself staying in a French farmhouse recently. While we devoured delicious pâté, baguettes and fromage, all swilled down with a bottle of fine St. Emilion, a strange little beast appeared.
The mite flew in from the chimney and landed on my arm. Resting just long enough to flutter a wing, it then descended onto the pâté, prompting the inevitable cry of alarm from the lady of the house.
I had never seen a bug quite like this one. Sort of moth-like, though larger, with scaled wings, legs that looked like talons, vicious red eyes and, I assumed, some lengthy fangs, at that moment deeply inserted into its pâté prey.
The bug was waved at, but of course, it was not so easily deterred, its talons digging into its pâté perch. All of a sudden, it disappeared, only to reappear a minute later on the cheese. It was summarily dismissed and soon rediscovered underneath the bread. Then it buzzed around angrily, evading all attempts to trap it, and apparently left through an open window. Not to be deterred, though, it was back, this time swimming in the wine. This bug had a unique, French-like quality: one of those nasty bugs that reappears in some more alarming fashion just when it seems to have been dealt with.
A different approach was needed to both detect and eliminate it. We stood back and surveyed the whole scene. With our new, bug's-eye perspective of the entire landscape, we assessed likely landing points, compared and contrasted possible flight paths and landing vectors, and set ourselves up to spot it. Sure enough, within two minutes, it was residing on the end of a fly swatter.
Why the rambling French bug story? Today, we build tests for simulation by thinking about how we can stimulate parts of the design to try out individual items of functionality. We look at the functional coverage achieved after simulation and debug sometimes-complex issues using a tool resembling a 30-year-old logic analyzer. This works, to a certain extent, on smaller blocks driven with universal verification methodology (UVM) testbenches, but it becomes problematic on larger UVM blocks and subsystems and fails completely at the system-on-chip (SoC) level.
At the SoC level, a bug in one particular block may result in an incorrect value in the main memory, with a knock-on effect in another block. This in turn might send an incorrect value to a register that could appear as a software issue, a little like our farmhouse bug reappearing in unexpected locations. If we only stimulate and check individual areas of functionality, such a bug will be hard to find unless every functional combination is covered.
How do we check whether that functionality is covered? By using coverage models based, again, on individual functional scenarios. This is the UVM way of thinking, which has its place but falls short at the SoC level.
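To see why scenario-by-scenario coverage falls short, consider a toy coverage model. This is an illustrative Python sketch, not UVM SystemVerilog; the feature names (DMA modes, cache states) and the sample stream are invented. The point it demonstrates is general: every individual bin can be hit while the cross combinations, where SoC-level bugs tend to hide, remain uncovered.

```python
# Toy coverage model: per-feature bins vs. cross coverage.
# Hypothetical features for illustration only.
DMA_MODES = ["single", "burst", "scatter_gather"]
CACHE_STATES = ["clean", "dirty", "invalid"]

hit_modes, hit_states, hit_cross = set(), set(), set()

# Samples as they might arrive from individually targeted tests:
# each test varies one feature at a time, never both together.
samples = [
    ("single", "clean"), ("burst", "clean"), ("scatter_gather", "clean"),
    ("single", "dirty"), ("single", "invalid"),
]

for mode, state in samples:
    hit_modes.add(mode)
    hit_states.add(state)
    hit_cross.add((mode, state))

total_cross = len(DMA_MODES) * len(CACHE_STATES)
print(f"mode coverage:  {len(hit_modes)}/{len(DMA_MODES)}")    # 3/3
print(f"state coverage: {len(hit_states)}/{len(CACHE_STATES)}")  # 3/3
print(f"cross coverage: {len(hit_cross)}/{total_cross}")         # 5/9
```

Both individual metrics report 100%, yet four of the nine mode/state combinations were never exercised, and any bug lurking in one of them goes unseen.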
Another method commonly used at the SoC level is to run real data through the device, such as playing a real OFDMA stream into the input of a complete 5G baseband. This may well find a few bugs, but it is unlikely to cover every corner case in a design, some of which might only occur after months of real operation.
Particularly for SoC verification, it's time to pull back and take a global view of device functionality. We need to start with the overall specification of what the device is intended to do and generate tests from it, exploring every corner-case facet of that functional intent. Only by taking this approach will we rapidly track down the tougher bugs. This is the realm of the executable specification, where the intent for an SoC is used to derive a broad range of tests that target every nasty corner case and take a holistic view of the operational scenarios.
We have used design synthesis for years now. Separating functional intent from the optimizations that target specific silicon implementations allows engineers to sub-divide issues and concentrate on key areas in a phased approach. The same is now available for verification in the form of test suite synthesis, and a handy way to specify that intent is available through the portable stimulus standard (PSS).
Much like our overall view of the farmhouse, we can now take a global view of design intent and use it to produce tests based on the coverage requirement for the design. Once we have those tests derived, with all random decisions made, we can move on to optimizing them for blocks, SoCs, and other execution engines. Tests based on this global view are fast and efficient, and they target all at once the locations where a persistent little bug might hide, ensuring it gets trapped and eliminated.
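The core mechanism can be sketched in a few lines. This is a hedged Python illustration of the idea behind test suite synthesis, not a real PSS tool: design intent is modeled as a small directed graph of operations, and every complete path through the graph is enumerated so that each combination of decisions becomes a test. The graph and the operation names are invented for this example.

```python
# Minimal sketch of intent-graph test synthesis.
# Each node maps to the operations that may follow it; "done" ends a test.
# Node names are hypothetical.
INTENT = {
    "start":        ["config_dma", "config_cache"],
    "config_dma":   ["transfer"],
    "config_cache": ["transfer"],
    "transfer":     ["check_mem", "interrupt"],
    "interrupt":    ["check_mem"],
    "check_mem":    ["done"],
}

def synthesize(node="start", path=()):
    """Depth-first enumeration of every complete path through the graph."""
    path = path + (node,)
    if node == "done":
        yield path
        return
    for nxt in INTENT[node]:
        yield from synthesize(nxt, path)

tests = list(synthesize())
for t in tests:
    print(" -> ".join(t))
# Four abstract tests, one per decision combination, covering every arc.
```

Each abstract test can then be retargeted to the appropriate execution engine, for example a UVM sequence at the block level or C code running on an SoC's embedded processors, which is the separation of intent from implementation that the analogy with design synthesis suggests.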
Why crash around the French farmhouse chasing down a shifting, semi-visible, clever little bug when it’s easier to stand back, assess the overall scenario, and catch the little mite at its own game? Same for semiconductor bug hunting! With some blatant plagiarizing from a notable Joe Costello keynote — think like a bug!