Security: Making the Unknown, Known
by Adnan Hamid

There is no known way to guarantee that a system is secure. Vulnerabilities exist in hardware, software, and throughout the supply chain. They may exist by accident or through ignorance, they may have been inserted maliciously, or they may exploit some mechanism never before considered part of the attack surface. There is no single method by which all of these potential issues can be addressed, and no tool that can find them all.

Security has often been likened to a castle: you build layers of protection and, while you expect some of them to be breached, you make it increasingly difficult to reach the ultimate prize. Verification has a role to play in exposing vulnerabilities, but entrenched dynamic verification methodologies struggle with this. Technologies are needed that can produce testcases which demonstrate a weakness, and formal verification is one that has been used successfully for this.
Inside Portable Stimulus — Hardware Software Interface
by Leigh Brady

This blog series has stuck to what is in the Accellera Portable Stimulus 1.0 standard (PSS), but in this particular blog we will deviate a bit and discuss a capability that did not make it into the first release of the standard: the Hardware Software Interface (HSI). It is a critical capability that now has the full attention of the Accellera Portable Stimulus Working Group (PSWG). Its absence means extra work for companies that want to adopt Portable Stimulus tools that do not provide some form of this functionality.

The problem is easiest to understand by thinking about test portability. By that, we mean the ability to take a single description of test intent and execute that test, without modification, on a variety of execution engines. Those execution engines include simulators running at either the transaction level or register transfer level (RTL), emulators, prototyping solutions, virtual platforms, and real silicon. Now consider a test that needs to get data into a certain register or memory location, or to retrieve the contents of that register or memory to check that the test operated correctly.
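To make the portability problem concrete, the hypothetical sketch below shows one way a hardware/software interface layer might abstract a register access so the same test intent can be retargeted. The function names, the DPI hooks, and the register address are all illustrative assumptions, not part of the PSS standard or of any particular tool.

```c
/* Hypothetical HSI-style register access layer (illustrative only).
 * The test intent calls hsi_write32()/hsi_read32(); the build selects
 * how those calls are realized on each execution engine.
 */
#include <stdint.h>

#ifdef TARGET_EMBEDDED
/* On real silicon, an emulator or a prototype, the test runs on an
 * embedded core and the access is a plain memory-mapped load/store. */
static inline void hsi_write32(uint64_t addr, uint32_t data) {
    *(volatile uint32_t *)(uintptr_t)addr = data;
}
static inline uint32_t hsi_read32(uint64_t addr) {
    return *(volatile uint32_t *)(uintptr_t)addr;
}
#else
/* In a UVM simulation the same calls could be routed over DPI to a
 * bus agent; these imported functions are assumptions for illustration. */
extern void uvm_bus_write32(uint64_t addr, uint32_t data);
extern uint32_t uvm_bus_read32(uint64_t addr);
static inline void hsi_write32(uint64_t addr, uint32_t data) {
    uvm_bus_write32(addr, data);
}
static inline uint32_t hsi_read32(uint64_t addr) {
    return uvm_bus_read32(addr);
}
#endif

/* The test intent stays identical on every platform. */
int check_dma_status(void) {
    const uint64_t DMA_STATUS = 0x40001004ULL; /* assumed address */
    hsi_write32(DMA_STATUS, 0x1u);             /* clear status    */
    return hsi_read32(DMA_STATUS) == 0x0u;     /* expect cleared  */
}
```

Without a standard layer of this kind, each team has to hand-build the equivalent glue for every engine it targets, which is exactly the extra work described above.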
Verifying AI Engines
by Adnan Hamid

It has been said that more than 100 companies are currently developing custom hardware engines to accelerate machine learning (ML). Some target the data center, where huge amounts of algorithm development and training are performed. Power consumption has become one of the largest cost components of training, which today often relies on large numbers of high-end GPUs and FPGAs. It is hoped that dedicated accelerators will be able both to speed up this task and to perform it using a fraction of the power. Algorithms and networks are evolving so rapidly that these devices must retain maximum flexibility.

Other accelerators focus on the inference problem, running input sets through a trained network to produce a classification. Most are deployed in the field, where power, performance and accuracy are being optimized. Many are designed for a particular class of problem, such as audio or vision, and are targeted at segments including consumer, automotive or the IoT. Each such choice restricts flexibility to what is necessary, and flexibility becomes a design optimization: the more that is fixed in hardware, the greater the performance or the lower the power, but the less amenable the device is to change through software.
Think Like a (French Farmhouse) Bug
by Dave Kelf

I found myself staying in a French farmhouse recently. While we devoured delicious pâté, baguettes and fromage, all swilled down with a bottle of delicious St. Emilion, a strange little beast appeared.

The mite flew in from the chimney and landed on my arm. Resting just long enough to flutter a wing, it then descended onto the pâté, with the resulting and inevitable cry of alarm from the lady of the house.

I had never seen a bug quite like this one. Sort of moth-like, though larger, with scaled wings, legs that looked like talons, vicious red eyes and, I assumed, some lengthy fangs, at that moment deeply inserted in its pâté prey.
Inside Portable Stimulus: Verification Efficiency
by Leigh Brady

If verification were as hot a topic as artificial intelligence (AI), we would be measuring things like effective verification cycles per watt. Unfortunately, the only things that ever seem to be measured in the verification world are the engines on which simulations are performed. We never actually measure the effectiveness of verification itself, probably because for many years there was only one methodology in the industry: constrained random stimulus generation.

Sure, there were competing libraries to help implement this methodology, but most of the time the jockeying among them was more political than technical. All of them provided enough gain over the then-incumbent methodology of directed testing, and everyone was happy.
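For readers less familiar with the terminology, the toy sketch below contrasts a directed stimulus with a constrained-random one. The packet fields and the constraint are invented purely for illustration and do not come from any particular testbench or library.

```c
/* Toy contrast between directed and constrained-random stimulus. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint8_t  opcode;   /* 0..3                    */
    uint16_t length;   /* payload length in bytes */
} packet_t;

/* Directed testing: the engineer hand-picks each interesting case. */
static packet_t directed_case(void) {
    packet_t p = { .opcode = 2, .length = 64 };
    return p;
}

/* Constrained-random: draw random values, keep only those satisfying
 * the constraint (here, length must be a non-zero multiple of 4). */
static packet_t constrained_random_case(void) {
    packet_t p;
    do {
        p.opcode = (uint8_t)(rand() % 4);
        p.length = (uint16_t)(rand() % 1024);
    } while (p.length == 0 || (p.length % 4) != 0);
    return p;
}

int main(void) {
    srand(1);
    packet_t d = directed_case();
    packet_t r = constrained_random_case();
    printf("directed: opcode=%u length=%u\n", d.opcode, d.length);
    printf("random:   opcode=%u length=%u\n", r.opcode, r.length);
    return 0;
}
```

The random approach reaches many more cases per engineer-hour, which is why it displaced directed testing, but it says nothing by itself about how effective those cases are.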
Inside Portable Stimulus –– Maximizing the Emulator
by Leigh Brady

Nobody puts a verification job onto an emulator without first making sure that the testcase is efficient and will make good use of such a valuable resource. In the last blog, we showed how to take a Portable Stimulus model and, using testbench synthesis technology, migrate that test from a transactional universal verification methodology (UVM) environment into one that generates code running on the embedded processors of a design. Those processors can be instantiated in the emulator –– wait, what? This is where you hear the needle being scratched across the record.
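As a flavor of what such generated code might look like, here is a hypothetical fragment of bare-metal C driving a memory-mapped block from an embedded core. The block name, register map and polling scheme are invented for illustration and are not the output of any actual tool.

```c
/* Hypothetical bare-metal test fragment of the kind a testbench
 * synthesis tool might generate for an embedded core (illustrative
 * only; the register map below is invented). */
#include <stdint.h>

#define COPY_ENGINE_BASE  0x50000000UL
#define REG_SRC   (*(volatile uint32_t *)(COPY_ENGINE_BASE + 0x00))
#define REG_DST   (*(volatile uint32_t *)(COPY_ENGINE_BASE + 0x04))
#define REG_LEN   (*(volatile uint32_t *)(COPY_ENGINE_BASE + 0x08))
#define REG_CTRL  (*(volatile uint32_t *)(COPY_ENGINE_BASE + 0x0C))
#define REG_STAT  (*(volatile uint32_t *)(COPY_ENGINE_BASE + 0x10))

int run_copy_scenario(void) {
    REG_SRC  = 0x20000000u;        /* source buffer address      */
    REG_DST  = 0x20001000u;        /* destination buffer address */
    REG_LEN  = 256u;               /* bytes to move              */
    REG_CTRL = 0x1u;               /* kick off the transfer      */

    while ((REG_STAT & 0x1u) == 0) /* poll the done bit          */
        ;

    /* Compare the two buffers to check the data actually moved. */
    const volatile uint8_t *src = (const volatile uint8_t *)0x20000000u;
    const volatile uint8_t *dst = (const volatile uint8_t *)0x20001000u;
    for (uint32_t i = 0; i < 256u; i++)
        if (src[i] != dst[i])
            return 1;              /* mismatch: test fails       */
    return 0;                      /* test passes                */
}
```

Because code like this runs natively on the design's own processors, it needs no transactors or testbench plumbing outside the emulator, which is what makes the approach efficient.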
PSS and RISC-V – A Match Made In Verification
by Adnan Hamid

The industry is excited about RISC-V, and rightly so. It is enabling companies to take back control of their software execution environment without having to assume the huge responsibilities that come with developing a processor and supporting an ecosystem for it. Maybe a company wants to use a commercially developed core today, get the software developed and the processor integrated, and then, in a future generation, replace that core with its own. Perhaps it envisions a range of products in which the processor is tuned for each member of the family. There are so many possibilities that were out of reach in the past.

Along with the ISA specification, the RISC-V Foundation is developing a compliance suite. When run on an implementation, it determines whether the instructions it exercises behave according to the specification. This is only the first stage of verification, because it does not tell you that your implementation is always correct, only that it works in the cases covered by the compliance suite.
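As a concrete illustration of that limitation, a compliance-style check of a single instruction might look something like the hypothetical sketch below. It is not taken from the official suite; the handful of hand-picked operand pairs is exactly why passing such a test says nothing about the operands it never tries.

```c
/* Hypothetical, compliance-style directed check of the RV32I ADD
 * instruction (illustrative only; not from the official suite).
 * Build for a RISC-V target, e.g. with riscv32-unknown-elf-gcc. */
#include <stdint.h>

static uint32_t riscv_add(uint32_t a, uint32_t b) {
    uint32_t r;
    __asm__ volatile ("add %0, %1, %2" : "=r"(r) : "r"(a), "r"(b));
    return r;
}

int check_add(void) {
    /* A few hand-picked operand pairs and their expected results.  */
    struct { uint32_t a, b, expect; } cases[] = {
        { 0x00000000u, 0x00000000u, 0x00000000u },
        { 0x00000001u, 0x00000001u, 0x00000002u },
        { 0xFFFFFFFFu, 0x00000001u, 0x00000000u }, /* wrap-around     */
        { 0x7FFFFFFFu, 0x00000001u, 0x80000000u }, /* signed overflow */
    };
    for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
        if (riscv_add(cases[i].a, cases[i].b) != cases[i].expect)
            return 1; /* result differs from the architectural one   */
    return 0;
    /* Passing proves nothing about the roughly 2^64 operand pairs
     * that were never tried, let alone pipeline or memory corner
     * cases, which is the point made above.                         */
}
```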
Inside Portable Stimulus – Introducing a Processor
by Leigh Brady

So far in this blog series, we have talked about some of the fundamentals of the Accellera Portable Test and Stimulus Standard (PSS) and how it can enhance a universal verification methodology (UVM) flow. This is a highly effective strategy for block-level verification and allows existing Verification IP (VIP) models to be reused. However, as soon as a few design IP blocks are integrated, it is almost certain that one or more processors will become part of the subsystem, and as soon as that happens, a new verification strategy is called for.
Methodology Convergence
by Adnan Hamid

It is unfortunate that design and verification methodologies have often been out of sync with each other, and increasingly so over the past 20 years. The design methodology change that caused one particular divergence was the introduction of design Intellectual Property (IP). IP meant that systems were no longer designed and built in a pseudo top-down manner, but were contemplated at a higher level and constructed bottom-up, Lego-like, by choosing appropriate blocks that could implement the necessary functions.

In many ways, this construction approach also led us down the path to the SoC architectures we see today, with arrays of processing elements tied together by busses and memory. That should surprise no one, because it mimicked the architecture of discrete compute systems, with everything moving onto a single piece of silicon. The approach also provided a natural interface and encapsulation for many IP blocks.
Multi-Dimensional Verification
by Adnan Hamid

It seems like ancient history now, but in the not-so-distant past, verification was performed by one tool – simulation; at one point in the flow – completion of RTL; and using one language and methodology – SystemVerilog and UVM. That changed as designs continued to get larger and simulators could no longer keep up. Additional help became necessary in the form of emulators and formal verification, but that coincided with the increasingly difficult task of creating a stable testbench. It was no longer possible to migrate a design from a simulator to an emulator without doing a considerable amount of work on the testbench.

The increasing size and complexity of designs also made it necessary to think about verification as a hierarchy. You could no longer fit all of a design into a simulator, and even if you could, it would be highly wasteful, making it too difficult and time-consuming to get the levels of controllability and observability necessary for complete verification. Unfortunately, a testbench developed for a block cannot be fully reused when that block is integrated into a larger sub-system without significant rework.