Security: Making the Unknown, Known
There is no known way to guarantee that a system is secure. Vulnerabilities exist in hardware, software, and throughout the supply chain. They may exist through accident or ignorance, they may have been inserted maliciously, or they may exploit some mechanism never before considered part of the attack surface. No single method can address all of these potential issues, and no tool can find them all.
Security has often been likened to a castle: you build layers of protection and, while you expect some to be breached, you make it increasingly difficult to reach the ultimate prize. Verification has a role to play in exposing vulnerabilities, but entrenched dynamic verification methodologies struggle with this. Technologies are needed that can produce testcases demonstrating a weakness, and formal verification has been used successfully for this purpose.
An assertion can be created stating that a key can never be recovered on an external port. The formal verification tool is then asked to find a way to prove the assertion false and, if it succeeds, it produces a trace showing how the leak can be made to happen. Formal verification, if it manages to run to completion, is exhaustive: there is no possible way to get the key by functional means.
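To make the idea concrete, here is a minimal sketch of the exhaustive search a formal tool performs, written as an explicit-state reachability check over a toy two-mode device. The device model, state encoding, and the bug it contains are all illustrative assumptions, not Breker's tooling or any real design.

```python
from collections import deque

# Toy explicit-state model: a state is a (mode, port_value) tuple.
# Assertion under check: the secret key never appears on the external
# port. Everything here is an illustrative assumption, not a real API.
SECRET_KEY = 0xDEAD

def next_states(state):
    """Transition relation of a hypothetical two-mode device."""
    mode, port = state
    if mode == "locked":
        # A debug command unlocks the device and clears the port.
        yield ("unlocked", 0)
    else:
        # Planted bug for the tool to find: unlocked mode can echo
        # the key register onto the external port.
        yield ("unlocked", SECRET_KEY)
        yield ("locked", 0)

def check_assertion(initial):
    """Exhaustive BFS over all reachable states: return a counter-
    example trace if the key is ever observable, else None."""
    seen = {initial}
    queue = deque([[initial]])
    while queue:
        trace = queue.popleft()
        mode, port = trace[-1]
        if port == SECRET_KEY:
            return trace  # counterexample: the key leaked
        for succ in next_states(trace[-1]):
            if succ not in seen:
                seen.add(succ)
                queue.append(trace + [succ])
    return None  # assertion holds for every reachable state

trace = check_assertion(("locked", 0))
```

Because every reachable state is visited exactly once, a `None` result is a proof, not just an absence of failures; the cost is that `seen` must hold the whole state space, which is exactly the scaling problem discussed next.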
The problem is that a formal tool must explore every possible state of the design implementation under every possible input, and this quickly suffers from state-space explosion. The tool attempts to solve the entire problem in a single run, and as the system gets larger the size of the verification problem grows exponentially. Formal therefore falls short for system-level analysis and is, most of the time, relegated to block-level verification.
Looking now at existing dynamic verification methodologies, simulation, emulation and prototyping each tackle one use case at a time. Execution times scale reasonably from the block level through to the system level. The major flaw in this methodology is that someone must predict every vulnerability and write a testcase for it: an engineer has to think of every possible way an attack might occur. Engineers are limited by what they can imagine, and thus many unknowns may remain hidden.
At Breker Verification Systems, we propose a solution that combines the exhaustiveness of formal with the scalability of dynamic verification. It is a dynamic verification methodology that can utilize simulation, emulation or even real silicon, where the testcases are created automatically from a description of verification intent. It provides the exhaustiveness of formal analysis but presents the tests in a form that can be executed serially in a dynamic execution environment. The number of testcases produced may be considerable, but the solution is tractable.
The state space that needs to be explored is reduced because the solution works from an intent description rather than an RTL description of the implementation. This description is abstract and hierarchical but is also complete and formal. Rules from each level of security are captured in an abstract but precise manner and these independent rules can then be chained together into flows.
Test Suite Synthesis, with intent expressed in the Accellera Portable Stimulus Standard (PSS) language, works much like a formal verification tool in that it starts from a verification objective and attempts to find a way to reach it. The tool flattens the security rules, applies formal technology to find every reachable path through them, and automatically generates a testcase for each path individually. Multiple testcases can then be combined into a single simulation or emulation run.
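The path-enumeration step can be sketched in a few lines: flatten the chained rules into a graph and emit one testcase per path from start to finish. The rule graph below is a made-up example, not the TrekApp's internal representation.

```python
# Sketch of flattening chained rules into individual testcases:
# enumerate every path through a small rule graph and emit one
# test per path. The graph is illustrative, not a real rule set.
RULES = {
    "start":       ["master_mode", "slave_mode"],
    "master_mode": ["addr_check"],
    "slave_mode":  ["addr_check"],
    "addr_check":  ["done"],
    "done":        [],
}

def all_paths(graph, node="start", prefix=()):
    """Yield every path from the start node to a terminal node."""
    path = prefix + (node,)
    if not graph[node]:       # terminal node: a complete path
        yield path
    for succ in graph[node]:
        yield from all_paths(graph, succ, path)

# One generated testcase per reachable path through the rules.
testcases = [" -> ".join(p) for p in all_paths(RULES)]
```

Each enumerated path corresponds to one directed test, which is what allows the results to be run serially in an ordinary simulation or emulation environment.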
As an example, Breker has built an app that helps uncover security vulnerabilities, particularly in a Hardware Root of Trust (HRoT) built into an SoC fabric. It has two parts: first, the model of verification intent and, second, security rules defined in tabular form. The second part is the key to our security app.
Many embedded systems are built around the notion of a set of masters talking to a set of slaves across a fabric. The verification model defines the key dataflow and control flow aspects of such a system. What is more difficult to define are the security restrictions surrounding access control. Which masters are allowed to talk to which slaves? Which modes must the masters and slaves be in for a valid exchange of data? Which address ranges need protection? Trying to define all these conditions directly in a PSS model is difficult, and describing a state machine that enforces them is complex and error prone.
What we have done instead is to provide a very simple way to divide and conquer. The previous paragraph listed independent aspects of the access control we want the system to adhere to, and it is a fairly simple task to define each of those mechanisms in tabular form, as shown in Figure 1.
Each table precisely defines one aspect of the required security. Those aspects are then connected into a flow. In this example, it is a simple linear flow indicating that each rule must be satisfied in a serial manner.
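The shape of such tables, and the serial flow that chains them, can be illustrated with a small Python sketch. The table names, masters, slaves, modes and address ranges below are hypothetical stand-ins, not the TrekApp's actual schema.

```python
# Hypothetical access-control rules in the tabular spirit of Figure 1.
# Each table captures one independent aspect of security; the check
# function chains them in a simple linear flow. All names are
# illustrative assumptions, not the TrekApp's real schema.
ACCESS_TABLE = {            # which masters may talk to which slaves
    "cpu": {"ram", "rom", "hrot"},
    "dma": {"ram"},
}
MODE_TABLE = {              # modes in which an exchange is valid
    "hrot": {"secure"},
    "ram":  {"secure", "normal"},
    "rom":  {"secure", "normal"},
}
PROTECTED_RANGES = [(0x4000, 0x4FFF)]  # addresses needing protection

def access_allowed(master, slave, mode, addr):
    """Apply each rule table in turn: a serial flow of checks,
    each of which must be satisfied for the access to be legal."""
    if slave not in ACCESS_TABLE.get(master, set()):
        return False    # master/slave pairing not permitted
    if mode not in MODE_TABLE.get(slave, set()):
        return False    # slave not in a valid mode for this access
    protected = any(lo <= addr <= hi for lo, hi in PROTECTED_RANGES)
    if protected and mode != "secure":
        return False    # protected range reached outside secure mode
    return True
```

Keeping each aspect in its own table means a missing entry is visible at a glance, which is exactly the point made below about the expanded graph.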
The TrekApp automatically turns these tables into a PSS model as shown in Figure 2.
Note the added level of complexity when looking at the overlay of these rules. The tables provide a clear and concise input mechanism that separates concerns and makes it easy to spot missing elements. That would be far more difficult to do in the expanded graph.
Now we can use the power of the solver in the Test Suite Synthesis engine to explore all combinations of the required access control. Some of the paths may be marked as unreachable, meaning the solver cannot find any way to perform what would have been an illegal access. This is good. Others may result in what it considers a counterexample showing that an illegal access is possible. That trace can be simulated and may show that details of the implementation do prevent it from happening; this, too, is an important coverage goal that can be checked off. If one of these testcases succeeds, the tool has found a vulnerability that can be fixed in the design.
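The triage just described can be sketched as a simple enumeration: walk every combination the solver would explore and bucket it as either a legal access (a positive test) or an attempted illegal access (a counterexample to drive in simulation). The rule function and the master/slave/mode names are illustrative assumptions standing in for the real access-control model.

```python
from itertools import product

# Enumerate every combination the solver might explore and bucket it
# as a positive test or a counterexample. The rule below is a
# hypothetical stand-in for the real access-control model.
MASTERS = ["cpu", "dma"]
SLAVES = ["ram", "hrot"]
MODES = ["secure", "normal"]

def rule_allows(master, slave, mode):
    """Illustrative rule: only the cpu in secure mode reaches the HRoT."""
    return slave != "hrot" or (master == "cpu" and mode == "secure")

positive_tests, counterexamples = [], []
for combo in product(MASTERS, SLAVES, MODES):
    bucket = positive_tests if rule_allows(*combo) else counterexamples
    bucket.append(combo)

# Each counterexample becomes a directed test: drive the access in
# simulation and check that the implementation actually blocks it.
# If such a test succeeds, a vulnerability has been found.
```

Crucially, both buckets contribute coverage: positive tests confirm legal accesses work, and counterexamples confirm the implementation blocks illegal ones.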
This approach is not exhaustive in terms of covering every detail of the design, but it can help close the gap, and do so in a simple, methodical manner right across a full system. Tools like this enable design and verification teams to uncover vulnerabilities in their systems. This particular TrekApp was developed for a real customer, with a real design and real concerns. We are listening and learning along with them. Together we can do this.