SE: Is there a separate security sign-off that is done in addition to everything else? Has that ever been implemented in the EDA flow?
Borza: It’s starting to be.
Kelf: We have a couple of customers who specifically have security sign-off in their flow right now. Interestingly, they have it at the end of their chip development, but they also have it for all the IP they’re bringing in from a variety of different places. And now we’re looking at things like RISC-V, with nested IP being brought into some of these designs, as well. Security sign-off has to be multifaceted. It has to include IP as it’s being brought in, and the end of the design, as well. And it has to happen throughout the whole process, with different sign-off gates. I can think of three customers that have pretty rigorous sign-off at tape-out of the RTL, and then later on, as well.
Hardee: We know of major device manufacturers who are rigorously putting security requirements on their suppliers. One area where we’re seeing that is NFC chips, and all methods of payment for mobile. That will grow. Everyone makes the comparison to functional safety, looking for the equivalent of the ASIL levels or ISO 26262. That’s definitely an area that a lot of people are exploring to see what the standards could be. But you can’t take that analogy too far. There’s no ‘meet this number, and you’re good’ that can be strictly applied to security at this point in time. There are databases, like MITRE.org’s CWE (common weakness enumeration), and these kinds of things that help, but they’re by no means comprehensive. A lot of people are going to be keeping some known security vulnerabilities to themselves, while others are going to be sharing them. A lot of the work done by academia to expose security vulnerabilities is extremely valuable in adding to this, but we really don’t know what we’re testing against to tick the box and say, ‘Yeah, you’re good.’ The threat is always changing, and it can’t easily be captured in a standard or a list of vulnerabilities. That’s one of the things that is annoying and exciting at the same time about this domain.
SE: One of the solutions proposed is a digital twin. But what happens if someone hacks a digital twin?
Hallman: Then your system is compromised either on the analysis aspect or on the real aspect, but there’s a mismatch. It requires further investigation to really resolve that type of mismatch. Keeping the digital twin alive — keeping it during operation — allows you to continue analyzing the data that’s coming in from the real system, monitoring the real system as well as the digitized one. It’s still a comparison that you can leverage to help your security.
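The comparison Hallman describes can be sketched as a simple runtime check: stream telemetry from the fielded system alongside the twin’s predictions and flag any sample that diverges beyond a tolerance. The field values, tolerance, and function names below are illustrative assumptions, not from any specific product.

```python
# Illustrative sketch: flag divergence between a real system's telemetry
# and its digital twin's predictions. Threshold and sample values are
# hypothetical.

def detect_mismatch(real_samples, twin_samples, tolerance=0.05):
    """Return indices where real and twin telemetry diverge beyond a relative tolerance."""
    mismatches = []
    for i, (real, twin) in enumerate(zip(real_samples, twin_samples)):
        if abs(real - twin) > tolerance * max(abs(twin), 1e-9):
            mismatches.append(i)
    return mismatches

# A mismatch alone doesn't say which side was compromised -- the real
# device or the twin -- only that further investigation is needed.
real = [1.00, 1.02, 1.01, 1.90, 1.03]
twin = [1.00, 1.01, 1.02, 1.02, 1.03]
print(detect_mismatch(real, twin))  # sample 3 stands out
```

In practice the twin’s “prediction” would come from a physics or behavioral model rather than a fixed list, but the detect-then-investigate loop is the same.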
SE: So you have a device that has to be updated throughout its lifetime to fend off new threats?
Borza: Absolutely. And now you’re starting to see attempts to monitor the population of devices, looking at the overall population behavior and at what the subsets are. If you’ve got location information, you may be seeing trends emerging at certain locations, because things in the same geographic region are being hacked or succumbing to some kind of attack. You start to see the power of being able to network these things together and go beyond the chip or the individual isolated system, and connect them up via the internet and look at their behaviors globally. That gives you a very powerful tool to start building on as a means to respond to these kinds of things. But the notion that you can ship something and forget about it really has to go away for most applications. If it’s connected to the network, its security just degrades over time relative to the state of attack technology. And so it’s necessary to be able to update those things and to monitor the behavior, looking for signs that a device is being exploited.
SE: So it’s no longer just perimeter security. Now it’s a four-dimensional problem, right?
Borza: Yes. If you look at how the automobile is evolving, it’s basically a rolling network. Some people say it’s turning into a rolling supercomputer, but it’s more than a supercomputer. It’s a network of very powerful computers, and it’s now being connected to the outside world in real-time. You still have the possibility of people trying to get things in to exploit it in real-time, but there’s also the notion that you can get something onto the system and have it start working its way through the system to penetrate it while it’s in motion or just sitting there in your driveway.
Hardee: The whole idea of being able to get into the supply chain with hardware Trojans was a big motivator for the CHIPS Act. There are national security aspects to being in charge of your own manufacturing destiny. It’s a huge consideration.
Kelf: With automotive, one client we were talking to has a whole range of self-test capability already built into the chips for safety, and they’ve been starting to insert the same for security. They call it BIST, but here it stands for built-in security testing. They’re inserting the security programs, which run on the processors when they’re idle, to double-check that the hardware hasn’t somehow changed. And they run comparisons against the hardware, especially FPGA hardware that might be updated in real-time based on DSP-type workloads, just to check that no Trojans have been slid in and that nothing has been brought in from the network outside. They’re trying to figure out what these comparison programs look like. Do you have a digital twin in some form as a comparison? Or do you have some original design where you can run an equivalency check on it? There are all these different ideas going on about how to identify a Trojan that has been inserted into an automotive design.
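One simple form of the idle-time check Kelf describes is to hash a readback of the hardware configuration and compare it against a golden digest captured at provisioning. The sketch below assumes a hypothetical platform hook for reading configuration memory; real self-test programs are far more involved, and FPGA readback access is vendor-specific.

```python
# Hedged sketch of an idle-time "built-in security test": hash the current
# hardware configuration and compare against a golden reference digest.
# read_config_memory is a hypothetical platform-provided callable.
import hashlib

def digest(config_bytes: bytes) -> str:
    return hashlib.sha256(config_bytes).hexdigest()

def integrity_check(read_config_memory, golden_digest: str) -> bool:
    """True if the current hardware configuration matches the golden reference."""
    return digest(read_config_memory()) == golden_digest

golden = digest(b"fpga-bitstream-v1")          # captured at provisioning time
print(integrity_check(lambda: b"fpga-bitstream-v1", golden))          # unchanged
print(integrity_check(lambda: b"fpga-bitstream-v1-TROJAN", golden))   # modified
```

A hash only detects that something changed; deciding whether a legitimate field update or a Trojan caused the change requires the signed-update and equivalency-check machinery discussed above.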
Hallman: There are lots of architectural decisions made during design that you may want to build into your system to support that type of monitoring and detection. Some will have a type of recovery, including rudimentary error correction-type algorithms. But that ability to detect and respond to threats in real-time is now becoming part of your development cycle, and part of your security profile of what you have to do at a chip level and for the system.
Karazuba: From an AI perspective, when you talk about deployment of any kind of self-driving, which is one of the predominant uses of AI in automobiles today, there’s a tremendous amount of discussion about the security of data as it’s moving through a system. It seems rudimentary, but you want to make sure the data coming from a camera or from a lidar is real, and it’s not somehow being touched along the way. You can imagine the threat profile of the sensors of a car being hacked prior to the data actually getting to the ADAS processing. Automakers are putting a lot of work into securing the AI processing and the data within that system. And it’s only going to get larger and larger.
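A minimal way to detect sensor data being “touched along the way” is to authenticate each frame with a message authentication code, assuming a symmetric key shared between the sensor and the ADAS processor. Production automotive stacks use dedicated schemes (e.g., AUTOSAR SecOC); the key and frame contents below are purely illustrative.

```python
# Sketch: authenticate sensor frames in transit with an HMAC tag so the
# ADAS processor can detect tampered or spoofed data. The shared key and
# frame format are hypothetical.
import hmac
import hashlib

KEY = b"shared-secret-provisioned-at-manufacture"  # illustrative only

def sign_frame(frame: bytes) -> bytes:
    return hmac.new(KEY, frame, hashlib.sha256).digest()

def verify_frame(frame: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign_frame(frame), tag)

frame = b"lidar:point-cloud-chunk-0042"
tag = sign_frame(frame)
print(verify_frame(frame, tag))                   # True: genuine frame
print(verify_frame(b"lidar:SPOOFED-chunk", tag))  # False: tampered frame
```

Authentication alone proves integrity and origin, not freshness; real designs add counters or timestamps so a recorded genuine frame can’t simply be replayed.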
Hardee: One bonus we’re seeing is the research going into things like homomorphic encryption, where you’re operating on encrypted data. You never decrypt. The compute power needed to do that is pretty extreme, but it’s something that’s in the works.
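The property Hardee refers to can be shown with a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so the data is never decrypted during the computation. The primes here are deliberately tiny and insecure; real deployments use large keys and vetted libraries.

```python
# Toy Paillier cryptosystem illustrating additive homomorphism:
# computing on data while it stays encrypted. Insecure demo parameters.
import math
import random

p, q = 17, 19                       # demo primes (far too small for real use)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2               # ciphertext multiply = plaintext add
print(decrypt(c_sum))                # 42, computed without ever decrypting the inputs
```

Paillier only supports addition on ciphertexts; the fully homomorphic schemes being researched for arbitrary computation are the ones with the extreme compute cost Hardee mentions.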
Borza: As a corollary to that, the training data is also being gathered in real-time and communicated to the manufacturer to develop new models and upgrade existing ones over time, so it has to be communicated securely and its security maintained. Once you get into AI, you’re actually talking about being able to put Trojans into the models in the form of triggerable behaviors. You can produce a desired response, even as a rare event, and subsequently trigger it later. That’s one of the big concerns about AI — the lack of observability at this point, and of controllability when the AI goes off the rails and takes a wide left turn into unknown territory.
Karazuba: In 2018, there was a huge push to secure the training models for the exact reasons that we’re talking about. We’re seeing those training models increasingly secured. We’re also seeing a lot of evidence now that security on the inference end is increasing. That’s a good thing for anyone who walks anywhere near a car, or drives a car, or exists in modern society.
SE: The whole idea behind AI is that your system is going to optimize for whatever you’re trying to do. You have subtle shifts that may or may not be legitimate, but you don’t necessarily know that. How do you verify this thing is functioning in the right way?
Borza: It’s interesting to watch ChatGPT, because they ended up with a viral hit on their hands, and now they’re going through this in real time. There were a lot of things that they knew were issues and that they would have to deal with eventually, but I don’t think they were planning to have to deal with them as quickly as they are now. They rolled it out as a kind of advanced research prototype, just to see what would happen and probably to gather more training data and find out how real users will try to use this thing. What they found is that a whole bunch of them are trying to probe the edges of what it’s capable of to see if they can push it into some form of unacceptable response or some portion of the response envelope that the ChatGPT people weren’t planning on having them go to. So now they have to adapt and figure out how to try to put it back in the box. We’re seeing this going on in real-time. They’re really scrambling to put a lot of resources into that. It’s a natural consequence of having something that’s very appealing to people, and for masses of people to go and experiment with. And then some subset of them is really interested in seeing what kinds of perverse behaviors you can get. And we’re able to watch it all because it’s happening in very public ways.