SE: When we talk about AI, are we talking about real AI, or machine learning, or reinforcement learning? And what are the benefits and drawbacks of each?
Jackson: What we’re seeing with a lot of the applications today is machine learning. This is where a lot of the initial focus is. More general AI is coming downstream. From an application development standpoint, we’re creating applications that target specific problems, like printed circuit board layout. And then there’s another application that will help with functional verification, and a different application that can help with the exploration of PPA for digital IC design. It’s not one tool that fits all. It’s different tools for different applications.
SE: So this follows the divide-and-conquer approach that designers have always taken?
Jackson: From an EDA standpoint, it’s a very natural extension of what has happened over time. There’s not just one EDA tool. There are many specialized tools, such as design-rule checking, functional verification, formal methods, and place-and-route. And what’s happening from an EDA perspective is that each of these product teams is investing in AI in order to do a better job than what they’re doing today and unleash more productivity for the end user.
Pan: It’s about taking some hardware parameters and tuning the entire flow. But each point tool being tuned also has its own specific AI-driven applications and optimizations. There are all kinds of ML algorithms: supervised learning, semi-supervised learning, unsupervised learning, and reinforcement learning. It depends on the application.
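As a rough illustration of flow tuning, the sketch below runs a simple random search over a handful of hypothetical tool parameters and keeps the best-scoring trial. The parameter names and the scoring function are placeholders, not any particular vendor’s flow; a production setup would plug in real tool runs and could substitute a supervised model or reinforcement learning for the search.

    import random

    # Hypothetical flow knobs; a real flow would expose vendor-specific parameters.
    PARAM_SPACE = {
        "target_utilization": [0.60, 0.70, 0.80],
        "clock_uncertainty_ps": [20, 40, 60],
        "max_fanout": [16, 24, 32],
    }

    def run_flow_and_score(params):
        # Placeholder for launching the real tool flow and reading back a PPA metric.
        # Fabricated here so the sketch runs end to end; lower is better.
        return random.random()

    best_params, best_score = None, float("inf")
    for _ in range(20):
        candidate = {name: random.choice(values) for name, values in PARAM_SPACE.items()}
        score = run_flow_and_score(candidate)
        if score < best_score:
            best_params, best_score = candidate, score

    print("best parameters:", best_params, "score:", round(best_score, 3))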
Sumner: With the applications we have, at least in the near term, we’re thinking about what’s the problem we need to solve. It’s not just about scratch detection on a wafer. We’re seeing a need for data reduction, because there’s too much data in the field and only the anomalous data needs to come back. These are data problems, and so we’re applying AI techniques to it in order to see if that’s the best tool for the job. As we find the right ways to make engineers more productive, so they can come back in the morning and get their reports, then we’ll look at the right techniques to do it. But back to the question, ML is what we’re focusing on today.
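One way to read the data-reduction point is to flag anomalous records at the source and send back only those. The sketch below uses an isolation forest on synthetic measurements purely as an illustration; the data shape, contamination rate, and choice of detector are assumptions, not a description of any specific product.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Stand-in for field measurements: 10,000 records with 8 features each.
    measurements = rng.normal(size=(10_000, 8))

    detector = IsolationForest(contamination=0.01, random_state=0)
    labels = detector.fit_predict(measurements)   # -1 = anomalous, 1 = normal

    anomalies = measurements[labels == -1]        # the reduced set to send back
    print(f"kept {len(anomalies)} of {len(measurements)} records")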
Yu: Sometimes the human engineer may have explored all the possibilities within the constraints: the test requirements, the small form factor, high density, and high speed. So we are looking to AI to help us explore outside of the box and come up with a solution that we never thought about before, allowing us to accelerate our hardware design and optimize that design in a very unique way. I have high hopes that in the future AI can improve designs and push them to the next level.
SE: Is every company going to have its own customized AI system? If you think about tools in the past, they were created for many different designs and use cases. But with AI, we’re dealing with very customized approaches to whatever a company is doing.
Sumner: Our users can leverage the infrastructure we have to solve different problems. They each have different data they’re going to train on in different datasets, but we have an infrastructure that’s robust and flexible enough for accessing their data and running the algorithms they want to run. Those are things people will need to adopt over time, because it’s hard to mix and match data access and algorithms.
Jackson: Large companies are going to be investing in AI internally to solve, optimize, and deal with challenging problems. From an EDA standpoint, we’re looking at how to provide more general solutions that deliver value to our customers. You can have both. There are going to be internal AI investments companies are making, and EDA companies will be looking to provide solutions that scale and can be used across different companies to do a better job at solving those problems.
Pan: There is a lot of infrastructure available from bigger companies like Google (TensorFlow), Facebook (PyTorch), and Microsoft, so you don’t have to start from scratch. I don’t think every company wants to create its own infrastructure or re-invent the wheel. And with all this infrastructure, you can customize it and develop new algorithms on top of that fundamental software.
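As a small example of building on existing infrastructure rather than reinventing it, the sketch below defines a custom model on top of PyTorch. The model name, feature count, and layer sizes are arbitrary placeholders; the point is only that the framework supplies the tensors, layers, and training machinery.

    import torch
    from torch import nn

    class TimingPredictor(nn.Module):  # hypothetical name, for illustration only
        def __init__(self, n_features: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, x):
            return self.net(x)

    model = TimingPredictor(n_features=16)
    batch = torch.randn(32, 16)        # dummy input batch
    print(model(batch).shape)          # torch.Size([32, 1])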
SE: But isn’t the whole goal of AI and machine learning to optimize whatever you have, so each one develops something a little different than everyone else?
Pan: That’s for the application itself. If you have genius engineers with various skills, programmers, mathematicians, and a lot of infrastructure, then sure, those people will continue pushing the state of the art. But most users and small companies probably will just customize that for their own applications. You don’t need to reinvent that infrastructure.
Sumner: The base technology should work for everybody, and it can be customized for a particular application. Sometimes the vendor will have a ‘canned’ application, while in other cases they provide the infrastructure so that whoever the user is can solve their own problem. Both approaches are going to be important.
SE: That sort of goes back to the limits of using AI within the tool, right? So the tool is the framework for how far ML or AI can venture outside of the box, because by staying within the box it should function as expected.
Yu: We had a very different application, where we were trying to do more debug. Right now the debug is still done by humans. The engineer has to go to the lab and check a design piece-by-piece and make decisions based on the data. It’s a very time-consuming process. Sometimes your schedule requires you to release on a certain day, and you spend two weeks in the lab trying to identify an issue. We have not reached the point yet where AI can help us debug the real problem in a product, not just in a simulation but also in the lab, and take the measurements, determine how to process the data, and conclude where the root cause might be. But I’m looking forward to seeing that happen.
SE: How do we know an AI system is working properly?
Jackson: For EDA, we have golden sign-off tools to confirm the chip that’s being designed is okay from a timing standpoint and from a design rule standpoint. That’s the best way to validate that any upstream techniques to accelerate or find a better solution are not introducing a problem.
Sumner: The key thing about adoption of this technology is trust. Do you trust that this is going to do the right thing for you? Trust is earned over time. There are a number of ways that it’s going to happen. Some of it is just by observing. We’re engineers, and we’d love to know how everything works, so we want to hear all of the details. But in the end, you get a certain comfort level. When you buy a car from a reliable manufacturer, you’re probably not looking at the test reports because you know the manufacturer has a good track record. That’s the way this technology is going to evolve. What we’re rolling out now is with a human in the loop, and they get to see what happens. If they trust this works, great. If it’s wrong, they get to make the model better.
Pan: Ultimately, the key question is why it won’t work. That’s what’s behind all the research in explainable AI. Sometimes it may be counterintuitive to you and it still works, but maybe you can make it work better. If it can help us to gain more insights, that’s interesting. Security, trust, and explainability need to be kept in mind while we are deploying machine learning in a real system.
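For concreteness, one simple explainability check is permutation importance: shuffle one input feature at a time and see how much the model’s predictions degrade. The sketch below applies it to a generic regressor on synthetic data with known drivers; it is only an illustration of the idea, not a method any of the participants described.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    # Synthetic target driven mainly by features 0 and 2, plus a little noise.
    y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for idx in result.importances_mean.argsort()[::-1]:
        print(f"feature {idx}: mean importance {result.importances_mean[idx]:.3f}")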