Views on intellectual property (Part 6)

2014-11-12 17:07

The previous part of this experts' roundtable tackled the advantages EDA companies may have in developing and licensing IP. Today we turn our attention to IP integration issues. Taking part in today's discussion are: Mike Gianfagna, vice president of corporate marketing at Atrenta; Warren Savage, CEO of IPextreme; John Koeter, vice president of marketing for the solutions group at Synopsys; and Chris Rowen, Cadence Fellow and CTO at Tensilica.


Brian Bailey: There aren't many IP integration tools. Why is that?


Mike Gianfagna: We talk about IP-XACT, which appears to be a robust standard, but who's adopting it? How many people? It's interesting. In fact, we see a lot of customers that resist IP-XACT. I'm not sure why, other than that change is hard. There are a lot of scripts out there for chip assembly today. More automation and sophistication are needed. But people change when forced to, not when they want to.


Brian Bailey: Are there no chip integration failures?


Mike Gianfagna: We have a product in this space that does RTL-level assembly, and we're seeing it pick up at a substantial rate. I'd like to say we were genius marketing people, but it's not that. People are starting to see the problem. When you're forced to adopt a new tool, you do, and not a minute before.


Warren Savage: I disagree, to some extent. People have been talking about automated IP assembly for 15 years, and I've never seen it adopted. People ask about IP-XACT, but I've never had a customer ask me for IP-XACT models. My theory is that demand for sub-systems is growing, and their complexity is growing faster than the EDA companies' ability to deliver a tool. These sub-systems involve a lot of different pieces that are not very standardised and not easily expressed in the IP-XACT format, plus you have the software component. The complexity curve of the sub-systems is growing faster than EDA can anticipate what they are going to look like.


Chris Rowen: The process of integrating these blocks is so messy and awkward that many users cannot visualise a standard methodology or set of tools that could do it. They think they have to do it themselves. A lot of people believe their problem is unique, and they are not willing to compromise their solution in order to move towards standards. But the problem is severe enough that, inevitably, they will put themselves in a position where they can compromise.


I think the first step is getting people to adopt system-level modelling in a form that allows them to do both architectural exploration and find out about the traffic flow in their chip. Then that segues into becoming the verification structure in which the individual sub-systems can be buttoned up, and becomes the whole-chip testbench in which you can do the last round of full verification at whatever level of detail you need. You need the ability to migrate and transmogrify those models to be the universal testbench for everything: for energy verification and signal integrity and logical correctness and software validation and all the other must-haves that constitute full-chip integration.


Brian Bailey: We have mentioned a couple of times the benefits that would come from a top-down flow. If we take power as an example, where is power specified? We would hope at the high level, with the specification then pushed down into the blocks as requirements. But IP is part of a bottom-up process. How do they meet in the middle?


Chris Rowen: The old-fashioned way of doing worst-case power analysis is dead. The old-fashioned way was that you got a power specification for each of the building blocks, found its worst-case power, and added them all together, and that was your worst-case power. If you did that, you would conclude that no phone could ever work. That's not the way power analysis is done. You have to look at 10 or 100 different scenarios, each defining the level of activity in every sub-system for a plausible use case. This whole process of getting from worst-case analysis, which is bottom-up, to scenario analysis, which is inherently top-down, is an essential part of that transition.
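
As a rough sketch of the arithmetic Rowen is contrasting, the Python fragment below sums per-block worst-case power the old way and then weights each block by a per-scenario activity factor. All block names, power figures, and activity factors are invented for illustration; they do not come from any real design or any tool mentioned here.

```python
# Worst-case (active) power per sub-system, in milliwatts (hypothetical values)
worst_case_mw = {"cpu": 900, "gpu": 1200, "modem": 700, "dsp": 400, "io": 300}

# Naive bottom-up estimate: every block at its worst case simultaneously
naive_total = sum(worst_case_mw.values())
print(f"Naive worst-case total: {naive_total} mW")  # 3500 mW -- "no phone could ever work"

# Scenario analysis: per-block activity factors for plausible use cases (invented)
scenarios = {
    "video_playback": {"cpu": 0.2, "gpu": 0.6, "modem": 0.0, "dsp": 0.5, "io": 0.3},
    "web_browsing":   {"cpu": 0.4, "gpu": 0.2, "modem": 0.3, "dsp": 0.1, "io": 0.2},
    "voice_call":     {"cpu": 0.1, "gpu": 0.0, "modem": 0.5, "dsp": 0.4, "io": 0.1},
}

# Each scenario's estimate weights every block's worst case by its activity
for name, activity in scenarios.items():
    total = sum(worst_case_mw[b] * activity[b] for b in worst_case_mw)
    print(f"{name}: {total:.0f} mW")
```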


John Koeter: I think you will see software-driven power intent at the TLM level. I think that's inevitable. You won't be able to get absolute power, but you'll be able to get relative power. What you have to do is implement the various power settings that are within the standards. But then you can do things like have dual-rail architectures, you can put in voltage domains, you can do all that stuff to lower your overall power.
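
A minimal sketch of what relative (rather than absolute) power at the TLM level might look like, assuming a toy transaction-level model in which each block accumulates uncalibrated energy tokens per transaction, and a lowered voltage domain scales dynamic energy roughly with V squared. The block names, token costs, and configurations are hypothetical, not a description of any Synopsys flow.

```python
class Block:
    """A toy transaction-level block that logs relative energy, not mW."""

    def __init__(self, name, tokens_per_txn, vdd=1.0):
        self.name = name
        self.tokens_per_txn = tokens_per_txn  # relative cost per transaction, uncalibrated
        self.vdd = vdd                        # per-voltage-domain supply (assumed)
        self.energy = 0.0

    def transact(self, n=1):
        # Dynamic energy scales roughly with V^2; the result is relative, not absolute
        self.energy += n * self.tokens_per_txn * self.vdd ** 2

# Two configurations of the same hypothetical design:
# a single rail versus a second voltage domain for the DSP
baseline  = [Block("cpu", 5.0), Block("dsp", 2.0)]
dual_rail = [Block("cpu", 5.0), Block("dsp", 2.0, vdd=0.8)]

for config, blocks in [("single rail", baseline), ("dsp at 0.8V", dual_rail)]:
    for b in blocks:
        b.transact(1000)  # same software-driven workload in both configurations
    total = sum(b.energy for b in blocks)
    print(f"{config}: relative energy {total:.0f}")
```

The point of such a model is exactly what Koeter describes: the two totals cannot be trusted as absolute numbers, but their ratio lets you compare power-intent choices such as voltage domains early, driven by the software workload.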


Chris Rowen: You have to characterise power in states 0, 1, 2, 3, and 4, and you have to characterise power across different workloads. In a power-sensitive processor design, the active power of the most intensive and least intensive sequences can differ by as much as an order of magnitude, so it isn't meaningful to say that the active power of this processor is X.
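
One way to picture such a characterisation is a table indexed by both power state and workload, as in the hypothetical sketch below; the states, workload names, and numbers are invented solely to show why a single active-power figure is not meaningful when the spread is an order of magnitude.

```python
# Hypothetical characterisation: mW per (power state, workload)
power_mw = {
    # state 0: full-speed active
    (0, "fft_kernel"):   850,   # most intensive sequence
    (0, "control_code"):  90,   # least intensive sequence, roughly 10x apart
    # state 1: reduced-frequency active
    (1, "fft_kernel"):   320,
    (1, "control_code"):  45,
    # states 2-4: idle, sleep, retention (workload-independent here)
    (2, "idle"): 12,
    (3, "idle"): 3,
    (4, "idle"): 0.5,
}

# The spread within a single state shows why "active power is X" is underspecified
spread = power_mw[(0, "fft_kernel")] / power_mw[(0, "control_code")]
print(f"Active-power spread in state 0: {spread:.1f}x")
```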


Warren Savage: The frontier here is a modelling issue: how, at the system level, can these things be modelled? IP providers are going to optimise their particular IP around whatever techniques or standards are available. At the system level, it will increasingly involve the software guys.

 

Brian Bailey

EE Times
