In the previous instalment of this experts' roundtable, we discussed IP integration issues. We now conclude this series by looking at the issue of system-level modelling. Taking part in today's discussion are: Mike Gianfagna, vice president of corporate marketing at Atrenta; Warren Savage, CEO of IPextreme; John Koeter, vice president of marketing for the solutions group at Synopsys; and Chris Rowen, Cadence Fellow and CTO at Tensilica.
Brian Bailey: Are we at the point where enough high-level models exist for people to do system modelling?
Chris Rowen: I think a lot of pieces are coming together, but it's not a no-brainer yet.
Warren Savage: John and I recently fielded a question from a customer: "Where are my SystemC models?" I asked, "Are you willing to pay extra for those models?" The response was, "No, that should be part of the price." I think we're still at the point where the value proposition is not there for people to pay extra money to fund the development of these models. I think it's coming, but it's a slow slog.
John Koeter: It's definitely coming, though.
Savage: I think it will probably happen for the things where there's a lot of volume, but then we're going to have the fringes, the verticals, where creating models is not economical unless there's a compelling reason.
Koeter: You also have to ask, "What is a system-level model?" Is it a loosely timed model? Is it cycle accurate? Is it cycle approximate? These terms get thrown around without very precise definitions.
Mike Gianfagna: That's a tremendous problem. You trade accuracy for speed. You cannot run software on a cycle-accurate model.
Bailey: Isn't there enough demand to clarify what is necessary?
Koeter: For us, it is very clear. Our models, generally speaking, are loosely timed, and through partners we have access to their fast models. We create models for our own IP. We also have another tool that is aimed more at platform architecture. For that, you need at least approximately timed models, if not cycle accurate, so the needs are different. Are you analysing your architecture, or are you creating a virtualized model of your SoC for software development? Those are two totally different scenarios: closely related, but different.
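For readers less familiar with the terminology the panel is using: in SystemC TLM-2.0 terms, "loosely timed" boils down to a single blocking call per transaction with an approximate annotated delay, rather than a cycle-by-cycle handshake. The sketch below is purely illustrative; the module names (LtMemory, LtInitiator) and the 10 ns latency are invented for the example and are not taken from any participant's products.

```cpp
// Minimal loosely timed (LT) TLM-2.0 model: one b_transport call per access,
// with timing carried as an annotated delay instead of clocked handshakes.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>

struct LtMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<LtMemory> socket;
    unsigned char mem[4096] = {0};

    SC_CTOR(LtMemory) : socket("socket") {
        socket.register_b_transport(this, &LtMemory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &mem[addr], trans.get_data_length());
        else
            std::memcpy(&mem[addr], trans.get_data_ptr(), trans.get_data_length());
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // approximate latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct LtInitiator : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<LtInitiator> socket;

    SC_CTOR(LtInitiator) : socket("socket") { SC_THREAD(run); }

    void run() {
        tlm::tlm_generic_payload trans;
        unsigned char data[4] = {0};
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_address(0x100);
        trans.set_data_ptr(data);
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);  // one blocking call per access
        sc_core::wait(delay);               // settle the annotated delay
    }
};

int sc_main(int, char*[]) {
    LtInitiator init("init");
    LtMemory    mem("mem");
    init.socket.bind(mem.socket);
    sc_core::sc_start();
    return 0;
}
```

The speed comes from what the model leaves out: no clock, no pipelining, no bus arbitration, which is exactly the trade-off Gianfagna describes.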
Rowen: The ideal is to get to a system in which there is one big knob that sets the level of accuracy for each block. Given that the trade-off is usually performance versus level of detail, people can then work systematically from the software scenarios down to the cycle-by-cycle scenarios without switching tools or methodologies. I think this is a useful by-product of some of the consolidation that has taken place. It is likely to be a barrier for some of the small shops if they have to offer this range of models. But a seamless model that scales from being fast enough to boot an operating system to accurate enough to figure out picosecond-by-picosecond power is ideally what you want.
The fact that somebody needs something new, differentiated, and important is the first step towards a real industry shift to supporting this kind of modelling at every level, whether it is entirely in simulation, emulated, or running in FPGAs. What is really critical is that it be seamless.
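One way to picture Rowen's "one big knob" is a platform in which every block exposes a single stable interface, with interchangeable models of different accuracy behind it. The following sketch is hypothetical; the class and enum names are invented for illustration and do not come from any vendor's tooling.

```cpp
// Hypothetical per-block accuracy "knob": one interface, swappable models.
#include <cstdio>
#include <memory>

enum class Accuracy { LooselyTimed, CycleAccurate };

struct MemModel {
    virtual unsigned latency_cycles() const = 0;  // cost of one access
    virtual ~MemModel() = default;
};

// Fast: a fixed, approximate latency, good enough to run software against.
struct LtMem : MemModel {
    unsigned latency_cycles() const override { return 10; }
};

// Exact: a detailed model (bank conflicts, refresh, ...) would live here.
struct CaMem : MemModel {
    unsigned latency_cycles() const override { return 12; }
};

// Turning the knob swaps the model without touching the platform description.
std::unique_ptr<MemModel> make_mem(Accuracy a) {
    if (a == Accuracy::CycleAccurate)
        return std::unique_ptr<MemModel>(new CaMem);
    return std::unique_ptr<MemModel>(new LtMem);
}

int main() {
    auto mem = make_mem(Accuracy::LooselyTimed);  // dial accuracy per block
    std::printf("access latency: %u cycles\n", mem->latency_cycles());
    return 0;
}
```

Because the interface is shared, the same platform description serves both the software scenarios and the cycle-by-cycle analysis; only the model behind each block changes.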
***
And with that, we conclude our discussion on IP. Do you use system-level modelling, or do you think it has enough value that you would be willing to pay for the creation of models? What do you use it for, and what level of models would be the most valuable to you?
Brian Bailey
EE Times