Mentor's Harry Foster continues to release instalments of the Wilson Research study results, and in this instalment he talks about some of the system-level aspects of chips, such as the number of clock domains, which has basically stayed level over the past 10 years, and the verification of power management and other power-related aspects of a design. This topic is interesting because it is the first time that a piece of functionality truly exists at the system level and cannot be fully verified at the block level.
Sixty-seven per cent of designs now contain power management, although many of these still use very basic mechanisms, such as clock gating. But I'm not talking about basic mechanisms here. Those mechanisms can be verified at the block level and present nothing that has to be managed at the system level.
Let me give you the surprising numbers first, followed by my interpretation of them. Over 30 per cent of the companies in the survey that actively manage power spend less than 10 per cent of their time doing power-aware simulation, and almost 80 per cent of those prefer to use directed testing to accomplish it.
First, I am not surprised that the total verification time spent on power management is low. Let's face it, power management is only a small part of the functionality of the chip; if it were a large percentage of the chip's functionality, that would indicate a very serious problem. I'm actually surprised at how many companies report spending significant amounts of time verifying power management. For example, about 7 per cent of the companies say they spend 30 per cent to 39 per cent of their time on this one problem, and many report spending even more. I have to believe that these companies either did not understand the question or were answering in terms of an individual engineer's responsibility. If a large number of power-verification specialists responded to the survey, that could skew the results, but I do wonder about the 30 per cent of respondents who claim to be spending 20 per cent or more of their time on this one task.
If we look at how respondents are accomplishing this verification, we see that almost 80 per cent of them are using directed test methods. Thirty per cent report using constrained random, and I have to believe that most of these are concentrating on the power functionality at the block level. Constrained random has a lot of difficulty dealing with system-level functionality, and as far as I know there are no constrained-random generators that take power intent into account.
As Harry's blog states, "Since the power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture the power intent." Attempting to verify power intent using constrained random would be a very inefficient methodology, so directed testing is one of the few methods available to these teams. Today, this could be stated more broadly: any system-level verification is likely to use directed tests, because processors are involved and constrained random does not handle processors well.
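To make the quote concrete, notations such as UPF (IEEE 1801) capture the power intent in a file that sits alongside the RTL rather than inside it. A minimal sketch of what such a fragment might look like follows; the domain, supply and signal names (PD_CORE, u_core, VDD_CORE, VDD_AON, iso_en) are invented for illustration, and the always-on supply is assumed to be defined elsewhere:

    # Hypothetical UPF (IEEE 1801) fragment: the power intent lives alongside,
    # not inside, the RTL. All names are made up for this example.
    create_power_domain PD_CORE -elements {u_core}     ;# switchable core domain
    create_supply_port  VDD_CORE
    create_supply_net   VDD_CORE -domain PD_CORE
    create_supply_port  VSS
    create_supply_net   VSS -domain PD_CORE
    set_domain_supply_net PD_CORE -primary_power_net VDD_CORE -primary_ground_net VSS

    # Clamp the domain's outputs to 0 while it is powered down,
    # using an always-on supply (VDD_AON) assumed to exist elsewhere
    set_isolation         iso_core -domain PD_CORE -isolation_power_net VDD_AON \
                          -clamp_value 0 -applies_to outputs
    set_isolation_control iso_core -domain PD_CORE -isolation_signal iso_en \
                          -isolation_sense high -location parent

Because behaviour like this is layered onto the design from outside the RTL, a constrained-random generator that only sees the RTL has no visibility into it, which is why teams fall back on directed tests.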
We are beginning to see solutions emerge that can systematically create verification code to run on those processors, which means that while system-level constrained random will be with us in the future, it will look very different from the constrained random of today.
How much time are you spending on power-management verification and what methods do you use?
Brian Bailey
EE Times