The blog series by Harry Foster of Mentor Graphics contains a great deal of valuable information about trends in functional verification. The studies he discusses are very useful in helping me track how the industry is evolving. Being able to answer questions such as how much time engineers spend performing verification, and which languages are being adopted the most, helps ensure that engineers get the right tools for the future.

However, Foster's blog a few weeks ago immediately raised my eyebrows, not because of what it said, but because of what it didn't say. The chart from the 2012 Wilson Research Group study shows adoption trends between 2007 and 2012. One would think that technologies such as code coverage, functional coverage, and assertions were being adopted rapidly. Oops. That's not quite the case.

In the other blogs in this series, Foster had been comparing results from the 2012 study with results from 2010. To me, the switch to a comparison with 2007 results seemed highly suspicious. Unluckily for Foster, the Internet is persistent. This graph shows results from the 2010 study. Let me turn those two charts into one.

Code coverage dropped 2 per cent in two years. The use of assertions dropped 6 per cent, and the use of functional coverage dropped 6 per cent. Mentor claimed the overall confidence level of the 2010 study was 95 per cent with a margin of error of 4.1 per cent. For the 2012 study, the overall confidence level was 95 per cent with a margin of error of 4.05 per cent, so the differences between 2010 and 2012 are basically in the noise (a rough numerical check of that claim appears at the end of this post).

Rather than treating the small but declining percentages as signs of maturation and potential saturation of the market for constrained random test-pattern generation, the blogs attempted to paint a rosy picture of adoption. This flattening of adoption is an important trend, and it actually supports the kinds of things I hear real engineers telling me. I hear them talking about the increased difficulty of creating functional coverage points, about not being able to use constrained random for SoC-level verification, and about frustration with assertions. These numbers do not indicate a mass migration, but they do suggest that all is not well, and that EDA vendors need to be looking in other directions for their next generation of tools.

Is your use of any of these technologies changing? Perhaps you know of other reasons why these numbers have become flat.

Brian Bailey
EE Times
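For readers who want to check the "in the noise" claim, here is a minimal Python sketch. It assumes the two surveys are independent samples, takes the quoted 95 per cent margins of error at face value, and compares the reported drops with the combined margin of error for a difference between the two studies; the numbers come straight from the post above.

import math

# Reported 95 per cent margins of error from the two Wilson Research Group studies
moe_2010 = 4.1   # percentage points, 2010 study
moe_2012 = 4.05  # percentage points, 2012 study

# Year-over-year drops quoted in the post (percentage points)
drops = {"code coverage": 2, "assertions": 6, "functional coverage": 6}

# Treating the two surveys as independent samples, the margin of error of the
# difference between them is roughly the root-sum-of-squares of the two margins.
moe_diff = math.sqrt(moe_2010 ** 2 + moe_2012 ** 2)
print(f"combined margin of error for a 2010-to-2012 difference: ~{moe_diff:.1f} points")

for tech, drop in drops.items():
    verdict = "within" if drop <= moe_diff else "right at the edge of"
    print(f"{tech}: a {drop}-point drop is {verdict} the sampling noise")

Run as-is, this reports a combined margin of roughly 5.8 points, so the 2-point drop in code coverage is comfortably inside the noise and the 6-point drops in assertions and functional coverage sit right at its edge.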