Tag: Verification

Related blog posts
  • Popularity: 20
    2014-1-23 18:47
    1540 reads
    0 comments
I got interested in a recent post on Cadence's Interconnect Workbench (ICW), a tool targeted towards the growing and evolving SoC verification market. Interconnect buses like AXI, AHB, and Wishbone form the backbone of SoC hardware. The throughput and latency of these buses in resolving requests from IPs have a major impact on SoC hardware/software performance. Having confidence in one's interconnect hierarchy and arbitration mechanism early in the flow is crucial to an SoC's success.

The ICW tool has two flavours. One is an aid for performance verification, while the other is focused on functional verification. I was more inclined to dig deeper into the performance verification aspect. Essentially, the tool accepts traffic profiles for tuning traffic generators to pump traffic (data) into the interconnect from any of the masters plugged into the fabric. It is quite possible to develop behavioural models on top of the traffic generators, or verification IP modules to mimic TCP/IP streams or the block transfer of a file over the USB protocol, since these have predictable memory access patterns. By comparison, modelling CPU-based traffic is definitely a challenge, because this depends on the access pattern of the algorithm being executed. I believe a tool like ICW can help users overlap and synchronise traffic streams to increase concurrency in the system and span the worst- to best-case scenarios. This also makes it easier to study or verify arbitration systems and to play with the priority settings of the various IP blocks.

Does this imply that verification engineers or system architects no longer need to develop test software for execution on the SoC (the design under test) to mimic application-level behaviour? Can architects extrapolate their findings at the interconnect IP level and project hardware-software performance for the SoC? If we look at the design verification flow—starting from a virtual prototype and taking things all the way to post-silicon—there is a definite need for test software that mimics application-level scenarios. The need varies from benchmark studies on performance models to IP verification and validation in the context of the SoC.

Unfortunately, the ease with which bus traffic streams can be interleaved and/or synchronised to realise complex scenarios is missing when it comes to test software. Combining multiple test software modules with the intention of portraying complex application-level scenarios is definitely a harder task, since additional components like schedulers and timers are needed. Realising and retaining such scenarios for reuse as future test software is even harder if any additions, deletions, or modifications are made.

Consider an example scenario of data on a USB pen drive being read and streamed out through Ethernet as packets. This is a classic producer-consumer example with intermediate memory buffers. Creating such a scenario using traffic streams is relatively simple when compared to the amount of effort required to program the two IP blocks to function and stream data into and out of memory, followed by an application layer that stitches the producer and consumer to implement flow control. Now imagine performing a similar activity with the combination of IP blocks from a portfolio of 30 to 40 other IP blocks to test all pertinent application paths. A big downside is that the application scenario developed becomes hardbound to the SoC and doesn't easily yield to reuse for further IP combinations.
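To make the producer-consumer scenario above concrete, here is a minimal sketch in plain Python of what stitching two such traffic streams together with a thread model can look like. This is purely illustrative and assumes nothing about ICW or any vendor API; the names usb_producer and eth_consumer are hypothetical, and the randomised burst sizes and gaps stand in for constrained-random traffic parameters. A bounded queue plays the role of the intermediate memory buffer, so flow control (back-pressure on the producer) falls out naturally.

import queue
import random
import threading
import time

BUFFER_DEPTH = 8    # depth of the intermediate memory buffer, in bursts
TOTAL_BURSTS = 32   # scenario length

def usb_producer(buf: queue.Queue) -> None:
    # Mimic a USB block-transfer master writing bursts into memory.
    for seq in range(TOTAL_BURSTS):
        size = random.choice((64, 128, 256))        # constrained-random burst size
        burst = bytes(random.randrange(256) for _ in range(size))
        buf.put((seq, burst))                       # blocks when the buffer is full
        time.sleep(random.uniform(0.001, 0.005))    # constrained-random inter-burst gap
    buf.put(None)                                   # end-of-stream marker

def eth_consumer(buf: queue.Queue) -> None:
    # Mimic an Ethernet master draining memory and streaming packets out.
    while (item := buf.get()) is not None:
        seq, burst = item
        print(f"packet {seq}: {len(burst)} bytes")

buf = queue.Queue(maxsize=BUFFER_DEPTH)             # the shared memory buffer
producer = threading.Thread(target=usb_producer, args=(buf,))
consumer = threading.Thread(target=eth_consumer, args=(buf,))
producer.start()
consumer.start()
producer.join()
consumer.join()

Even in this toy form, the schedulers and timers mentioned above start to creep in; repeating the same stitching across combinations drawn from 30 to 40 IP blocks is where the effort explodes.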
Exploring what-if scenarios using this work model is time and resource intensive, effectively ruling it out for performance verification. Does the above requirement point to an automation tool? I feel that there is scope for an automation tool to address this pain point: a tool that permits verification engineers to portray scenarios (be it with one IP block or several) and provides an easy path to combine these basic scenarios into complex ones using a thread model. We could also throw in some constrained random variables to explore multiple use cases for the same scenarios and obtain effective coverage.

I can foresee applications for such a tool in verifying performance models, in functional verification, and in software driver development and testing during validation, or in post-silicon validation to stress test the system. With the current industry focus on SoC verification productivity, such requirements are being highlighted and addressed by a handful of EDA vendors. What remains to be seen is how well these tools can capture user intent and enable engineers in functional and performance verification.

Srivatsan Raghavan
Senior Architect
Vayavya Labs
  • Popularity: 18
    2013-11-21 15:51
    1329 reads
    1 comment
Recently, I attended Cadence-Live in Bangalore to find out more about the latest and greatest in verification technologies. As expected, many of the sessions were geared towards System-on-Chip (SoC) verification using formal and simulation techniques. The main draw was the hardware-assisted (HA) verification session track. Yes, I use the term "hardware-assisted", since the word "emulation" is overloaded, confusing, and a misnomer. It looks as if the "Big Three" EDA vendors are gearing up for the next battle to capture the SoC verification market. Hardware boxes such as Palladium have been around for years. So, why the sudden buzz? My take? Two factors:

1. The SoC verification domain has been in need of simulation horsepower as the integration of Intellectual Properties (IPs) keeps increasing. Traditional software-based simulation, which has been the bulwark of verification, restricts what you can run and how much you can run on your design under test (DUT). Simulation time and memory have always been the Achilles' heel for simulators. With short time-to-market (TTM) and stringent margins, verification folks don't want to wait for days for system-level simulation to generate results. The current ballpark figure for simulation speed on these boxes is about 4MHz.

2. The latest breed of Verification IPs (VIPs) meant for these boxes—referred to as Accelerated VIPs (A-VIPs)—come with UVM/OVM interfaces, which makes them appealing to verification folks: in look and feel they match traditional VIPs.

My take is that SoC verification folks will embrace the A-VIP approach with these boxes as long as testbenches don't require massive rework between software-based and HA-based simulation. Of course, not all is rosy on this path; RTL compilation time, debug, and coverage are upcoming challenges to be solved.

Now that HA-based verification is on the rise at the SoC level, will it translate to the need for more low-level software running on the DUT during verification? This would translate to shorter SoC realisation time (the time required to integrate the hardware and software). If HA-based techniques show promise in reducing verification time for SoC integrators, then another avenue to reduce integration time is to demand production-worthy software drivers from IP vendors.

As many of you may recollect, a decade back, SoC vendors like TI and Intel were hardly bothered about the software needs of an OEM vendor. An OEM vendor had to align with a software development firm or groom their own team to create the necessary stack and applications. With short TTM coupled with intense market pressures and margins, SoC vendors bent over backwards and started to provide the necessary stacks and software development kits (SDKs). Pressure from market forces—from OEM vendors to SoC integrators—can be seen as a ripple effect in the software supply chain. With IPs becoming increasingly complex and programmable, SoC integrators will expect IP vendors to provide production-worthy software drivers and other abstractions to reduce the time required for software integration and realisation. With some of the latest software stacks—like those used in Audio Video Bridging (AVB)—requiring certification of the underlying Ethernet IP and software combination, IP vendors will have a lot to deliver and package going ahead.

Is this a foreshadowing that IP vendors will soon be investing in HA verification boxes for verification and software development?

Srivatsan Raghavan
Senior Architect
Vayavya Labs
  • Popularity: 13
    2013-7-25 16:07
    1867 reads
    0 comments
We are reaching the end of the series of questions about IP that I have asked the industry. Last week I asked about IP theft and protection. One thing that has been happening in the IP market is that the average block size has been getting larger. In the early days of reuse, blocks were fairly small, but today many IP blocks are complete sub-systems carrying an extensive amount of software. I asked about the ease with which that software can be integrated. Below are the answers I received.

John Koeter, VP marketing, solutions group at Synopsys: We believe that software will continue to be an important part of providing an IP solution. Whether it is reference drivers, porting OSes, working with partners to create full solutions (for example, full stacks or codecs) or providing transaction-level models, addressing the software challenges our customers face is a key value-add for us as an IP provider.

Susan Peterson, product marketing group director, and Tom Hackett, senior product marketing manager, Cadence: It's not as easy as it should be. Typically, the software that's provided consists of examples. There are hardware blocks and software blocks that talk to the hardware blocks via software drivers. People build upon and modify these examples.

Eran Briman, VP marketing, Ceva: At Ceva, we understand that the software is just as important as the hardware when providing processor IP. This is why we have invested significantly in our development environment, tools, and software in recent years. We offer many pre-optimised software components for a range of end markets, including communications, audio, voice, imaging, and vision. This software could be libraries to support LTE, Wi-Fi 802.11ac, DTV demod, computer vision, etc., and fully optimised codecs for a wide variety of audio, voice, and other applications, like Dolby, DTS, AMR, WB-AMR, etc. Also, we maintain a robust third-party ecosystem of more than 50 active partners for software, tools, manufacturing, and more, which complements our product offerings. We have recently added significant efforts around the integration of our DSPs and software into the processor architecture, which simplifies software development and improves overall processor performance for target markets. Two examples of the software frameworks we offer our customers are support for multi-core architectures (Ceva Must) and direct access to the DSP from the OS level, allowing developers to utilise the DSP to run software, offloading those tasks from the CPU.

Arvind Shanmugavel, director of applications engineering, Apache Design: One of the main advantages of using IP is the saved verification effort and software integration. IP is typically verified for functional accuracy by the IP provider, and is also guaranteed to be compatible with the appropriate software. In general, SoC designers only need to worry about the higher level of integration for IP.

So, a rather disappointing set of answers on this question. I had hoped to hear about ways in which the software could be integrated or adapted for customer needs, standardised interfaces into OSes, and other help with this aspect of integration. Perhaps it is because this aspect is targeted at the software guys that our industry is just not really interested in it. What do you think? Can we, should we, be doing a better job with software IP?

Brian Bailey
EE Times
  • Popularity: 15
    2013-7-16 19:39
    2100 reads
    0 comments
With the help of five experts, I considered some questions around verification of IP. How much verification of the IP blocks should the licensor and the licensee companies have to do? Will this change in the future? What verification IP should be shipped with the design IP? Should the same companies supply the design IP and verification IP, or does that present a significant risk to the licensee of insufficient coverage and in-the-field failure? I went to several experts on the subject, and here's what they had to say:

Michael Munsey, Director of Product Management, Semiconductor Solutions at Dassault Systèmes: Companies will still need to verify all IP. There are numerous stories of customers receiving IP that still has bugs in it. Even with verification IP shipped with the design IP, there really needs to be a level of checking to ensure the quality of both. If VIP and DIP come from different sources, they will still need to be validated in-house. It is also necessary, but may become even more difficult, to tie verification back to the original requirements. The key with IP assembly is to ensure that the assembled system still meets all of the requirements.

Neill Mullinger, Verification IP Product Marketing Manager, Synopsys: Developers of the IP blocks need to extensively verify the IP in all configurations and variants that may be used, to verify compliance to the relevant specifications or protocols. Consumers of the IP blocks should not have to be concerned with compliance of proven cores from a trusted source; that should have been completed as part of the core's progress to certification or signoff. Re-running compliance testing is massively time consuming and redundant. The blocks do, however, need integration testing. While integration test is simpler, it is certainly not trivial; verification teams still need to run through the steps to configure, enumerate, and train the core, and send sufficient real-life traffic, common error cases, and relevant application-specific traffic to verify that integration is successful and that performance goals are being met with the core's configuration.

Susan Peterson, Product Marketing Group Director, and Tom Hackett, Senior Product Marketing Manager, Cadence: The amount of IP verification that a company has to perform depends on a variety of factors, including:
* The maturity of the IP.
* The availability of use cases documenting other customers' experiences with the IP (the more use cases that have been tested, the less verification is needed).
* The verification methodology used by the IP vendor (and whether they are willing to share it).
* The level of customisation performed on the standard IP block; more customisation calls for more verification by the chip designer.

Ideally, verification IP should ship with the design IP, to give designers a starting point and a sanity check that they have, indeed, hooked up the IP correctly. Chip designers should seek detailed verification data from their IP providers, so they can gather a quantifiable measure of how well the IP meets their required specs right out of the box. And, if a single company provides both the verification and the design IP, the designer should ensure that the development was completed by different organisations within that company, so they can be assured that the organisations have cross-checked each other's work, not just their own.
Bernard Murphy, CTO of Atrenta: In addition to verification IP, we should be thinking more about IP shipped with more assertions, especially low-level assertions. In-situ verification is still a big challenge. If something misbehaves, is that an IP problem, a usage problem, or a misunderstanding problem? This can be really difficult to track down without access to the whole system unless the IP is "salted" with a decent number of low-level assertions which can provide a quick diagnosis.

Arvind Shanmugavel, Director of Applications Engineering, Apache Design: Soft IP verification standards are well known in the SoC design community. However, electrical verification for reliable operation of any hard IP is still evolving. IP providers need to provide proper guidelines for verification at the SoC level. For example, verifying ESD integrity for hard IP can only be done at the SoC level. Proper guidance needs to be given by the IP provider regarding the ESD scheme used and the placement of protection devices.

Brian Bailey
EE Times
  • Popularity: 19
    2013-2-10 01:08
    5786 reads
    1 comment
As we all know, the semiconductor market looks good these days, and many companies are hiring talent against their 2013-14 projections. If those projections hold, there should be business for these companies in the upcoming years. Now, the billion-dollar question: what about the resources? Over the last few months, we have observed that companies are struggling to find good, experienced, skilled engineers.

Product and service companies usually take the smart approach of a hierarchical engineering model, in which a few junior engineers or college graduates work under the guidance of one senior or lead engineer. The approach looks smart and efficient: it utilises resources appropriately and makes engineering execution cost effective too. At the same time, executives and managers are always thinking about cost-effective and efficient engineering models to meet dynamic time-to-market demands, and many companies take different approaches to utilising and managing engineering resources efficiently.

There are cases, or project-specific critical requirements, where you don't have time to train your junior resources. For this type of requirement, you cannot take a chance on a junior-level engineer; if you do, you might lose the schedule, the budget, and, in the end, the market, which is what matters most. There could also be requirements that don't need all of your most skilled resources. Here you can apply the hierarchical engineering model to use the skillset efficiently. (This is where engineering management comes into the picture: managers and executives have to make sound decisions in these situations; otherwise, in the long run, you could see an impact on quality and on the customer.)

Nowadays, most requirements from product-based companies are for senior engineers, given critical project requirements and the quick execution needed to keep products ahead in the market. Companies are feeling the shortage of skilled resources, and at the same time, people don't move easily. The question here is: does this create a shortage of experienced skillsets?

When the market is in good shape and the revenue and financial graphs are growing, almost all companies need skilled manpower, and that's where competition comes into the picture for companies and engineers alike. There are other situations where the market is not in good shape and companies are struggling to win business. In these situations, smart companies usually start investing in engineering manpower to develop the skillsets, keeping their projections in mind; by the time the skilled requirement arrives, the company already has skilled, trained resources ready to jump in.

And irrespective of market conditions, a company will sometimes announce a 'restructuring of the engineering force', which means the company directly or indirectly lays off engineers as part of the restructuring. Mostly this happens because of a shortage of business for the company (or for a particular business division). These situations are still debatable and call for dynamic changes based on the market, needs, requirements, and so on. But the question remains the same at any point in time: 'Are companies running short of business, or short of resources?'

Thanks,
Ankit Gopani (ASIC With Ankit)
Related resources