I got interested in a recent post on Cadence's Interconnect Workbench (ICW), a tool targeted at the growing and evolving SoC verification market. Interconnect buses like AXI, AHB, and Wishbone form the backbone of SoC hardware, and their throughput and latency in servicing requests from IP blocks have a major impact on the SoC's hardware/software performance.
Having confidence in one's interconnect hierarchy and arbitration mechanism early in the flow is crucial to an SoC's success. The ICW tool comes in two flavours: one is an aid for performance verification, while the other is focused on functional verification. I was more inclined to dig deeper into the performance verification aspect. Essentially, the tool accepts traffic profiles that tune traffic generators to pump traffic (data) into the interconnect from any of the masters plugged into the fabric. It is quite possible to develop behavioural models or verification IP modules on top of the traffic generators to mimic TCP/IP streams or the block transfer of a file over USB, since these have predictable memory access patterns. By comparison, modelling CPU-based traffic is definitely a challenge, because the access pattern depends on the algorithm being executed.
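To make that concrete, here is a rough sketch in Python of the kind of parameters such a traffic profile might capture; the field names and values are my own illustration, not ICW's actual input format.

```python
# A minimal sketch of what a traffic profile for a traffic generator might
# capture. Field names are illustrative only; they are not ICW's input format.
from dataclasses import dataclass

@dataclass
class TrafficProfile:
    master: str              # which master port on the fabric drives this stream
    base_addr: int           # start of the address window being exercised
    window_bytes: int        # size of the address window
    burst_bytes: int         # bytes per burst transaction
    bandwidth_mbps: float    # sustained rate the generator should try to hit
    pattern: str             # "sequential" or "random" addressing

# Predictable, stream-like masters are easy to characterise this way.
usb_bulk_read = TrafficProfile("usb_host", 0x8000_0000, 64 * 1024, 512, 40.0, "sequential")
eth_tx_dma    = TrafficProfile("eth_mac",  0x8001_0000, 64 * 1024, 1514, 100.0, "sequential")

# A CPU master is the hard case: its pattern depends on the running algorithm,
# so a fixed profile like this is at best a coarse approximation.
cpu_guess = TrafficProfile("cpu0", 0x0000_0000, 1 << 20, 64, 200.0, "random")
```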
I believe a tool like ICW can help users overlap and synchronise traffic streams to increase concurrency in the system and span the worst- to best-case scenarios. This also makes it easier to study or verify arbitration systems and to play with the priority settings of the various IP blocks.
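A toy experiment along these lines might look like the following; the fixed-priority arbiter and the request probabilities are invented purely to show the kind of what-if study I have in mind, not how a real fabric arbitrates.

```python
# Toy experiment: overlap two request streams on a shared port and see how a
# fixed-priority arbiter (the priorities are made-up knobs, not a real fabric
# model) changes who gets served.
import random

def arbitrate(requests, priority):
    """Grant the requesting master with the highest priority this cycle."""
    contenders = [m for m, req in requests.items() if req]
    return max(contenders, key=lambda m: priority[m]) if contenders else None

random.seed(0)
priority = {"cpu0": 2, "eth_mac": 1}          # try swapping these values
grants = {"cpu0": 0, "eth_mac": 0}

for cycle in range(1000):
    # cpu requests ~50% of cycles, ethernet DMA ~30%; the overlap creates contention
    requests = {"cpu0": random.random() < 0.5, "eth_mac": random.random() < 0.3}
    winner = arbitrate(requests, priority)
    if winner:
        grants[winner] += 1

print(grants)   # how many cycles each master was granted
```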
Does this imply that verification engineers or system architects no longer need to develop test software for execution on the SoC (the design under test) to mimic application-level behaviour? Can architects extrapolate their findings at the interconnect IP level and project hardware-software performance for the SoC?
If we look at the design verification flow, starting from a virtual prototype and going all the way to post-silicon, there is a definite need for test software that mimics application-level scenarios. The need ranges from benchmark studies on performance models to IP verification and validation in the context of the SoC. Unfortunately, the ease with which bus traffic streams can be interleaved and/or synchronised to realise complex scenarios is missing when it comes to test software. Combining multiple test software modules to portray complex application-level scenarios is a much harder task, since additional components like schedulers and timers are needed. Retaining such scenarios for reuse as future test software is harder still once additions, deletions, or modifications are made.
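As a small illustration of that last point, here is a toy cooperative scheduler interleaving two invented test modules; real test software would lean on an RTOS or timer interrupts rather than Python generators, but the extra plumbing it implies is the same.

```python
# Sketch of why stitching test software modules together needs extra plumbing:
# each module must yield control, and something has to interleave them.
# The module names and step counts are invented for illustration.
def usb_read_test():
    for block in range(3):
        print(f"usb: read block {block}")
        yield                      # hand control back to the scheduler

def eth_send_test():
    for pkt in range(3):
        print(f"eth: send packet {pkt}")
        yield

def run_round_robin(tasks):
    """A toy cooperative scheduler standing in for the RTOS/timer support
    that real combined test software would need."""
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)
            tasks.append(task)     # re-queue the module until it finishes
        except StopIteration:
            pass

run_round_robin([usb_read_test(), eth_send_test()])
```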
Consider an example scenario of data on a USB pen drive being read and streamed out through Ethernet as packets. This is a classic producer-consumer example with intermediate memory buffers. Creating such a scenario with traffic streams is relatively simple compared to the effort required to program the two IP blocks to stream data into and out of memory, plus an application layer that stitches the producer and consumer together to implement flow control.
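A stripped-down sketch of that flow control, with stand-in functions in place of real USB and Ethernet drivers, could look like this:

```python
# Producer-consumer sketch of the scenario above: "USB" reads feed a bounded
# memory buffer, "Ethernet" drains it, and the bounded queue provides the flow
# control. The read/send bodies are stand-ins, not real drivers.
import queue
import threading

buffer = queue.Queue(maxsize=4)          # intermediate memory buffer
END = object()                           # sentinel marking end of the stream

def usb_producer(num_blocks):
    for block in range(num_blocks):
        data = f"block-{block}"          # stand-in for a USB bulk read
        buffer.put(data)                 # blocks when the buffer is full: flow control
    buffer.put(END)

def eth_consumer():
    while True:
        data = buffer.get()
        if data is END:
            break
        print(f"eth: sent {data}")       # stand-in for an Ethernet packet send

p = threading.Thread(target=usb_producer, args=(8,))
c = threading.Thread(target=eth_consumer)
p.start(); c.start()
p.join(); c.join()
```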
Now imagine performing a similar exercise with combinations of IP blocks drawn from a portfolio of 30 to 40 blocks in order to test all pertinent application paths. A big downside is that the application scenario developed becomes hard-bound to the SoC and does not lend itself to reuse across other IP combinations. Exploring what-if scenarios with this work model is time and resource intensive, effectively ruling it out for performance verification.
Does the above requirement point to an automation tool? I feel there is scope for an automation tool to address this pain point: one that lets verification engineers portray scenarios (be it with a single IP block or several) and provides an easy path to combine these basic scenarios into complex ones using a thread model. We could also throw in some constrained-random variables to explore multiple use cases of the same scenarios and obtain effective coverage.
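In pseudo-form, the kind of composition I have in mind might look like the sketch below; the scenario names, parameter ranges, and threading glue are all assumptions on my part, chosen only to show the pattern.

```python
# Sketch of "combine basic scenarios using a thread model, with some
# constrained-random knobs". The point is the composition pattern, not the API.
import random
import threading
import time

def scenario(name, burst_bytes, delay_ms):
    """A basic single-IP scenario, parameterised so it can be randomised."""
    time.sleep(delay_ms / 1000.0)        # stagger the start to control overlap
    print(f"{name}: streaming bursts of {burst_bytes} bytes")

random.seed(1)

# Constrained-random parameters: pick values within legal, interesting ranges.
usb_burst = random.choice([64, 256, 512])
eth_burst = random.randrange(256, 1515, 2)
stagger   = random.randint(0, 50)

# Thread model: each basic scenario runs concurrently; together they form the
# complex application-level scenario.
threads = [
    threading.Thread(target=scenario, args=("usb_read",   usb_burst, 0)),
    threading.Thread(target=scenario, args=("eth_stream", eth_burst, stagger)),
]
for t in threads: t.start()
for t in threads: t.join()
```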
I can foresee applications for such a tool in verifying performance models, in functional verification, and in software driver development and testing, both during validation and post-silicon, to stress-test the system.
With the current industry focus on SoC verification productivity, such requirements are being highlighted and addressed by a handful of EDA vendors. What remains to be seen is how well these tools can capture user intent and enable engineers in functional and performance verification.
Srivatsan Raghavan
Senior Architect
Vayavya Labs