Tag: integration

Related blog posts
  • Popularity 17
    2014-11-12 17:24
    1808 reads | 0 comments
    In the previous instalment of this experts' roundtable we covered the advantages EDA companies may have in developing and licensing IP. Today we turn our attention to IP integration issues. Taking part in today's discussion are: Mike Gianfagna, vice president of corporate marketing at Atrenta; Warren Savage, CEO of IPextreme; John Koeter, vice president of marketing for the solutions group at Synopsys; and Chris Rowen, Cadence Fellow and CTO at Tensilica.

    Brian Bailey: There aren't many IP integration tools. Why is that?

    Mike Gianfagna: We talk about IP-XACT, which appears to be a robust standard, but who's adopting it? How many people? It's interesting. In fact, we see a lot of customers that resist IP-XACT. I'm not sure why, other than that change is hard. There are a lot of scripts out there for chip assembly today. More automation and sophistication is needed. But people change when forced to, not when they want to.

    Brian Bailey: Are there no chip integration failures?

    Mike Gianfagna: We have a product in this space that does RTL-level assembly, and we're seeing it pick up at a substantial rate. I'd like to say we were genius marketing people, but it's not that. People are starting to see the problem. When you're forced to adopt a new tool, you do, and not a minute before.

    Warren Savage: I disagree, to some extent. People have been talking about automated IP assembly for 15 years, and I've never seen it adopted. People ask about IP-XACT, but I've never had a customer ask me for IP-XACT models. My theory is that the demand for sub-systems is growing, and the complexity of sub-systems is growing faster than the EDA companies' ability to deliver a tool. These sub-systems involve a lot of different pieces that are not very standardised and not easily expressed with the IP-XACT format, plus you have the software component. The complexity curve on the sub-systems is growing faster than EDA can anticipate what they're going to look like.

    Chris Rowen: The process of integrating these blocks is so messy and awkward that many users cannot visualise a standard methodology or set of tools that could do it. They think they need to do it themselves. A lot of people think that their problem is unique, and they're not willing to compromise their solution in order to move towards standards. But the problem is severe enough that they will inevitably put themselves in a position where they can compromise. I think the first step is getting people to adopt system-level modelling in a form that allows them to do both architectural exploration and find out about the traffic flow in their chip. That then segues into becoming the verification structure in which the individual sub-systems can be buttoned up, and becomes the whole-chip testbench in which you can do the last round of full verification at whatever level of detail you need. You need the ability to migrate and transmogrify those models to be the universal testbench for everything: for energy verification and signal integrity and logical correctness and software validation and all the other must-haves that constitute full-chip integration.

    Brian Bailey: We have mentioned a couple of times the benefits that would come from a top-down flow. If we take power as an example, where is power specified? We would hope at the high level, and then it pushes down as requirements into the blocks. But IP is part of a bottom-up process. How do they meet in the middle?
    Chris Rowen: The old-fashioned way of doing worst-case power analysis is dead. The old-fashioned way is that you got a power specification for each of the building blocks, found out what its worst-case power was, added them all together, and that was your worst-case power. If you did that, you would conclude that no phone could ever work. That's not the way power analysis is done. You have to look at 10 or 100 different scenarios that constitute the level of activity in each of the sub-systems making up a plausible use case. This whole process of getting from worst-case analysis, which is bottom-up, to scenario analysis, which is inherently top-down, is an essential part of that transition.

    John Koeter: I think you will see software-driven power intent at the TLM level. I think that's inevitable. You won't be able to get absolute power, but you'll be able to get relative power. What you have to do is implement the various power settings that are within the standards. But then you can do things like have dual-rail architectures, you can put in voltage domains, you can do all that stuff to lower your overall power.

    Chris Rowen: You have to characterise power in states 0, 1, 2, 3, and 4, and you have to characterise power across different workloads. In a power-sensitive processor design, the active power between the most intensive and least intensive sequence can differ by as much as an order of magnitude, so it isn't meaningful to say that the active power of this processor is X.

    Warren Savage: It's a modelling issue that is the frontier here. How, at the system level, can these things be modelled? IP providers are going to optimise their particular IP around whatever techniques or standards are available. At the system level it will increasingly involve the software guys.

    - Brian Bailey, EE Times
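    Rowen's contrast between bottom-up worst-case sums and top-down scenario analysis is easy to make concrete. The sketch below is a minimal Python illustration; the block names, worst-case figures, and activity factors are all invented for the example, not taken from any real chip.

        # Hypothetical per-block worst-case power in watts; values are invented.
        worst_case = {"cpu": 2.0, "gpu": 3.5, "modem": 1.5, "display": 1.0}

        # Bottom-up estimate: every block at worst case simultaneously.
        naive_peak = sum(worst_case.values())  # 8.0 W: "no phone could ever work"

        # Top-down scenario analysis: per-scenario activity factors
        # (fraction of worst case) for each block in a plausible use case.
        scenarios = {
            "video_playback": {"cpu": 0.30, "gpu": 0.80, "modem": 0.10, "display": 1.00},
            "web_browsing":   {"cpu": 0.50, "gpu": 0.20, "modem": 0.60, "display": 0.90},
            "standby":        {"cpu": 0.02, "gpu": 0.00, "modem": 0.05, "display": 0.00},
        }

        for name, activity in scenarios.items():
            power = sum(worst_case[blk] * activity[blk] for blk in worst_case)
            print(f"{name:16s} {power:5.2f} W")
        print(f"{'naive worst case':16s} {naive_peak:5.2f} W")

    Even this toy version shows why the worst-case sum is not a useful budget: no realistic scenario comes close to running every block flat out at the same time.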
  • Popularity 25
    2012-10-19 16:06
    1996 reads | 0 comments
    Balloons seem like a simple product. I couldn't imagine anything being difficult about sealing a couple of layers of plastic film together. Well, that's what I used to think before I was asked to build a machine to make foil balloons. Over the last 17 years, I have been involved with the design, manufacture, and installation of these machines. They weigh in at about 10,000 pounds and are 30 feet long. A large percentage have gone to customers outside the United States. Each installation comes with its own challenges. When they are international, those challenges can be significant.

    A few years ago, on an installation in central Mexico, I ran into a problem I'd never encountered before, and at the time I struggled with what could be causing it. This was the first sale to this customer and my first visit to their facility. Things weren't going as planned: the machine wasn't running. The customer was asking why, and I couldn't explain why or what I was going to do about it.

    These machines have a servomotor to index product underneath a number of seal heads. Each index has a controlled stop using optical registration of the film to position the pre-printed images under the seal heads. I had specified the components and written the software, and had done this many times before. I thought I knew the equipment inside and out, and yet the machine randomly hiccupped on the motion. Sometimes the index speed would be off, going too fast or too slow. Other times the index length was wrong. Every hiccup seemed to be different, without pattern. Every time this happened, the very next index seemed to be fine. It might happen every 5 minutes or only once an hour.

    I spent a day monitoring the incoming power, looking for power anomalies that would explain the malfunction. I went through every electrical connection of the machine, thinking somewhere I would find a loose wire that could account for the random nature of this problem. I contacted the drive manufacturer, hoping they would tell me about some firmware revision I wasn't aware of and point me in a direction. I found nothing, and at this point I had a machine that didn't work, limited resources, and the customer breathing down my neck.

    It was getting towards the end of the third day and there was still no real clue as to what could be wrong. I had the display (operator interface) disconnected from the servo to free up the serial port for my laptop, and had been monitoring the drive most of the day. It was then that it dawned on me that the machine had run without incident all day with the display unplugged. I plugged the display back in and, sure enough, after about 5 or 10 minutes of running, there it was. I left the display unplugged for the next shift as a test, and the machine performed perfectly, without exception.

    OK, so now I had a clue where to look. Like most people, I instantly thought: I have a noise problem. This particular display has two serial ports. The second port was talking to a PLC, and that didn't seem to have any problems, but that is far from a definitive conclusion. I didn't have equipment with me to monitor the data stream, so I really couldn't prove or disprove that noise was present or a problem. My experience with industrially hardened components is that if you follow some basic "best practices," noise is seldom the issue. I ultimately decided I would have to dig deeper, and got the customer to agree to run the machine as-is (display disconnected) until I could find a solution to the problem.
    After returning home, I made a number of calls to both the display and servo drive manufacturers. In conversation with one of the drive firmware engineers, I was asked the interval between data transfers from the display to the drive. I had to check, and according to the data sheet it was every 500ms. After discussing the amount of data I was transmitting, he told me that was probably too often. We concluded that I was overflowing the serial buffer. I had used this display/drive combination before, but previously it was just for data display. In hindsight, earlier installations probably had the same problem, but it was easy to overlook when machine performance wasn't affected.

    After thinking about it further, it all made sense. The machine checks every cycle for motion parameter changes and recalculates the motion profile based on those values. When an overflow occurred, data was lost and the register being written to received only part of the data. The program would then use that corrupted data to calculate the next move. There's little wonder the problem manifested itself in such a random manner. The fix ended up being simple: reduce the polling frequency of the display (a sketch of this failure mode follows the article).

    I have used 10 or 15 different brands of servos over the years. I have used even more brands of PLCs and operator interface units. For the most part, I have had few problems integrating various brands of components, but when you run into compatibility problems you will seldom find the answer in the supplied manuals; you'll most likely have to dig deeper.

    - Mike Frazier

    Mike Frazier is vice president of Axis Automation in Hartland, Wisconsin, United States. For most of the last 30 years, Mike has been designing and building automation equipment for numerous industries. At the beginning of his automation career, he worked as a machinist, serving an apprenticeship at Centerline Industries in Waterloo, Wisconsin. In those early years, Mike took an interest in the control side of his projects. Since then, he has developed control systems based on PLCs, motion controllers, PCs, and embedded controllers, as well as combinations of those platforms.
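    Frazier's diagnosis, partial register writes from an overflowing serial buffer feeding the motion calculation, suggests a defensive pattern worth sketching: never act on a parameter until a complete, validated frame has arrived. The Python below is a minimal illustration with an invented frame format (start byte, 4-byte value, 16-bit checksum); the article does not describe the actual display/drive protocol, so every detail here is an assumption.

        import struct

        START = 0xA5   # invented start-of-frame byte
        FRAME_LEN = 7  # start (1) + 4-byte value + 16-bit checksum

        def checksum(payload: bytes) -> int:
            # Simple additive checksum over the payload bytes.
            return sum(payload) & 0xFFFF

        def next_valid_value(buf: bytearray):
            # Return one validated parameter from buf, or None if no
            # complete frame is available. Partial frames stay in the
            # buffer; stray bytes are discarded one at a time until a
            # start byte is found again (resynchronisation).
            while len(buf) >= FRAME_LEN:
                if buf[0] != START:
                    buf.pop(0)          # hunt for the start-of-frame byte
                    continue
                frame = bytes(buf[:FRAME_LEN])
                del buf[:FRAME_LEN]
                (value,) = struct.unpack(">i", frame[1:5])
                (rx_sum,) = struct.unpack(">H", frame[5:7])
                if rx_sum == checksum(frame[:5]):
                    return value        # safe to feed the motion profile
                # Checksum mismatch: the frame was corrupted (e.g. by an
                # overflow), so drop it rather than use partial data.
            return None

    Validation like this would have turned silent corruption into a detectable, countable event; the real fix, as Frazier found, was still to poll less often so the buffer never overflows in the first place.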
Related resources
  • E-coins required: 1
    Date: 2022-7-23 15:59
    Size: 116.06KB
    Uploader: Argent
    FactoryTalk VantagePoint integration into Factory
  • E-coins required: 0
    Date: 2020-11-18 19:25
    Size: 4.39MB
    Uploader: xgp416
    3D integration NoC. [Abstract] Detailed routing for field-programmable gate arrays (FPGAs) is a new and difficult problem, because the wiring segments available for routing can only be connected together in a limited number of ways.
  • E-coins required: 4
    Date: 2019-12-25 21:18
    Size: 3.76MB
    Uploader: 二不过三
    TX PMC/XMC - Transmitter for Communications supporting CDMA, WCDMA, GSM and others……
  • E-coins required: 5
    Date: 2020-1-14 09:37
    Size: 1.64MB
    Uploader: rdg1993
    On Nanoscale Integration and Gigascale Complexity in the Post.com World. Hugo De Man, K.U.Leuven/IMEC, Belgium (deman@imec.be). Outline: Post.com and Post-PC?; Design challenges; Impact on research and education; Adequately skilled engineers?; Conclusions. [Slide excerpts, DATE 02: Post-PC as the third generation of computing, moving from the mainframe (data processing) through the PC to "smart things" as a disruptive technology, from one chip per thing to more than 100 chips per human, adding DSP/multimedia and networking on the way to ambient intelligence. Ambient Intelligence [Weiser]: a wireless WAN-LAN network delivers infotainment, communication, navigation... any place, any time, for every……]
  • E-coins required: 4
    Date: 2020-1-15 09:51
    Size: 354.43KB
    Uploader: rdg1993
    signal_integration (www.eetchina.com). Principles of the reflection problem in signal integrity. Author: Jin Ye (靳冶), June 2008. Abstract: Signal integrity, i.e. the quality of a signal on a signal line, refers to any phenomenon affecting signal transmission that arises from the analog characteristics of digital signals. At today's high transmission rates, the delays and interface problems produced by small oversights in signal routing affect more than a single line: crosstalk is imposed on adjacent signal lines and even adjacent boards, and in severe cases signal transmission becomes disordered and the whole system stops working. With the rapid development of high-speed signals, the importance of signal integrity has become increasingly prominent. The factors affecting signal integrity are usually grouped into four categories: reflection, ringing and rounding, ground-plane bounce noise, and crosstalk. This paper takes the "reflection" problem and briefly analyses how it arises, from both the "field" and the "circuit" perspectives. Keywords: signal integrity, reflection, uniform transmission line equations, reflection coefficient, impedance matching, uniform plane wave. Contents: Preface ...; Chapter 1: Analysis from the circuit perspective ...