Tag: analysis

Related blog posts
  • Heat 15
    2015-8-21 18:20
    1730 reads
    0 comments
As a technology journalist, I get the equivalent of a postgraduate education reading good, detailed papers and reports, though some are mind-numbingly difficult. Knowing my interest in such papers, several software developers and tool vendors have independently referred me to "Quantitative Evaluation of Static Analysis Tools," by Shin'ichi Shiraishi, Veena Mohan, and Hemalatha Marimuthu of the Toyota InfoTechnology Center in Mountain View, CA.

In this paper, the authors describe the task of selecting the optimum tool for static analysis of the software used in Toyota's vehicles. Starting with tools from about 170 vendors of proprietary tools, as well as a range of free and open-source versions, they narrowed their choices down to six: those from Coverity, GrammaTech, PRQA, MathWorks, Monoidics, and Klocwork. To make their selections, they used a complex methodology, first to assess the tools and then to derive a set of coding guidelines to help the company's development teams avoid defects proactively.

Readers of the report may disagree with the types of tests and metrics the Toyota team used to make its choices. Some developers I have talked to complain that, despite the data-driven quantitative approach, at the beginning of their efforts the team relied on the qualitative and subjective judgements of a few experts they trusted. However, given the number of such tools they had to evaluate, that was a choice they were almost forced to make. Even after limiting their choices in that way, there were many alternatives to evaluate and test. They narrowed the field further by excluding noncommercial tools that provided no technical support, excluding tools that did not support safety-critical applications, and including only tools that supported the C language.

The paper describes how the Toyota team first created a set of test suites incorporating a wide variety of software defects that might cause problems in safety-critical applications. They then tested the tools they had selected against that suite. Finally, because of the importance of driving buggy software out of the automobile environment, they went one step further: they used the data already collected to create several new metrics to further differentiate the performance of the tools. For information on the various tests they used, they depended on several reports from the U.S. National Institute of Standards and Technology (NIST), supplemented with information from various tool vendors and users.

The choices they made and the process they came up with made it clear how many things can go wrong in a software design, especially a safety-critical one, and how hard it is to pin things down. Drawing on every piece of literature they could find, they identified eight defect types that were important, including static and dynamic memory, resource management, pointer-related, and concurrency defects, as well as the use of inappropriate and dead code. From those they identified 39 defect sub-types, and from those they created 841 variations that they used in their tests. The methodology is about as comprehensive as any I have ever seen.

In addition to the defect code bases they created for their test suites, the researchers created an identical set without defects. The first code base was for evaluating true positives in the analysis results, while the second set, without defects, was used for evaluating false positives.

The researchers also tried to keep the test suites as simple as possible while keeping in mind the nature of the automotive software environment, especially in relation to the use of global variables and stack memory. (The test suite benchmarks are available on GitHub.)

On average, the six static analysis tool finalists were correct in their detection of flaws in the code about 34 to 65 percent of the time, with GrammaTech's CodeSonar ranking first. But a more important measure is how they ranked on the various kinds of defects. For example, on static memory defects, GrammaTech's CodeSonar and MathWorks' Polyspace ranked highest. On pointer-related defects, PRQA ranked highest. On detecting concurrency defects, CodeSonar came out on top, while on numerical defects, MathWorks' tool did best. The report goes into much more detail, of course. Dry and matter-of-fact as it is, though, it is worth going through slowly, line by line, because of its value to developers concerned about writing reliable, safe code.

The paper is a gold mine of information that I will refer to over and over again, not only for insight into which tools are best, but for the nature of the defects being hunted and the impact they have on overall software reliability. It is also valuable if you are interested in learning how to develop a methodology for assessing the complex trade-offs that must be made. Perhaps the most valuable lesson of this paper is its clear underlying message: if you are serious about detecting software defects, more than one tool is necessary, no matter how much the design team manager or the company accountant complains about the expense.
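
To make the paired test suite idea concrete, here is a minimal sketch in C of what one with-defect/defect-free pair might look like. The function names and the specific flaw (a pointer-related defect, one of the paper's eight defect types) are my own illustrative assumptions, not code from the Toyota benchmarks. A tool that flags the first function scores a true positive; a tool that also flags the second scores a false positive.

#include <stdlib.h>
#include <string.h>

/* Hypothetical "with-defect" variant: malloc() may return NULL, and
 * the result is written to without a check. A static analysis tool
 * should report the unchecked dereference (a true positive). */
char *make_label_defective(const char *name)
{
    char *buf = malloc(64);
    buf[0] = '\0';              /* defect: possible NULL dereference */
    strncat(buf, name, 63);
    return buf;
}

/* Matching "defect-free" variant: identical logic, but the NULL case
 * is handled. A warning here would count as a false positive against
 * the tool under evaluation. */
char *make_label_correct(const char *name)
{
    char *buf = malloc(64);
    if (buf == NULL)            /* allocation failure handled */
        return NULL;
    buf[0] = '\0';
    strncat(buf, name, 63);
    return buf;
}

int main(void)
{
    char *label = make_label_correct("wheel-speed-sensor");
    free(label);                /* free(NULL) is safe if malloc failed */
    return 0;
}

Multiplied across 39 sub-types and 841 variations, pairs like this yield both a detection rate and a false-positive rate for each tool, which is presumably the raw data behind the paper's new metrics.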
  • Heat 14
    2013-9-3 16:04
    889 reads
    0 comments
  • Heat 29
    2013-9-1 23:43
    1265 reads
    0 comments
Analysis of the above product list:
1. Package: WBFBP-02L, DFN1006, DFN1006DN, SOD-923, DFN1006T3
2. Solder pad size: the same as the 0402 package, except for DFN1006T3 (for this item, the 0402 pad should be divided into two channels, so one piece can be used as two)
3. Voltage range: 3.3V, 5V, 6V, 7V, 8V, 12V, 15V, 24V, 36V
4. ESD rating: IEC 61000-4-2, ±30kV (air), ±30kV (contact); polymer ESD devices only reach the IEC 61000-4-2 minimum (15kV air, 8kV contact discharge)
5. Capacitance: as low as 0.35pF, bidirectional
6. Power: up to 250W in the 0402 package; only Leiditech offers this package technology on the market
7. Surge protection capability: up to 10A
8. Clamping voltage: close to VB, better than the polymer ESD series
9. Single-channel and two-channel options
10. Applications: data transmission ports, including RS-485, RS-232, CAN bus, I/O, HDMI, and so on
  • Heat 23
    2013-7-21 09:51
    3014 reads
    0 comments
I have recently finished reading the book Constraining Designs for Synthesis and Timing Analysis: A Practical Guide to Synopsys Design Constraints (SDC). It is written by Sridhar Gangadharan of Atrenta and Sanjay Churiwala of Xilinx. They also had help from Frederic Revenu, who wrote a chapter on the Xilinx extensions to SDC.

Before I go any further, I have to admit that I have never written a design constraint in my life, so I came at this book as someone who understood the problem but had never been involved with solving it. Back when I was doing design, we hardly even used a simulator, let alone half of the tools that are available today.

The book is very well structured and reads easily. Each chapter takes on a subject and develops it well. Even though I have never written constraints, I was able to follow along at each step, and the small examples the authors provided helped me understand the lessons. The book is also completely tool-generic. The authors make no attempt to sell you on any particular tool, and you could read the book without knowing that it came from an EDA company.

If I have one complaint about this book, it is that a chapter is missing. At several points, the authors discuss how certain constraints are estimated, or note that certain information is not available until a particular point in the flow. They also explain how static timing can be useful at certain points in the flow for performing different types of timing checks. While I understand that every company uses different tools and flows, I would have liked to see a chapter that explained the writing and evolution of constraints during the design process. What should companies focus on at each point in the flow? When should the constraints be updated? When should static timing be run? What useful information will it provide? Such a chapter could also have included a larger, more typical example. While I understand that constraints files can become very large, it would have been helpful to look at something a designer might actually face; the full constraints files could have been made available online, with small parts of them described in the chapter.

These comments, however, do not detract from the book. I highly recommend it to anyone who needs to get acquainted with timing constraints. I feel that I could start writing them myself after reading this book.

The contents are as follows:
1. Introduction
2. Synthesis Basics
3. Timing Analysis and Constraints
4. SDC Extensions through TCL
5. Clocks
6. Generated Clocks
7. Clock Groups
8. Other Clock Characteristics
9. Port Delays
10. Completing Port Constraints
11. False Paths
12. Multi-Cycle Paths
13. Combinational Paths
14. Modal Analysis
15. Managing your Constraints
16. Miscellaneous SDC Commands
17. XDC: Xilinx Extensions to SDC

At 253 pages, the book lists at $119. More information can be found on the Springer site, and the book is available slightly cheaper on Amazon. I would love to hear other people's opinions of this book, especially from those who have written design constraints in the past.

Brian Bailey
EE Times
  • Heat 23
    2012-9-13 11:11
    1719 reads
    0 comments
    As a "newbie" electrical engineer at an aerospace design company in Louisville, Colo., I have been learning the flight hardware design techniques, testing procedures and the rules that go along with troubleshooting a problem. It usually involves some extreme amount of paperwork, three signatures and highly skilled technicians trying to read through a rework or troubleshoot steps that somehow always come under scrutiny, thus making me feel like ... well.... a newbie. In the past 14 years of my career, I served as a communications repair technician in the U.S. Marine Corps, and, after I was honourably discharged, I worked at other small companies as an experienced technician using the skills I had gained in the Marine Corps. During this time, I worked part-time and used my GI bill to obtain my EE degree from the University of Nevada, Reno. While earning my degree, I spent many hours working in the aerospace field of study. Teaching old dogs old tricks When I moved on to the avionics group with my current employer, I was a semi-expert on sensors for micro satellites. I was proud to find my niche and felt like I could show a bit more of my knowledge without getting myself in a bind or be made fun of by my peers. And, until a few weeks ago, I never really had the right moment to "show them what I got!" While we were working a failure on a circuit board, management wanted to conduct a destructive analysis to determine a root cause for the failure in the flight board...doing this would take time and cost a bit more than our group wanted to spend. And, at the time, the board and component failure had become the "long pole" (the biggest issue of concern) in our program and was given the highest priority. I quietly said to my boss, "Why don't we 'Huntron track it?'" Based on a name-brand piece of test equipment called a Huntron Tracker, this was a term we had used a lot in my days at a fourth-echelon electronic repair facility in the Marines. Of course, my boss didn't know about any of this, so to him I must have sounded like a "goof." And, my "whisper" was a little louder than I had planned and resulted in everyone at the table looking up at me in total confusion. I said quickly, "You guys know what I am talking about right? Using a curve tracer to compare a component to a known good component to look for a difference in the 'signature'.." (Insert sound of crickets here). Still nothing was changing their looks, so I started over with a detailed account of how I used this test equipment while a tech and how it saved us many hours trying to troubleshoot down to the root cause of the failure. Once they all were on the same page, they pointed to me to conduct the test and prepare the procedure, and since they knew no one person would have a clue what I was describing, it would be up to me to conduct the test first-hand. For a split seconded I realised that all my years of training and experience hadn't been for not, and that I would now be able to call on my "bag o' tricks" without having to feel like I was talking nonsense in the aerospace world. Solder issues I troubleshot the circuit to a compare-and-hold IC that had inductive shorts to the 5V reference on the wrong pins—in fact it was TWO pins that had been damaged or shorted—looking closer, I realised that, during the QA inspection, the true problem was missed. On a metal can-integrated circuit, the soldering had bubbled up to the can and thus created a semi-inductive short once the board and IC were operating in a hot environment. 
It was so hard to see and required that the board be tilted and reviewed under a microscope, but I was able to show that the quality of the solder on this component had caused the failure. A review and circuit analysis was also conducted, and it showed that the type of intermittent problem could be traced back to this component. (Figure 1, below, the IC with shorted pins). Figure 1: Solder Blobs! Yikes! After I gave my full report and provided my findings, I felt like I had really come through for my team. I had several people give me a pat on the back, which motivated me to continue speaking from my experiences and show that, just because I was new to the company, I still had some area of expertise that no one else on the team had... and I felt proud to help on a problem that was given the highest priority to solve. - William Davison William Davison recently moved from Northern Nevada to the Denver area to work in the aerospace industry after obtaining his electrical engineering degree from University of Nevada, Reno. He keeps busy with his hobbies (old BMW car restorations, LEGO Robotics and Halloween effects/costumes) and continuing his education. He is currently going to Colorado State University with a focus on obtaining his Systems Engineering Masters.
Related resources