Tag: LDRA

Related blog posts
  • Popularity 22
    2015-9-18 18:19
    2173 reads
    0 comments
    In courses on fiction and literature in college, my teachers emphasized the difference between implicit and explicit meanings, and the importance of understanding that difference in order to derive the full meaning from what I was reading. In technical documentation about specifications and standards, the distinction between the two is even more important. Mismatches between the engineering reader's level of understanding and that of the expert writing the standard complicate the problem.
    If the writer assumes too much about the reader's level of expertise, he or she gets into the habit of communicating in technical shorthand, risking just such a mismatch. I have found that this is often the case in highly technical and nuanced areas such as specifications for high-reliability and safety standards like DO-178. What was implicitly assumed in DO-178B has become explicit in DO-178C.
    For all but the rarest of developers, when building software for traditional single cores that had to meet the older DO-178B, it was safest and fastest to certify according to what the standard explicitly required. In the transition to multicore designs things got much more complicated, and that assumption began to break down. In such an environment the program has to be broken down into well-defined, modular functional units with interfaces that are as unambiguous as possible. Implicit in this methodology is the need to test all of the software modules carefully to make sure they all come together and interact properly.
    One place where depending only on DO-178B's explicit requirements has caused a lot of problems is control and data coupling, which are particularly problematic in multicore designs. Control coupling is where one software module sends data to another module in order to influence or direct its behavior. Data coupling is where one software module simply sends information to another, placing no requirement on the receiving module and expecting no return action (the short sketch below illustrates the difference).
    Many multicore developers started having problems getting their designs through the certification process. For example, they did not realize that they had to demonstrate that the software modules and their couplings interacted only in the ways specified in their original design, a requirement that was not explicit in DO-178B, only implicit. For the same reason, even though they followed the explicit rules in DO-178B, they were unable to demonstrate that unplanned, anomalous, or erroneous actions were not possible.
    That confusion has been cleared up in the newer DO-178C. Designed as the standard for software used in civil airborne systems, DO-178C now explicitly requires that control and data coupling assessments be performed on safety-critical software to ensure that design, integration, and test objectives are met. It also requires increased and more rigorous testing at each level of the standard's qualification process.
    This requires developers to carefully measure control and data coupling using a detailed combination of control and data flow analysis. That is a difficult enough process in single-core designs, but it borders on impossible in some mil-aero multicore designs and requires a new approach to doing that analysis.
    One of the best tools to do this tough job is the newest Version 9.5 of the LDRA Tool Suite with its new, improved Uniview.
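    To make the coupling distinction concrete before turning to the tool, here is a minimal C sketch. The module split, function names, and modes are hypothetical, invented only for illustration; they do not come from any particular project or from LDRA's documentation.

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical example: two "modules" reduced to two functions. */

        typedef enum { MODE_NORMAL, MODE_DEGRADED } op_mode_t;

        /* Data coupling: the caller hands this module a value to use.
         * The receiving module's flow of control does not depend on it;
         * the value is simply consumed. */
        static void log_temperature(int32_t temp_c)
        {
            printf("temperature: %ld C\n", (long)temp_c);
        }

        /* Control coupling: the flag passed in selects which path the
         * receiving module takes, so the caller is directing its behavior. */
        static void run_control_loop(op_mode_t mode)
        {
            if (mode == MODE_DEGRADED) {
                printf("running reduced-rate, fail-safe loop\n");
            } else {
                printf("running full-rate loop\n");
            }
        }

        int main(void)
        {
            log_temperature(42);             /* data coupling    */
            run_control_loop(MODE_DEGRADED); /* control coupling */
            return 0;
        }

    Under DO-178C, both kinds of interaction have to be shown, by analysis and test, to occur only in the ways the design intended.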
    Uniview is a sophisticated graphical tool for observing all of the software components and artifacts in a multicore design and providing requirements traceability relating to system interdependencies and behavior.
    By means of the LDRA Uniview call graphs, the hierarchy of a system can be observed graphically, allowing direct tracking of the behavior of all the various nodes and their dependencies. (Source: LDRA)
    I am a sucker for graphical approaches to solving almost any problem. Even in high school, instead of using the traditional mathematical techniques for solving math and physics problems, I tried to find a way to come up with a graphical representation of the problem. I found that not only could I get the right answers faster, but I also came away with a better understanding of the nature of the problem I was dealing with.
    I find the same is true of LDRA's Version 9.5. The improved Uniview capabilities include not only traditional code coverage but also the tracking of data and control coupling. On the control coupling side, it allows a developer to perform flow analysis on both a program's calling hierarchy and individual procedures, and to see the results instantly, showing which control functions are invoked by others and how.
    On the data flow analysis side, I am impressed by the way it follows variables through the source code, performing checks at both the procedure level and the system level. A developer gets an ongoing report on any anomalous behavior, an important part of a data coupling assessment (the sketch at the end of this post shows the kind of anomaly involved). The graphical framework makes it much easier to see data dependencies between modules, considerably speeding up the verification of all data and control coupling paths.
    This capability may be a big plus in complex and demanding heterogeneous multicore designs beyond the safety-critical ones governed by standards such as DO-178C. In such applications there are numerous cores, and their software module interactions are increasingly complex. Even in the current generation of mobile devices there is a mix of anywhere from five or six to a dozen processors of various types: general-purpose CPUs, graphics processing units, and digital signal processors. And the number and diversity continue to climb.
    In such an environment, structured design using software modules with clear and unambiguous data and control coupling will be forced on developers. To get a design that simply works, let alone one that meets an imposed standard, developers will have to structure their code into clearly defined functional modules that can operate as a cohesive system across all the processors. However, when you go modular, even in non-safety-critical designs it will be necessary to examine closely the way the software blocks come together and interact. And to do that effectively, a graphical approach such as the one incorporated into LDRA's tools may be the only way to go.
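    Since anomalous data flow is at the heart of a data coupling assessment, here is a minimal C sketch of the kind of defect such analysis is meant to surface. The code and the names in it are hypothetical; they are not output from, or input to, Uniview.

        #include <stdio.h>

        /* Hypothetical two-"module" example of data-flow anomalies that
         * procedure-level and system-level data flow analysis should flag. */

        static int g_setpoint;   /* shared between the two modules below */

        /* "Module" A: intended to publish a setpoint for module B. */
        static void publish_setpoint(void)
        {
            g_setpoint = 100;    /* defined ...                            */
            g_setpoint = 250;    /* ... and immediately redefined: the
                                    first write is dead                    */
        }

        /* "Module" B: consumes the setpoint. */
        static void apply_setpoint(void)
        {
            printf("applying setpoint %d\n", g_setpoint);
        }

        int main(void)
        {
            /* Calling the consumer before the producer means the setpoint
             * is used while it still holds its default value of zero:
             * exactly the kind of unplanned interaction between modules
             * that a coupling assessment has to rule out. */
            apply_setpoint();     /* use before the intended definition */
            publish_setpoint();   /* definition that is never used      */
            return 0;
        }

    Flow analysis across the whole calling hierarchy, rather than one procedure at a time, is what catches the ordering problem: each function looks reasonable in isolation, and only the system-level view shows that the value is consumed before it is produced.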
  • Popularity 14
    2012-10-15 19:20
    1785 reads
    0 comments
    Few people care to measure anything in the firmware world, which is a shame. For this to be engineering, we need to apply known principles, measure the results, and use that data to improve our systems. For instance, is there a metric that tells us anything about how many tests we should run on a particular function? There are several answers that give varying levels of precision. To validate code to the highest safety-critical standards (like DO-178C Level A), one has to prove that every possible condition and decision has been tested. That's expensive, and few of us working on less critical systems have the time. But there are other, less onerous metrics that are useful, if imperfect.
    For instance, Tom McCabe invented the notion of cyclomatic complexity. That's an integer that represents the complexity of a function (not of the program; it's measured on a per-function basis). It's a simple notion: cyclomatic complexity (for some reason always expressed as v(G)) is the number of linearly independent paths through a function. A routine with 20 assignment statements has a complexity of 1. Add a simple if statement and it increments to two.
    A more formal definition uses the concept of "nodes" and "edges." A node is a sequential block of code; an edge is a possible transfer of control from one node to another. v(G) is:
    v(G) = edges - nodes + 2
    It's easier to visualize this graphically as shown below. Each of the circles in the following diagram is a node; the arrows connecting them are edges:
    Running the math (or tracing the paths through the code), we see that v(G) is 3 (a short C sketch at the end of this post works through a comparable example). The complexity is the minimum number of tests one must run in order to ensure a function is fully tested. In this example, two tests are provably insufficient. In other words, cyclomatic complexity is a metric that gives us a lower bound on the efficacy of the test regime. It says nothing about the maximum number needed, which may be a lot higher for complex conditions. But at least it gives us a bound, a sanity check to ensure our tests don't fall below the minimum needed.
    One way to address this is to make a table of all possible paths; for this example, each row lists the nodes from the previous diagram that the code flows through. Build the table and compare it against v(G); if the number of paths identified in the table isn't at least v(G), then the table is incomplete. This is a beautiful thing; it's a mathematical model of our testing.
    A lot of tools will automatically do this for you; some will even create the edge/node diagram and, based on that, emit the C code needed to do the testing (examples include tools from LDRA and Parasoft). Isn't it amazing, given how hard and expensive it is to get testing right, that so few developers use these tools?
    Sure, the tools cost money. So let's do the math. Suppose a system has 100 KLOC. If the developers follow the rules and limit functions to a page of code in length (let's say 50 lines), that's at least 2,000 functions. Most pundits recommend limiting the complexity of a function to somewhere between 10 and 15. To be conservative we'll assume the average complexity is 5. That means we need at least 10,000 tests. In the US the loaded cost of an engineer is $150k to $200k per year. Let's assume the former. That means the engineer costs his company $75 per hour, or $1.25 per minute.
    If we have a Spock-like programmer who can create the diagram above, parse it, figure out a test, code it, get it to compile correctly, link, load, and run the test in just one minute, and sustain that effort for 10,000 minutes (166 hours, or about a work-month) without even a bathroom break, those tests cost $12.5K. Substitute in real numbers and the result becomes horrifying. The tools start to look pretty darn cheap.
    Alternatively, if you don't use the tools and construct the graphs manually, then you're forced to really look at the code. And in the example above there are at least two bugs that quickly become apparent, so the quality goes up before testing begins. (Of course, there are plenty of tools that would identify those bugs, too.)
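    Since the minimum-test argument is easier to see against real code, here is a small hypothetical C function whose flow graph has v(G) = 3, together with the three tests that its complexity says are the minimum required. The function and its paths are invented for illustration; they are not the example from the diagram above.

        #include <stdio.h>

        /* Hypothetical function with cyclomatic complexity v(G) = 3:
         * two decisions give three linearly independent paths, so at
         * least three tests are needed.
         *
         * Feasible paths, by branch outcomes:
         *   1. x <= 0                 -> returns 0
         *   2. x >  0, result <= 100  -> returns x * 3
         *   3. x >  0, result >  100  -> returns 100 (clamped)
         */
        static int scale_and_clamp(int x)
        {
            int result = 0;

            if (x > 0) {           /* decision 1 */
                result = x * 3;
            }
            if (result > 100) {    /* decision 2 */
                result = 100;
            }
            return result;
        }

        int main(void)
        {
            /* One test per independent path; fewer than three cannot
             * exercise every feasible path through the function. */
            printf("%d\n", scale_and_clamp(-5));   /* path 1 -> 0   */
            printf("%d\n", scale_and_clamp(10));   /* path 2 -> 30  */
            printf("%d\n", scale_and_clamp(50));   /* path 3 -> 100 */
            return 0;
        }

    Counting nodes and edges on this function's flow graph gives the same answer as counting decisions: v(G) = edges - nodes + 2 = 3, matching the three rows a manual path table would need.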