Tag: safety-critical

Related blog posts
  • Popularity 22
    2013-9-19 23:02
    1934 reads
    0 comments
    As the microprocessors embedded developers use become more powerful, and the software they design and write grows larger and more complex, developers face not just increasing opportunities but also the need to make their code as safe as possible. The first driver is the traditional application segments such as military-aerospace, automotive and medical, all of which have strict software certification requirements and testing standards in place. The second driver is the uncertain nature of the interactions between these segments and the largely unregulated consumer and mobile market. And as for the newly burgeoning embedded consumer applications opening up in home electronics, safety does not even seem to be on the radar.

    However, according to Mark Pitchford and Bill St. Clair of LDRA, authors of this two-part series, an ever-increasing reliance on software control and the nature of the applications have led many companies to undertake safety-related analysis and testing. "Consider the antilock braking and traction control of today's automobile," they write. "These safety systems are each managed by software running on a networked set of processors; the failure of any of these systems sparks major safety concerns that could lead to recalls and lawsuits.

    "Companies concerned about safety in their products are looking outside their own market sector for best practice approaches, techniques, and standards. Examples of such industry crossover have been seen in the automotive and avionics industries with the adoption of elements of the DO-178C standard by automotive and a similar adoption of the MISRA standards by avionics."

    Such crossovers are likely to become more common, placing additional challenges before not only individual developers but the certification standards groups in various industries as well (a short sketch at the end of this post illustrates the kind of coding rule MISRA-style standards call for). In the automotive environment, for example, the increasing computerisation of not only engine and powertrain electronics but also the infotainment systems, and the interactions between them, means software safety will be an ongoing challenge. And when you add in such things as vehicle-to-vehicle wireless networks, it seems to me that automotive developers have a real can of worms to deal with as far as safety is concerned.

    In medical designs, for another example, the ageing population in many countries has spurred considerable effort to adapt software and devices originally designed for consumer environments to health and medical applications, forcing standards groups, including the FDA, to go back to the drawing board to come up with specifications and certification requirements that deal with such issues.

    Fortunately, there is a wide range of tools, RTOS alternatives, and requirements and specification testing procedures available. However, despite this range of tools, capabilities and industry specifications to guide developers, considerable uncertainties remain ahead as embedded systems become more connected and the Internet of Things phenomenon takes hold. Already the auto industry is looking for ways to deal with the intrusion of a variety of smartphones and mobile computing platforms into the automobile and the security issues that raises. Added to that are the moves towards vehicle-to-vehicle wireless connectivity.
    Unless there are strict dividing lines between an automobile's entertainment systems, the information systems on which the driver relies, and the body/engine electronics, the ultimate safety of any of these sub-systems remains an open question.

    In medical applications, the impact of network security on device safety is already raising concerns. Users of medical devices (such as yours truly) who place their lives in the hands of the safety standards the FDA imposes are continually bombarded with offers of the newest smartphone apps and wireless add-ons to complement the operation of their medical devices. The rate at which they are being offered tells me that many of them have not gone through any sort of FDA approval, which traditionally can take years.

    The dividing line between the methods for developing software for use in a secure environment and those for use in a safety-critical design is razor thin. And it is apt to get thinner and more ambiguous unless safety standards organisations start taking the impact of connectivity and security into account in how they evaluate safety. To my mind at least, an insecure network-connected device or application is, by definition, not safe.
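    To make the coding-standards crossover mentioned above a little more concrete, here is a minimal sketch, in C, of the defensive style that MISRA-type rules encourage, such as never ignoring a return value and terminating every if/else chain with an else. All of the type and function names are hypothetical, invented purely for illustration; nothing here is drawn from the MISRA documents, DO-178C, or LDRA's tools.

        /* Illustrative sketch only: return values are always checked and the
           if/else chain ends with an else, so a fault cannot fall through
           silently. Every name below is hypothetical. */
        #include <stdint.h>
        #include <stdio.h>

        typedef enum { SENSOR_OK, SENSOR_FAULT } sensor_status_t;

        /* Hypothetical stand-in for a wheel-speed sensor driver. */
        static sensor_status_t read_wheel_speed(uint8_t wheel, uint16_t *speed_out)
        {
            *speed_out = (uint16_t)(100u + wheel);  /* fake reading for the demo */
            return SENSOR_OK;
        }

        int main(void)
        {
            uint16_t speed = 0u;
            const sensor_status_t status = read_wheel_speed(0u, &speed);

            if (status == SENSOR_OK) {
                printf("wheel speed: %u\n", (unsigned)speed);
            } else {
                printf("sensor fault: fall back to a safe default\n");
            }
            return 0;
        }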
  • Popularity 14
    2012-11-6 19:23
    1382 reads
    0 comments
    Oddur Benediktsson's paper titled "Safety Critical Software and Development Productivity" is over a decade old, but in my opinion it has some stunning results.

    We all know that building safety-critical software is hugely expensive. When the code must be utterly reliable, the effort expended in getting it right skyrockets. That's why avionics and similar products are so expensive. Except that conventional wisdom may not be entirely true.

    In the paper Benediktsson looks at IEC 61508, a widely-used functional safety standard. It is commonly used (or is sometimes the father of similar standards) in the automotive, nuclear, rail and other industries. It defines four "safety integrity levels" (SILs) with consequences as follows:

    SIL1: Minor injuries at worst
    SIL2: Major injuries to one or more persons
    SIL3: Loss of a single life
    SIL4: Multiple loss of life

    The standard lists practices that are recommended, or highly recommended, for each SIL. The use of coding standards, for instance, is highly recommended for every SIL. Formal methods are recommended for SIL2 and SIL3, and highly recommended for SIL4.

    The author relates the effort needed, derived using the Constructive Cost Model (COCOMO) method, to the difficulty of verifying correctness at various safety levels. Unsurprisingly that goes up by rather a lot as one goes from the lower SILs to the higher ones: SIL3 is about 1.7 times more effort than a nominal, non-safety-critical product. He goes on to relate productivity to process maturity, using the Capability Maturity Model as an example model. At CMM4 he finds schedules are almost half those for nominal (NOM) products.

    Tying it all together, his results table shows that as one progresses to higher levels of process maturity (the CI values: NOM is CMM1, CI3 is CMM4), the effort to build SIL3 systems is no greater than that needed for non-safety-critical systems (a quick back-of-the-envelope check below shows why). That's a pretty stunning result.

    Or, a dysfunctional company can go to high levels of process maturity and get the same crap for half the price.
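    To see how those two numbers roughly cancel each other out, here is a back-of-the-envelope sketch in C. The 1.7x SIL3 multiplier is the figure quoted above; the 0.55 productivity factor for a CMM4-level process is my own stand-in for "schedules almost half those for nominal", not a value taken from Benediktsson's table.

        /* Rough check of the "effort cancels out" claim. The SIL3 multiplier
           (1.7) is from the article; the CMM4 factor (0.55) is an assumed
           stand-in for "schedules almost half of nominal". */
        #include <stdio.h>

        int main(void)
        {
            const double nominal_effort  = 100.0; /* arbitrary baseline, person-months */
            const double sil3_multiplier = 1.7;   /* extra verification effort at SIL3 */
            const double cmm4_factor     = 0.55;  /* assumed productivity gain at CMM4 */

            printf("SIL3 effort at CMM1: %.0f\n", nominal_effort * sil3_multiplier);               /* about 170 */
            printf("SIL3 effort at CMM4: %.0f\n", nominal_effort * sil3_multiplier * cmm4_factor); /* about 94  */
            return 0;
        }

    The SIL3-at-CMM4 figure comes out close to the non-safety-critical baseline, which is exactly the paper's headline result.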
  • Popularity 18
    2012-6-8 10:01
    1713 reads
    0 comments
    We toss the phrase "safety-critical system" around without reflecting much on its meaning. What does "safe" mean? Can you prove your system is safe? I doubt it, since that's rather analogous to proving the absence of bugs. There's really an epistemological problem with the notion of safety, since one can only create arguments for risks one understands, not for the entire universe of possible risks.

    Does it even matter if a "system" is safe? A system, whether a black box, an instrument, or some other stand-alone device, might be "safe" yet be a disaster in practice. That system is undoubtedly just one component in a bigger product, and its interaction with the rest of the world may not be safe. The rest of the world includes people, and people are notoriously competent at injecting an idiot factor that defies most safety reasoning.

    A case in point: a couple of weeks ago I was on a long-haul flight and was pleased that the seat had a 110 VAC outlet to power the laptop. A 14-hour hop is about three times longer than my laptop battery lasts. But early in the flight I was engrossed in Jean Smith's new Eisenhower biography and somewhat oblivious to my surroundings. Eventually looking up, I noticed that my seatmate, a rather elderly Chinese woman, had her earbuds on and was trying to insert the 1/8" connector... into the power outlet!! (The 110 VAC outlet is next to the screen.)

    A safety case for the power outlet would probably figure on low-amp fuses, proper grounding, and other parameters. But who would factor in "elderly" and "earbud"? Even more confounding, the outlet was of the North American three-prong configuration, which was possibly foreign to this Chinese national. (I once knew a Thai woman who had grown up in a bamboo shack with no electricity; it's probably dangerous to assume any familiarity with technology when catering to the general public.)

    Was she an idiot? Of course not. On reflection, it's sort of logical to expect the audio socket to be near the screen instead of hidden on the armrest.
  • Popularity 20
    2011-3-25 15:45
    1671 reads
    0 comments
    Are we engineers? In some states in the US that title is restricted to those who pass professional competency tests.

    The New Oxford American Dictionary (2nd edition, 2005) defines the noun "engineer" thusly: a person who designs, builds or maintains engines, machines, or public works. The Shorter Oxford English Dictionary (6th edition) provides a number of definitions; the one most closely suited to our work is: an author or designer of something; a plotter.

    Neither really addresses what we do, though I have to admit liking "a plotter," as it paints us with an air of mystery and fiendishness.

    In a provocative piece in the March 2009 issue of the Communications of the ACM ("Is Software Engineering Really Engineering?" by Peter Denning and Richard Riehle), the authors wonder whether software development is indeed an engineering discipline. They contend that other branches of engineering use standardised tools and metrics to produce systems with predictable outcomes. They state that software engineering has not reached the point where it "consistently delivers reliable, dependable, and affordable software systems. Approximately one-third of software projects fail to deliver anything, and another third deliver something workable but not satisfactory."

    That huge failure rate is indeed sobering, but no other discipline produces systems of such enormous complexity. And we start from nothing. A million lines of code represents a staggering number of decisions, paths, and interactions. Mechanical and electrical engineers have big catalogs of standard parts they recycle into their creations. But our customers demand unique products that often cannot be created using any significant number of standard components.

    At least, that's the story we use.

    I hear constantly from developers that they cannot use a particular software component (comm stack, RTOS, whatever) because, well, how can they trust it? One wonders if 18th-century engineers used the same argument to justify fabricating their own bolts and other components. It wasn't until the idea of interchangeable parts arrived that engineering evolved into a profession where one designer could benefit from another's work.

    Today, mechanical engineers buy trusted components. A particular type of bolt has been tested to some standard of hardness, strength, durability and other factors. In safety-critical applications those standard components are subjected to more strenuous requirements. A bolt may be accompanied by a stack of paperwork testifying to its provenance and suitability for the job at hand. Engineers use standardised ways to validate parts. The issue of "trust" is mostly removed.

    Software engineering does have a certain amount of validated code. For instance, Micrium, Green Hills and others offer products certified, or at least certifiable, to various standards. Like the bolts used in avionics, these certifications don't guarantee perfection, but they do remove an enormous amount of uncertainty.

    I'm sure it's hugely expensive to validate a bolt; "man-rated" components in the space program are notoriously costly. And the expense of validating software components is also astronomical. But one wonders if such a process is a necessary next step in transforming software from art to engineering.