Tag: HCC

Related posts
  • Popularity 16
    2015-3-16 20:56
    1767 reads
    0 comments
At the Embedded Systems Conference I had a chance to have a conversation with Dave Hughes and David Brook from HCC Embedded. They have an unusual devotion to getting firmware right. Later, we had a telephone call and explored their approaches more deeply. I ran part 1 of this discussion last week, and here's the second half. As noted last week, this has been edited for clarity and length.

Jack: I keep thinking about the Toyota debacle, where they were slapped with a $1.2 billion fine. The code base is something under a million lines of code, which means they have won the coveted Most Expensive Software Ever Written award. The OpenSSL thing is interesting, because the group that was actually maintaining it was able to solicit on the order of only a few thousand dollars a year from industry to support it.

Dave: It's absurd that there is no better method. I mean, if I was running the world, which, unfortunately, I haven't been granted yet, one of my goals would be to restructure how software is developed. Set up committees to say, "Look, we're now going to write a proper SSL, a proper TCP/IP, and distribute this in a form that can be reused." And we can do that at relatively small cost. We could get extremely high quality and we could make it very reusable, so that these problems would be dealt with. And we could probably even create a competitive environment to do that in as well, to make it even cheaper, where vendors compete on "We've got fewer bugs than you."

Jack: How do you convince your customers that this software is, indeed, of extremely high quality? It's a claim that is easy to make, and in fact a lot of people routinely do. But with you guys I know it's much more serious. How do you convince folks that this really is quality code?

Dave: You can trawl all the websites in the world and you won't find a software company saying that its software is slow, huge, unmanageable, or badly documented.
They all say exactly the same thing, and the web is a huge leveler in this respect, because large companies, small companies, and two-man-and-a-dog companies are all writing exactly the same thing on their websites. At HCC we feel we can only make our quality argument by creating verifiability. So one of the ways we verify this is by providing the test reports that show the code actually did achieve full MC/DC coverage. And therefore, if you run that on your platform, you will get exactly the same test results. We also publish documents like quality reports and checks for complete MISRA compliance; we enforce all MISRA rules. We provide a large amount of verification documentation to make people realize that there is provable quality in the products. It is difficult, because many feel quality is expensive. But all of the studies show quality is cheaper than doing it the "freestyle" way. Actually getting someone to pay up front for that without having been burned first is a very difficult thing to do. We are marketing using the engineering methods we've established and the verification tools we've built, to show that this is proven to work in a much better way.

David: Just as a slightly tangential comment on the current state of the embedded industry: there are many mature software organizations out there who have their own experience and their own objectives regarding quality, and to a certain extent that can be a fairly normal, standard sales process. When we get to see the QA guy, he normally gets quite excited by this quality message. In medical and industrial control companies in particular, so long as you can find the right person, the sales process is fairly mundane. But there is a huge amount of background noise when it comes to marketing software online. It's very difficult to get any message out to the broad-based community of developers just now.
The software vendors at the low end of the market, the M0s and the M3s, make a huge amount of noise and distribute a lot of free and open-source software. It's not that high-quality software can't compete. The major problem is that these guys monopolize the sales and communications channels, and it's difficult to get access to them. There are good companies out there with good software and an excellent value proposition, but often they don't get to talk to a software engineer, because that engineer goes straight into the sales process the semiconductor company wishes him to go through. I think the semiconductor companies are going to have to take a look at how they can create a healthier ecosystem for their own products.

Jack: Interesting point. The semi vendors do make demo software available, and we know that most of that stuff is junk or toy code that works only under a narrow set of circumstances.

Dave: And most of it is actually documented by the silicon vendor as "not suitable for product use." It's for demonstration purposes only, but that's not how it's actually being used.

Jack: I know you guys use a V-model process. Do you have any comments about agile approaches?

Dave: Well, from our point of view, the agile processes seem to be something developed for people to take shortcuts with development and still claim quality. Because our aim here is to develop software that is scalable and reusable forever, we're not on any short timescale. We're on a mission to make this as rigorous as we can. So we don't really have much interest in the agile methods. Functional safety standards like IEC 61508 have no trouble with something like agile.

Jack: Sure, that's fair. I understand that, and certainly there's no one process that's perfect for every situation, so I see that the process you've chosen makes a ton of sense for what you folks are doing.
Dave: Yeah, and a lot of this is about touch and feel, and establishing something that works for you. There's no right or wrong on these things; it's like having an argument about how you place your brackets in C. It's completely and utterly arbitrary what result you come up with. The important thing is that you have a result that is well defined and you go through a sensible set of methods for your particular problem. It's all about creating a framework that ensures the chance of error is very small.

Jack: What about tools? I know you folks are using some of the LDRA tools, and they certainly have some very interesting offerings.

Dave: We use different tools for different pieces of work. LDRA gives us very good static analysis. It's probably even stronger on the dynamic analysis part, where we can really look in detail at any block of code. The tools give us reports that a quality manager, who may not know the code in detail, can use to make assessments on things like complexity, excessive knots, and the like. He then asks, "Is this really necessary? Can we make this bit nicer?" The tools help analyze where the code could be made cleaner and more understandable. They keep us on track on things like comment ratios. You can cheat on comment ratios very easily; a mature engineering team can look at the code and say, well, actually, this doesn't really need commenting. There's no need to, say, write a comment that i++ increments i! We use DOORS, and model with UML using IBM Rational. LDRA is our main code analysis tool for unit testing, coverage testing, and static analysis.

Jack: I see a lot of older companies that have traditionally sold mechanical or electro-mechanical devices being forced into the microprocessor age. Management typically has no concept of what software is about or what software engineering requires.
And when engineers ask for something that they can't put a property tag on, like a software tool, management finds it very difficult to understand why that expense is necessary.

Dave: There are countless examples where just a small, sensible attitude toward spending on software would have saved huge sums of money. One of the classics was that Mars rover, where they had run the flash simulation on the ground for only 7 days before they launched it. When you're launching a thing to Mars, you would think that more lifelike testing would be at the top of the list of priorities.

Jack: That was the Mars Exploration Rover Spirit, which started grinding a rock and suddenly went dead because, like you say, the flash file system, actually the directory structure, was full. The good news is that they were able to fix it, and the rover had a very successful mission.

Dave: Yes, absolutely. It was just very weird that you wouldn't run that test case.

Jack: It is sort of mind-boggling. I sometimes have to remind myself that software is a very new branch of engineering and people are still trying to figure it all out. But sometimes there's the smack-yourself-in-the-forehead, obvious stuff you'd think people would get.

Dave: And it's changing very rapidly as well. Look at how microcontrollers have changed. We're now running hundreds-of-megahertz microcontrollers with vast flash resources. It's a very dynamic industry.

Jack: Well, it certainly is, and that's what keeps it interesting, that's for sure. Thanks so much for your time, and best wishes for the business.

Thanks to Dave and David. You can learn more about their company and products at hcc-embedded.com.
  • Popularity 14
    2015-3-13 21:36
    1638 reads
    1 comment
At the Embedded Systems Conference I had an opportunity to sit down with Dave Hughes and David Brook from HCC Embedded. The company has a variety of software products, like file systems, network stacks, and boot loaders, which customers can integrate into their products to get to market faster. Regular readers know how deeply I care about getting firmware right. HCC shares that devotion, and I was impressed with the principals of the company. Later, we had a telephone call and explored their approaches more deeply. They were kind enough to allow me to reproduce it here. This has been edited for clarity and length.

In the following, Dave talks about MC/DC, short for modified condition/decision coverage. It's an extreme form of testing that ensures every statement in a program has been executed, and that all decisions, and all conditions within a decision, have been exercised. It's required at the highest level of the DO-178B/C standard, the one mandated for commercial avionics. MC/DC is tedious and expensive. But so far, as far as we know, no one has lost a life due to software problems in commercial aviation.

Jack: Tell me about HCC Embedded.

Dave: Well, HCC has been developing embedded software for the last 15 years or so, in various specialist areas. We started off specializing in file systems that were very fail-safe, and later moved into other areas. Today we have about six different file systems. We now have USB device and host stacks with numerous class drivers, network stacks, TCP/IP, and schedulers. We focus on creating embedded software that's extremely flexible and reusable. We started off on 8-bit architectures and moved to 16-, 32-, and 64-bit CPUs, supporting different tool chains, different endianness, different RTOSes, etc. A goal was to architect pieces of embedded software that are independent of these platform-specific attributes. If you write a piece of C code, it shouldn't matter what your platform is.
And so we've spent a long time trying to architect software in a way that's scalable and reusable. One of the natural progressions is that as you get higher quality and this portability, you get a level of scalability and reusability that you wouldn't get if you didn't go to extreme lengths in developing your software.

Jack: So what kind of industries and customers do you typically sell into?

Dave: Absolutely everything. I don't think there are any big exceptions. I suppose we haven't touched the high end of aerospace, but pretty much everything else we have been involved in: a lot of small instrumentation companies, functional safety companies, automotive companies, industrial companies, telephone companies. I mean, every field. Because the components we develop are very generic, they really aren't verticalized. We've developed one product in the last couple of years which is verticalized: a file system specifically for smart meters, designed to get the best usage out of flash. It maps the data structures that meters tend to use directly to the flash in a way that's manageable and efficient. But in general, our products have been completely agnostic about their final applications.

Jack: When we were talking in California a few months ago, one of the things that struck me is that you have a tremendous focus on quality, much more so than I normally hear talking to vendors. How do you go about ensuring that your products truly are of very high quality and essentially defect free?

Dave: You raise all sorts of issues in that question! Guaranteeing defect-free code is something we can never do. I mean, that's well proven; you can go back to Turing, you can go back to Gödel, if you take a mathematical view of it. But what we can do is make best efforts to make things of very high quality and extremely reliable.
We started developing software using what I would call "freestyle" with basic quality goals: we had a coding standard, we had a style guide, we made sure there were no warnings at the highest warning levels, etc. But things have progressed, and we wanted to create products that are more versatile and more reusable. We also feel that if we make a product that is endlessly checked, then it just doesn't need to be developed any further. We will end up with a very low level of support and, because it's reusable across many platforms, we can finish it and forget it. This is a very, very long process. We've been developing processes to ensure that the pieces of software we produce now are of higher and higher quality. Some are more challenging than others. For example, it's a very big project to create file systems of the highest quality; it's difficult and awkward to do full dynamic MC/DC coverage on things like long-name handling and directory structures. So we've adopted very strict coding standards and have taken MISRA very seriously. And we've built onto that a whole V-model which is being improved all the time. So, for example, our scheduler, our encryption module, and our TLS module have all been developed to much more rigorous standards, including a requirement specification, design documents, and tests that trace back to the requirements. A lot of code is being released now with a very high level of built-in testing. For example, we'll build a scheduler with a test suite that executes full MC/DC coverage on the scheduler. The beauty of this is that if you then take that scheduler and put it onto a different architecture, a different endianness, a different compiler, with different optimization settings, it doesn't matter what you do: you execute that same test suite and you know that your scheduler has full MC/DC coverage.
Every single path, with every single decision, has been checked to have been compiled correctly. It doesn't prove your code is fault free by any means; that also needs to be tied into the design and into the tests, which need to be mapped back, etc. It is a means of guaranteeing that a piece of code developed for one platform can be reused across platforms and across industries, and can be verifiably proved to do the same things. If problems are reported, the fixes will be propagated back across all those platforms, because they're going to have to go into the same test suite that does the same coverage and ensures everything works very cleanly. So you're getting the benefits of scalability and reusability. It brings me to one of my pet lambs, which is how badly structured the software industry is in terms of economics. Hardware manufacturers get paid for the actual piece of hardware they sell. Software is a bit more abstract, and there's no clear path to getting remuneration for these things. Take something like this crazy OpenSSL situation, where you have a piece of software which you would think should be developed to a really high standard. Now, this isn't really OpenSSL's problem; they're quite open and clean about what they do. But what's not clear is why nobody has insisted that security software be developed to the same sort of levels as, say, an industrial controller. It seems like our personal security ought to be taken seriously. This SSL software was installed on something like 500,000 different servers, controlling millions and millions of people's web accesses. The problem cost millions in terms of the corrective action required, re-issuing certificates and everything else. That code could have been rewritten to an extremely high quality for, in my estimation, 1-2 million dollars or so: an absolute fraction of the amount it cost the industry.
An absolute fraction of all the time wasted. Imagine all the board meetings that took place to review their companies' security. But there's no obvious way, in the software industry, to recoup that relatively small investment.

Next week I'll run the rest of our discussion, which covers the often dysfunctional software provided by semiconductor vendors, tools, and more on the processes HCC Embedded uses to ensure very high levels of quality.