Tag: metrics

Related blog posts
  • 2013-8-15 20:43
    In a recent article, I underscored the importance of collecting metrics to understand and improve the software engineering process. It's astonishing how few teams measure anything, which means few have any idea whether they are improving or getting worse, or whether their efforts are world class or constitute professional malpractice.

    Two of the easiest and most important metrics are defect potential and defect removal efficiency. Capers Jones, one of the more prolific, and certainly one of the most important, researchers in software engineering, pioneered these measurements. Defect potential is the total number of bugs found during development (tracked after the compiler gives a clean compile; ignore the syntax errors it finds) and for the first 90 days after shipping. Every bug reported, every mistake corrected in the code, counts. Sum these even for those that Joe fixes while he is buried in the IDE doing unit tests. No names need be tracked; this is all about the code, not the personalities. Defect removal efficiency is simply the percentage of those bugs removed prior to shipping. One would hope for 100%, but few achieve that.

    These two metrics are then used to do root cause analysis: Why did a bug get shipped? What process can we change so it doesn't happen again? How can we tune the bug filters to be more effective? Doing this well typically leads to a 10x reduction in shipped bugs over time. Here's some data from a client I worked with: over the course of seven quarters, they reduced the number of shipped bugs by better than an order of magnitude by doing this root cause analysis.

    What are common defect potentials? They are all over the map. Shipping 50 bugs per 1,000 lines of code (KLOC) is malpractice; 1/KLOC is routinely achieved by disciplined teams, and 0.1/KLOC by world-class outfits. According to data Capers Jones shared with me, software in general has a defect removal efficiency of 87%. Firmware scores a hugely better 94%; we embedded people do an amazingly good job. But given that defect injection rates run 5 to 10% (50 to 100 bugs per KLOC), a million LOC means 50,000 to 100,000 defects, and even at 94% removal we're shipping with over 3,000 bugs. What are your numbers? Do you track this, or anything?
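    The arithmetic behind these two metrics is simple enough to capture in a few lines of C. This is a minimal sketch of my own, not something from the article: the bug counts and the 5% injection rate are illustrative assumptions, used only to show how defect potential, defect removal efficiency, and the shipped-bug estimate relate.

#include <stdio.h>

int main(void)
{
    /* Defect potential: every bug found after a clean compile, plus
     * those reported in the first 90 days after shipping.
     * These counts are assumed, purely for illustration. */
    double found_before_ship = 940.0;   /* bugs found during development */
    double found_after_ship  = 60.0;    /* bugs found in the field, first 90 days */
    double defect_potential  = found_before_ship + found_after_ship;

    /* Defect removal efficiency: percentage removed prior to shipping. */
    double dre = 100.0 * found_before_ship / defect_potential;

    printf("Defect potential:          %.0f bugs\n", defect_potential);
    printf("Defect removal efficiency: %.1f%%\n", dre);

    /* The back-of-the-envelope from the text: a 5% injection rate on a
     * million LOC is 50,000 defects; at 94% removal, 6% escape. */
    double injected = 1000000.0 * 0.05;
    double shipped  = injected * (1.0 - 0.94);   /* = 3,000 shipped bugs */
    printf("Shipped bugs at 1 MLOC:    %.0f\n", shipped);

    return 0;
}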
  • 2011-8-7 21:58
    George Dinwiddie, a good friend, maintains a blog that focuses on software development. One of his postings particularly got my attention. In that post, George discusses the balance between people and process. He faults organizations that find the most productive developer and then try to clone him across the team by duplicating whatever processes he uses. I agree that, seductive as this approach is to some managers, it's doomed to failure. But there's a more fundamental problem: without productivity metrics it's impossible to know if the team is getting better or worse.

    In my travels I find few firmware groups that measure anything related to performance. Or quality. There are lots of vague statements about "improving productivity/quality," and it's not unusual to hear a dictate from on high to build a "best-in-class" development team. But those statements are meaningless without metrics that establish both where the group is now and the desired outcome.

    Some teams will make a change, perhaps adopting a new agile approach, and then loudly broadcast their successes. But without numbers I'm skeptical. Is the result real? Measurable? Is it a momentary Hawthorne blip? Developers will sometimes report a strong feeling that "things are better." That's swell. But it's not engineering.

    Engineering is the use of science, technology, skilled people, process, a body of knowledge – and measurements – to solve a problem. Engineers use metrics to gauge a parameter. The meter reads 2.5k ohms when nominal is 2k. That signal's rise time is twice what we designed for. The ISR runs in 83µsec, yet the interrupt comes every 70. We promised to deliver the product for $23 in quantities of 100k, and so have to engineer two bucks out of the cost of goods.

    Some groups get it right. I was accosted at the ESC by a VP who saw a 40% schedule improvement through the use of code inspections and standards. He had baseline data to compare against. That's real engineering. When a developer or team lead reports a "sense" or a "feeling" that things are better, that's not an engineering assessment. It's religion.

    Metrics are tricky. People are smart and will do what is needed to maximize their return. I remember instituting productivity goals in an electronics manufacturing facility. The workers started cranking out more gear, but quality slumped. Adding a metric for that caused inventory to skyrocket, as the techs just tossed broken boards in a pile rather than repair them. Metrics are even more difficult in software engineering. Like an impressionist painting, they yield a fuzzy view. But data with error bands is far superior to no data at all.
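    As an illustration of the kind of measurement this is about, here is a minimal sketch of timestamping an ISR with a free-running counter so its execution time becomes a number rather than a feeling. It is not from the post: CYCLE_COUNTER, its address, and the 100 MHz clock are assumptions standing in for whatever cycle counter or hardware timer the target part actually provides.

#include <stdint.h>

/* Hypothetical memory-mapped free-running cycle counter; substitute the
 * real counter register for your part. */
#define CYCLE_COUNTER (*(volatile uint32_t *)0x40001000u)

/* Worst-case ISR execution time in cycles; inspect with a debugger or log it. */
volatile uint32_t isr_worst_case_cycles;

void sample_isr(void)
{
    uint32_t start = CYCLE_COUNTER;

    /* ... the real interrupt work goes here ... */

    uint32_t elapsed = CYCLE_COUNTER - start;   /* unsigned math handles wrap */
    if (elapsed > isr_worst_case_cycles)
        isr_worst_case_cycles = elapsed;        /* e.g. 8,300 cycles on an assumed
                                                   100 MHz core is 83 usec, measured
                                                   against a 70 usec interrupt period */
}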