Tag: software development

Related posts
  • Popularity 21
    2011-8-7 21:58
    1843 reads
    0 comments
    George Dinwiddie, a good friend, maintains a blog that focuses on software development. One of his postings particularly got my attention. In that blog post, George discusses the balance between people and process. He faults organizations that find the most productive developer and then try to clone him across the team by duplicating whatever processes he uses. I agree that even though this approach is seductive to some managers, it's doomed to failure. But there's a more fundamental problem: without productivity metrics it's impossible to know whether the team is getting better or worse.

    In my travels I find few firmware groups that measure anything related to performance. Or quality. There are lots of vague statements about "improving productivity/quality," and it's not unusual to hear a dictate from on high to build a "best-in-class" development team. But those statements are meaningless without metrics that establish both where the group is now and the desired outcome.

    Some teams will make a change, perhaps adopting a new agile approach, and then loudly broadcast their successes. But without numbers I'm skeptical. Is the result real? Measurable? Is it a momentary Hawthorne blip? Developers will sometimes report a strong feeling that "things are better." That's swell. But it's not engineering. Engineering is the use of science, technology, skilled people, process, a body of knowledge – and measurements – to solve a problem.

    Engineers use metrics to gauge a parameter. The meter reads 2.5k ohms when nominal is 2k. That signal's rise time is twice what we designed for. The ISR runs in 83 µsec, yet the interrupt comes every 70. We promised to deliver the product for $23 in quantities of 100k, and so have to engineer two bucks out of the cost of goods.

    Some groups get it right. I was accosted at the ESC by a VP who had seen a 40% schedule improvement from the use of code inspections and standards. He had baseline data to compare against. That's real engineering. When a developer or team lead reports a "sense" or a "feeling" that things are better, that's not an engineering assessment. It's religion.

    Metrics are tricky. People are smart and will do what is needed to maximize their return. I remember instituting productivity goals in an electronics manufacturing facility. The workers started cranking out more gear, but quality slumped. Adding a metric for that caused inventory to skyrocket as the techs just tossed broken boards in a pile rather than repair them. Metrics are even more difficult in software engineering. Like an impressionist painting, they yield a fuzzy view. But data with error bands is far superior to no data at all.
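    To make one of those measurements concrete – the ISR that runs in 83 µsec against a 70 µsec interrupt period – here's a minimal sketch of instrumenting an interrupt handler to capture its worst-case execution time. This is not from George's post; read_us_timer() and do_isr_work() are hypothetical placeholders for your platform's free-running microsecond counter and the handler's real work:

        #include <stdint.h>

        /* Hypothetical: returns a free-running microsecond count from a
         * hardware timer. Substitute your platform's equivalent. */
        extern uint32_t read_us_timer(void);
        extern void do_isr_work(void);  /* hypothetical: the handler's real work */

        static volatile uint32_t isr_worst_us;  /* worst-case duration observed */

        void timer_isr(void)
        {
            uint32_t start = read_us_timer();

            do_isr_work();

            uint32_t elapsed = read_us_timer() - start;  /* unsigned math
                                                            handles wraparound */
            if (elapsed > isr_worst_us)
                isr_worst_us = elapsed;  /* background code reports this; 83 us
                                            against a 70 us period is a measured
                                            fact, not a feeling */
        }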
  • Popularity 17
    2011-7-8 12:40
    1442 reads
    1 comment
    A quote from Donald E. Knuth: "Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do."

    Al Stavely's new book, "Writing in Software Development," is a sorely-needed manifesto for the art of writing when creating software. I read a lot of code and associated documentation, and, alas, find the state of the art to be essentially at a grade-school level. It's ironic that engineers produce, more than anything, documents that are supposed to communicate ideas, yet we're so poor at written communication. Al's book is the Dutch boy's finger in the dike, a well-written attempt to show us what we must do. Some of the book is patently obvious. Much is thought-provoking. The gestalt, though, really defines the essential work of a software developer. It's an inexpensive (as an e-book) work that's a fast read. The book takes some swipes at sacred cows like C since it "isn't even object-oriented!" C, though, is the lingua franca of the embedded world.

    One suggestion is to use a digital camera to capture diagrams drawn on a whiteboard. I first saw this a couple of years ago when working with an attorney who captured all whiteboard presentations on his camera. Months later he'd still refer to those images. Gray cells are fragile containers of knowledge; digital representations preserve that information for as long as needed.

    Another suggestion is to maintain documentation during development, making rough margin notes as you're working and cleaning them up at the end of the project. I disagree. You won't have time, so those rough notes will be lost. Keep the docs clean all of the time, as we're supposed to do with the code. In other words, keep refactoring the docs.

    What about Doxygen? Al likes it, but makes the important point that Doxygen adds no new information at all. It's a useful tool, but no substitute for careful documentation.

    Al stresses using assertions as documentation. In "Software Aging," computer scientist David Parnas said "Just as in other kinds of engineering documentation, software documentation must be based on mathematics." Assertions can be a formal way of specifying the intended behavior of a function. Design by Contract, which takes assertions further by converting them into pre- and post-conditions, is one of the most powerful tools we can employ, both to generate correct code and to document our intentions (a minimal sketch appears at the end of this post).

    One interesting idea, drawn from Knuth's Literate Programming, is to write all of our code in a word processor, complete with all of the annotations needed. Using HTML tags and a relatively simple script, the build process strips the code from the file and feeds that to the compiler. A cool idea, but it generates problems when debugging, as the "source" file is not the same as the file used by the IDE. When, oh when, will the compiler vendors wean us from brain-dead text files and allow us the formatting options the word processing world has had for decades?

    All in all, it's a great read and a very worthwhile book. Get it here: http://press.nmt.edu/ .
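    To make the assertions-as-documentation idea concrete, here's a minimal sketch of Design by Contract-style pre- and post-conditions in C. The function and its limits are invented for illustration, not drawn from Al's book:

        #include <assert.h>
        #include <stdint.h>

        /* Scale a raw ADC reading to millivolts.
         * Precondition:  raw is a valid 12-bit sample (0..4095).
         * Postcondition: result lies within the 0..3300 mV reference range.
         * The assertions state the contract; they document intent and
         * fire during development if either side breaks it. */
        static uint32_t adc_to_mv(uint32_t raw)
        {
            assert(raw <= 4095u);                 /* precondition */

            uint32_t mv = (raw * 3300u) / 4095u;  /* 3.3 V reference assumed */

            assert(mv <= 3300u);                  /* postcondition */
            return mv;
        }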
  • Popularity 15
    2011-3-7 17:36
    2012 reads
    0 comments
    I once wrote an introduction to the concept of code inspections, a technique that can hugely accelerate schedules by eliminating bugs before they percolate into testing. I drew an analogy to the quality movement that revolutionized manufacturing. Others made a similar comparison, notably Mary and Tom Poppendieck in their recommended book, Lean Software Development. 1 In manufacturing one strives to eliminate waste, which, for firmware, includes bugs injected into the code.

    Code inspections are nothing new; they were first formally described by Michael Fagan in his seminal 1976 paper "Design and Code Inspections to Reduce Errors in Program Development." 2 But even in 1976 engineers had long employed design reviews to find errors before investing serious money in building a PCB, so the idea of having other developers check one's work before "building" or testing it was hardly novel. Fagan's approach is in common use, though many groups tune it for their particular needs. I think it's a bit too "heavy" for most of us, though those working on safety-critical devices often use it pretty much unmodified. I'll describe a practical approach used by quite a few embedded developers.

    First, the objective of an inspection is to find bugs. Comments like "man, this code sucks!" are really attacks on the author and are inappropriate. Similarly, the metrics one collects are not to be used to evaluate developers.

    Secondly, all new code gets inspected. There are no exceptions. Teams that exclude certain types of routines, because either they're hard ("no one understands DMA here") or because only one person really understands what's going on, will find excuses to skip inspections on other sorts of code, and will eventually give up inspections altogether. If only Joe understands the domain or the operation of a particular function, the process of inspection spreads that wisdom and lessens risk if Joe were to get ill or quit. We only inspect new code because there just isn't time to pore over countless lines of stuff inherited in an acquisition.

    In an ideal world an inspection team has four members, plus maybe a newbie or two who attend just to learn how the products work. In reality it might be hard to find four people, so fewer folks will participate. But there are four roles, all of which must be filled, even if it means one person serves in two capacities:

    A moderator runs the inspection process. He finds a conference room, distributes listings to the participants, and runs the meeting. That person is one of us, not a manager, and smart teams ensure everyone takes turns being moderator so one person isn't viewed as the perennial bad guy.

    A reader looks at the code and translates it into an English-language description of each statement's intent. The reader doesn't say: "if (tank_press > max_press) dump();" After all, the program is nothing more than a translation of an English-language spec into computerese. The reader converts the C back into English, in this case saying something like: "Here we check the tank pressure to see if it exceeds the maximum allowed, which is a constant defined in max_limits.h. If it's too high we call dump, which releases the safety valve to drop the pressure to a safe value." Everyone else reads along to see if they agree with the translation, and to see if this is indeed what the code should do. (A sketch of such a fragment in context appears after this list of roles.)

    The author is present to provide insight into his intentions when those are not clear (which is a sign there's a problem with either the code or the documentation). If a people shortage means you've doubled up roles, the author may not also be the reader. In writing prose it has long been known that editing your own work is fraught with risk: you see what you thought you wrote, not what's on the paper. The same is true for code.

    A recorder logs the problems found.
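    Here's a hedged sketch of how that fragment might look in context. tank_press, max_press, and dump() come from the snippet above; the declarations and the function wrapper are assumptions added for illustration:

        #include <stdint.h>
        #include "max_limits.h"  /* defines max_press, per the reader's narration */

        extern volatile uint32_t tank_press;  /* assumed: current tank pressure */
        extern void dump(void);               /* assumed: opens the safety valve */

        /* Vent the tank if pressure exceeds the allowed maximum. In an
         * inspection the reader narrates this intent in English rather
         * than parroting the C. */
        void check_tank_pressure(void)
        {
            if (tank_press > max_press)
                dump();  /* releases the safety valve to drop the pressure */
        }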
    During the meeting we don't invent fixes for problems. The author is a smart person we respect who will come up with solutions. Why tie up all of these expensive people in endless debates about the best fix? We do, though, look for testability issues. If a function looks difficult to test, then either it, or the test, needs to be rethought.

    Are all of the participants familiar enough with the code to do a thorough inspection? Yes, since before the meeting each inspects the code, alone, in the privacy of his or her own office, making notes as necessary. Sometimes it's awfully hard to get people other than the author to take the preparation phase seriously, so log each inspector's name in the function's header block. In other words, we all own this code, and each of us stamped our imprimatur on it.

    The inspection takes place after the code compiles cleanly. Let the tools find the easy problems. But do the inspection before any debugging or testing takes place. Of course everyone wants to do a bit of testing first, just to ensure our colleagues aren't pointing out stupid mistakes. But since inspections are some 20 times cheaper than test, it's just a waste of company resources to test first. Writing code is a business endeavor; we're not being paid to have fun – though I sure hope we do – and such inefficient use of our time is dumb. Besides, in my experience, when test precedes the inspection the author waltzes into the meeting and tosses off a cheery "but it works!" Few busy people can resist the temptation to accept "works," whatever that means, and the inspection gets dropped.

    My data shows that an ideal inspection rate is about 150 lines of code per hour. Others suggest slightly different numbers, but it's clear that working above a few hundred lines of code per hour is too fast to find most errors. 150 LOC/hr is an interesting number: if you've followed the rules and kept functions to no more than 50 LOC or so, you'll be inspecting perhaps a half-dozen functions per hour. That's 2.5 LOC per minute, slow enough to really put some thought into the issues. And since inspections find bugs so much more cheaply than test, the truth is that they save time, so they have a negative cost.

    It's impossible to inspect large quantities of code. After an hour or two one's brain turns to pudding. Ideally, limit inspections to an hour a day.

    The best teams measure the effectiveness of their inspection process, and, in fact, measure how well they nab bugs pre-shipping in every phase of the development project. Two numbers are important: the defect potential, which is the total number of bugs found in the project between starting development and 90 days after shipping to customers, and the defect removal efficiency. The latter is a fancy phrase for the percentage of those bugs found pre-shipping.
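    As a worked illustration of those two measures (the counts below are invented, not from any project's data):

        #include <stdio.h>

        int main(void)
        {
            /* Invented counts, for illustration only. */
            int found_pre_ship  = 95;  /* bugs removed before shipping        */
            int found_post_ship = 5;   /* bugs surfacing in the first 90 days */

            /* Defect potential: every bug found from the start of
             * development through 90 days after shipping. */
            int defect_potential = found_pre_ship + found_post_ship;

            /* Defect removal efficiency: the percentage of the potential
             * caught before the product shipped. */
            double dre = 100.0 * found_pre_ship / defect_potential;

            printf("defect potential = %d, removal efficiency = %.1f%%\n",
                   defect_potential, dre);  /* prints 100 and 95.0% */
            return 0;
        }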