Tag: code inspections

Related posts
  • 2011-6-9 18:05
    Back in 2009 I gave a talk at the Embedded Systems Conference which covered, among other subjects, code inspections.

    Inspections are sure to get developers riled up. We all hate them. They go against our very notion of programming, in which we hunch over the keyboard while running the debugger through its paces, hoping to beat our code into submission. Then the stuff is "ours"; we take an ownership interest in the code, and tolerate no criticism of it.

    I hate inspections.

    But they're efficient, and the point of building firmware is to achieve a business objective: to create working code as quickly and as accurately as possible. The stark facts show inspections to be the fastest way we know to find bugs.

    A developer who attended the talk called this week and we had an interesting chat about the subject. Then two emails drifted in, both of which mirrored the phone conversation. The numbers varied, but here's the gist of the conversations.

    The short summary: these folks tried inspections. They didn't work.

    I asked: How much code did you inspect?

    Answer: Uh, about 500 lines. Maybe 700.

    How did you do the inspection?

    Answer: We tested the code and found around 20-30 or so bugs. Perhaps more. Then we inspected and found another, well, maybe like 10 to 15.

    How long did debugging and inspecting take?

    Answer: We're not really sure, but not too long.

    My response was this: the numbers are clearly meaningless. They're guesses. Unless you collect real metrics, this data, such as it is, tells us nothing.

    How many bugs were left in the system? What makes you think you found them all? Do you know if you shipped with no bugs or 100? Were the ones the inspection found mission-critical bugs?

    None of these folks had tried inspecting before debugging, making a murky situation even dimmer.

    We know that programming is different from building widgets on an assembly line. Yet too often people use that difference to excuse any attempt to measure and understand, quantitatively, the process of building firmware.

    A big part of engineering is measuring things, whether it's amps, volts, miles per hour, furlongs per fortnight, or bug rates. Software engineering is engineering only when we collect real metrics: numbers like inspection efficiency, bug rates, productivity, and quality.

    There's a large body of knowledge about all of these subjects. If you don't collect your own metrics, there's no way to draw useful comparisons to the state of the art.

    That's like driving with your eyes shut.
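    To show what "real metrics" might look like in practice, here is a minimal sketch of a per-inspection log entry. The structure, field names, and figures are illustrative assumptions of mine, not taken from any particular tool or from the conversations above:

        /* Hypothetical per-inspection log entry; names and figures are illustrative. */
        #include <stdio.h>

        struct inspection_record {
            const char *module;         /* what was inspected                     */
            unsigned    loc_inspected;  /* lines of code covered                  */
            double      hours_spent;    /* preparation plus meeting time          */
            unsigned    defects_found;  /* problems logged during the inspection  */
        };

        /* Derive the numbers that allow comparison with published data:
           inspection rate (LOC/hour) and defect density (defects/KLOC). */
        static void report(const struct inspection_record *r)
        {
            double rate    = r->loc_inspected / r->hours_spent;
            double density = 1000.0 * r->defects_found / r->loc_inspected;
            printf("%s: %.0f LOC/hr, %.1f defects/KLOC\n", r->module, rate, density);
        }

        int main(void)
        {
            struct inspection_record r = { "tank_control.c", 600, 4.0, 12 };
            report(&r);   /* prints: tank_control.c: 150 LOC/hr, 20.0 defects/KLOC */
            return 0;
        }

    Even something this crude answers the questions above: how much code was inspected, how long it took, and how many defects per thousand lines the process is catching.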
  • 2011-6-9 17:55
    Regular readers know I'm a big fan of code inspections. Overwhelming evidence and common sense have shown that two heads are better than one at finding problems, and adding another team member or two yields even better code.

    But inspections suck. Few developers really enjoy them. I sure don't, but we're professionals hired to build systems as efficiently and as perfectly as possible. A doctor might not enjoy giving prostate exams much, but to avoid such tests just out of personal distaste is both unprofessional and unacceptable. The same goes for the unsavory bits of our work.

    Because of this common aversion it's not uncommon for a team that starts with all of the best intentions to slowly find reasons to avoid reviewing code, and that nearly always ends with inspections being abandoned altogether. That's why I feel inspections are one of the few areas where Genghis Khan software management is appropriate. A strong leader insists on their continued use. That person both supplies the resources needed and audits the process to ensure compliance. But you don't want management on the inspection team (developers tend to avoid pointing out errors when the boss is around, out of respect for their colleagues). So how can one ensure the team is doing the right things?

    The solution is root cause analysis, used statistically. From time to time the boss should identify a bug that was found and perhaps fixed, and have the team, or even a single member of the team, figure out how the mistake made it into the code. Was an inspection of the affected code ever done? How much time was devoted to it? Were other problems identified? (This implies collecting metrics, which is nearly painless when automated with tools like SmartBear's Code Collaborator.) It's possible the function was unusually complex and the review did find many other problems. So, were complexity metrics taken? Or, gasp, did the developers shortchange the inspection or skip it altogether? Perhaps the bug slipped in later, post-inspection, due to poor change management procedures. Bugs are part of the process, but proper bug management should be an equally important component.

    It's not possible to do a root-cause analysis of every problem, or even most of them. But some level of it keeps the developers on the proper track and can identify flaws in the system that cause delays and quality problems.

    C.A.R. Hoare said: "The real value of tests is not that they detect bugs in the code but that they detect inadequacies in the methods, concentration, and skills of those who design and produce the code." That observation is just as true when we look for the root causes of at least some of our bugs.
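    As a sketch of the bookkeeping such an audit implies, here is one way the answers to those questions might be captured. The record layout, names, and thresholds are my own illustrative assumptions and are not drawn from Code Collaborator or any other tool:

        /* Hypothetical root-cause record for one escaped bug; layout and
           thresholds are illustrative only. */
        #include <stdbool.h>
        #include <stdio.h>

        struct root_cause {
            const char *bug_id;            /* tracker identifier                      */
            bool        was_inspected;     /* was the affected code ever reviewed?    */
            double      inspection_hours;  /* time devoted to that review             */
            unsigned    other_defects;     /* problems the review did catch           */
            bool        changed_after;     /* was the code modified post-inspection?  */
        };

        /* Flag the process failures the audit is really looking for. */
        static void audit(const struct root_cause *rc)
        {
            if (!rc->was_inspected)
                printf("%s: code was never inspected\n", rc->bug_id);
            else if (rc->inspection_hours < 0.5)   /* arbitrary "rushed review" cutoff */
                printf("%s: inspection looks rushed (%.1f h, %u other problems found)\n",
                       rc->bug_id, rc->inspection_hours, rc->other_defects);
            if (rc->changed_after)
                printf("%s: bug may have slipped in after the inspection\n", rc->bug_id);
        }

        int main(void)
        {
            struct root_cause rc = { "BUG-1412", true, 0.25, 3, true };
            audit(&rc);
            return 0;
        }

    Run over a handful of escaped bugs, even a crude audit like this shows whether the failures cluster around skipped inspections, rushed ones, or sloppy change management.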
  • 2011-3-7 17:36
    I once wrote an introduction to the concept of code inspections, a technique that can hugely accelerate schedules by eliminating bugs before they percolate into testing. I drew an analogy to the quality movement that revolutionized manufacturing. Others made a similar comparison, notably Mary and Tom Poppendieck in their recommended book, Lean Software Development.[1] In manufacturing one strives to eliminate waste, which, for firmware, includes bugs injected into the code.

    Code inspections are nothing new; they were first formally described by Michael Fagan in his seminal 1976 paper "Design and Code Inspections to Reduce Errors in Program Development."[2] But even in 1976 engineers had long employed design reviews to find errors before investing serious money in building a PCB, so the idea of having other developers check one's work before "building" or testing it was hardly novel.

    Fagan's approach is in common use, though many groups tune it for their particular needs. I think it's a bit too "heavy" for most of us, though those working on safety-critical devices often use it pretty much unmodified. I'll describe a practical approach used by quite a few embedded developers.

    First, the objective of an inspection is to find bugs. Comments like "man, this code sucks!" are really attacks on the author and are inappropriate. Similarly, the metrics one collects are not to be used to evaluate developers.

    Secondly, all new code gets inspected. There are no exceptions. Teams that exclude certain types of routines, either because they're hard ("no one understands DMA here") or because only one person really understands what's going on, will find excuses to skip inspections on other sorts of code, and will eventually give up inspections altogether. If only Joe understands the domain or the operation of a particular function, the process of inspection spreads that wisdom and lessens risk if Joe were to get ill or quit. We only inspect new code because there just isn't time to pore over countless lines of stuff inherited in an acquisition.

    In an ideal world an inspection team has four members, plus maybe a newbie or two who attend just to learn how the products work. In reality it might be hard to find four people, so fewer folks will participate. But there are four roles, all of which must be filled, even if it means one person serves in two capacities.

    A moderator runs the inspection process. He finds a conference room, distributes listings to the participants, and runs the meeting. That person is one of us, not a manager, and smart teams ensure everyone takes turns being moderator so one person isn't viewed as the perennial bad guy.

    A reader looks at the code and translates it into an English-language description of each statement's intent. The reader doesn't say: "if (tank_press > max_press) dump();". After all, the program is nothing more than a translation of an English-language spec into computerese. The reader converts the C back into English, in this case saying something like: "Here we check the tank pressure to see if it exceeds the maximum allowed, which is a constant defined in max_limits.h. If it's too high we call dump, which releases the safety valve to drop the pressure to a safe value." Everyone else is reading along to see if they agree with the translation, and to see if this is indeed what the code should do.
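    For concreteness, here is a sketch of the sort of fragment being translated. Only tank_press, max_press, dump(), and max_limits.h come from the example above; the surrounding function, the sensor read, and the numeric limit are my assumptions, added so the sketch stands alone:

        /* Illustrative fragment only. In the real code max_press would come from
           max_limits.h; it is defined here so the sketch stands alone. */
        #define max_press 250u          /* assumed maximum safe tank pressure */

        /* Assumed helpers: read_tank_pressure() samples the pressure sensor and
           dump() opens the safety valve. Both are inventions for this sketch. */
        unsigned read_tank_pressure(void);
        void dump(void);

        void check_tank(void)
        {
            unsigned tank_press = read_tank_pressure();

            /* The line the reader translates: if the tank pressure exceeds the
               maximum allowed, release the safety valve to bring it back down. */
            if (tank_press > max_press)
                dump();
        }

    Reciting the if statement verbatim tells the team nothing; translating it back into English forces everyone to ask whether max_press really is the limit the spec calls for, and whether dump() is the right response.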
    The author is present to provide insight into his intentions when those are not clear (which is a sign there's a problem with either the code or the documentation). If a people shortage means you've doubled up roles, the author may not also be the reader. In writing prose it has long been known that editing your own work is fraught with risk: you see what you thought you wrote, not what's on the paper. The same is true for code.

    A recorder logs the problems found. During the meeting we don't invent fixes for problems. The author is a smart person we respect who will come up with solutions. Why tie up all of these expensive people in endless debates about the best fix?

    We do look for testability issues. If a function looks difficult to test then either it, or the test, needs to be rethought.

    Are all of the participants familiar enough with the code to do a thorough inspection? Yes, since before the meeting each inspects the code, alone, in the privacy of his or her own office, making notes as necessary. Sometimes it's awfully hard to get people other than the author to take the preparation phase seriously, so log each inspector's name in the function's header block. In other words, we all own this code, and each of us has stamped our imprimatur on it.

    The inspection takes place after the code compiles cleanly. Let the tools find the easy problems. But do the inspection before any debugging or testing takes place. Of course everyone wants to do a bit of testing first just to ensure our colleagues aren't pointing out stupid mistakes. But since inspections are some 20 times cheaper than test, it's just a waste of company resources to test first. Writing code is a business endeavor; we're not being paid to have fun--though I sure hope we do--and such inefficient use of our time is dumb. Besides, in my experience when test precedes the inspection the author waltzes into the meeting and tosses off a cheery "but it works!" Few busy people can resist the temptation to accept "works," whatever that means, and the inspection gets dropped.

    My data shows that an ideal inspection rate is about 150 lines of code per hour. Others suggest slightly different numbers, but it's clear that working above a few hundred lines of code per hour is too fast to find most errors. 150 LOC/hr is an interesting number: if you've followed the rules and kept functions to no more than 50 LOC or so, you'll be inspecting perhaps a half-dozen functions per hour. That's 2.5 LOC/minute, slow enough to really put some thought into the issues. Of course, the truth is that inspections save time, so they have a negative cost.

    It's impossible to inspect large quantities of code. After an hour or two one's brain turns to pudding. Ideally, limit inspections to an hour a day.

    The best teams measure the effectiveness of their inspection process, and, in fact, measure how well they nab bugs pre-shipping in every phase of the development project. Two numbers are important: the defect potential, which is the total number of bugs found in the project between starting development and 90 days after shipping to customers, and the defect removal efficiency. The latter is a fancy phrase for something simple: the percentage of bugs found pre-shipping.
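    A quick worked example of those two numbers, with invented figures: if a project logs 200 defects between the start of development and 90 days after shipping, and 190 of them were found before the product went out the door, the defect potential is 200 and the defect removal efficiency is 190/200, or 95%. In C:

        /* Worked example with invented figures; not data from any real project. */
        #include <stdio.h>

        int main(void)
        {
            unsigned found_pre_ship  = 190;  /* bugs caught before shipping              */
            unsigned found_post_ship = 10;   /* bugs reported in the first 90 days after */

            unsigned defect_potential   = found_pre_ship + found_post_ship;
            double   removal_efficiency = 100.0 * found_pre_ship / defect_potential;

            printf("Defect potential: %u\n", defect_potential);                  /* 200   */
            printf("Defect removal efficiency: %.1f%%\n", removal_efficiency);   /* 95.0% */
            return 0;
        }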