Tag: Agile

Related blog posts
  • Popularity 26
    2015-8-14 19:51
    1462 reads | 1 comment
    Capers Jones is one of the most prolific software researchers around. He has a vast collection of metrics from many thousands of real-world projects. His book The Economics of Software Quality is rather dry but full of fun facts. He's careful to point out that the data has huge standard deviations, but it does paint some interesting pictures about the nature of software engineering.

    Chapter 2 is called "Estimating and Measuring Software Quality." Though all of the numbers are interesting, those for requirements are especially so.

    Mr. Jones is adamant that we shouldn't use lines of code (LOC, or KLOC for thousands of lines of code) as a metric. He prefers function points. While it's hard to dispute his arguments in favor of function points, few practicing developers understand them, which makes the metric a barrier to communication. Most sources figure one function point is around 100-120 lines of C code, so here I've converted his numbers to C using 100 LOC = 1 function point.

    Let's look at his numbers for the size of requirements. For an application of 10 KLOC developers typically get 115 pages of requirements, which are 95% complete. That's roughly a page per 100 LOC, or something like one line of requirements per two lines of code. Of course, some percentage of those requirements would be graphical, but his numbers suggest that 75% of requirements are text. Does that mirror your experience? That's a lot of text for a couple of lines of code.

    But it gets worse as applications grow in size. At 100 KLOC figure on 750 pages of requirements, which will be only 80% complete. At 1 million lines of code there will be 6,000 pages of probably dreary, conflicting, and poorly-specified requirements, comprising just 60% of what is expected to be delivered. In other words, those 6K pages specify little more than half of the desired functionality. No wonder big systems are delivered so late. Mr. Jones makes the scary point that at 5 million LOC it would be impossible for a single person to read the requirements in one lifetime!

    Small systems don't experience much requirements churn; for projects of 10 KLOC figure on 1% growth per month, or about 225 LOC of change over the duration of the project. At 1 million LOC that bumps to 1.25% growth per month, resulting in almost 300 KLOC of extra code.

    What about requirements defects? A 10 KLOC system will have 75 of them, of which 8 are typically delivered to the customer. At 1 million LOC that jumps to over 10,000 requirements defects, 2,000 of which will wind up in the user's hands.

    Now, these numbers are for all sorts of software projects. He points out that embedded projects experience only 20% of the defects he presents. That's still 400 delivered requirements defects for a big project. Of course, there will be plenty of other bugs in the final product from other phases of development, but those numbers are fodder for a different column.

    One of the rationales for agile methods is, as Kent Beck puts it, that everything changes all of the time. One can't know, it's reasoned, what the customer wants, so it makes sense to deliver early and incrementally. I've always agreed with the conclusion, but not necessarily with the thesis. Though there are exceptions, in my experience most embedded projects can get a decent, if imperfect, set of requirements early in the project. Admittedly, it can be hard to elicit them, but "hard" is no excuse for abdicating our responsibility to work diligently to clarify our goals.

    Given that firmware has only 20% of the requirements defects experienced by other sorts of software, two things jump out. First, we're doing a heck of a job! Second, the notion of using agile to elicit requirements probably doesn't make sense in this space (though implementation may benefit from agile ideas). Traditional approaches seem to be working pretty well, and determining requirements in an agile, incremental manner demands an awful lot of customer participation. eXtreme Programming practitioners require an on-site customer; in recent years some have suggested that an entire customer team be co-located with the developers to wring out desired product functionality. That's unrealistic for many projects.

    As regular readers know, I think careful engineering requires the use of metrics to understand both what we're building and how we're building it. Do you track anything about requirements, like size, churn, or defects?
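    Jones's churn and defect figures are easy to turn into a back-of-the-envelope calculator. The sketch below is my own illustration, not Jones's model: it assumes simple linear growth (size × monthly rate × months), and the project durations plugged in (about 2.25 months and 24 months) are back-solved assumptions chosen to reproduce the figures quoted above.

```python
# Back-of-the-envelope projections from the Capers Jones figures quoted above.
# The linear-growth assumption, durations, and function names are mine.

def requirements_churn(loc, monthly_rate, months):
    """Requirements growth in LOC-equivalents, assuming simple linear growth."""
    return loc * monthly_rate * months

def delivered_defects(total_reqs_defects, delivered_fraction, embedded_factor=1.0):
    """Requirements defects reaching the customer; embedded sees ~20% of typical."""
    return total_reqs_defects * delivered_fraction * embedded_factor

# 10 KLOC at 1%/month over an assumed ~2.25-month effort: ~225 LOC of churn.
print(round(requirements_churn(10_000, 0.01, 2.25)))      # 225

# 1 MLOC at 1.25%/month over an assumed ~24 months: ~300 KLOC of churn.
print(round(requirements_churn(1_000_000, 0.0125, 24)))   # 300000

# 10,000 requirements defects, 20% delivered, embedded factor 0.2: ~400 defects.
print(round(delivered_defects(10_000, 0.2, embedded_factor=0.2)))  # 400
```

    Even this crude arithmetic makes the scaling problem vivid: the churn rate and the defect escape rate both climb with project size, so big projects lose on both axes at once.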
  • Popularity 14
    2014-8-29 16:47
    1990 reads | 0 comments
    The recent months haven't been great for the aviation industry. A spike in accidents has many people alarmed. My wife is one of them. While she's not afraid to fly, she is a somewhat fearful flyer. After one quite bumpy approach and landing in Denver, Marybeth said she'd never go to, or via, that city again. To date, she hasn't.

    I fly a lot, having logged 3 or 4 million miles over the decades. I've heard people scream and pray, have had seatmate strangers grip my hand in turbulence, and have been on flights where a passenger passed away, though not as a result of the flying. On a couple of trips we've landed with everyone in crash position; once I watched flames leaping from a wheel, and on another memorable journey the pilot did four go-arounds before getting the plane down on the fifth try. During the first Gulf War (I'm starting to lose count of them), on a non-stop from Munich to London, the plane went into a crash dive and made an unexpected and abrupt landing in Dusseldorf, where we were marched through metal detectors, Americans getting additional scrutiny, before reboarding. No explanations were given.

    But those cases are a handful compared to the vast majority of flights I've been on, most of which are characterized by boredom and fatigue. Even the exciting ones ended with us shuffling out the jetway with not a single injury other than perhaps some GI discomfort from the so-called "meals." A bit of Googling suggests there are around 93,000 commercial flights per day from 9,000 airports around the world. That's around 35 million flights per year, with only a microscopic percentage resulting in a crash. Flying is the safest mode of travel ever devised, safer even than walking.

    In 1956 a Super Constellation collided with a DC-7 over the Grand Canyon, killing 128 people. Partly as a result of this accident the CAA morphed into the FAA and mandated "black boxes" on commercial aircraft. The idea is that we need to learn from these disasters, and one part of that is to instrument the aircraft with survivable telemetry. The results have been stunning.

    When Air France 447 went down in the Atlantic, authorities searched for the black boxes for two years before finding them. Some $50 million in extra funding has just been approved to extend the search for Malaysia Air 370's recorders. Is there any other industry that is willing to spend so much to avoid making the same mistake twice?

    Last week I reviewed Bertrand Meyer's book Agile!. Some agile methods require retrospectives, a practice that makes a lot of sense. A retrospective is one form of a black box for a software engineering effort: we devote time and resources to learn from our failures. We collect metrics during the project – that is, we instrument the effort – and use those numbers and more qualitative parameters to constantly improve. A project might consume hundreds of thousands of dollars (or much more) of engineering resources. How foolish it is that so many aren't willing to invest a tiny fraction of that in a retrospective as a "force multiplier" to save a bundle on future projects!

    It's easy to dismiss instrumenting projects as a feel-good practice without demonstrated benefits. I feel passionately that engineering without numbers is really art, and despair that so many of us are willing to argue for practices that are not substantiated by metrics. So here's one example of many, from a company I worked with. Their instrumentation included bugs per thousand lines of code over seven quarters. Each quarter the results were analyzed to tune their engineering:

    [Figure: bugs per KLOC over seven quarters]

    The cost to collect the data and inoculate their engineering? Negative. After about two years, schedules had been shortened by 40%. This is a very old and well-known adage from the quality movement: higher-quality products cost less. It has repeatedly been shown to be true in software engineering as well.

    Does your team have a metaphorical black box? Do you perform retrospectives? Do you collect any metrics? Why or why not?

    This discussion reminds me of a story: A 747 is flying across the Atlantic when an engine fails. The pilot gets on the PA and assures the passengers that the aircraft is perfectly able to fly on three engines; however, they will be about twenty minutes late arriving at their destination. A little while later a second engine fails, and the pilot makes the same announcement, except this time he says they will be about 40 minutes late. A third engine fails, and the pilot says their arrival will be about an hour late. One passenger turns to another and says, "If that last engine fails, we'll be up here all day!"
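    A software "black box" can start as small as a spreadsheet of defect counts. The sketch below is my own illustration with invented sample numbers (the column doesn't publish the company's actual data): it normalizes each quarter's bug count to bugs per KLOC, so the trend stays comparable even as the code base grows.

```python
# Quarterly defect data normalized to bugs per KLOC.
# The quarterly figures are invented for illustration only.

def bugs_per_kloc(bugs, loc):
    """Defect density: bugs per thousand lines of code."""
    return bugs / (loc / 1000)

quarters = [  # (quarter, bugs found, code base size in LOC)
    ("Q1", 180, 40_000),
    ("Q2", 150, 44_000),
    ("Q3", 120, 50_000),
    ("Q4",  90, 55_000),
]

for name, bugs, loc in quarters:
    print(f"{name}: {bugs_per_kloc(bugs, loc):.2f} bugs/KLOC")
```

    The point is the normalization: raw bug counts can fall simply because less code was written that quarter, while a falling bugs-per-KLOC figure is evidence the process itself is improving.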
  • Popularity 18
    2014-8-21 17:37
    1857 reads | 0 comments
    Bertrand Meyer is one of the most literate authors in computer science. His latest work, Agile!, is an example. It's a 170-page summary and critique of the leading Agile methods. The introduction gives his theme: "The first presentations of structured programming, object technology, and design patterns… were enthusiastically promoting new ideas, but did not ignore the rules of rational discourse. With Agile methods you are asked to kneel down and start praying."

    The book burns with contempt for the eXtreme attitudes of many of Agile's leading proponents. He asks us to spurn the "proof by anecdote" so many Agile books use as a substitute for even a degraded form of the scientific method. This is a delightful, fun, witty, and sometimes snarky book. The cover's image is a metaphor for the word "Agile": a graceful ballet dancer leaping through the air. He contrasts that with the Agile manifesto poster: "The sight of a half-dozen middle-aged, jeans-clad, potbellied gentlemen turning their generous behinds toward us…"!

    He quotes Kent Beck's statement, "Software development is full of the waste of overproduction, requirements documents that rapidly go obsolete." Meyer could have, and should have, made a similar statement about software going obsolete: both software and requirements suffer from entropy, so both require a constant infusion of maintenance effort. But he writes: "The resulting charge against requirements, however, is largely hitting at a strawman. No serious software engineering text advocates freezing requirements at the beginning. The requirements document is just one of the artifacts of software development, along with code modules and regression tests (for many agilists, the only artifacts worthy of consideration) but also documentation, architecture descriptions, development plans, test plans, and schedules. In other words, requirements are software. Like other components of the software, requirements should be treated as an asset; and like all of them, they can change." (Emphasis in original.)

    He describes the top seven rhetorical traps used by many Agile proponents. One is unverified claims. But then he falls into his own trap by saying "refactored junk is still junk."

    The book's subtitle is "The good, the hype, and the ugly," and he uses this rubric to parse many Agile beliefs. Meyer goes further and adds the "new," with plenty of paragraphs describing why many of these beliefs are actually very old aspects of software engineering. I don't see why that matters. If Agile authors co-opt old ideas they are merely standing on the shoulders of giants, which is how progress is always made (note that last clause is an unverified claim!).

    The book is not a smackdown of Agile methods. It's a pretty carefully reasoned critique of the individual and collective practices. He does an excellent job of highlighting the practices he feels have advanced the state of the art of software engineering, while in a generally fair way showing that some of the other ideas are examples of the emperor having no clothes. For instance, he heaps praise on daily stand-up meetings (which, Meyer admits, are not always practical, say in a distributed team), Scrum's insistence on closing the window on changes during a sprint, one-month sprints, and measuring a project's velocity. (I, too, like the Agile way of measuring progress, but hate the word "velocity" in this context. Words have meaning. Velocity is a vector, combining speed and direction, and in Agile "velocity" is used, incorrectly, to mean speed.)

    One chapter is a summary of each of the most common Agile methods, but the space devoted to each is so minimal that those not familiar with each approach will learn little. Agile! concludes with a chapter listing practices that are "bad and ugly," like the deprecation of up-front tasks (e.g., careful requirements gathering); "the hyped," like pair programming ("hyped beyond reason"); "the good"; and "the brilliant." Examples of the latter include short iterations, continuous integration, and the focus on test.

    The book is sure to infuriate some. Too many people treat Agile methods as religion, and any debate is treated as heresy. Many approaches to software engineering have been tried over the last 60 years, and many discarded. Most, though, contributed some enduring ideas that have advanced our body of knowledge. I suspect that a decade or two hence the best parts of Agile will be subsumed into how we develop code, with new as-yet-unimagined concepts grafted on.

    I agree with most of what Meyer writes. Many of the practices are brilliant. Some are daft, at least in the context of writing embedded firmware. In other domains, like web design, perhaps XP is the Only Approach To Use. Unless TDD is the One True Answer. Or Scrum. Or FDD. Or Crystal…
  • Popularity 15
    2012-7-11 16:07
    1584 reads | 0 comments
    I'm a tool guy. Be it a piston ring compressor, a roaring planer, or a decent IDE, I've always bought the best I could afford. Actually, "always" isn't quite correct, as I have succumbed to the temptation of Harbor Freight a few times. But cheap tools have always been a disappointment. There was that belt sander; it ran so hot one could hardly hold on to it. Every time I used it, it ticked me off. Eventually I donated it and got a decent unit. Or the come-along that wouldn't release under load.

    But a great tool makes one smile every time it's used. The Husqvarna blower. The xplorer2 replacement for that cursed Vista version of Windows Explorer. A great tool makes you more efficient and productive. Tools coupled to a disciplined process are even better.

    Four decades in this industry have taught me a few truths. One is the importance of using coding standards. I have yet to see companies consistently generating best-in-class code unless it's done to a standard. One friend takes pride in the fact that no one can tell who wrote different sections of his company's products, since everyone writes to an in-house standard. That sort of mirrors the notion of egoless programming originally promoted by Jerry Weinberg.

    Until recently the use of standards was pretty spotty. MISRA-C, though, has started to change that. I run into a lot of companies today that embrace at least a substantial subset of MISRA. Lots of standards exist. CERT has one for secure programming. The F-35 has its own (called the JSF standard, for Joint Strike Fighter). Barr Group publishes one, as does NASA's JPL.

    But most people use manual inspections to enforce the rules, which is a ludicrous activity since it can be automated. Any time a tool can do something, you must use a tool to do that activity. When it comes to checking code against a standard, there really aren't many tools around, which seems sort of wacky considering how much time they could save. PC-Lint will check code against the MISRA rules, or at least most of them (some are hard to enforce statically). C++test by Parasoft can check your code against rules you define, but I've yet to meet anyone in the embedded world who uses it.

    One very cool tool that seems to fit the needs of this industry is Testbed by LDRA (see my screen shot below). What I like about this product is that it includes around twenty different software standards spread over 1000+ rules. You can tell it to check against a single standard, or mix rules from a number of standards. As the screen shot shows, one can select all of MISRA-C, or build a custom rule set.

    The agile community demands we automate our tests, which is great advice (when possible). Similarly, we should automate everything that can be delegated to the mindless yet never bored computer.
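    Enforcement really can be automated, even crudely. As a toy illustration only (nothing like a real static analyzer such as PC-Lint or LDRA's Testbed, and the three rules here are just samples I picked), a few lines of script can flag constructs that many C coding standards restrict, such as goto and octal constants, both of which MISRA-C addresses:

```python
# Toy coding-standard checker. The rule list is a small sample for
# illustration; real tools enforce hundreds of rules with a full C
# parser, not line-by-line regexes.
import re

RULES = {
    "avoid goto": re.compile(r"\bgoto\b"),
    "no octal constants": re.compile(r"=\s*0[0-7]+\b"),
    "no empty if body": re.compile(r"\bif\s*\([^)]*\)\s*;"),
}

def check(source):
    """Return (line_number, rule_name) for each violation found."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                violations.append((lineno, rule))
    return violations

snippet = """\
int mode = 0644;
if (mode > 0) ;
goto cleanup;
"""
for lineno, rule in check(snippet):
    print(f"line {lineno}: {rule}")
```

    The point isn't that you should roll your own checker; it's that a machine never gets bored or sloppy on page 400 of a code review, which is exactly why manual enforcement of mechanical rules is ludicrous.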
  • Popularity 19
    2011-12-6 11:00
    1612 reads | 0 comments
    In this time of increased cost-cutting and bean-counting, documentation is usually the last thing companies think about, despite the fact that it is a critical element in any complex software or hardware design. The pervasive nature of embedded systems in our lives means much greater attention must be paid to rigorous and (at least in the short term) expensive methodologies such as Agile systems development, which very much depend on an Agile-friendly documentation development process.

    In a recent article, James Grenning, one of the founding members of the Agile Alliance, pointed out that while in agile development working software is a more meaningful gauge of progress, that does not mean there is no need for documentation. "Documents are often invaluable," he writes. "Those that are, must be produced. However, documentation is expensive to create and maintain, so it is important to create only those documents you truly need."

    During development, documentation gives individual engineers a reminder of what's been done and what is left to be accomplished. For a design team it provides a global view of the state of the system at any particular time, and allows each member to see the role their particular piece plays within the whole of the design. Later, during testing, it provides the means to compare the operation of the completed system to the original design goals. For the end users, it is a guide to operating the system correctly and most efficiently.

    But despite the need for documentation in any development process focused on high quality and reliability, there are serious issues facing embedded developers, as noted in several recent columns, including "Incorrect documentation or none at all?" by Bill Schweber and "Embedded design getting dumbed down" by Jack Ganssle. In an article, "DITA and the death of technical documentation as we know it," Andrew J. Thomas suggests that the shift to a new documentation paradigm developed by IBM, called the Darwin Information Typing Architecture (DITA), may resolve many of the problems faced by embedded developers.

    Because of the importance of documentation in almost any hardware or software design, I would like to hear from you about the techniques and tools you have developed to assure complete and accurate documentation at all stages of development. What do you think about such things as DITA as a solution to the problem? What alternatives are you considering?