Tag: software engineering

Related blog posts
  • Popularity 21
    2011-8-7 21:58
    1851 reads | 0 comments
    George Dinwiddie, a good friend, maintains a blog that focuses on software development. One of his postings particularly got my attention. In that post, George discusses the balance between people and process. He faults organizations that find the most productive developer and then try to clone him across the team by duplicating whatever processes he uses. I agree that even though this approach is seductive to some managers, it's doomed to failure.

    But there's a more fundamental problem: without productivity metrics it's impossible to know whether the team is getting better or worse. In my travels I find few firmware groups that measure anything related to performance. Or quality. There are lots of vague statements about "improving productivity/quality," and it's not unusual to hear a dictate from on high to build a "best-in-class" development team. But those statements are meaningless without metrics that establish both where the group is now and the desired outcome.

    Some teams will make a change, perhaps adopting a new agile approach, and then loudly broadcast their successes. But without numbers I'm skeptical. Is the result real? Measurable? Is it a momentary Hawthorne blip? Developers will sometimes report a strong feeling that "things are better." That's swell. But it's not engineering. Engineering is the use of science, technology, skilled people, process, a body of knowledge – and measurements – to solve a problem.

    Engineers use metrics to gauge a parameter. The meter reads 2.5k ohms when nominal is 2k. That signal's rise time is twice what we designed for. The ISR runs in 83 µsec, yet the interrupt comes every 70. We promised to deliver the product for $23 in quantities of 100k, and so have to engineer two bucks out of the cost of goods.

    Some groups get it right. I was accosted at the ESC by a VP who saw a 40% schedule improvement from the use of code inspections and standards. He had baseline data to compare against. That's real engineering. When a developer or team lead reports a "sense" or a "feeling" that things are better, that's not an engineering assessment. It's religion.

    Metrics are tricky. People are smart and will do what is needed to maximize their return. I remember instituting productivity goals in an electronics manufacturing facility. The workers started cranking out more gear, but quality slumped. Adding a metric for that caused inventory to skyrocket, as the techs just tossed broken boards in a pile rather than repair them.

    Metrics are even more difficult in software engineering. Like an impressionist painting, they yield a fuzzy view. But data with error bands is far superior to no data at all.
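    To make "data with error bands" concrete, here is a minimal sketch in C with made-up numbers: the defect densities of a few past releases reduced to a mean and a rough standard-error band that the next release can be compared against. The defects-per-KLOC measure and the figures are illustrative assumptions, not anything from the post.

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical defects/KLOC measured over the last six releases. */
            double d[] = { 4.2, 3.8, 5.1, 4.6, 3.9, 4.4 };
            const int n = (int)(sizeof d / sizeof d[0]);

            double sum = 0.0, sumsq = 0.0;
            for (int i = 0; i < n; i++) {
                sum   += d[i];
                sumsq += d[i] * d[i];
            }

            double mean = sum / n;
            double var  = (sumsq - n * mean * mean) / (n - 1); /* sample variance      */
            double sem  = sqrt(var / n);                       /* std. error of mean   */

            /* A future release can now be judged against a number, not a feeling. */
            printf("baseline: %.2f +/- %.2f defects/KLOC\n", mean, sem);
            return 0;
        }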
  • Popularity 16
    2011-4-11 16:55
    1555 reads | 0 comments
    Michael Linden alerted me to an article on the Dr. Dobb's website in which the author expounds on the hopes of a significant group of well-known people to define an epistemology of software engineering. Links are provided to other quite interesting articles that describe the need to develop a theory of software engineering.

    A number of prestigious signatories have created the Software Engineering Method and Theory (SEMAT) community, an attempt to figure out the fundamentals of our profession. To quote from the Dr. Dobb's article:

    "Software engineering is gravely hampered today by immature practices. Specific problems include:

    * The prevalence of fads more typical of fashion industry than of an engineering discipline.
    * The lack of a sound, widely accepted theoretical basis.
    * The huge number of methods and method variants, with differences little understood and artificially magnified.
    * The lack of credible experimental evaluation and validation.
    * The split between industry practice and academic research.

    "We support a process to re-found software engineering based on a solid theory, proven principles, and best practices that:

    * Includes a kernel of widely-agreed elements, extensible for specific uses.
    * Addresses both technology and people issues.
    * Is supported by industry, academia, researchers and users.
    * Supports extension in the face of changing requirements and technology."

    There has long been a debate about this field. Is it engineering? Art? One would think that if it were the former, there would be some principles grounded in the sciences. EEs rely on basic, provable physics like E = IR and Maxwell's equations. EEs can compute solutions and prove correctness. That's not generally true for software. SEMAT wants to change the rules and push real engineering into the software environment. I'm hugely supportive of that goal. And skeptical, as well.

    Perhaps there are some theoretical underpinnings to software engineering, some science from which we can derive the one correct way to write code. Today all we have are aphorisms and ideas, some grounded in anecdotal evidence, some cautions about approaches to avoid.

    Either there is no basic science we can draw upon, or we're akin to 15th-century alchemists, practicing our art while awaiting the Newtons, Boyles and Einsteins to lift the veil of ignorance. I hope it's the latter but fear it's the former.

    We do have an important body of knowledge about practices that often work and those that don't. But none of it is grounded in any sort of theory, and all of it seems as arbitrary as the rules behind quantum mechanics.

    The SEMAT folks are creating a "kernel" of knowledge from which their research will proceed. The kernel was formed by examining many different software methodologies and finding the common elements. That's like excavating fragments of bones to piece together a complete dinosaur skeleton, and it is quite worthwhile. I doubt, though, that the approach can lead to a discipline with the mathematical rigor that forms the core of most other engineering fields.
  • Popularity 14
    2011-4-2 11:30
    1389 reads | 0 comments
    Alert reader Bob Paddock sent a link to Tom DeMarco's article in IEEE Software: "Software Engineering: An Idea Whose Time Has Come and Gone". Though I disagree with most of what Mr. DeMarco says, it is an interesting and thought-provoking piece.

    In it, Mr. DeMarco questions whether software engineering is an idea whose time has come and gone. He states that, even though he wrote a book on software metrics back in 1982, he no longer believes that collecting metrics is important when building software.

    He further seems to say that the ideas behind software engineering just aren't very important. Some projects have succeeded despite a lack of traditional software engineering processes; he cites GoogleEarth and Wikipedia as examples. (Google is notoriously close-mouthed, so I wonder what Mr. DeMarco knows about their development methods.)

    GoogleEarth and Wikipedia are his poster children for software that is "transformational." Revolutionary. And so they are. But he divides the software world into two camps: programs that offer little value (those that cost, say, $1m to create and return $1.1m in value), and the transformational systems that reap rewards that would make even a Goldman Sachs trader's mouth water.

    The former requires massive control—software engineering—since the margins are so small. But he muses that perhaps the problem is that we bother building these systems when transformational ones are so much more lucrative.

    That argument is akin to wondering why anyone would hold a job when the lottery offers monster payouts. The vast number of workaday systems we build, those whose names never get verb-ized or otherwise noticed by the public, have substantial returns. Lottery-like? Nope. But they are the very fabric of the survival and viability of our companies.

    That SCADA system you produce generates the revenue that feeds an army of employees and their dependents, and it builds corporate equity, which translates into wealth for the stockholders.

    For every iPod or GoogleMaps smash hit there are thousands of products of lesser but important value. Most don't result in a couple of college kids becoming overnight billionaires, but they do offer the steady returns upon which our economy depends.

    Wasn't the pursuit of dot-com and mortgage jackpots the source of our two most recent recessions?

    What do you think? Are your products making the killer returns the press loves to cover?
  • Popularity 24
    2011-3-22 16:48
    2993 reads | 0 comments
    This is the continuation of my article, "How I Test Software". We have a lot to cover this time, so let's get right to it. I'll begin with some topics that don't fit neatly into the mainstream flow.

    Test whose software?

    As with my previous article, "How I Test Software", I need to make clear that the chosen pronoun in the title is the right one. This is not a tutorial on software testing methodologies. I'm describing the way I test my software.

    Does this mean that I only write software solo and for my own use? Not at all. I've worked on many large projects. But when I do, I try to carve out a piece that I can get my arms around and develop either alone or with one or two like-minded compadres.

    I've said before that I usually find myself working with folks that don't develop software the same way I do. I used to think I was all alone out here, but you readers disabused me of that notion. Even so, my chances of finding a whole project team who work the way I do are slim and none. Which is why I try to carve out a one-person-sized piece.

    What's that you say? Your project is too complicated to be divided up? Have you never heard of modular programming? Information hiding? Refactoring? Show me a project whose software can't be divided up, and I'll show you a project that's doomed to failure.

    The classical waterfall method has a box called "Code and Unit Test." If it makes you feel better, you can put my kind of testing into that pigeonhole. But it's more than that, really.

    There's a good reason why I want to test my software, my way: my standards are higher than "theirs." Remember my discussion about quality? To some folks, the term "quality" has come to mean "just good enough to keep from getting sued." I expect more than that from my software. I don't think it's enough that it works well enough for the company to get their fee. I think it should not only run, but give the right answer. I'm funny that way.

    It doesn't always endear me to my managers, who sometimes just want to ship anything, whether it works or not. Sometimes I want to keep testing when they wish I wouldn't. But regardless of the quality of the rest of the software, I try very hard to make mine bulletproof. Call it pride if you like or even hubris. I'm still going to test thoroughly.

    Oddly enough, the reason I spend so much time testing is, I'm lazy. I truly hate to debug; I hate to single-step; I hate to run hand checks. But there's something I hate even more, and that's having to come back again, later, and do it all over again. The reason I test so thoroughly is, I don't ever want to come back this way again.

    Desk checking

    All my old software engineering textbooks used to say that the most effective method to reduce software errors was by desk checking. Personally, I've never found it to be very effective.

    I assume you know the theory of desk checking. When a given programmer is working on a block of code, he may have been looking at the code so long, he doesn't see the obvious errors staring him in the face. Forest for the trees, and all that stuff. A fresh pair of eyes might see those errors that the programmer has become blind to.

    Maybe it's so, but I find that the folks that I'd trust to desk check my code, and do it diligently, are much too much in demand to have time to do it. And what good is it to lean on a lesser light? If I have to spend all day explaining the code to him, what good is that?

    But there's another, even more obvious reason to skip desk checking: it's become an anachronism.
    Desk checking may have been an effective strategy in one of those projects where computer time was so precious, no one was allowed to actually compile their code. But it makes no sense at all today. Face it: unless the checker is a savant who can solve complicated math equations in his head, he's not going to know whether the math is being done right or not. At best, he can only find syntax errors and typos.

    But today, there's a much more effective way to find those errors: let the compiler find them for you. When it does, you'll find yourself back in the editor with the cursor blinking at the location of the error. And it will do it really fast. With tools like that, who needs the fellow in the cubie next door?

    Single stepping

    I have to admit it: I'm a single-stepping junkie. When I'm testing my software, I tend to spend a lot of time in the source-level debugger. I will single-step every single line of code. I will check the result of every single computation. If I have to come back later and test again, I will, many times, do the single-stepping thing all over again.

    My pal Jim Adams says I'm silly to do that. If a certain statement did the arithmetic correctly the first time, he argues, it's not going to do it wrong later. He's willing to grant me the license to single-step one time through the code, but never thereafter. I suppose he's right, but I still like to do it anyway.

    It's sort of like compiling the null program. I know it's going to work (at least, it had better!), but I like to do it anyway, because it puts me in the frame of mind to expect success rather than failure. Not all issues associated with software development are cold, hard, rational facts. Some things revolve around one's mindset. When I single-step into the code, I find myself reassured that the software is still working, the CPU still remembers how to add two numbers, and the compiler hasn't suddenly started generating bad instructions.

    Hey, remember: I've worked with embedded systems for a long time. I've seen CPUs that didn't execute the instructions right. I've seen some that did for a time, but then stopped doing it right. I've seen EPROM-based programs whose bits of 1's and 0's seemed to include a few ½'s. I find that I work more effectively once I've verified that the universe didn't somehow become broken when I wasn't looking. If that takes a little extra time, so be it.

    Random-number testing

    This point is one dear to my heart. In the past, I've worked with folks who like to test their software by giving it random inputs. The idea is, you set the unit under test (UUT) up inside a giant loop and call it a few million times, with input numbers generated randomly (though perhaps limited to an acceptable range). If the software doesn't crash, they argue, it proves that it's correct.

    No, it doesn't. It only proves that if the software can run without crashing once, it can do it a million times. But did it get the right answer? How can you know? Unless you're going to try to tell me that your test driver also computes the expected output variables, and verifies them, you haven't proven anything. You've only wasted a lot of clock cycles.

    Anyhow, why the randomness? Do you imagine that the correctness of your module on a given call depends on what inputs it had on the other 999,999 calls? If you want to test the UUT with a range of inputs, you can do that. But why not just do them in sequence? As in 0.001, 0.002, 0.003, etc.?
    Does anyone seriously believe that shuffling the order of the inputs is going to make the test more rigorous?

    There's another reason it doesn't work. As most of us know, if a given software module is going to fail, it will fail at the boundaries. Some internal value is equal to zero, for example. But zero is the one value you will never get from a random-number generator (RNG). Most such generators work by doing integer arithmetic that causes an overflow and then capturing the lower-order bits. For example, the power residue method works by multiplying the last integer by some magic number and taking the lower-order bits of the result. In such an RNG, you will never see an integer value of zero, because if the value ever became zero, it would stay zero for the rest of the run. Careful design of the RNG can eliminate this problem, but don't count on it. Try it yourself: run the RNG in your favorite compiler and see how often you get a floating-point value of 0.000000000000e000.

    End result? The one set of values that are most likely to cause trouble — that is, the values on the boundaries — is the one set you can never have. If you're determined to test software over a range of inputs, there's a much better way to do that, which I'll show you in a moment.
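    As a minimal sketch of the zero-absorption point (in C, and not the "better way" promised above), the fragment below drives a Lehmer-style power-residue generator a million times and never sees zero from a nonzero seed, then runs a plain sequential sweep that exercises the boundary value directly. The unit under test is a made-up stand-in, not anything from the article.

        #include <math.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Lehmer-style power-residue generator: x_next = (a * x) mod m.
         * Zero is an absorbing state: once x is 0, every later value is 0,
         * which is exactly why a nonzero seed never reaches it. */
        static uint32_t lehmer_next(uint32_t x)
        {
            const uint64_t a = 48271u;        /* a common multiplier        */
            const uint64_t m = 2147483647u;   /* 2^31 - 1, a prime modulus  */
            return (uint32_t)((a * (uint64_t)x) % m);
        }

        /* Hypothetical unit under test: a guarded reciprocal square root. */
        static double uut(double x)
        {
            return (x > 0.0) ? 1.0 / sqrt(x) : 0.0;
        }

        int main(void)
        {
            /* Random driver: a million calls, and not one of them at x == 0. */
            uint32_t r = 1;
            int zeros = 0;
            for (long i = 0; i < 1000000L; i++) {
                r = lehmer_next(r);
                if (r == 0) zeros++;
                (void)uut((double)r / 2147483647.0);
            }
            printf("zeros produced by the RNG: %d\n", zeros);   /* prints 0 */

            /* Sequential sweep: the boundary value 0.0 is tested on purpose. */
            for (double x = 0.0; x <= 0.01; x += 0.001)
                printf("uut(%5.3f) = %g\n", x, uut(x));

            return 0;
        }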
    Validation?

    All my career, I've had people tell me their software was "validated." On rare occasions it actually is, but usually not. In my experience, the claim is often used to intimidate. It's like saying, "Who do you think you are, to question my software, when the entire U.S. Army has already signed off on it?" Usually, the real situation is that the fellow showed his Army customer one of his outputs, and the customer said, "Yep, it looks good to me." I've worked with many "validated" systems like that and found hundreds of egregious errors in them.

    Even so, software absolutely can be validated, and should be. But trust me, it's not easy. A lot of my projects involve dynamic simulations. I'm currently developing a simulator for a trip to the Moon. How do I validate this program? I sure can't actually launch a spacecraft and verify that it goes where the program said it would. There's only one obvious way to validate such software, and that's to compare its results with a different simulation, already known to be validated itself.

    Now, when I say compare the results, I mean really compare them. They should agree exactly, to within the limits of floating-point arithmetic. If they don't agree, we have to go digging to find out why not. In the case of simulations, the reasons can be subtle. Like slightly different values for the mass of the Earth. Or its J2 and J4 gravity coefficients. Or the mass of its atmosphere. Or the mass ratio of the Earth/Moon system. You can't even begin to run your validation tests until you've made sure all those values are identical. But if your golden standard is someone else's proprietary simulation, good luck getting them to tell you the constants they use.

    Even if all the constants are exactly equal and all the initial conditions are identical, it's still not always easy to tell that the simulations agree. It's not enough to just look at the end conditions. Two simulations only agree if their computed trajectories have the same state values, all the way from initial to final time. Now, a dynamic simulation will typically output its state at selected time steps — but selected by the numerical integrator way down inside, not by the user. If the two simulations are putting out their data at different time hacks, how can we be sure they're agreeing? Of course, anything is possible if you work at it hard enough, and this kind of validation can be done. But, as I said, it's not easy, and not for the faint-hearted.

    Fortunately, there's an easier and more elegant way. The trick is to separate the numerical integration scheme from the rest of the simulation. You can test the numerical integrator with known sets of differential equations that have known solutions. If it gets those same solutions, within the limits of floating-point arithmetic, it must be working properly.

    Now look at the model being simulated. Instead of comparing it to someone else's model, compare its physics with the real world. Make sure that, given a certain state of the system, you can calculate the derivatives of the states and verify that they're what the physics says they should be. Now you're done, because if you've already verified the integrator, you can be sure that it will integrate the equations of motion properly and get you to the right end point.

    Is this true validation? I honestly don't know, but it works for me.
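    Here is a minimal sketch of the "verify the integrator separately" step, an illustration only and not the author's simulator: a classical fourth-order Runge-Kutta step is driven across dy/dt = -y, whose exact solution y(t) = e^(-t) is known, and the numeric result is checked against the analytic one. The step size and tolerance are illustrative choices.

        #include <math.h>
        #include <stdio.h>

        /* Right-hand side of the test ODE: dy/dt = f(t, y) = -y,
         * chosen because the exact solution y(t) = exp(-t) is known. */
        static double f(double t, double y)
        {
            (void)t;
            return -y;
        }

        /* One classical fourth-order Runge-Kutta step of size h. */
        static double rk4_step(double t, double y, double h)
        {
            double k1 = f(t,           y);
            double k2 = f(t + h / 2.0, y + h * k1 / 2.0);
            double k3 = f(t + h / 2.0, y + h * k2 / 2.0);
            double k4 = f(t + h,       y + h * k3);
            return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
        }

        int main(void)
        {
            const double h = 0.01;
            double t = 0.0, y = 1.0;           /* initial condition y(0) = 1 */

            while (t < 5.0) {                  /* integrate out to about t = 5 */
                y = rk4_step(t, y, h);
                t += h;
            }

            double exact = exp(-t);            /* analytic answer at the same t */
            double err   = fabs(y - exact);
            printf("numeric %.12f  exact %.12f  |error| %.3e\n", y, exact, err);

            /* RK4 at this step size should agree to better than 1e-9 here;
             * a larger error points at the integrator, not at the model. */
            return (err < 1e-9) ? 0 : 1;
        }

    Only after the integrator passes a check like this would the real equations of motion be swapped in, which is the separation the article describes.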
  • Popularity 23
    2011-3-18 11:34
    2197 reads | 1 comment
    Do you get the impression from my previous column that I'm not a big fan of mediocrity? You're right.

    I'm old enough to remember the days when there were no such things as franchise motels. Your average "motel" was more likely to be a collection of little cottages, much like an RV park of today. You paid your money, you got to sleep in one of the cottages, and the term "quality" was interpreted as: no roaches crawling across your nose. Maid service? Surely you jest.

    Then along came the Interstate Highways. And with the highways, this wonderful franchise known as the Holiday Inn. It was a great relief to us travelers, and my family and I used them with gratitude. Over time, though, other competing motels came along, and as our standards for excellence got more stringent, the Holiday Inns didn't seem as appealing as before.

    Other travelers still love the Holiday Inns, though, and I think I've figured out why. Holiday Inn doesn't pretend to offer luxury. What they offer is a uniform level of mediocrity. It's not luxurious, but it's not bad, either. Staying there, you won't feel coddled in the lap of luxury, but you won't find that proverbial roach on your nose, either.

    My son used to play youth hockey, and he was very good at it. Each year, the sponsoring YMCA held its annual banquet. Each year, the coaches would nominate outstanding players to be selected for the All-Star team. Each selected kid got a huge trophy and a nice jacket with a big 'Y' on it. It was a matter of considerable pride.

    One year, the 'Y' announced that there would be no more all-star awards. Some of the parents, it seems, were complaining. Their son, little Johnny, hadn't been selected, and it bwoke his widdle heart. So rather than disappoint little Johnny, the Y decided to kill the All-Star program completely, thereby disappointing EVERYBODY.

    In our city, we told the Y to get stuffed, formed our own hockey association, and continued right on with the All-Star program. Even so, the pressure from disappointed parents has continued unabated, and my son – now a coach – is continually frustrated by the attitude that excellence doesn't matter. Hey, the parents argue, it's just a game. And who cares if we win or lose?

    It's this attitude that has all kids getting "participation trophies," and school kids getting grades like "acceptable" rather than numeric scores. Or, better yet, no scores at all. If your doctor graduated from Harvard Medical School, do you feel better knowing that he earned a grade of "pass"? Hey, at least he wasn't stressed, and had better group cohesion. Isn't that special?

    I call this trend "The Glorification of Mediocrity." Moreover, I detest it, as much when it's related to software as when it's related to hockey or Harvard.

    The SEI and CMM

    By far the most successful efforts to deal with the Software Crisis came from Carnegie Mellon's Software Engineering Institute (SEI). Over time, SEI developed a method for evaluating the maturity of an organization's software process. They further developed a classification system, which evolved into the SEI Capability Maturity Model (CMM). The model defines five levels of software maturity, ranging from Level 1 (Initial – you get this for being alive) to Level 5 (Optimizing). See more about all of this at http://en.wikipedia.org/wiki/Capability_Maturity_Model

    At this point, DoD and most other organizations require all contractors to have CMM Level 3 or better. You get an SEI rating by scheduling an evaluation.
    An SEI evaluation team will come in, ask questions, review the organization's established processes, and interview people from top management down to the individual developers. Based on their findings, you get an SEI rating.

    Quality revisited

    Several years ago, my company was going through the "quality circle" fad. They got so into the idea that they decided to vie for an annual award for corporate emphasis on quality. The management presented a questionnaire to all the project leaders, which included the following question:

    "Do you ever feel that the quality of your product has been compromised by production schedules or budget limitations?"

    A colleague answered, "Of course. It happens every day. Every day, I have to make decisions based on time and budget constraints, and the decisions often impact quality."

    There was a long silence, following which one of the Vice Presidents patiently explained, "See, John, the problem is that you don't understand the Ajax definition of quality."

    He went on to explain that, in the Ajax Corporation, quality should extend only to that required by the contract. In other words, the product was a quality product if it passed the customer's acceptance tests, without getting us disqualified as nonresponsive. Anything more than that, said the VP, was a waste of company profits.

    I need a hero

    I recently worked for a company called Spectrum Astro, Inc. We made satellites, including the software that went into their flight computers. I couldn't help but notice that previous software products seemed to have an exceptional record for quality, with no schedule slips or cost overruns.

    I asked my boss, "In the history of the company, has there ever been a case of our software failing in orbit?"

    He replied, "Not one."

    How do you produce software like that? I can tell you, it's not by giving pass/fail grades. I asked another colleague how they had done it. He said, "We had heroes."

    He went on to explain that, while the flight software group had many people with the usual distribution of talents, they also had a handful of heroes: people who combined incredibly good technical skills with hard work, and a willingness to work ridiculous schedules and pull miracles out of the hat, to get the job done on schedule. The teams were small enough that one hero could make a big difference, in both group attitudes and delivered results.

    SEI vs Heroes

    Shortly after I arrived at Spectrum Astro, we sought to establish a Software Engineering Institute (SEI) rating of Level 3. We got it. As part of the process, folks came in to give us tutorials on how to earn, keep, and improve our rating, and how to follow the CMM guidelines.

    The course was great, but I was dismayed to hear the SEI view of heroes. They didn't like them. At all. The presenter, in fact, had some rather nasty things to say about heroes. This surprised me.

    In retrospect, I think I understand what they mean by the term "hero." It's not the same thing I mean, or what my colleague meant. SEI apparently uses the term synonymously with "wizard" or "hacker." If that's the case, we agree. I, too, am quick to show wizards to the door. Fortunately, wizards are easy to deal with. They have inordinately high evaluations of themselves, so they like the title of wizard. You ask the prospective hire, "Are you a wizard?" If he says yes, don't hire him.

    But the folks who worked long hours to bring in all those projects on time, on budget, and error free were the antitheses of wizards.
    They were highly accomplished, professional, and experienced engineers who knew what had to be done to meet their commitments, and did it. They were, in fact, master craftsmen.

    I'm concerned that SEI's attitude toward heroes is just another example of the Glorification of Mediocrity. Is software quality not something to be desired? Are we willing to settle for mediocrity, just to get the right evaluation or get a system to completion on schedule? Maybe so, but I still find it depressing.

    I think you don't get superior quality from mediocre people following mediocre processes. You're going to need creative, inventive, and capable people, willing to go that extra distance to achieve excellence. You're going to need artisans. You're going to need heroes.