Tag: bugs

Related blog posts
  • Popularity 22
    2015-6-25 21:57
    1568 reads
    0 comments
Personally, I think that for most products we should automate as much testing as possible. The old adage still holds: if you have to do something once, do it; if you have to do it twice, automate it. But some things are tough to delegate to a machine. Watching and interacting with a UI is an example.

Still, some people are doing very clever things. Some use LabVIEW with its vision module: they aim a camera at a control panel or even a screen and use LabVIEW to dissect the visual field, returning elements as text items. It's sometimes possible to close the testing loop this way.

Bruno didn't mention holding tests to objective standards. We have no idea how to figure out how many test cases are needed, but we can compute the minimum. Cyclomatic complexity gives us a hard number: if you don't have at least that many tests, then, for sure, the system is not being completely tested.

Testing is a hugely important activity. But it suffers from three flaws:

- Outside of agile shops, testing takes place at the end of the project. The project is late, of course, so what gets cut?
- It doesn't work. Plenty of studies have shown that, absent code coverage metrics, the average test regime only exercises half the code. Exception handlers and special cases are notoriously difficult to test. Deeply nested ifs lead to mind-boggling numbers of testing permutations.
- Test is the wrong way to find bugs.

To elaborate on the last point, we have to first "man up" and admit that we are flawed firmware developers who will produce code with all sorts of defects. That's just the nature of this activity. Given that we know we will inject bugs into the code (and maybe a lot of them), and that testing is flawed, wise developers will employ a wide range of strategies to beat back the bugs.

I like to think of generating high-quality software in terms of a series of filters. No one filter will have 100% efficacy; each will screen out a different percentage of problems. Used together, the net effect is to remove essentially all of the bad gunk while letting only the purest and most pristine code out the door.

This is not a new idea. Capers Jones lists over 100 filtering steps (and analyzes the effectiveness of each) in The Economics of Software Quality. He makes it clear that no team needs to use all of them, of course; some, for instance, deal with databases, others with web page design.

The compiler is a filter we all use. It won't generate an object file when there are bugs that manifest as syntax errors. Unfortunately, it will happily pass a binary to the linker even if there are hundreds of warning messages. It's up to us to be disciplined enough to have a near-zero tolerance for warnings.

(I remember using the RALPH Fortran compiler in 1971 on a Univac 1108. If there were more than 50 errors or warnings in your card deck, it would abort and print a picture of Alfred E. Neuman with the caption "This man never worries, but from the look of your program, you should.")

Do you use Lint? You should. Though annoying to set up, Lint will catch big classes of errors. It's another filter stage.

Static analysis is becoming more important. The tools cost too much, but they're cheaper than engineering labor. Another filter.

Then there's the most effective filter ever devised for eliminating bugs. Study after study shows it to be the cheapest and most powerful way to generate decent code: reviews. Decent code reviews should eliminate 70% of the defects; the best teams eliminate over 90% with reviews.
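To make the cyclomatic-complexity floor concrete before moving on to test itself, here is a minimal C sketch; the function, names, and thresholds are invented for illustration. It has three decision points, so its cyclomatic complexity is 4, and a test suite with fewer than four cases is guaranteed to leave at least one path unexercised.

    /* Hypothetical reading classifier, invented to illustrate the point.
     * Decision points: raw < 0, raw > limit, limit > 0  ->  complexity = 3 + 1 = 4.
     * Four is therefore the floor on the number of test cases needed to
     * exercise every independent path; fewer tests cannot cover them all. */
    int classify_reading(int raw, int limit)
    {
        if (raw < 0)                     /* decision 1 */
            return -1;                   /* sensor fault */

        if (raw > limit && limit > 0)    /* decisions 2 and 3 (short-circuit) */
            return 1;                    /* over range */

        return 0;                        /* normal */
    }

The four paths here are easy to enumerate by hand; in real firmware, that same arithmetic is what makes deeply nested ifs so expensive to test.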
Test, of course, is yet another filter. Or rather, another series of filters: unit test, integration test, black-box test, regression test, each serves to clean up the code.

But these filters won't work well unless you monitor their effectiveness. What percentage of the bugs sneak through each stage? If test is finding 90% of them, then there's something seriously wrong with the earlier filters, and corrective action must be taken.

Engineering is, after all, about the numbers.

What filters do you use?
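Monitoring filter effectiveness needs nothing more than a defect log per stage. The following is a minimal sketch, with invented stage names and counts, of turning those raw tallies into the percentages the paragraph above asks for; if the last stage (field reports) owns a large share, the upstream filters need attention.

    #include <stdio.h>

    /* Invented defect tallies per quality "filter"; replace with your own log. */
    struct filter_stage {
        const char *name;
        int defects_found;
    };

    int main(void)
    {
        struct filter_stage stages[] = {
            { "code review",       46 },
            { "static analysis",   12 },
            { "unit test",         18 },
            { "integration test",   9 },
            { "field reports",      5 },   /* defects that escaped every filter */
        };
        int total = 0;

        for (size_t i = 0; i < sizeof stages / sizeof stages[0]; i++)
            total += stages[i].defects_found;

        for (size_t i = 0; i < sizeof stages / sizeof stages[0]; i++)
            printf("%-18s %3d defects (%2.0f%% of known total)\n",
                   stages[i].name, stages[i].defects_found,
                   100.0 * stages[i].defects_found / total);
        return 0;
    }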
  • Popularity 21
    2015-3-21 11:23
    1775 reads
    0 comments
There are few things more dispiriting to an engineer than pouring their blood, sweat, and tears into a project only to have it fail. Failure can and does provide insights and growth experiences to those involved, but the loss of time and effort can strike a devastating blow. There are many reasons that an embedded systems project can fail, but there are seven key indicators that a project is dying a slow and silent death.

#7 – Team turnover
Every company experiences employee or contractor turnover, but excessive turnover of key personnel can be a leading indicator that a project is doomed to failure. There are many reasons why turnover can have a detrimental effect on the project. First, it has a psychological effect on other team members that can decrease productivity. Second, the loss of key personnel can result in historical and critical information being lost forever, which will slow down the development. Finally, replacing a team member requires training and bringing the new hire up to speed. This can be a distraction that takes others away from development work, with the end result being increased development costs and a longer delivery timeframe.

#6 – Go stop go syndrome
There is an old saying that children are taught: "Don't cry wolf." The saying is a warning not to raise false alarms. This warning is ignored in projects that have a "GO! STOP! GO!" cycle. A manager, client, or some other entity pushes the team hard, claiming that the project has to get out the door by a certain date. Developers work weekends and put in extra effort. Then, just as quickly as the big push came, the project is stopped dead in its tracks. Months later it is once again an emergency. "Hurry, we have to ship by X!" And the same thing happens again.

The repeated cycle of urgency, stoppage, and renewed urgency has a psychological effect on the development team. The developers come to no longer believe that there is any urgency. In fact, they start to get the mindset that this project isn't a serious project and that it will very shortly be stopped again, so why put in any effort at all?

Watch out for the project that cries wolf!

#5 – A perfectionist attitude
One of my favorite phrases concerning engineers is "There comes a time in every project when you must shoot the engineers and start production." Many engineers have a perfectionist attitude. The problem with this attitude is that it is impossible to build the perfect system, write the perfect code, or launch the product at the perfect time. Perfection is always elusive, and if perfectionism is part of the culture, it is a sign that a product may be re-engineered out of business.

The right mindset isn't perfectionism but success. What are the minimum criteria for successfully launching the product? Set those criteria and launch the product once they are achieved. A boot-loader can later be used to add features and resolve any minor bugs.

#4 – Accelerated timetable
It seems counter-intuitive, but to develop an embedded system rapidly a team actually needs to go slow. Working on an accelerated timetable results in decreased performance due to stress and, more importantly, a higher likelihood that mistakes will be made. Mistakes directly increase the number of bugs, which in turn increases test and rework time.

Another issue is that when developers are rushing to meet an accelerated timetable, they cut corners. Code doesn't get commented. Design documents such as architecture diagrams and flowcharts aren't created. Instead, design occurs on the fly in the programmer's mind. Going slower and doing things right will get to the end solution faster.

#3 – Poorly architected software
Embedded software is the lifeblood of the embedded system; without it nothing works. Poorly architected software is a sure sign of failure. The architecture of an embedded system needs to have flexibility for growth. It needs to have hooks for testing, debugging, and logging (a minimal sketch of such hooks appears at the end of this post). A poorly architected system will lead to a poor implementation, and that will result in buggy, unmanageable software that is doomed to spend its days being debugged until the project finally dies.

#2 – Putting the cart before the horse
Developing a new product is an exciting endeavor. There is a lot to do, and companies are usually in a hurry to get from concept to production. This hurry can be extremely dangerous, especially when production decisions start to get ahead of themselves.

A great example is when the product's mechanical design or look and feel is used to drive its electrical requirements. Before a working electrical and software prototype is ever proven, production tooling gets kicked off. In these cases it always seems that the circuit board doesn't check out, adjustments need to be made, and, oops, the production plastic tooling no longer fits the circuit board. The horse of the system needs to be pulling the cart. Projects that rush and try to pull things in parallel too quickly usually end up taking longer and costing more due to revisions.

#1 – Scope creep
Every project has scope creep, but the degree of the scope creep can be the determining factor in whether the project will succeed or fail. One of the most dangerous aspects of scope creep is that it can be insidious. The addition of a simple sensor to the board one day, a few other additions a few months later; these seem completely harmless. But they can be deadly.

The biggest problem with scope creep is that the changes are usually subtle. At first glance a change looks like it's just a few days' work. But with each addition the system's complexity rises. Complex systems require more testing and possibly more debugging. Scope creep can thus change the system to such a degree over time that the original software architecture and design become obsolete or even the wrong solution! The end result is a project that is far over its budget and behind its delivery date, with little to no end in sight.

Conclusion
There are no guarantees of success when developing a new embedded system, and there are many factors that contribute to its success or failure. These are what I have identified as the top seven silent project killers: subtle clues that can indicate your project is on a slow and silent death trajectory.

What other indicators do you think might be signs that a project may never reach completion? Do any projects you are working on right now exhibit more than one of the signs listed here?

Jacob Beningo is a Certified Software Development Professional (CSDP) whose expertise is in embedded software. He works with companies to decrease costs and time to market while maintaining a quality and robust product. He is an avid tweeter, a tip and trick guru, a homebrew connoisseur and a fan of pineapple!
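On the #3 point about hooks for testing, debugging, and logging: here is one minimal way such a hook can look in C. The names are invented and the sketch is only illustrative; the idea is that application code logs through an interface it does not own, so production, bench, and unit-test builds can each plug in a different back end.

    #include <stdarg.h>
    #include <stdio.h>

    typedef void (*log_sink_t)(const char *msg);

    static log_sink_t current_sink;            /* NULL means logging is disabled */

    void log_set_sink(log_sink_t sink)         /* swapped at init, or by a test harness */
    {
        current_sink = sink;
    }

    void log_write(const char *fmt, ...)
    {
        char buf[128];
        va_list ap;

        if (current_sink == NULL)
            return;
        va_start(ap, fmt);
        vsnprintf(buf, sizeof buf, fmt, ap);
        va_end(ap);
        current_sink(buf);
    }

    /* Example sink for a desktop test build; an embedded build might route
     * the message to a UART or a RAM ring buffer instead. */
    static void stdout_sink(const char *msg) { puts(msg); }

    int main(void)
    {
        log_set_sink(stdout_sink);
        log_write("sensor %d out of range: %d", 3, 1042);
        return 0;
    }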
  • Popularity 27
    2014-12-10 19:17
    2130 reads
    0 comments
I recently had an experience with two awesomely beautiful designs. To my disappointment, I also encountered some bugs, which caused me to wish that formal verification tools had been used as part of the development process.

The Apple of my eye
Last month, I persuaded my family to abandon their Windows machines because of too many bugs, more than I care to list from years of use. We purchased the newly released 27-inch iMac with a 5K Retina display. This is a beautiful computer with a strikingly sharp image powered by 14.7 million pixels, and we were all excited to get our hands on the system.

However, it didn't take long for me to experience the first set of bugs. I needed to use Migration Assistant to transfer files from my MacBook Air laptop to the new iMac. This should have been a simple task, but it proved to be much more difficult than it should have been. Here's what transpired:

- I connected my MacBook Air to the iMac using a Thunderbolt cable and followed all the steps given by Migration Assistant. The iMac didn't see the fast Thunderbolt connection. It was using WiFi to transfer data, and it projected 76 hours to do so. After struggling for a while and finding no answer on the iMac itself, I resorted to web forums and found the trick: I had to boot my MacBook Air as an external hard disk for the iMac to see the Thunderbolt connection. If this is required, why doesn't Apple provide instructions?
- At first, the migration process seemed to go smoothly. Eventually, the system reported that it would take less than a minute to finish. I waited and waited and waited. Nothing changed after 30 minutes, and it seemed to be stuck. A new web search revealed many angry users who had experienced the same problem as early as the beginning of the year. Many lost all their migration work after waiting for hours. Talk about anger and frustration. Apparently, this is caused by a few incompatible files, and it raises a simple question: why doesn't Apple provide that information up front instead of waiting until the end of the process?
- Just as I was about to kill the process, the computer informed me about the incompatible files. The migration process gave every indication of having completed, but not quite. A key motivation for buying the iMac was to move all my photos and videos to this beautiful machine. When I checked the results, however, those files had not migrated. I'm not sure why. I ended up doing a manual copy of files between the two machines. It was past midnight by then, and I had been at this for more than four hours. Frustrated and tired, I almost wanted to give up on the new machine and return it to Apple.

Apple is well known for its ease of use, yet it couldn't get this right. Data migration is one of the first things a user might be expected to do after buying a new machine, so why was my experience so difficult and frustrating? What makes this even more frustrating is the fact that users reported all these issues months ago, yet the bugs are still there in the recent release of the OS X Yosemite operating system.

Come fly with me
I traveled to Shanghai on Air Canada through Vancouver in early November. Much to my pleasant surprise, Air Canada has started using Boeing 787 Dreamliners on this route. The aircraft is brand new and beautiful. The interior is roomy and comfortable. At first glance, everything appeared to be wonderful.

Unfortunately, it didn't take long before some bugs started to appear.

- As we were about to take off, our captain reported that there was a mechanical issue with the aircraft, and we needed to wait for the repair crew to fix it. No detailed explanation was given about the issue. Luckily, the airplane managed to depart 25 minutes later. Presumably, the issue was fixed.
- The Boeing 787 has many new and improved features, including an electrochromic dimmable window system intended as an improvement on traditional window shades. Passengers can control how much light passes through the window by changing the tint of the glass. This seems to be a nice feature, though I soon discovered that its designers hadn't thought through all the possibilities. The darkest window setting still allows plenty of light to come through, especially if the sun itself is right outside the window. Throughout the flight from Vancouver to Shanghai we were chasing the sun, resulting in a big, bright glare through the window. When we complained about it, the flight attendant taped a piece of paper over the window. It is ironic that such a new, "improved" feature turned out to be so flawed. I wonder how long it will take for Air Canada to put the original window shades back. They may not be fancy, but they work.
- Halfway through the flight, the in-flight entertainment system began to have issues. After several attempts to reset it, the captain reported that the problem couldn't be solved.

It makes me wonder
Why do great designs like those mentioned here still have bugs? There could be any number of reasons, but I think one problem is that the engineers who design and test systems cannot think of all possible test scenarios. The tools used for testing depend on humans to come up with test scenarios, and the limitations of the human brain mean the results of the testing will not be complete.

This makes me appreciate the beauty of formal verification. It removes the requirement for designers and verification engineers to come up with all the possible scenarios that need to be tested. By automatically enumerating the entire search space, corner cases that humans may not consider can still be tested. If formal technology could be used for system or software testing, maybe there would be no bugs left in our world.

There is still a long way to go to reach this goal, but semiconductor companies large and small are embracing formal technology for chip design and verification at record speed. Maybe, just maybe, we can dream about a day in the not-so-distant future when systems don't just look good, but also work beautifully without any glitches.

Jin Zhang is senior director of marketing at Oski Technology.
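Formal verification proper requires dedicated tools, but the "enumerate the entire search space" idea can be illustrated with nothing more than a loop. The following C sketch (the function and its bug are invented) exhaustively checks an 8-bit saturating add over all 65,536 input pairs instead of relying on a handful of hand-picked tests; the corner case has nowhere to hide.

    #include <stdint.h>
    #include <stdio.h>

    /* Deliberately buggy saturating add, invented for illustration:
     * it clamps to 254 instead of 255. */
    static uint8_t sat_add(uint8_t a, uint8_t b)
    {
        uint16_t sum = (uint16_t)a + b;
        return (sum > 255) ? 254 : (uint8_t)sum;
    }

    int main(void)
    {
        /* Exhaustive simulation over the whole input space. Not a formal
         * proof, but for a state space this small it has the same effect. */
        for (unsigned a = 0; a <= 255; a++) {
            for (unsigned b = 0; b <= 255; b++) {
                unsigned expected = (a + b > 255) ? 255 : a + b;
                if (sat_add((uint8_t)a, (uint8_t)b) != expected) {
                    printf("counterexample: a=%u b=%u\n", a, b);
                    return 1;
                }
            }
        }
        printf("property holds for all 65536 inputs\n");
        return 0;
    }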
  • Popularity 21
    2011-11-10 14:20
    1889 reads
    0 comments
    "The definition of insanity is doing the same thing over and over, and expecting different results." I have seen three unrelated references to this oh-so-clever quote these past few weeks. One time this was cited in relation to some political situation, once in the context of the economy, and the third time by a non-technical person about some new, fairly technical product's features. My frustration with this maxim is simply this: I don't think it's true. In fact, I believe it is often not the case. Many engineering advances have been made by trying the same actions repeatedly, until things finally "click". Often, you may be doing the same thing over and over, but some unknown underlying parameter is different or is affecting your results (noise, contamination, orientation), and does not become apparent—and therefore understood—until more detailed investigations are completed. Beyond this sort of innovation, much of the debug process consists of running the same over and over, while looking for different results in bit error rate, noise performance, subtle and intermittent bugs, transients, and other nasty problems. These can often only be seen or trapped by repeated, identical test runs which allow the rare, outlier problems to become visible. These problems are often at the extremes of the Gaussian curve, so to speak. There's also quantum physics to consider. Since the actions of the atomic and sub-atomic particles are guided by wave fun and probabilities, a single-shot test is not at all definitive. You have to run through millions of identical collisions to get a highly unlikely event to occur (hello, neutrino!). So with apologies (or maybe not) to Dr. Einstein, I say : sorry, but use of repetition followed by different expectations are not signs of insanity, but instead may be signs of diligent investigation in science and engineering. Have you ever had this experience—or frustration—in your work?  
  • Popularity 22
    2011-3-17 17:40
    1847 reads
    0 comments
In December 2009, my fellow columnist Michael Barr published the first column in his Barr Code series, entitled "Bug-killing standards for firmware coding." In it, he recommended ten "coding rules you can follow to reduce or eliminate certain types of firmware bugs." Needless to say, the column elicited a torrent of comments. Some of these comments actually considered the merits of his recommendations. However, many comments wandered off onto the subject of brace placement, an issue Michael didn't address explicitly in his article. (He did recommend using braces even when they are optional; however, he didn't recommend a preferred style for placing them.)

The flow of the discussion was quite typical of what I've seen or heard in other such discussions. In this case, the opening salvo came when one commenter asserted that using the "Allman" brace placement style caused bugs that using the "One True" brace placement style cured. Another commenter replied that, despite the previous claim, the Allman style is easier to read. And so it went.

Let me first say that I enjoy talking about programming style, not just about brace placement, but about style in general. After more than 30 years in the computing business, I've heard many of the arguments already, but I still hear new ideas often enough to keep me coming back for more.

On the other hand, I'm rather dismayed that nearly all programmers who participate in such discussions apparently do so with the tacit assumption that there's no objective basis, no science, that we can use to discern a preferred style. The discussions traffic almost entirely in anecdotes, personal observations, and largely unsubstantiated assertions. I can't recall the last time I heard or read anyone suggest that we ought to measure how style choices affect code quality, let alone how to conduct an appropriate experiment.

For example, advocates of the One True style claim that it's better than the other styles at revealing extraneous semicolons after conditionals. That is, it's easier to spot and avoid the spurious semicolon when the opening brace shares a line with the conditional:

    while (condition); {
        statement(s)
    }

than when the brace sits on its own line, as in the Allman style:

    while (condition);
    {
        statement(s)
    }

This seems plausible, but I'd like to know how often this problem actually arises. The answer I'm looking for isn't "Lots." It's a number indicating defects per unit of code. I'd like to see a number for each brace placement style so I can assess how effectively each style reveals the error.

Moreover, as another commenter indicated, using static analyzers might be a more effective strategy for detecting this error, so much so that it renders moot any advantage that one style has over another in this regard. A few stats might tell us if this is really so.

Stray semicolons are probably not the only coding gaffes whose presence or absence in code might suggest an advantage of one brace placement style over another. Wouldn't it be nice to see a fairly comprehensive catalog of such errors along with statistics indicating the relative effectiveness of each style at revealing each error?

Lest the One True style proponents think I'm picking only on them, I'll note that proponents of the Allman style claim that it's easier to read and is better at exposing mismatched braces. To them I ask, "By what measure?"

How do we find this evidence? Before we start thinking about how to produce it, we should begin by asking, "Does it already exist?" I've searched the web for studies of the relative effectiveness of brace placement styles, but found nothing.
I've found articles on other aspects of coding style, such as indenting depth (two to four spaces seems to be best at improving comprehension), but nothing on brace (or begin-end pair) placement. But, just because I haven't found any research, that doesn't mean it doesn't exist. So I have this request:

If you know of quantitative research results on the relative effectiveness of brace placement styles at improving code quality, please write to me and tell me where to find them.

(Brace placement is certainly not the only style controversy that could benefit from more rigorous analysis, but I'd like to limit the scope of this social experiment to just brace placement for now.)

I'm skeptical that such findings exist, but I'd be happy to be wrong on this. If they exist, why do so few people cite them? If they don't, why doesn't anyone seem to care, or even notice?
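As a footnote on the defect itself, independent of which style reveals it better: here is a minimal, compilable C sketch of what the spurious semicolon shown earlier actually does. The semicolon becomes the (empty) loop body, and the braced block that follows is an ordinary compound statement executed exactly once, whatever the condition says; many lint tools and compiler warnings can flag the empty body, which is the static-analyzer argument mentioned above.

    #include <stdio.h>

    int main(void)
    {
        int ticks = 0;

        /* Intended: skip the block entirely, since the condition is false.
         * Actual: the stray semicolon is the loop body, the loop exits at
         * once, and the block below runs exactly once regardless. (Had the
         * condition been true, the empty loop would have spun forever.) */
        while (ticks > 0);
        {
            ticks++;
        }

        printf("ticks = %d\n", ticks);   /* prints 1, not 0 */
        return 0;
    }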