Tag: code collaborator

Related blog posts
  • Popularity: 11
    2011-6-16 13:33
    1523 reads
    0 comments
    I get a lot of links to videos every day. Regardless of the video sites they are on, I do pretty much the same thing: click, delete. While I'm sure they are funny or interesting, it just takes too much time to sit through that five-minute (times dozens) gem.

    But there are a few exceptions, and one that caught my attention is a short presentation from the folks at SmartBear Software. These are the folks behind CodeCollaborator, a quite wonderful software package that eases the painful process of code reviews. This presentation shows how the latest version can also be used to perform reviews of non-textual files, like schematics.

    Back in the days of the steam engine, when I was a young engineer, PCBs were laid out by hand, using black tape on Mylar to position the PCB tracks. The cost to produce a board was astronomical, so every schematic was subject to a peer review designed to find many of the most common sorts of problems. Things are roughly similar today, since stratospheric IC fab costs mandate both reviews and expensive simulations. But way back then, as now, there were no real tools around to facilitate the review.

    Till now.

    CodeCollaborator is now, in my view, a must-have tool for hardware teams. I've recommended it in the past for firmware developers, but its new ability to help folks review and annotate any document is profoundly important and takes the tool far beyond its previous niche in the software group.

    If you've worked on a Word document with multiple reviewers using the track-changes feature, you know how efficient it is to use the annotations to edit and correct a file, and to ensure that the changes are both accurate and appropriate. CodeCollaborator has long offered such capability for text files, like source code. But the ability to do the same for schematics and other files raises the notion of collaborative review to a new level.

    Check out the video; it's pretty impressive.
  • Popularity: 8
    2011-4-27 18:39
    1833 reads
    0 comments
    Regular readers know I'm a big fan of code inspections. Overwhelming evidence – and common sense – has shown that two heads are better than one at finding problems, and adding another team member or two yields even better code.

    But inspections suck. Few developers really enjoy them. I sure don't, but we're professionals hired to build systems as efficiently and as perfectly as possible. A doctor might not enjoy giving prostate exams much, but to avoid such tests just out of personal distaste is both unprofessional and unacceptable. The same goes for the unsavory bits of our work.

    Because of this common aversion, it's not uncommon for a team that starts with the best intentions to slowly find reasons to avoid reviewing code; that nearly always ends with inspections being abandoned altogether.

    Because of this, I feel inspections are one of the few areas where Genghis Khan software management is appropriate. A strong leader insists on their continued use. That person both supplies the resources needed and audits the process to ensure compliance.

    But you don't want management on the inspection team (developers tend to avoid pointing out errors when the boss is around, out of respect for their colleagues). So how can one ensure the team is doing the right things?

    The solution is root cause analysis, used statistically. From time to time the boss should identify a bug that was found and perhaps fixed, and have the team, or even a single member of the team, figure out how the mistake made it into the code.

    Was an inspection ever done on the affected code? How much time was devoted to it? Were other problems identified? (This implies collecting metrics, which is nearly painless when automated with tools like SmartBear's Code Collaborator.) It's possible the function was unusually complex and the review did find many other problems. So, were complexity metrics taken?

    Or – gasp – did the developers shortchange the inspection or skip it altogether?

    Perhaps the bug slipped in later, post-inspection, due to poor change management procedures.

    Bugs are part of the process, but proper bug management should be an equally important component. It's not possible to do a root-cause analysis of every problem, or even most of them. But some level of it keeps the developers on the proper track and can identify flaws in the system that cause delays and quality problems.

    C.A.R. Hoare said: "The real value of tests is not that they detect bugs in the code but that they detect inadequacies in the methods, concentration, and skills of those who design and produce the code." That observation is equally true of looking for the root causes of at least some of the bugs.
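    To make the statistical flavor of that root-cause sampling concrete, here is a minimal sketch of the bookkeeping a lead might keep. It is written in Python with entirely hypothetical module names, field names, and cause categories; it is not Code Collaborator's data model or API. Per-inspection metrics sit on one side, sampled bug analyses on the other, and a summary tallies root causes and flags escapes from code that was never reviewed.

    # Hypothetical bookkeeping for sampled root-cause analyses. Field names,
    # modules, and cause categories are illustrative assumptions only.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Inspection:
        module: str
        minutes_spent: int     # reviewer time devoted to the inspection
        defects_found: int
        loc_reviewed: int

    @dataclass
    class BugAnalysis:
        bug_id: str
        module: str
        root_cause: str        # e.g. "skipped inspection", "post-inspection change"

    def summarize(inspections, analyses):
        """Tally root causes across sampled bugs and show how thoroughly the
        affected code was inspected, if it was inspected at all."""
        inspected = {i.module: i for i in inspections}

        # Which causes show up most often across the sampled bugs?
        for cause, count in Counter(a.root_cause for a in analyses).most_common():
            print(f"{cause}: {count} sampled bug(s)")

        # For each escape, report the review rate of the affected module,
        # or flag the module if no inspection was ever recorded.
        for a in analyses:
            insp = inspected.get(a.module)
            if insp is None:
                print(f"{a.bug_id}: no inspection recorded for {a.module}")
            else:
                rate = insp.minutes_spent / (insp.loc_reviewed / 1000.0)
                print(f"{a.bug_id}: {a.module} reviewed at {rate:.0f} min/KLOC, "
                      f"{insp.defects_found} defects found")

    # Example: two sampled bugs, one of them from code that was never inspected.
    summarize(
        [Inspection("uart_driver.c", minutes_spent=90, defects_found=7, loc_reviewed=600)],
        [BugAnalysis("BUG-101", "uart_driver.c", "post-inspection change"),
         BugAnalysis("BUG-102", "flash_logger.c", "skipped inspection")],
    )

    Even a crude tally like this, run over a handful of sampled bugs, shows whether escapes cluster in uninspected code, rushed reviews, or post-review changes, which is exactly the signal the audit is meant to produce.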