Tag: developer

Related blog posts
  • Popularity 18
    2015-3-5 07:59
    1671 reads |
    0 comments
    Reader Dave Kellogg posed an intriguing question recently: what is the proper tester/developer ratio? Or, for projects where the developers do their own testing, what is the ratio of their time in the two activities? Surely there's no single answer, as that depends on the strategy used to develop the code and the needed reliability of the product. If completely undisciplined software engineering techniques were used to design commercial aircraft avionics (e.g., giving a million monkeys a text editor), then you'd hope there would be an army of testers for each developer. Or consider security. Test is probably the worst way to ensure a system is bullet-proof. Important? Sure. But no system will be secure unless that attribute is designed in.

    Microsoft is said to have a 1:1 ratio. When you consider the number of patches that stream out of that company, one might think that either better developers or processes should be used, or a bunch more testers. The quality revolution taught us that you cannot test quality into a product. In fact, tests usually exercise only half the code! (Of course, with test coverage that number can be greatly improved.)

    Capers Jones and Olivier Bonsignour, in The Economics of Software Quality (2012, Pearson Education), show that many kinds of testing are needed for reliable products. On large projects, up to 16 different forms of testing are sometimes used. Jones and Bonsignour don't address Dave's question directly, but do provide some useful data. It is all given in function points, as Jones, especially, is known for espousing them over lines of code (LOC). But function points are a metric few practitioners use or understand. We do know that in C, one function point represents roughly 120 LOC, so despite the imprecision of that metric, I've translated their function point results to LOC. They have found that, on average, companies create 55 test cases per function point.
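The function-point-to-LOC conversion just described is easy to sanity-check. A minimal sketch, using only the two figures quoted in the text (roughly 120 LOC per function point in C, and the Jones/Bonsignour average of 55 test cases per function point):

```python
# Back-of-the-envelope check of the figures quoted above.
LOC_PER_FUNCTION_POINT = 120   # rough C conversion cited in the text
TESTS_PER_FUNCTION_POINT = 55  # Jones/Bonsignour industry average

tests_per_loc = TESTS_PER_FUNCTION_POINT / LOC_PER_FUNCTION_POINT
print(f"{tests_per_loc:.2f} tests per LOC")       # ~0.46
print(f"one test per {1 / tests_per_loc:.1f} LOC")  # ~2.2 LOC
```

The 0.46 tests per line is where the "almost one test per two lines of C" figure comes from.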
    That is, companies typically create almost one test per two lines of C. The average test case takes 35 minutes to write and 15 to run. Another 84 minutes are consumed fixing bugs and re-running the tests. Most tests won't find problems; that 84 minutes is the average, including those tests that run successfully. The authors emphasize that the data has a high standard deviation, so we should be cautious in doing much math, but a little is instructive. One test for every two lines of code consumes 35+15+84 minutes. Let's call it an hour's work per line of code. That's a hard-to-believe number but, according to the authors, represents companies doing extensive, multi-layered testing.

    Most data shows the average developer writes 200 to 300 LOC of production code per month. No one believes this, as we're all superprogrammers. But you may be surprised! I see a ton of people working on complex products that no one completely understands, and they may squeak out just a little code each month. Or they're consumed with bug fixes, which effectively generate no new code at all. Others crank out massive amounts of code in a short time but then all but stop, spending months in maintenance, support, requirements analysis for new products, design, or any of a number of non-coding activities.

    One hour of test per LOC means two testers (160 hours/month each) are needed for the developer creating about 300 LOC/month. One test per two LOC is another number that seems unlikely, but a little math shows it isn't an outrageous figure. One version of the Linux kernel I have averages 17.6 statements per function, with an average cyclomatic complexity of 4.7. Since complexity is the minimum number of tests needed, at least one test is needed per four lines of code. Maybe a lot more; complexity doesn't give an upper bound. So one per two lines of code could be right, and is certainly not off by very much. Jones' and Bonsignour's data is skewed towards large companies on large projects.
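The arithmetic above can be laid out explicitly. This is just a sketch of the averages as given in the text (the authors' own caveat about high standard deviation applies to every number here):

```python
# Test effort per LOC, from the averages quoted above.
MINUTES_PER_TEST = 35 + 15 + 84   # write + run + rework = 134 min
LOC_PER_TEST = 2                  # ~one test per two lines of C

minutes_per_loc = MINUTES_PER_TEST / LOC_PER_TEST   # 67 min -- "call it an hour"
loc_per_dev_month = 300                             # high end of the 200-300 range
test_hours_per_month = loc_per_dev_month * minutes_per_loc / 60
testers_needed = test_hours_per_month / 160         # 160 working hours per month

print(f"{minutes_per_loc:.0f} min of test effort per LOC")  # 67
print(f"{testers_needed:.1f} testers per developer")        # ~2.1

# Lower bound from cyclomatic complexity (the Linux kernel sample in the text):
stmts_per_function = 17.6
complexity = 4.7                  # minimum number of tests per function
print(f"at least one test per {stmts_per_function / complexity:.1f} LOC")  # ~3.7
```

So the headline ratio of roughly two testers per developer falls straight out of the averages, and the complexity-derived floor of one test per ~3.7 LOC shows why one per two LOC is plausible.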
    Smaller efforts may see different results. They do note that judicious use of static analysis and code inspections greatly changes the results, since these two techniques, used together and used effectively, can eliminate 97% of all defects pre-test. But they admit that few of their clients exercise much discipline with the methods. If 97% of the defects were simply not there, that 84 minutes of rework would drop to 2.5, and well over half the testing effort would go away. Yet the code would undergo exactly the same set of tests.

    (Here's another way to play with the numbers. The average embedded project removes 95% of all defects before shipping. Use static analysis and inspections effectively, and one could completely skip testing and still have higher quality than the average organization! I don't advocate this, of course, since we should aspire to extremely high quality levels. But it does make you think. And, though the authors say that static analysis and inspections can eliminate 97% of defects, that's a far higher number than I have seen.)

    The authors don't address alternative strategies. Tools exist that will create tests automatically. I have a copy of LDRA Unit here, which is extraordinary at creating unit tests, and I plan to report on it in more detail in a future article.

    Test is no panacea. But it's a critical part of generating good code. It's best to view quality as a series of filters: each activity removes some percentage of the defects. Inspections, compiler warnings, static analysis, lint, test, and all of the other steps we use each filter out bugs.

    Jones' and Bonsignour's results are fascinating, but like so much empirical software data, one has to be wary of assigning too much weight to any single result. It's best to think of it like an impressionistic painting that gives a suggestion of the underlying reality, rather than as hard science. Still, their metrics give us some data to work from, and data is sorely needed in this industry. What about you?
How many tests do you create per LOC or function point?
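The "series of filters" view of quality lends itself to a toy model. In the sketch below, the 97% pre-test removal figure is the one quoted from Jones and Bonsignour; the 50% rate assigned to test alone is purely illustrative (loosely motivated by the observation that tests typically exercise only half the code), not a measured value:

```python
# Toy model: quality as a series of filters, each removing a
# fraction of the defects that reach it.

def remaining_defects(initial, removal_rates):
    """Apply each filter's removal rate in sequence."""
    d = initial
    for rate in removal_rates:
        d *= (1.0 - rate)
    return d

initial_defects = 1000.0  # hypothetical starting defect count
# 97% removed pre-test (inspections + static analysis), then test:
with_pretest = remaining_defects(initial_defects, [0.97, 0.50])
test_only = remaining_defects(initial_defects, [0.50])

print(f"pre-test filters + test: {with_pretest:.0f} defects left")  # 15
print(f"test alone:              {test_only:.0f} defects left")     # 500

# The rework math from the article: if 97% of defects never reach test,
# the 84 minutes of average rework per test scales down proportionally.
print(f"rework per test drops to ~{84 * 0.03:.1f} min")  # ~2.5
```

The point of the model is the multiplication: filters compound, so a strong early filter shrinks the work every later stage must do, which is exactly why the 84 minutes of rework collapses to about 2.5.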
  • Popularity 14
    2013-5-15 20:53
    1969 reads |
    0 comments
    Do you think old engineers are obsolete dinosaurs? Plenty of anecdotal evidence suggests that employers prefer younger engineers over those who are 50 or older. But a new study suggests the peril of that position.

    In "Is Programming Knowledge Related to Age?", a recent paper by Patrick Morrison and Emerson Murphy-Hill, the authors ran a big-data experiment to see if ageing developers have trouble with the latest technology. The experiment is somewhat crudely crafted (perhaps the study's authors are greybeards). By tracking responses to questions on Stack Overflow, they correlate the site's "reputation" statistics against age. Interestingly, the vast majority of participants on that site are youngsters, clustered around age 29.

    Turns out, old folks rock. Reputation on Stack Overflow peaks around age 50 and does show a sharp decline by 70. Even at that not-so-advanced point in life, the average is about that of a thirty-year-old, and is much higher than that of someone five years younger. While young folks show little standard deviation, oldsters' reputation varies wildly, with plenty of data points well above the average (and some well below).

    By noting the kinds of questions Stack Overflow participants respond to, the researchers determined that older developers have a significantly wider range of skills than young people. That levels out around age 50 and enters only a modest decline later in life. Again, the standard deviation is huge. How much of that knowledge is about new technologies? Here the results are less clear, though the authors believe their results show age does not confine one to the tech of yesteryear.

    There are some real problems with the study. No raw data is presented; it's all expressed in graphs and reduced summaries. But it sure appears that there are only a handful of people studied over age 45.
And the experiment took place within the narrow confines of Stack Overflow, using reputation as a proxy for knowledge, both of which are somewhat suspect as measures of the developer population as a whole. But as one who will achieve the ripe age of 0x3c shortly, the results are encouraging. I intend to forget any data to the contrary, ignore the shorts caused by my shaking hands soldering SMT components, and continue to design MCUs with vacuum tubes. What is your experience?
  • Popularity 26
    2013-3-31 20:52
    1338 reads |
    0 comments
        Beijing International Automotive Maintenance, Testing & Diagnostic Equipment and Car Care Exhibition | Venue: China International Exhibition Center (New Venue), 88 Yuxiang Road, Tianzhu, Shunyi, Beijing | Dates: April 1-3, 2013
        2013 China International Defense Informatization Technology and Meteorological Equipment Exhibition | Venue: China International Exhibition Center, Jing'anzhuang, Chaoyang District, Beijing | Dates: April 1-3, 2013
        China International Cloud Computing Technology and Application Exhibition | Venue: China International Exhibition Center, Jing'anzhuang, Chaoyang District, Beijing | Dates: April 7-9, 2013
        2013 China (Beijing) International Lighting Exhibition & LED Lighting Technology and Application Exhibition | Venue: China International Exhibition Center, Jing'anzhuang, Chaoyang District, Beijing | Dates: April 25-27, 2013
        Intel Developer Forum | Venue: China National Convention Center, Chaoyang District, Beijing | Dates: April 10-11, 2013
        Beijing International Audio-Visual Integrated Equipment and Technology Exhibition | Venue: China National Convention Center, Chaoyang District, Beijing | Dates: April 10-12, 2013
        2013 Beijing International Inkjet Printing, Engraving and Signage Technology Exhibition | Venue: China National Convention Center, Chaoyang District, Beijing | Dates: April 2-4, 2013
        The 12th China International Large-Screen System Integration Equipment Exhibition | Venue: Beijing Exhibition Center | Dates: April 6-8, 2013
Related resources
  • E-coins required: 0
    Date: 2022-1-17 20:58
    Size: 3.28MB
    Uploader: 许文龙
    Azure development fundamentals; reference material for embedded systems development.
  • E-coins required: 0
    Date: 2021-4-26 23:30
    Size: 59.56KB
    Uploader: Argent
    AI products keep appearing one after another. This is a collection of material on electronic communications, graduation projects, and more, with many practical, implementable designs. It covers MCU application development and the combined use of peripherals: however complex a smart product's design, its basic functions still come down to MCU circuit design and driver programming. Whether you use an 8051 or an AVR microcontroller, the choice of solution depends on project requirements. Engineers who need this kind of material, take a look.
  • E-coins required: 3
    Date: 2021-4-14 22:04
    Size: 31.38MB
    Uploader: xgp416
    GXDeveloperVer8操作手册.pdf (GX Developer Ver. 8 operation manual)
  • E-coins required: 3
    Date: 2020-4-7 10:28
    Size: 2.58MB
    Uploader: 2iot
    Material for the new SiRF platform, SiRFatlasIVDeveloper……