Tag: firmware

Related blog posts
  • Popularity: 26
    2015-8-14 19:55
    2209 reads
    0 comments
    Here is my proposed definition of firmware: "Firmware is the art of designing software that successfully controls and/or monitors the physical and natural world through electronics."

    It's not your father's definition, is it? But I would argue the definition has actually changed over the years. Words change their meaning when the old meaning no longer has value or is obsoleted by the modern world. This is called (yes, there really is a name for this) semantic change. For example, "computer" used to mean a person who computes.

    Argh! My ears cannot hear it! English!! Engineers hate English! It's our kryptonite! Which is also why engineers hate writing things down or creating requirement documentation. But a poorly written requirement will wreak havoc and chaos on your project. So does a poor definition!

    So as much as we all profess our love of math and hate of (OK, a strong distaste for) English, having no clear definition for the word hundreds of thousands of engineers use to define their profession is not helpful.

    What is the present definition of the word firmware, you ask? Oh, it's a mess. I did some research on this. I've also run various surveys on the word through the firmware LinkedIn group. None of it's pretty.

    I went searching for the definition in IEEE's standards definition database. Sounds like the logical source, right? I found 11 different definitions. Yes, 11 variants of a definition that is no longer relevant. Regardless, the recurring theme seems to be software that resides in non-volatile storage and can be read only by a computer.

    Really. Or should I say, really not helpful. How does this convey anything useful and distinguish it from software? Is this what you tell people when they ask you what firmware is? How is this definition really different than saying firmware is software? All that is added to the definition is the software's residence -- that being non-volatile memory (well, at least until you move said firmware into RAM, but I digress).

    In an attempt to improve things, many have abandoned the word firmware and replaced it with "embedded software." I'm not sure how this solves anything. In fact, I think it makes matters worse: it ties us even tighter to software and lessens the useful distinction.

    Yet how many computer scientists do you know who write firmware? I've done studies on this, and it's very few. In a recent survey I did of degrees held by firmware engineers, I found that of 377 sampled, 13% held a bachelor's in computer science. The great majority (43%) had a bachelor's degree in electrical engineering. About 80% had an engineering degree of some kind. Clearly there is an important engineering role in firmware that is sorely missing from 1) your father's definition, and 2) definitions characterizing it as software, which is not a field of engineering.

    So firmware is a field of engineering. No, you can't call it software engineering, since that is the engineering of software itself. In fact, I still fail to see how calling us "embedded software engineers" is helpful, since "embedded" just categorizes a field of software engineering.

    Firmware engineers must have a strong grasp of engineering first and foremost. After that, they need a good understanding of electronics. I say this because a well-written piece of firmware should have a driver layer that connects directly to electronic circuits. A well-written piece of firmware should also have some kind of device layer, where the devices are often electronic sub-systems. For example, if you are to measure a voltage, you should understand at a high level the electronics that create that voltage. Much of the application layer must rely on engineering concepts. Similarly, if I am controlling a mechanical system through electronic stimulus, I need to have some understanding of both electrical and mechanical engineering.

    Eventually you escape the engineering, such as in a GUI implementation on a TFT LCD touchscreen. So at some point it departs from engineering yet remains a science -- computer science, in fact. It is at this point where the "software" that resides in the non-volatile storage really is purely software.

    So which definition do you think would better clarify things to friends and family, organizations, corporations, colleges, and universities?

    "Software that resides in non-volatile storage"?

    Or...

    "Firmware is the art of designing software that successfully controls and/or monitors the physical and natural world through electronics."

    Back to the trenches! Thanks for reading!

    Bob Scaccia is President of USA Firmware, a software, hardware, firmware, and IoT consulting and design services company located in Brecksville, Ohio. Bob also runs the largest embedded software group on LinkedIn, with 14,000 members and growing, and has written articles on firmware curriculum, trends, best practices, and object-oriented approaches in the C language.
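    As a concrete illustration of the layering described above, here is a minimal C sketch of a driver layer, device layer, and application layer built around the voltage-measurement example. It is a sketch only: the register addresses, the function names (adc_read_raw, voltage_sensor_read_mv, supply_rail_is_healthy), and the scaling constants are assumptions made for illustration, not code from any particular part or from the author.

        #include <stdint.h>

        /* Driver layer: talks to the ADC peripheral's registers directly.
           The addresses and bit layout below are assumptions for this sketch,
           not any real part's register map. */
        #define ADC_CTRL_REG    (*(volatile uint32_t *)0x40012000u)
        #define ADC_STATUS_REG  (*(volatile uint32_t *)0x40012004u)
        #define ADC_DATA_REG    (*(volatile uint32_t *)0x40012008u)
        #define ADC_START       (1u << 0)
        #define ADC_BUSY        (1u << 0)

        static uint16_t adc_read_raw(void)
        {
            ADC_CTRL_REG = ADC_START;                  /* trigger one conversion  */
            while (ADC_STATUS_REG & ADC_BUSY) {        /* wait for it to finish   */
                ;
            }
            return (uint16_t)(ADC_DATA_REG & 0x0FFFu); /* 12-bit result assumed   */
        }

        /* Device layer: models the analog front end (a resistor divider feeding
           the ADC) as a "voltage sensor" and reports millivolts. */
        #define VREF_MV         3300u  /* ADC reference voltage, assumed */
        #define ADC_FULL_SCALE  4095u  /* 12-bit converter, assumed      */
        #define DIVIDER_RATIO   11u    /* external divider ratio, assumed */

        static uint32_t voltage_sensor_read_mv(void)
        {
            uint32_t counts = adc_read_raw();
            return (counts * VREF_MV / ADC_FULL_SCALE) * DIVIDER_RATIO;
        }

        /* Application layer: a pure engineering decision in engineering units. */
        int supply_rail_is_healthy(void)
        {
            uint32_t mv = voltage_sensor_read_mv();
            return (mv > 11000u) && (mv < 13000u);     /* e.g. a nominal 12 V rail */
        }

    The split makes the post's point visible: the bottom two layers cannot be written sensibly without understanding the electronics behind them, while only the topmost layer looks like "pure" software.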
  • Popularity: 17
    2012-11-21 19:52
    1792 reads
    0 comments
    The marketing claims about how a company's products can improve time to market, improve quality, and lower costs have been heard a million times -- the three pillars of marketing rhetoric. It reminds me of those old comic book ads where some 97-pound weakling gets sand kicked in his face with his girlfriend nearby and then sends away for a body-building book. The final scene shows him as the hero of the beach after he beats up on the bully.

    Times have definitely changed but the marketing hype hasn't, though there's certainly a place and time for pulling up a few tried and true marketing campaigns. More on that later. Instead of marketing rhetoric, I'm going to describe two real case studies in which design teams on different continents were able to reduce their design cycle time, and how they did it. In both examples, the design teams used virtual prototyping and accurate models to drastically reduce their development time. In each case, they went into production with the designs they developed with this methodology much earlier than they would have otherwise.

    Case Study #1: A large European company designing a controller application for factory automation was implementing a system on chip (SoC) with ARM Cortex-R series cores. The design team had already been using 100% cycle-accurate models and virtual prototyping tools to bring up, debug, and optimise its firmware prior to having silicon available. This was a key part of the methodology because, in the past, even when the firmware ran successfully on behavioural untimed models (or Fast Models), they still found many problems in the firmware after they replaced the behavioural models with cycle-accurate models. The reason was that the behavioural models lacked the detail and accuracy to expose many of the firmware bugs.

    The manager was confident in his methodology but wanted to quantify it somehow, so he compared his virtual prototyping flow to similar projects that didn't use virtual prototyping. For projects using virtual prototyping, he found the firmware could be debugged and validated in about 4 to 5 weeks. For projects that didn't use virtual prototyping, he found the firmware took as much as 6 months to debug and validate. A five-month savings! The reason for this savings was that the 100% accuracy, visibility, and controllability available in the virtual prototypes made bugs much easier to find, characterize, and debug than on hardware prototypes. Because the models were cycle-accurate, they faithfully represented the real implementation and exposed problems that would otherwise have been hidden and impossible to find in higher-level, untimed functional models.

    Case Study #2: The project team at a fabless semiconductor company in Asia was designing an SoC to go into tablet and netbook products. The design was based on a dual-core ARM Cortex-A processor with graphics, video, and image-processing components interconnected by a complex AXI fabric. They needed to architect the system, develop their firmware and device drivers, and understand the go-to-market performance of their system in a very short timeframe. The problem was a big one: this was their first product based on ARM Cortex-A series cores, so everything (architecture, hardware, and firmware) had to be designed from scratch, and they had no past experience to fall back on.

    To tackle this daunting task, the team used a virtual prototype and 100% cycle-accurate models to give their architecture and firmware teams the insight to understand, optimise, and debug the system. The platform allowed the architects to make quick and accurate decisions and gave the firmware engineers the insight to understand the dynamics of the new IP. In the end, they had a working silicon prototype ready for their customers 9 months after starting the architectural phase of the project. An amazing feat -- architecture to working silicon prototypes in 9 months! In addition, they spent less than 1 month in the lab debugging their silicon prototype before they had a system they were confident enough in to provide to their end customers. The team commented that, without the virtual prototype and cycle-accurate models, achieving this type of schedule would have been impossible.

    These are just two of many examples where using virtual prototyping with accurate models has proven to reduce the design cycle time needed to realise an SoC. And, while I left the marketing hype and rhetoric out of this article, I do acknowledge the importance of image building and creating interest, key aspects of a good marketing and messaging campaign. Without either, it's unlikely I'd be able to write these short case studies: the companies in Europe and Asia would have had no way of knowing about this particular virtual prototyping tool or the cycle-accurate models.

    About the author: Andy Ladd is vice president of Sales for Europe and Rest of World, and a member of the founding team at Carbon Design Systems. He has more than 16 years' experience in the EDA industry managing and directing field resources. Previously, he was the director of applications for cycle-based simulation at Quickturn Design Systems, which was acquired by Cadence. Prior to Quickturn, Ladd was part of the initial bring-up team for Speedsim, the first commercially viable cycle-based simulation product in the industry. He also supported major accounts at Viewlogic. In addition to his EDA experience, he worked in microprocessor design and development at both Digital Equipment Corporation and McDonnell Douglas. Ladd holds a Bachelor of Science degree in Computer Engineering from the University of Illinois and a Master of Science degree in Computer Science and Engineering from the University of Michigan.
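    The gap between untimed functional models and cycle-accurate models that the European team describes shows up most clearly in timing-dependent firmware. The C fragment below is a hypothetical illustration of that point (the DMA peripheral, its registers, addresses, and the buffer location are invented for the sketch): on an untimed model the transfer can appear to complete in zero simulated time, so the unguarded read "works" there and only fails on a cycle-accurate model or on silicon.

        #include <stdint.h>

        /* Hypothetical DMA peripheral -- registers, addresses, and buffer
           location are invented purely for illustration. */
        #define DMA_CTRL    (*(volatile uint32_t *)0x50001000u)
        #define DMA_STATUS  (*(volatile uint32_t *)0x50001004u)
        #define DMA_START   (1u << 0)
        #define DMA_DONE    (1u << 0)
        #define RX_BUFFER   ((volatile uint8_t *)0x20004000u)

        /* Buggy version: if the model finishes the transfer instantly, this
           unguarded read passes in simulation but races the hardware when the
           transfer really takes many cycles. */
        uint8_t read_first_rx_byte_buggy(void)
        {
            DMA_CTRL = DMA_START;
            return RX_BUFFER[0];          /* may return stale data */
        }

        /* Correct version: wait for the completion flag before reading. */
        uint8_t read_first_rx_byte(void)
        {
            DMA_CTRL = DMA_START;
            while ((DMA_STATUS & DMA_DONE) == 0u) {
                ;                         /* spin until the transfer completes */
            }
            return RX_BUFFER[0];
        }

    A cycle-accurate model makes this kind of race visible in simulation, with full signal visibility, instead of leaving it to be discovered in the lab.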
  • Popularity: 26
    2012-2-1 18:42
    1605 reads
    0 comments
    Note: This "How it used to be" story told by Benny Attar truly takes me back in time. When I started with Cirrus Designs back in 1981, we had a single PDP 11/23 computer that we all shared (we each had a VT100 terminal and a keyboard on our desks). The single hard disc supported 1 megabyte and there was only one directory / folder. I can remember when we got a VAX running the VMS operating system where we each had our own directory trees... it all seemed so revolutionary back then (grin).

    After completing my studies at a technical college in 1984, I joined the Services – never mind which service, that's classified – and was put in charge of the computer room. In those days, all the computers were in a central location, with restricted access, climate control, raised floors, and servants to look after them. Ordinary mortals had computer terminals and printers connected to the computer room through RS232 lines at 9600 baud (the longer lines wouldn't work beyond 4800 baud; for some lines we even had current-loop converters).

    Almost all our equipment came from DEC. We had a couple of PDP11/70s (I remember being told they cost $120,000 each), a PDP11/44, and a VAX 730. The VAX 750s came later, then the 8300 and 8500 series, then things got smaller with the micro-VAX 3500, 3200, and 3100 computers in the early '90s. The end users were given VT100 and VT102 monochrome monitors, and mighty heavy to carry around they were. The VT100 had a bug in its firmware: if you hit a certain sequence in setup – esc-shift-3 then q, I think it was – you would get a screech from the keyboard speaker. One of our programmers wrote a program that mimicked the terminal's setup screen. Run the program, and apparently you would be locked for eternity in setup.

    Data storage was on a bank of 6 removable disc drives, each the size of a washing machine and holding 67 megabytes. Of course, we had the mandatory reel-to-reel tapes and 8-inch floppies. I was in charge of a small team of "computer operators" – there did exist such a job description once – who ran the daily backups, changed printer ribbons (the ribbons were packed as Möbius loops so they'd be used on both sides), distributed printer paper (fanfold with sprocket holes), and taught new personnel how to operate a terminal ("Press setup-0 to reset the terminal..."). One of my young operators went on to complete 2 engineering degrees and a doctorate in computer science, and is now a senior research scientist in the field.

    We also had some Evans & Sutherland graphics computers for one of the projects involving tactical map displays. The controllers were housed in their own cabinets, with backplanes full of PCBs – mostly TTL standard logic and some 2901 bit-slice processors. We had just 2 colour displays: giant, table-sized cabinets with 21-inch screens and huge, temperamental power supplies. They needed constant adjustment to keep the red, green, and blue beams aligned, they needed degaussing with an external degauss ring, and they kept blowing transistors in the deflection amplifiers. The black and white monitors never gave any trouble, though. Of course, in those days all this hardware came with shelves of ring binders full of schematics, maintenance instructions, and software descriptions. Other peripherals included XY magnetic tablets (before the days of computer mice) and pen plotters for printing maps.

    When the PDP11/70s were retired, I took a screwdriver and dismantled the front panel of one of them, with all the address/data entry keys and register lights. A colleague took the panel from the second one. I still have it, nearly 25 years after the last program ran. When the PC revolution came and computing was no longer centralized, the job wasn't interesting anymore, so I transferred to a job working with automatic test equipment, and found myself with a Teradyne L210 tester and a few VAXstations. When I retired from the job, I think I was one of the last people in the organisation who still worked with the VAX/VMS operating system.
  • Popularity: 15
    2011-3-17 17:44
    1615 reads
    0 comments
    In December 2009, my fellow columnist Michael Barr published the first column in his Barr Code series, entitled "Bug-killing standards for firmware coding." In it, he recommended ten "coding rules you can follow to reduce or eliminate certain types of firmware bugs." Needless to say, the column elicited a torrent of comments. Some of these comments actually considered the merits of his recommendations. However, many comments wandered off onto the subject of brace placement, an issue Michael didn't address explicitly in his article. (He did recommend using braces even when they are optional; however, he didn't recommend a preferred style for placing them.)

    The flow of the discussion was quite typical of what I've seen or heard in other such discussions. In this case, the opening salvo came when one commenter asserted that using the "Allman" brace placement style caused bugs that using the "One True" brace placement style cured. Another commenter replied that, despite the previous claim, the Allman style is easier to read. And so it went.

    Let me first say that I enjoy talking about programming style—not just about brace placement, but about style in general. After more than 30 years in the computing business, I've heard many of the arguments already, but I still hear new ideas often enough to keep me coming back for more.

    On the other hand, I'm rather dismayed that nearly all programmers who participate in such discussions apparently do so with the tacit assumption that there's no objective basis—no science—that we can use to discern a preferred style. The discussions traffic almost entirely in anecdotes, personal observations, and largely unsubstantiated assertions. I can't recall the last time I heard or read anyone suggest that we ought to measure how style choices affect code quality, let alone how to conduct an appropriate experiment.

    For example, advocates of the One True style claim that it's better than the other styles at revealing extraneous semicolons after conditionals. That is, it's easier to spot and avoid the spurious semicolon in:

        while (condition); {
            statement(s)
        }

    than it is in:

        while (condition);
        {
            statement(s)
        }

    This seems plausible, but I'd like to know how often this problem actually arises. The answer I'm looking for isn't "Lots." It's a number indicating defects per unit of code. I'd like to see a number for each brace placement style so I can assess how effectively each style reveals the error.

    Moreover, as another commenter indicated, using static analyzers might be a more effective strategy for detecting this error, so much so that it renders moot any advantage that one style has over another in this regard. A few stats might tell us if this is really so.

    Stray semicolons are probably not the only coding gaffes whose presence or absence in code might suggest an advantage of one brace placement style over another. Wouldn't it be nice to see a fairly comprehensive catalog of such errors along with statistics indicating the relative effectiveness of each style at revealing each error?

    Lest the One True style proponents think I'm picking only on them, I'll note that proponents of the Allman style claim that it's easier to read and is better at exposing mismatched braces. To them I ask, "By what measure?"

    How do we find this evidence? Before we start thinking about how to produce it, we should begin by asking, "Does it already exist?" I've searched the web for studies of the relative effectiveness of brace placement styles, but found nothing. I've found articles on other aspects of coding style, such as indenting depth (two to four spaces seems to be best at improving comprehension), but nothing on brace (or begin-end pair) placement. But just because I haven't found any research doesn't mean it doesn't exist. So I have this request: if you know of quantitative research results on the relative effectiveness of brace placement styles at improving code quality, please write to me and tell me where to find them.

    (Brace placement is certainly not the only style controversy that could benefit from more rigorous analysis, but I'd like to limit the scope of this social experiment to just brace placement for now.)

    I'm skeptical that such findings exist, but I'd be happy to be wrong on this. If they exist, why do so few people cite them? If they don't, why doesn't anyone seem to care, or even notice?
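    For readers who haven't been bitten by it, the small, self-contained C program below (an illustration added here, not an example from Barr's column or its comments) shows what the stray semicolon actually does: the semicolon becomes the loop's entire body, and the braced block that follows is just a free-standing compound statement that runs exactly once.

        #include <stdio.h>

        int main(void)
        {
            int i = 0;

            /* Intended: print i on each pass (1, 2, 3).
               Actual: the stray semicolon is the loop's entire body, so the
               loop spins until the condition fails, and the braced block below
               executes exactly once, printing "i = 4". */
            while (i++ < 3); /* <-- spurious semicolon */
            {
                printf("i = %d\n", i);
            }

            return 0;
        }

    Whichever brace style is used, the program compiles without complaint; the debate is only over which layout makes the typo easier to see in review. Many static analyzers and compiler warnings (Clang's -Wempty-body, for instance) flag the empty loop body regardless of brace style, which is the tooling point raised in the comments.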
Related resources
  • E-coins required: 0
    Date: 2020-6-16 16:36
    Size: 3.38MB
    Uploader: zendy_731593397
    STM32 firmware library, hardware project library
  • E-coins required: 4
    Date: 2019-12-26 00:30
    Size: 33.11KB
    Uploader: wsu_w_hotmail.com
    Firmware for 4510 + D12, ported by wenbbo in Taiyuan Univ of Tech……
  • E-coins required: 5
    Date: 2019-12-25 16:09
    Size: 587.82KB
    Uploader: 978461154_qq
    Introduction to the ARMSTAR board (北京微芯力科技有限公司). The ARMSTAR board is an entry-level ARM7 platform containing a minimal core system; its capability and complexity make it suitable as a platform for ARM technology development, for downloading and debugging software, and for experimenting with extended I/O and peripheral devices. Main components: SAMSUNG S3C4510B microcontroller, 512KB FLASH (1M or 2M FLASH options), 512KB SRAM, two 9-pin D-type RS232 connectors, reset and interrupt keys, four user-programmable LEDs and a seven-segment display, a four-way user-input DIP switch, a Multi-ICE interface, a 10MHz clock multiplied to 50MHz by the processor, and a 5V regulated supply. System requirements: an ARMSTAR board with the pre-installed boot monitor, connected via the DEBUG serial port to a computer running a terminal application; when generating and debugging code with Angel or Multi-ICE, the corresponding development tools (provided on the CD) must be connected to the computer. The board can be used via the bootSTARploader, the Angel debug monitor, or Multi-ICE. Hardware introduction: the kit mainly includes the ARMSTAR board and a 9-pin straight-through R……
  • E-coins required: 5
    Date: 2019-12-25 16:02
    Size: 522.06KB
    Uploader: 978461154_qq
    A classic reference: Embedded Systems Firmware Demystified, Ed Sutter, CMP Books, Lawrence, Kansas 66046 (www.cmpbooks.com), copyright 2002 by Lucent Technologies……
  • E-coins required: 5
    Date: 2020-1-4 11:52
    Size: 188.27KB
    Uploader: 微风DS
    Firmware_+mxchipWNet_DTU_5.90.230.1……
  • E-coins required: 3
    Date: 2019-12-24 18:06
    Size: 1.8MB
    Uploader: 2iot
    firmware, BSL……
  • E-coins required: 3
    Date: 2019-12-24 18:06
    Size: 741.98KB
    Uploader: quw431979_163.com
    firmware, BSL Application Report SLAA452……
  • E-coins required: 3
    Date: 2019-12-24 18:06
    Size: 2.36MB
    Uploader: rdg1993
    firmware, BSL……
  • E-coins required: 5
    Date: 2019-12-24 18:06
    Size: 2.36MB
    Uploader: 二不过三
    firmware……
  • E-coins required: 5
    Date: 2019-12-24 17:58
    Size: 358.27KB
    Uploader: wsu_w_hotmail.com
    Abstract: This document describes the 78M6612 split-phase firmware V1.0 for use with Teridian's 78M6612 single-phase, dual-outlet power and energy measurement IC. The firmware targets split-phase systems and details a simple approach to calibration, alarm monitoring, and wideband measurement. (A Maxim Integrated Products brand; "78M6612 Split-Phase Firmware V1.0 Description Document", May 24, 2011, Rev. 1.0, UG_6612_074.)……
  • E-coins required: 4
    Date: 2019-12-24 17:59
    Size: 942.57KB
    Uploader: 238112554_qq
    Abstract: This reference design provides a complete demonstration platform for an RF industrial/scientific/medical (ISM-band) product used in wireless automatic meter reading (AMR) applications. The document covers the hardware, firmware, and system architecture requirements for implementing an AMR design. (Maxim Reference Design 5391, May 04, 2012; includes tested circuit, schematic, BOM, board availability, description, test data, software, and layout. LFRD002: Wireless Automatic Meter Reading Referen……)
  • E-coins required: 5
    Date: 2019-12-24 02:04
    Size: 188.04KB
    Uploader: 16245458_qq.com
    Firmware_+mxchipWNet_DTU_5.90.230.3……
  • E-coins required: 4
    Date: 2019-12-24 01:09
    Size: 61.68KB
    Uploader: 2iot
    crazyradio-firmware-master……
  • E-coins required: 4
    Date: 2019-12-19 15:01
    Size: 641.5KB
    Uploader: 16245458_qq.com
    Getting started with the STM32 Nucleo board firmware package……
  • E-coins required: 4
    Date: 2019-12-19 15:01
    Size: 316.68KB
    Uploader: 16245458_qq.com
    Getting started with STM32CubeL0 firmware package for STM32L0xx series……
  • E-coins required: 3
    Date: 2019-12-19 13:33
    Size: 361.52KB
    Uploader: quw431979_163.com
    ST-Link firmware upgrade for supporting the STM32 family……