Tag: code

Related posts
  • 2015-8-28 22:13
Continued from Comparing techniques for storing Morse code in C (Part 1)

Storing symbols as bytes
The overall architecture and flow of this program is very similar to the string version (you can access the entire program by clicking here). The first difference occurs when we declare our dotsNdashes[] array:

byte dotsNdashes[] = {B01000010,  // 0 = A ".-"
                      B10000001,  // 1 = B "-..."
                      B10000101,  // 2 = C "-.-."
                      B01100001,  // 3 = D "-.."
                       :
                      etc.

As we discussed in the overview, the three MSBs contain the number of dot-dash symbols (the two MSBs in the case of the punctuation characters), while the LSBs contain the symbols themselves; 0 = dot and 1 = dash, with the symbols organized in reverse order such that the first symbol appears in the LSB.

Our loop() function is identical to the previous example:

void loop()
{
  displayThis("Hello World");
}

Our displayThis() function appears to be identical to the previous example -- it's word-for-word identical -- but there is a subtle difference regarding the way in which the compiler handles things. Consider the following statement from the displayThis() function, where index identifies the character we wish to display:

     flashLED(dotsNdashes[index]);

In the case of our previous "store as strings" program, this statement calls our flashLED() function and passes it a pointer to a string of characters as an argument. By comparison, in the case of our "store as bytes" program, the argument is a single char (which we end up treating as a byte):

void flashLED(byte dNdByte)
{
   :
  // Lots of clever stuff
   :
}

The next big difference between our two programs occurs in the body of our flashLED() function. First of all, we extract the number of symbols forming this character (the three MSBs), shift them five bits to the right, and store this value in a variable called "count":

byte count;
count = (dNdByte >> 5) & B00000111;
if (count >= 6) count = B00000110;

Since byte is an unsigned data type, shifting it to the right should theoretically cause 0s to be shifted into the MSBs, but I don't like to take any chances, so -- following the shift -- the "& B00000111" (bitwise AND) operation forces the five MSBs to be 0s. Now, remember that the coding scheme we're using looks like the following:

// 000 xxxxx 0 symbols (not used)
// 001 xxxx? 1 symbol
// 010 xxx?? 2 symbols
// 011 xx??? 3 symbols
// 100 x???? 4 symbols
// 101 ????? 5 symbols
// 11 ?????? 6 symbols

Generally speaking, we have three count bits and five data bits. If the count indicates six symbols, however, then we have only two count bits and six data bits. In this case, the original bit 5, a copy of which now finds itself in the LSB of count, could be a 0 or a 1. This explains the statement "if (count >= 6) count = B00000110;", which ensures that a count of six or seven is forced to be six, because a value of seven means that the LS bit of the count contains an unwanted data bit of 1.
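To make the packing concrete, here's a minimal standalone sketch (my own helper, not part of Max's program) that builds one of these bytes from a dot-dash string; it uses standard C hex constants rather than Arduino's B-prefix notation:

#include <stdio.h>

// Hypothetical helper: pack a dot-dash string such as "-.-." into the
// count-plus-reversed-symbols byte format described above.
unsigned char encode(const char *symbols)
{
    unsigned char count = 0, bits = 0;

    for (int i = 0; symbols[i] != '\0'; i++, count++)
        if (symbols[i] == '-')
            bits |= (1 << i);      // first symbol lands in the LSB

    return (count << 5) | bits;    // three MSBs hold the symbol count
}

int main(void)
{
    printf("C = 0x%02X\n", encode("-.-."));    // prints 0x85 = B10000101
    printf(", = 0x%02X\n", encode("--..--"));  // prints 0xF3 -- the top three
                                               // bits read as 7, hence the
                                               // clamp to six in the decoder
    return 0;
}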
Now, I have to admit that I ran into a bit of a problem with the following, which is the part where we actually extract and display the dots and dashes:

for (int i = 0; i < count; i++)
{
  digitalWrite(pinLED,HIGH); // LED On
  if (dNdByte & B00000001 == 0) // It's a dot
    delay(elementDelay / WPM); // Dot delay
  else // It's a dash
    delay((elementDelay * 3) / WPM); // Dash delay
  dNdByte = dNdByte >> 1;
  digitalWrite(pinLED,LOW); // LED Off
  delay(elementDelay / WPM); // Inter-symbol delay
}

When I executed this and observed my flashing LED, I realized that I only appeared to be seeing dashes. This struck me as being a bit strange, so I stuffed a lot of Serial.print() statements into my program to determine what was going on, and the output in the Serial I/O window confirmed that all I was generating was dashes. Eventually I focused my beady eye on the following statement:

  if (dNdByte & B00000001 == 0) // It's a dot
    delay(elementDelay / WPM); // Dot delay
  else // It's a dash
    delay((elementDelay * 3) / WPM); // Dash delay

In particular, I considered the "== 0" logical comparison. I couldn't see how this could be wrong, but when I changed it to read "== 1", I finally saw dots and dashes. "Well, strike me with a kipper," I thought, "that's strange and no mistake." The dots and dashes were inverted, however; i.e., a dot was displayed as a dash and vice versa. In fact, this inversion is what we'd expect, because we've inverted our test from "== 0" to "== 1". But, in that case, why didn't the "== 0" do what we expected in the first place?

I tell you, I stared and stared at this, but nothing came to mind, so eventually I called my chum Ivan Cowie over from the next bay to take a look. Ivan suggested taking my statement:

  if (dNdByte & B00000001 == 0) // It's a dot

And adding parentheses as follows:

  if ((dNdByte & B00000001) == 0) // It's a dot

So we did so, and everything worked as expected. "Well, blow me down with a feather," I thought, "who would have thunk?" I personally would have assumed that a bitwise operator like &, |, or ^ would have a higher precedence than a logical comparison (relational) operator like == or !=, but it appears that this is not the case (see also this list of C++ operator precedences).

It took me a while to wrap my brain around what was happening here. The thing is that when we use an "if ( {condition is true} ) {do this}" type statement, "true" equates to 1 and "false" equates to 0. Because the "==" comparison is performed first, "B00000001 == 0" returns 0, and "dNdByte & 0" also returns zero, which is seen by the "if" statement as "false", so it appears that we only ever have dashes. But what happens if we change our comparison to "== 1"? In this case "B00000001 == 1" returns 1, so "dNdByte & 1" will return 0 or 1 depending on the contents of the LSB, but a 1 will be seen as meaning "true", which equates to a dot, all of which explains why "== 1" does end up giving us dots and dashes, but swapped over. Phew! I tell you, I learn something new every day.
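A quick standalone way to see the precedence issue in action (this is my own demonstration snippet, not part of the Morse program):

#include <stdio.h>

int main(void)
{
    unsigned char dNdByte = 0x02;  // LSB = 0, i.e., this symbol is a "dot"

    // Without parentheses, == binds tighter than &, so this evaluates as
    // dNdByte & (0x01 == 0), which is dNdByte & 0 -- always 0 (false).
    printf("no parens:   %d\n", dNdByte & 0x01 == 0);

    // With parentheses we test the LSB as intended, giving 1 (true).
    printf("with parens: %d\n", (dNdByte & 0x01) == 0);

    return 0;
}

As an aside, modern compilers will flag this pattern: gcc and clang with -Wall warn along the lines of "suggest parentheses around comparison in operand of '&'".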
So, we now have two programs offering two different ways of storing and manipulating our Morse code symbols -- how do they compare?

Comparison (code size and execution speed)
The easiest comparison to make is that of code size and the amount of memory required to store any global variables, because this data is automatically reported at compile time. Not surprisingly, storing our dot-and-dash symbol data as bytes consumed less space for the global variables, as illustrated in the table below.

The interesting point to me is that the "storing as bytes" approach also resulted in a significant reduction in code size, even though it involves more processing when it comes to extracting the count (number of symbols) value and then extracting the dots and dashes one at a time. I'm not sure, but I'm assuming this is due to the fact that the "storing as strings" technique involves a lot of pointer de-referencing. Having said this, I really don't have a clue, and I would welcome your thoughts on this matter.

Now, as we previously noted, transmitting Morse code at human-readable speeds results in the processor sitting around twiddling its metaphorical thumbs for a large portion of the time (observe the delay() function calls scattered throughout the two programs like confetti). On this basis, we really don't care how efficient our two approaches are. On the other hand, it's always interesting to know the nitty-gritty facts, so I decided to investigate.

First of all, I commented out all of the delay() function calls, because these would totally overwhelm the amount of time spent doing the real work. Next, I declared two unsigned long variables called "time1" and "time2", and then I modified the loop() function in both programs to read as follows:

void loop()
{
  time1 = micros();
  displayThis("Hello World");
  time2 = micros();
  Serial.print("Time = ");
  Serial.println(time2 - time1);
}

FYI, I'm using an Arduino Uno for this project, and micros() is a built-in function provided by the Arduino IDE. This function returns the number of microseconds since the Arduino began running the current program. This number will overflow and return to zero after approximately 70 minutes, which is more than enough time for my purposes. Also, on 16 MHz Arduinos like the Uno, this function has a resolution of four microseconds (i.e., the value returned is always a multiple of four).
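Happily, even if a timing run did straddle the overflow, the "time2 - time1" subtraction would still come out right, because unsigned arithmetic in C wraps modulo 2^32 (the Uno's unsigned long is 32 bits). Here's a little standalone illustration of that point (mine, not part of either program):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    // Simulate micros() wrapping; uint32_t matches the Uno's unsigned long.
    uint32_t time1 = 4294967290u;  // 6 us before the 2^32 wraparound
    uint32_t time2 = 4u;           // 4 us after the wraparound

    // Modulo-2^32 subtraction still yields the true elapsed time.
    printf("elapsed = %" PRIu32 " us\n", time2 - time1);  // prints 10
    return 0;
}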
I started by comparing my original "Hello World" strings, but then I thought that this really wasn't fair, because some of the time was being spent converting lowercase characters to their uppercase equivalents, which isn't really the point of the exercise. Thus, I also ran the comparison using strings comprising all of the alphanumeric characters:

  ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789

And, just to make sure, I performed a third comparison using a longer string comprising two copies of all the alphanumeric characters. The results of all these comparisons are shown in the table below.

As we see, the "storing as bytes" approach ends up being faster than its "storing as strings" equivalent, even though it involves more processing when it comes to unpacking the count and symbol data. Once again, I'm assuming that this is due to the fact that the "storing as strings" technique involves a lot of pointer de-referencing. And, once again, I really don't have a clue, and I would welcome your thoughts on this matter.

Now, with regard to the other techniques folks suggested -- like the binary tree and the run-length encoding -- it would be great if you wanted to take my original "store as strings" program as the basis and modify it to use these other techniques. If you then emailed me your code at max.maxfield@ubm.com, I could run it on my Arduino and compare it against the two schemes discussed here.

In the meantime, should I go with the "storing as strings" approach because this is easier to understand and maintain, or should I opt for the "storing as bytes" technique on the basis that the end user doesn't need to know what's going on under the hood, and this method is slightly faster and -- more importantly -- uses less memory?
  • 2015-8-17 17:22
Several days ago I posted this blog describing how I wish to use a light-emitting diode (LED) to flash messages in Morse code. In that column I inquired as to the best way to represent, store, and access the Morse code dots and dashes used to represent letters and numbers.

As you can see from the standard Morse code chart, the alphanumeric characters 'A' to 'Z' and '0' to '9' each require between one and five dot-dash symbols. What I neglected to mention was that I also wish to have two punctuation symbols at my disposal, each of which contains six symbols: ',' (comma = "--..--") and '.' (period = ".-.-.-").

Of course, you might say: "Does anyone still know Morse code these days?" Well, my chum Ivan from the next bay just wandered into my office, looked at a little proto-shield I threw together last weekend flashing away merrily on my desk, and said "Oh, it's flashing 'Hello World'" (take a look at this video of the proto-shield doing its funky stuff).

Alternatively, you might ask: "Does anyone actually need to know Morse code these days?" To these sad souls I would respond: "Hello! Didn't you see the film Independence Day?" Then, as they sheepishly sidled away, I would call out: "Go away, or I shall taunt you a second time!"

But we digress... My original columns unleashed a maelstrom of comments. Community member Cdhmanning suggested using a binary decision tree, in which each node has a character and a link to the left for dot and one to the right for dash. I'd love to try this, but I fear it's beyond my limited programming skills. In fact, in one of his comments, Betajet presented the code for this; however, I'm still trying to wrap my bumbling brain around this little scamp (the code, not Betajet).

Someone else emailed me proposing a form of run-length encoding. Sad to say, however, I couldn't wrap my brain around that one either ("I'm a bear of little brain," as Winnie-the-Pooh would say).

Several people advocated using two bits per symbol. My chum David Ashton, for example, suggested 01 = dot, 10 = dash, 11 = space, and 00 = end. David also noted that, using this scheme, a 5-symbol code would require 20 bits (5 dots/dashes x 2 bits, plus 4 inter-symbol spaces x 2 bits, plus 1 end character x 2 bits).

Actually, just a few minutes ago, whilst I was in the process of writing this column, I received an email from Djones A. Boni in Brazil, who suggested the following scheme, which is very similar to David's. We actually have four different symbols:

  • The dot (+1 implicit space between parts)
  • The dash (+1 implicit space between parts)
  • The space between letters (-1 implicit space between parts)
  • The space between words (-1 implicit space between parts)

We can encode them as 0b00, 0b01, 0b10, and 0b11, thereby getting four symbols into a byte.

Another community member, Jacob Larsen, suggested using 0 and 1 bits to represent units of light and darkness, so the character 'A' (".-") would be represented as 10111; i.e., dot (1) = LED On = one unit delay; inter-symbol space (0) = LED Off = one unit delay; and dash (111) = LED On = three unit delays. On the one hand, this has the advantage of being relatively simple; on the other hand, the numeric character '0' with five dashes ("-----") would occupy 5 x (3 + 1) = 20 bits, which seems a tad enthusiastic, as it were. A quick sketch of this idea appears below.
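Here's a minimal standalone sketch of Jacob's units-of-light scheme (my own illustration in plain C, not Jacob's code); each bit represents one element of time, with 1 meaning LED on and 0 meaning LED off:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // 'A' (".-") = 10111 read MSB-first: on (dot), off (space), on-on-on (dash).
    uint8_t pattern = 0x17;  // B10111
    int     nbits   = 5;

    for (int i = nbits - 1; i >= 0; i--)
        printf("%s\n", (pattern >> i) & 1 ? "LED on for one unit"
                                          : "LED off for one unit");
    return 0;
}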
Another proposal from Cdhmanning was to store each Morse code character in an 8-bit byte. The three MSBs (most-significant bits) would be used to represent the number of dot-dash symbols in the character, while the five LSBs (least-significant bits) would be used to represent the dots and dashes.

Consider the character 'F' ("..-."), for example. This could be represented as 100x0010, where the 100 indicates that this character contains four dot-dash symbols and the 0010 indicates dot-dot-dash-dot. Meanwhile, the 'x' indicates that we don't care about this bit (we would set it to 0 in the code).

In fact, Cdhmanning actually recommended that the dot-dash symbol bits be "swapped over" such that the first to be transmitted is stored in the LSB, because this will make any downstream processing easier. In this case, our 'F' character would actually be stored as 100x0100.

As I mentioned above, I also wish to have two punctuation symbols at my disposal, each of which contains six symbols: ',' (comma = "--..--") and '.' (period = ".-.-.-"). Happily, Cdhmanning's scheme can be adapted to accommodate this as shown below, where an 'x' character indicates "don't care" (we'll set these to 0 in the code) and a '?' character indicates a symbol (0 = dot, 1 = dash):

// 000 xxxxx 0 symbols (not used)
// 001 xxxx? 1 symbol
// 010 xxx?? 2 symbols
// 011 xx??? 3 symbols
// 100 x???? 4 symbols
// 101 ????? 5 symbols
// 11 ?????? 6 symbols

Jacob Larsen also commented: "You don't mention what you are optimizing for -- code size, code maintainability, data size, or instruction speed? I am going to assume maintainability, since Morse code generally doesn't go fast enough to need speed optimization."

Jacob makes a very good point. Transmitting Morse code at human-readable speeds means that -- most of the time -- the processor is sitting around twiddling its metaphorical thumbs, so speed optimization isn't really an issue. On the other hand, it's always a good idea to minimize your code and data size where possible, and it's also gratifying to make your code run as efficiently and as quickly as possible, even if no one else really gives a hoot. For this particular project, I feel that understandability and maintainability are the two key elements. The reason for this will become clear when I rend the veils asunder and explain the true purpose behind this capriciously cunning contraption, but we'll leave that for a future column.

For the purposes of this exercise, I've decided to compare two different techniques: (a) storing the Morse code symbols as strings, and (b) packing the symbols for each character into a single byte. Now read on...

Storing symbols as strings
Before we plunge into the fray with gusto and abandon, let's first discuss a few "housekeeping" issues. The thing about Morse code is that everything is relative, because some people can transmit it faster than others. As we know from our original Morse code chart, a dot equates to one element (unit) of delay (time) and a dash equates to three elements of delay.

Meanwhile, the inter-symbol (dot-dash) spacing is one element of delay; the inter-character spacing (i.e., the spacing between adjacent letters in a string like "Hello") is three elements of delay; and the inter-word spacing (i.e., the spacing between two words in a string like "Hello World") is seven elements of delay.

Bearing this in mind, what does it actually mean when you hear people say that they can transmit 'n' words per minute in Morse code?
This is obviously somewhat subjective, because different words have different numbers of characters; there's a heck of a difference between the number of characters in "A" and in "Antidisestablishmentarianism," for example.

For this reason, one of the "standard words" Morse code aficionados use to compare themselves against each other is PARIS. Consider the following breakdown, where {} represents the delay elements associated with a dot or a dash and () represents the trailing inter-character (3) or inter-word (7) delay; the implicit one-element inter-symbol spaces are included in each letter's total:

P ".--." {1} {3} {3} {1} (3) = 14
A ".-"   {1} {3}         (3) =  8
R ".-."  {1} {3} {1}     (3) = 10
I ".."   {1} {1}         (3) =  6
S "..."  {1} {1} {1}     (7) = 12
                       Total = 50

One minute equals 60 seconds equals 60,000 milliseconds (ms). Thus, if we were to transmit at one word per minute -- and assuming 50 elements per word based on PARIS as discussed above -- then each element would take 1,200 ms. This explains why you will see the following definition in my code:

#define elementDelay 1200

By trial and error, I've discovered that 15 words per minute is pleasing to my eye, which is why you will also see the following definition in my code:

#define WPM 15

Thus, the way in which I've implemented my code means that you only have to tweak the WPM definition in order to modify the transmission rate, as the quick sanity check below illustrates.
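This standalone snippet is my own arithmetic check, not part of the Arduino program; at 15 WPM the delays work out as follows:

#include <stdio.h>

#define elementDelay 1200  // ms per element at one word per minute (50-element PARIS)
#define WPM 15

int main(void)
{
    printf("dot / inter-symbol = %d ms\n", elementDelay / WPM);        // 80 ms
    printf("dash / inter-char  = %d ms\n", (elementDelay * 3) / WPM);  // 240 ms
    printf("inter-word         = %d ms\n", (elementDelay * 7) / WPM);  // 560 ms
    return 0;
}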
And, speaking of my code, you can access the entire program by clicking here.

The interesting part of this program starts with our definition of the dots and dashes, which appears as follows:

char* dotsNdashes[] = {".-",    // 0 = A
                       "-...",  // 1 = B
                        :
                       etc.

That is, dotsNdashes[0] contains a pointer to a string containing ".-" (plus a hidden terminating '\0' null character); dotsNdashes[1] contains a pointer to a string containing "-..."; and so forth.

Now, I want my final program to be able to display any ASCII string that comes its way. For the purpose of this example, however, my main loop is pretty simple, as shown below:

void loop()
{
  displayThis("Hello World");
}

The idea is that I can pass any ASCII text string to my displayThis() function. As you'll see, the first thing we do in the displayThis() function is to convert any lowercase alpha characters 'a' to 'z' into their uppercase counterparts 'A' to 'Z'. We could just tell people that they can only use uppercase letters, but we all know that they aren't going to listen (LOL).

You'll also see that we check for any unexpected characters and -- if we find any -- we call a simple eeekError() function that flashes the LED in a certain way (we vary the LED's on-off times to reflect different errors).

The body of the displayThis() function is used to determine which entry (string) we wish to use in our dotsNdashes[] array, as sketched below. The downside of this approach is that each string consumes a byte (char) for every dot and dash, which is a lot of memory (relatively speaking).
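Here's a simplified sketch of that character-to-index mapping (my own illustration, not Max's exact code; in particular, placing the digits at entries 26 to 35 is my assumption about the array layout):

#include <stdio.h>
#include <ctype.h>

// Hypothetical helper: map an incoming character to an index into the
// dotsNdashes[] array, assuming 'A'..'Z' occupy entries 0..25 and
// '0'..'9' occupy entries 26..35.
int charToIndex(char c)
{
    c = toupper((unsigned char)c);          // accept lowercase input
    if (c >= 'A' && c <= 'Z') return c - 'A';
    if (c >= '0' && c <= '9') return c - '0' + 26;
    return -1;                              // caller flags an error
}

int main(void)
{
    printf("index of 'h' = %d\n", charToIndex('h'));  // prints 7
    printf("index of '3' = %d\n", charToIndex('3'));  // prints 29
    return 0;
}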
Next, we'll consider packing all of the symbols forming a character into a single byte.

Continued at Comparing techniques for storing Morse code in C (Part 2)

  • 2015-3-21 11:23
There are few things more dispiriting to an engineer than pouring their heart, sweat, and tears into a project only to have it fail. Failure can and does provide insights and growth experiences to those involved, but the loss of time and effort can strike a devastating blow. There are many reasons that an embedded systems project can fail, but there are seven key indicators that a project is dying a slow and silent death.

#7 – Team turnover
Every company experiences employee or contractor turnover, but excessive turnover of key personnel can be a leading indicator that a project is doomed to failure. There are many reasons why turnover can have a detrimental effect on the project. First, it has a psychological effect on other team members that can decrease productivity. Second, the loss of key personnel can result in historical and critical information being lost forever, which will slow down the development. Finally, replacing team members requires training and bringing the new team member up to speed. This can be a distraction that takes others away from development work, with the end result an increase in development costs and delivery timeframe.

#6 – Go stop go syndrome
There is an old saying that children are taught: "Don't cry wolf." The saying is a warning not to raise false alarms. This warning is ignored in projects that have a "GO! STOP! GO!" cycle. A manager, client, or some other entity pushes his team hard, claiming that the project has to get out the door by a certain date. Developers work weekends and put in extra effort. Then, just as quickly as the big push came, the project is stopped dead in its tracks. Months later it is once again an emergency. "Hurry, we have to ship by X!" And the same thing happens again.

The repeated urgency, followed by stopping the project that is later urgently started again, has a psychological effect on the development team. The developers come to no longer believe that there is any urgency. In fact, they start to get the mindset that this project isn't a serious project and that it will very shortly be stopped again, so why put in any effort at all?

Watch out for the project that cries wolf!

#5 – A perfectionist attitude
One of my favorite phrases concerning engineers is "There comes a time in every project when you must shoot the engineers and start production." Many engineers have a perfectionist attitude. The problem with this attitude is that it is impossible to build the perfect system, write the perfect code, or launch the product at the perfect time. Perfection is always elusive, and if perfectionism is part of the culture, it is a sign that a product may be re-engineered out of business.

The right mindset isn't perfectionism but success. What is the minimum success criterion for launching the product? Set the criteria for success and launch the product once they are achieved. A boot-loader can later be used to add features and resolve any minor bugs.

#4 – Accelerated timetable
It seems counter-intuitive, but to develop an embedded system rapidly, a team actually needs to go slow. Working on an accelerated timetable results in decreased performance due to stress and, more importantly, a higher likelihood that mistakes will be made. Mistakes directly influence the number of bugs, which in turn increases test time and rework time.

Another issue is that when developers are rushing and trying to meet an accelerated timetable, they cut corners. Code doesn't get commented.
Design documents such as architecture diagrams and flowcharts aren't created. Instead, design occurs on the fly in the programmer's mind. Going slower and doing things right will get to the end solution faster.

#3 – Poorly architected software
Embedded software is the blood of the embedded system; without it, nothing works. Poorly architected software is a sure sign of failure. The architecture of an embedded system needs to have flexibility for growth. It needs to have hooks for testing, debugging, and logging. A poorly architected system will lead to a poor implementation, and that will result in buggy, unmanageable software that is doomed to spend its days being debugged until the project finally dies.

#2 – Putting the cart before the horse
Developing a new product is an exciting endeavor. There is a lot to do, and companies are usually in a hurry to get from concept to production. This hurry can be extremely dangerous, especially when production decisions start to get ahead of themselves.

A great example is when the product's mechanical design or look and feel is being used to drive its electrical requirements. Before a working electrical and software prototype is ever proven, production tools get kicked off. In these cases it always seems that the circuit board doesn't check out, adjustments need to be made, and -- oops -- the production plastic tools no longer fit the circuit board. The horse of the system needs to be pulling the cart. Projects that rush and try to pull things in parallel too quickly usually end up taking longer and costing more due to revisions.

#1 – Scope creep
Every project has scope creep, but the degree of the scope creep can be the determining factor as to whether the project will succeed or fail. One of the most dangerous aspects of scope creep is that it can be insidious. The addition of a simple sensor to the board one day, a few other additions a few months later -- these seem completely harmless. But they can be deadly.

The biggest problem with scope creep is that the changes are usually subtle. At first glance a change looks like it's just a few days' work. But with each addition the system's complexity rises. Complex systems require more testing and possibly more debugging. The scope creep can thus change the system to such a degree over time that the original software architecture and design become obsolete or even the incorrect solution! The end result is a project that is far outside its budget and behind its delivery date, with little to no end in sight.

Conclusion
There are no guarantees of success when developing a new embedded system, and there are many factors that contribute to its success or failure. These are what I have identified as the top seven silent project killers -- the subtle clues that can indicate your project is on a slow and silent death trajectory.

What other indicators do you think might be signs that a project may never reach completion? Do any projects you are working on right now exhibit more than one of those listed here?

Jacob Beningo is a Certified Software Development Professional (CSDP) whose expertise is in embedded software. He works with companies to decrease costs and time to market while maintaining a quality and robust product. He is an avid tweeter, a tip and trick guru, a homebrew connoisseur, and a fan of pineapple!
  • 2014-5-9 14:13
There are engineers and programmers who insist on marching to their own peculiar drums. Sometimes that can be a good thing, but for managers it can be a complete nightmare.

One of my team members was a very quirky character, which also translated into his style of coding.

On one project, we were writing a somewhat large embedded code base in ARM, with more than 60 modules. Instead of adhering to our common style, he insisted on doing it his way.

At a certain point, I gave him the task of designing several complex data entry screens, with a particular "look and feel," to communicate over our dumb terminal interface.

After leaving him at it for a month, I went over his code to peer review it. I was absolutely astonished. All his screens -- very complex, context-sensitive screens -- were contained in a single printf() statement.

He wrote the longest printf() call I had ever seen in my entire life. The line was 1,450 characters long!

It was constructed with nested ternary operators, along the lines of printf("%s%s%s%s.....%s%s", a > 2 ? b < c ? "this" : "that" : f == 0 ? ......). Just imagine a line with more than 8 layers of logic, over 60 extremely complex ternary operators, spanning over 1,400 characters, in a single printf() evaluation.

It worked, but I presumed that it would be impossible to maintain. I trashed his work and asked him to redesign it from scratch.

It was very tense, and we argued intensely, but he ended up doing it right in the end. He was a brilliant engineer, actually.

Jonny Doin is CEO of GridVortex Systems. He has over 25 years' experience working with embedded systems R&D and hardware and firmware design. He has worked on medical, broadcast video, industrial, and energy applications, and founded GridVortex in 2012 to design and deploy IoT intelligent networks for large-area urban projects and embedded micro-clouds.
  • 2013-8-27 20:13
The blog series by Harry Foster of Mentor Graphics contains lots of really valuable information about trends in functional verification. The studies he discusses are very useful in helping me track how the industry is evolving. Being able to answer questions such as how much time engineers spend performing verification and which languages are being adopted the most can ensure that engineers get the right tools for the future.

However, Foster's blog a few weeks ago immediately raised my eyebrows -- not because of what it said, but because of what it didn't say. A chart from the 2012 Wilson Research Group study shows adoption trends from 2007 to 2012. One would think that technologies such as code coverage, functional coverage, and assertions were being adopted rapidly. Oops. That's not quite the case.

In the other blogs in this series, Foster had been comparing results from the 2012 study with results from 2010. To me, the switch to a comparison with 2007 results seemed highly suspicious. Unluckily for Foster, the Internet is persistent, and the results of the 2010 study are still available. Putting the two sets of numbers side by side tells a different story.

Code coverage dropped 2 per cent in two years. The use of assertions dropped 6 per cent, and the use of functional coverage dropped 6 per cent. Mentor claimed the overall confidence level of the 2010 study was 95 per cent with a margin of error of 4.1 per cent. For the 2012 study, the overall confidence level was 95 per cent with a margin of error of 4.05 per cent, so the differences between 2010 and 2012 are basically in the noise.

Rather than dealing with the small but declining percentages as signs of a maturation and potential saturation of the industry for constrained random test-pattern generation, the blogs attempted to paint a rosy picture of adoption. This flattening of adoption is an important trend and actually supports the types of things that I hear real engineers telling me. I hear them talking about the increased difficulties associated with creating functional coverage points, not being able to use constrained random for SoC-level verification, and frustration with assertions.

These numbers indicate, not mass migration, but that all is not well, and that EDA vendors need to be looking in other directions for their next generation of tools. Is your use of any of these technologies changing? Perhaps you know of other reasons why these numbers have become flat.

Brian Bailey
EE Times