Tag: arithmetic

Related blog posts
  • Popularity 14
    2014-11-12 17:15
    2487 reads
    0 comments
A young colleague was trying to make a measurement using an analogue-to-digital converter (ADC) and convert this measurement into a voltage to be displayed. He was playing around with the floating-point functions of the compiler. I observed that there was no real need to use floating-point data or math operations. This suggested to me that a brief recap on integer arithmetic would not go amiss.

Back in the day, when the 2 kbyte 8048 microcontroller was the latest and greatest thing on the market, and when you were hand-coding or using an assembler (if you were lucky), memory space and execution speed were very high on your list of budgetary limitations. If you had to perform any mathematical processing, you had to consider your approach very carefully. Firstly, there were no standard libraries readily available. Intel did have one called "Insight" (or maybe "Insite"), but it was expensive to join, and you were (or at least, I was) reduced to scouring application notes and design ideas to find that ideal 16x16 multiplication routine. Oftentimes you would have to write your own. Today some of these really smart tricks appear in books like Hacker's Delight by Henry S. Warren, Jr. If you really want to be impressed, you should look at the work done by Jack Crenshaw in his book Math Toolkit for Real-Time Programming. Today the integer arithmetic functions are either included in the compiler or there are many resources available with the routines, so you don't have to go back to the very basics, but you can still avail yourself of the early techniques.

Let me state here and now that I am using the term "integer arithmetic" to cover the mathematical concept of integer, and not the C compiler int data-type declaration. As we will see, the data types will have to be selected to suit our ends.

Let's assume that you want to read a 12-bit ADC with a span of 5V. The voltage is simply given by (N / (N_max - N_min)) * V_ref, where N is the current reading, N_max is the reading at the maximum voltage, N_min is the reading at the minimum voltage, and V_ref is, of course, the 5V of the ADC reference. As a starting point, we assume the full range of the ADC conversion; hence N_max = 4,095 and N_min = 0. Thus the voltage is given by (N / 4,095) * 5.

With a floating-point calculation, the order of operations is not significant (at least to a first approximation), but you can still perform this calculation with integer arithmetic and get a respectable result, if you think about what you are doing. Let's assume the number N that we measure is 1,234, so, using a calculator, the result is 1.507V. However, if you performed this using integer math with the operations in the order they are written, the integer division 1,234 / 4,095 would return a zero (any remainder is "thrown away"), and any product involving a zero is zero. If, instead, we choose to first multiply the numerator by V_ref, this yields an interim result of 1,234 * 5 = 6,170. You should have been taught that the number of bits needed to represent a product in binary is the sum of the number of bits of each multiplicand, so you need to make sure that the interim variable is declared to handle the maximum number of bits in the result. If we now perform an integer divide by 4,095, we obtain a result of 1. This is closer to the true answer, but definitely "no cigar." Instead of using 5V, why don't we work in mV and use 5,000mV in place of 5V?
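To make this concrete, here is a minimal sketch in C of the multiply-first, millivolt-scaled conversion just described; the function name, the types, and the rounding step are my own illustrative choices, as the original post shows no code.

    #include <stdint.h>

    #define N_MAX   4095UL   /* full-scale reading of a 12-bit ADC   */
    #define VREF_MV 5000UL   /* 5V reference expressed in millivolts */

    /* Convert a raw ADC reading to millivolts in integer math only.
       Multiply first, divide last; the interim product must live in
       a type wide enough for the worst case, 4,095 * 5,000 =
       20,475,000 (25 bits), hence the 32-bit interim variable.     */
    uint16_t adc_to_millivolts(uint16_t n)
    {
        uint32_t interim = (uint32_t)n * VREF_MV;          /* 1,234 -> 6,170,000 */
        return (uint16_t)((interim + N_MAX / 2) / N_MAX);  /* -> 1,507mV rounded */
    }

Adding half the divisor before dividing rounds to the nearest millivolt; drop that term for the plain truncating division discussed below (1,234 then gives 1,506mV).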
We will always know where the decimal point goes, so we can post-process the result to display the correct answer. The only implication is that the interim variable must be able to hold a large enough number. The initial product would be 6,170,000 (23 bits; the worst case of 4,095 * 5,000 = 20,475,000 needs 25 bits, so this calls for a 32-bit long in your typical C compiler), but the result would be 1,506mV (and, with some additional software, we could round this up to 1,507mV). All of this requires only a little pre-preparation. Remember that the number of bits needed to represent the result of the division is reduced; also that you may be able to preserve memory space by truncating the result to a suitable data type.

If you are still with me, then the next bit (pun unintended) is nothing more than an extension. You must know that, in the real world, the ranges of operation never match the full range of the ADC, as a result of offsets and/or design. In the industrial world, a common signal is the 4-20mA loop; that is, the minimum signal is 4mA and the maximum is 20mA (a range of 16mA). This is frequently read across a 249Ω resistor, so the ADC sees a number corresponding to 0.996V at the minimum and 4.980V at the maximum. In these circumstances you normally calibrate your inputs using a calibrator to generate a known 4mA (I_min) and 20mA (I_max). You take the reading at 4mA and save it (preferably in EEPROM) as N_min, while the reading at 20mA gives N_max. By proportion on a straight-line graph, you can calculate the current using the following:

    I = I_min + (N - N_min) * (I_max - I_min) / (N_max - N_min)

You can work this out each time you need to know the current, but some of these quantities are constant for a given set of calibration constants, and you can save them in non-volatile storage at the same time as you calibrate. For example, (I_max - I_min) will always be 16mA and can be considered a constant, as can I_min. Similarly, (N_max - N_min) can be calculated and saved at calibration. I would recommend that you scale the milliamps to microamps; then no floating-point data or math operations are required to find the result.

Sometimes you can be even smarter. Remember that a binary division by 2 is simply a shift to the right by one bit, while a multiplication by 2 is a 1-bit shift to the left. Higher powers of two simply require more shift operations (or a single multi-bit shift). Since we control the current calibration by using our own calibrator, what if we were to select I_max as 12.192mA and I_min as 4.000mA? In this case, the difference would be 8,192µA (2^13), so a division by the span becomes a simple 13-bit shift to the right. (A short C sketch of both the calibrated conversion and this shift trick appears at the end of this post.)

Many times you will have to process the incoming data and translate it to some outgoing signal. As an example, I often have to process a 12-bit 4-20mA input signal to a 10-bit 4-20mA result on the output (with its own calibration constants as well). I find the easiest way is to scale the input to a percentage of full scale and then work in terms of full scale all the way through to the output. Since a percentage is not normally accurate enough, I work to a scale of 1,000, "permillage" as it were. If you are thoughtful enough, you may be able to scale this to 1,024 for faster processing, since division by 1,024 is a 10-bit shift.

The bottom line is that careful thought and planning will enable you to write compact programs that execute speedily and provide accurate results without the need for floating-point math. This comes at the expense of generality of approach, but "You gotta do what you gotta do!"

Are there any tricks and tips that you use and would care to share? If so, please post them below as comments.
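As promised, here is a minimal sketch in C of the calibrated 4-20mA conversion and the power-of-two span trick. The variable names, the EEPROM-backed constants, and the 0-1,024 output scale are my own assumptions for illustration, not code from the article.

    #include <stdint.h>

    /* Measured once with a calibrator and saved in EEPROM: */
    static uint16_t n_min;   /* ADC reading with 4.000mA applied         */
    static uint16_t n_span;  /* (N_max - N_min), computed at calibration */

    #define I_MIN_UA   4000UL   /* 4mA expressed in microamps          */
    #define I_SPAN_UA 16000UL   /* (20mA - 4mA) expressed in microamps */

    /* Loop current in microamps from a raw 12-bit reading: multiply
       first, divide last, integer math throughout. (This sketch
       assumes readings never fall below n_min; real code would clamp.) */
    uint32_t reading_to_microamps(uint16_t n)
    {
        return I_MIN_UA + ((uint32_t)(n - n_min) * I_SPAN_UA) / n_span;
    }

    /* With the calibrator set so the span is a power of two
       (I_max = 12.192mA, I_min = 4.000mA, span = 8,192uA = 2^13),
       dividing by the span collapses to a right shift. Here the
       current is rescaled to 0..1,024 of full scale, the
       power-of-two "permillage" mentioned above:                  */
    uint16_t microamps_to_1024ths(uint32_t i_ua)
    {
        return (uint16_t)(((i_ua - I_MIN_UA) << 10) >> 13);  /* *1024 / 8192 */
    }

The second function assumes the 8,192µA calibration span; with it, the whole current-to-fraction path costs one subtraction and two shifts.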
Aubrey Kagan is Engineering Manager at Emphatec.  
  • Popularity 15
    2014-2-20 17:45
    1703 reads
    0 comments
According to this article and many others like it that have cropped up recently, all kids should learn "coding" in elementary school. The argument usually claims that software is so important that everyone needs it to be a productive citizen in the 21st century. By that argument, of course, we'd better teach them all molecular biology, electronic engineering, quantum mechanics, Chinese, and a hundred other subjects. Designing a curriculum means choosing: narrowing the universe of subjects down to those most important for a particular grade level.

I salute any youngster who wants to learn coding or any other subject. But it's foolish to think everyone needs this skill. However, an educated, effective citizenry does need good reading skills. Basic math (at least). History. It's critical to learn to use a computer (though most kids get this at home), but designing one is much less important.

From the article: "Coding teaches problem-solving, communication, and collaboration, Resnick says." Sure. And have you heard the old joke that an extroverted engineer is one who looks at your shoes instead of his own when talking to you? If coding teaches communication, why are engineers such famously poor communicators? Coding can teach problem solving. Ditto for math and the sciences. As does a carefully taught English composition class. The latter will be of far more use to pretty much everyone than some misremembered Python syntax.

Then there's the problem of confusing correlation with cause. I agree that engineers are good problem solvers. But is that because we learned to code and design? Or were we drawn to the field because we were already good problem solvers?

Then there's this from The Huffington Post: "Coding is the new literacy... How are America's schools preparing youth for digital citizenship? Unfortunately, it remains focused on the 3Rs (reading, writing, arithmetic), while the ability to read, write, and manipulate code is quickly becoming more relevant." This argument, while trashing the three Rs, is simply incorrect. The ability to manipulate code is relevant only to the small segment of the population that needs to do so. The last thing we need is hordes of barely-competent Java people with no knowledge of software engineering tearing up a code base.

As I write this, Code.org's welcome banner claims over 939,000,000 lines of code have been written by students. It's a meaningless metric. Perhaps they haven't mastered loops. Code.org is one of the few voices that carefully says they are not promoting coding per se; rather, they want the world to learn about computer science. Few others make such a distinction, and in talking to "civilians" over the years, I feel most of them believe software engineering is all about sitting in front of a computer eight hours a day writing code.

One argument is that coding is fun (though I'd imagine only for a subset of kids) and that the fun will translate into engineering as a career choice later in life. No doubt there's some truth there. Teach your kids about software, if they are interested. By all means form after-school coding clubs. Do mentor the young who are interested. But don't think coding is an educational or societal silver bullet.

What's your take?
  • Popularity 21
    2013-12-17 19:14
    1836 reads
    0 comments
I'm sure you have had your own a-ha moment. The Merriam-Webster dictionary defines it as "a moment of sudden realisation, inspiration, insight, recognition, or comprehension." But I'm taking it one step further: a-ha! with an exclamation point. This is a more dramatic realisation. It's the moment when you discover a great truth, when something that was complicated or unpredictable suddenly becomes clear. As engineers, I'm sure we've had many.

One of my first a-ha!s came in grade school. I was a hobbyist, and I enjoyed creating things from parts from the local Radio Shack, even though I didn't know why they worked. I had hoarded quite the collection of resistors, capacitors, tubes, and speakers. I could read the colour bands on a resistor to get its value, and I had rudimentary soldering skills, but I didn't know how to design anything. One day, my older brother, who had been a Navy technician, explained Ohm's Law to me. A lightning bolt ignited in my head. You mean there's a relationship between voltage, current, and resistance? That makes a lot of sense. And the world became a little bit more understandable. Building a circuit became a little less about pleasing the electron gods and a little more like engineering.

I've had several of these moments since then. Calculus was a key ingredient in many. Though I knew formulas from high-school physics, calculus enabled me to derive them. As a ham radio operator, I knew how to calculate the resonant frequency of an LC network, f = 1/(2*pi*sqrt(LC)), but I didn't know why that was the resonant frequency. With calculus, I was able to derive it myself, and I discovered why the same mathematics explained the resonant frequency of a pendulum or a spring and mass. Calculus explained that the current through a capacitor is proportional to the first derivative of the voltage (i = C dv/dt). Suddenly, first-order differential equations explained time-domain and frequency-domain phenomena. This was another a-ha! moment.

I've had many since then. Boolean logic explained digital circuits to me; no longer a mystery. A more recent a-ha! moment came when I learned how WCDMA worked. In retrospect, all these things seem obvious, but I can recall the very day that each of these a-ha! moments came.

Science and engineering aren't the only subjects that have created these moments. An Economist article about international trade led the reader through a simplified two-party, two-industry model, in which one party had a productivity advantage in both industries and the other had inferior productivity in both. Much to my surprise, the simple arithmetic of the example showed that, with specialisation and trade between them, the two parties together produced and acquired more goods than they could have without trading. (In Ricardo's classic illustration, Portugal can make both cloth and wine with less labour than England, yet both countries end up better off if Portugal specialises in wine and England in cloth.) Until I had done the math, I had assumed trade was only advantageous if each party had an absolute advantage in some industry. This was the principle of comparative advantage, and another a-ha! moment for me. I remember the day I did the surprising arithmetic.

What about you? I'd like to hear about your a-ha! moments.

Larry Desjardin, Consultant
Related resources