Tag: integer

Related posts
  • Popularity: 14
    2014-11-12 17:15
    2433 reads
    0 comments
A young colleague was trying to make a measurement using an analogue-to-digital converter (ADC) and convert this measurement into a voltage to be displayed. He was playing around with the floating-point functions of the compiler. I observed that there was no real need to use floating-point data or math operations. This suggested to me that a brief recap on integer arithmetic would not go amiss.

Back in the day, when the 2 kbyte 8048 microcontroller was the latest and greatest thing on the market, and when you were hand-coding or using an assembler (if you were lucky), memory space and execution speed were very high on your list of budgetary limitations. If you had to perform any mathematical processing, you had to consider your approach very carefully. Firstly, there were no standard libraries readily available. Intel did have one called "Insight" (or maybe "Insite"), but it was expensive to join and you were (or at least, I was) reduced to scouring application notes and design ideas to find that ideal 16x16 multiplication routine. Oftentimes you would have to write your own. Today some of these really smart tricks appear in books like Hacker's Delight by Henry S. Warren, Jr. If you really want to be impressed, you should look at the work done by Jack Crenshaw in his book Math Toolkit for Real-Time Programming.

Today the integer arithmetic functions are either included in the compiler or there are many resources available with the routines, so you don't have to go back to the very basics, but you can still avail yourself of the early techniques. Let me state here and now that I am using the term "integer arithmetic" to cover the mathematical concept of integer and not the C compiler int data-type declaration. As we will see, the data types will have to be selected to suit our ends.

Let's assume that you want to read a 12-bit ADC with a span of 5V. The voltage is simply given by (N / (N_max - N_min)) * V_ref, where N is the current reading, N_max is the reading at the maximum voltage, N_min is the reading at the minimum voltage, and V_ref is, of course, the 5V of the ADC reference. As a starting point, we assume the full range of the ADC conversion; hence N_max = 4,095 and N_min = 0. Thus the voltage is given by (N / 4,095) * 5.

With a floating-point calculation, the order of calculation is not significant (at least to a first approximation), but you can still perform this calculation with integer arithmetic and get a respectable result, if you think about what you are doing. Let's assume the number N that we measure is 1,234, so, using a calculator, the result is 1.507V. However, if you performed this using integer math with the operations in the order they are written, the integer division 1,234 / 4,095 would return zero (any remainder is "thrown away"), and any product involving a zero is itself zero.

However, if we choose to first multiply the numerator by V_ref, this yields an interim result of 1,234 * 5 = 6,170. You should have been taught that the number of bits needed to represent a product in binary is the sum of the number of bits of each multiplicand, so you need to make sure that the interim variable is declared to handle the maximum number of bits in the result. If we now perform an integer divide by 4,095, we obtain a result of 1. This is closer to the true answer, but definitely "no cigar."

Instead of using 5V, why don't we work in mV and use 5,000mV in place of 5V? Working in millivolts, we will always know where the decimal point goes, so we can post-process the result to display the correct answer. The only implication is that the interim variable must be able to hold a large enough number. The initial product would now be 6,170,000, which needs 23 bits (the worst case, 4,095 * 5,000 = 20,475,000, needs 25), so the interim variable would be a 32-bit long data type in your typical C compiler. The result would be 1,506mV, and with some additional software we could round this up to 1,507mV. All of this requires only a little pre-preparation. Remember that the number of bits needed to represent the result of the division is reduced; also, you may be able to preserve memory space by truncating the result to a suitable data type.
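To make this concrete, here is a minimal sketch in C of the millivolt-scaled conversion just described. The function name and the stdint.h types are my own choices, and I assume a full-range 12-bit reading.

#include <stdint.h>

/* Convert a full-range 12-bit ADC reading (0..4,095) to millivolts,
 * assuming a 5,000mV reference. All arithmetic is integer; the order
 * of operations is what makes it work. */
uint16_t adc_to_millivolts(uint16_t reading)
{
    /* Multiply first: the interim product can reach 4,095 * 5,000 =
     * 20,475,000, so it needs a 32-bit variable. */
    uint32_t product = (uint32_t)reading * 5000UL;

    /* Divide last. Dividing first (reading / 4,095) would truncate to
     * zero for every reading below full scale. Adding half the divisor
     * before dividing rounds to the nearest millivolt. */
    return (uint16_t)((product + 2047UL) / 4095UL);
}

For the reading of 1,234 used above this returns 1,507mV; drop the +2047 rounding term and it truncates to 1,506mV.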
If you are still with me, then the next bit (pun unintended) is nothing more than an extension. You must know that, in the real world, the ranges of operation never match the full range of the ADC, as a result of offsets and/or design. In the industrial world a common signal is the 4-20mA loop; that is, the minimum signal is 4mA and the maximum is 20mA (a range of 16mA). This is frequently read across a 249Ω resistor, so the ADC sees a number corresponding to 0.996V for the minimum and 4.980V for the maximum.

In these circumstances you normally calibrate your inputs using a calibrator to generate a known 4mA (I_min) and 20mA (I_max). You take the reading at 4mA and save it (preferably in EEPROM) as N_min, while the reading at 20mA gives N_max. By proportion on a straight-line graph, you can calculate the current using the following:

I = I_min + (N - N_min) * (I_max - I_min) / (N_max - N_min)

You can work this out each time you need to know the current, but some of the terms are constant for a given set of calibration constants, and you can save them in non-volatile storage at the same time as you calibrate. For example, (I_max - I_min) will always be 16mA and can be considered a constant, as can I_min. Similarly, (N_max - N_min) can be calculated and saved at calibration. I would recommend that you scale the milliamps to microamps; then no floating-point data or math operations are required to find the result.

Sometimes you can be even smarter. Remember that a binary division by 2 is simply a shift to the right by one bit, while a multiplication by 2 is a one-bit shift to the left. Higher powers of two simply require more shift operations (or a single multi-bit shift). Since we control the current calibration by using our own calibrator, what if we were to select I_max as 12.192mA and I_min as 4.000mA? In this case the difference would be 8,192µA, and dividing by it becomes a simple 13-bit shift to the right.

Many times you will have to process the incoming data and translate it to some outgoing signal. As an example, I often have to process a 12-bit 4-20mA input signal to a 10-bit 4-20mA result on the output (with its own calibration constants as well). I find the easiest way is to scale the input to a percentage of full scale and then work in terms of full scale all the way through to the output. Since a percentage is not normally accurate enough, I work with 1,000, "permillage" as it were. If you are thoughtful enough, you may be able to scale this to 1,024 for faster processing. A short sketch of this calibrated, shift-friendly conversion appears at the end of this post.

The bottom line is that careful thought and planning will enable you to write compact programs that execute speedily and provide accurate results without the need for floating-point math. This is, however, at the expense of generality of approach, but "You gotta do what you gotta do!"

Are there any tricks and tips that you use and would care to share? If so, please post them below as comments.
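As promised above, here is a minimal sketch in C of the calibrated conversion. The names, the stored-constant scheme, and the example values are my own illustration under the assumptions described in the post, not the author's actual code.

#include <stdint.h>

/* Calibration constants, captured once with a current calibrator and
 * saved in non-volatile memory (the values here are illustrative). */
#define I_MIN_UA   4000UL    /* current applied at the low calibration point, in uA */
#define I_SPAN_UA  16000UL   /* (I_max - I_min) for a 4-20mA loop, in uA */

static uint16_t n_min;       /* ADC count saved at I_min (loaded from EEPROM at start-up) */
static uint16_t n_span;      /* (N_max - N_min), also saved at calibration */

/* Straight-line interpolation, all in integer microamps:
 *   I = I_min + (N - N_min) * (I_max - I_min) / (N_max - N_min) */
uint32_t adc_to_microamps(uint16_t reading)
{
    uint32_t delta = (uint32_t)(reading - n_min);   /* assumes reading >= n_min */
    return I_MIN_UA + (delta * I_SPAN_UA) / n_span;
}

/* The power-of-two trick: calibrate with I_min = 4.000mA and
 * I_max = 12.192mA so that the span is 8,192uA, and the division by
 * the span becomes a 13-bit right shift. Here the measured current is
 * scaled to 1,024ths of full scale for later processing. */
uint16_t microamps_to_1024ths(uint32_t current_ua)
{
    return (uint16_t)(((current_ua - 4000UL) * 1024UL) >> 13);
}

The second function only pays off because the calibrated span was deliberately chosen as a power of two; with the usual 16,000µA span you would keep the ordinary division.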
Aubrey Kagan is Engineering Manager at Emphatec.  
Related resources