Tag: estimation

Related blog posts
  • Popularity: 27
    2014-12-3 19:02
    2458 reads
    0 comments
    Estimating the likely power consumption of a piece of silicon before building it is necessarily an approximate endeavor -- not quite divining the future by poking through chicken bones, but certainly laden with assumptions and approximations. Unfortunately, early estimates are critical to the economic and timely development of world-class designs. You can't just iterate through design, manufacture, and measurement as many times as it takes to get it right.

The gold standard of pre-silicon estimation starts from a full physical gate-level implementation -- synthesized to final gates and placed-and-routed. Using this representation, with detailed power models for gates, extracted parasitics, and activity files from gate-level simulation, power estimates are claimed to fall within 5% of silicon measurements (with multiple caveats). However, a full implementation cycle takes time (enough time that you are not going to complete more than one a day for a ~1M-gate block), and it is difficult to correlate power problems back to the RTL design. So, while an improvement over waiting for silicon, this method is still not well suited to rapid design-measure-debug iteration.

The electronic system level (ESL) may be the best place to estimate and optimize the architecture for power, but either way you have to re-estimate at the RTL (register transfer level) to get the implementation architecture right. This offers the fastest and most direct debug cycle, but with a penalty in accuracy compared with gate-level estimation. RTL estimation still uses the same Liberty power models used in gate-level estimation and the same simulation testbench, but takes a scientific wild-ass guess (SWAG) at what gates the design will be mapped to in synthesis, Vth mixes, cell drives, data-path optimization, net capacitances, and more. Still, for many purposes at less aggressive nodes, this approach can provide useful guidance on major implementation decisions.

While basic estimates are often reasonably accurate in terms of overall power, they don't typically stand up well to close examination. If you want to understand, for example, static versus dynamic power components, or contributions by module, or contributions of memories versus the clock tree, basic analysis can be significantly off. One way to get a significant improvement is to calibrate the SWAG estimates against a fully implemented version of an earlier-generation or similar design. Unsurprisingly, mapping the Vth mix, drive strengths, capacitance models, and clock trees by clock domain can significantly tighten up estimates at the detail level.

But what if you are working on a new design and you don't have prior examples to guide calibration? Or what if you want to optimize for power at the micro-architectural level -- for example, splitting high-fanout nets or adding pipelining? To usefully guide decisions in these cases, the estimation tool has to more closely emulate a real implementation flow. In turn, this means close correlation on cell selection module by module -- not just the threshold mix, but also DesignWare selections. It also means close correlation on drive strengths, interconnect capacitances, and clock tree implementation, all of which require some form of physical prototyping. The trick here is to get a reasonable level of accuracy faster than you can through a full implementation cycle, so you can run through multiple design-measure-debug cycles in one day.
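To make the kind of breakdown discussed above concrete -- static versus dynamic power, reported per module -- here is a minimal, purely illustrative roll-up: per-net switching power (alpha * C * V^2 * f) plus per-cell leakage, grouped by module. The supply voltage, clock frequency, and net/cell data are invented placeholders, not output from any real flow.

```python
# Purely illustrative roll-up (not any vendor's flow): per-net switching
# power alpha * C * V^2 * f plus per-cell leakage, grouped by module.
# All voltages, frequencies, and net/cell data are made-up placeholders.

VDD = 0.9       # supply voltage (V), assumed
FCLK = 1.0e9    # clock frequency (Hz), assumed

# (module, net capacitance in F, toggle rate alpha in toggles per cycle)
nets = [
    ("alu",   2.0e-15, 0.25),
    ("alu",   1.5e-15, 0.10),
    ("fetch", 3.0e-15, 0.40),
]

# (module, leakage current in A) per mapped cell -- a SWAG until synthesis
cells = [
    ("alu",   5.0e-9),
    ("fetch", 8.0e-9),
]

power = {}
for module, cap, alpha in nets:
    entry = power.setdefault(module, {"dynamic": 0.0, "static": 0.0})
    entry["dynamic"] += alpha * cap * VDD ** 2 * FCLK   # switching power per net

for module, i_leak in cells:
    entry = power.setdefault(module, {"dynamic": 0.0, "static": 0.0})
    entry["static"] += i_leak * VDD                     # leakage power per cell

for module, entry in power.items():
    print(f"{module}: dynamic = {entry['dynamic'] * 1e6:.2f} uW, "
          f"static = {entry['static'] * 1e6:.3f} uW")
```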
Some approximations can be made to help achieve this one-day turnaround, but your vendor still needs to make a credible case that they can achieve reasonable correlation with the real implementation flow.

Getting to really useful RTL power estimation is hard work. We are constantly refining correlation at the component level (static versus dynamic), at the module level, and at the detailed architectural level. If you have additional ideas or feedback on the limitations of RTL power estimation, I'd be very interested to hear them.

Bernard Murphy, CTO, Atrenta Inc.
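One illustration of the calibration idea described above -- tightening a SWAG estimate against a fully implemented earlier design -- is to derive per-category correction factors from a prior design where both the RTL estimate and the signed-off gate-level number are known, then apply them to the new estimate. A minimal sketch follows; the category names and all numbers are made up for the example.

```python
# Hedged sketch of SWAG calibration: learn per-category correction factors
# from a prior design where both the RTL estimate and the gate-level
# reference are known, then apply them to a new RTL estimate.
# Categories and all numbers are illustrative only.

# Prior design: category -> (RTL estimate, gate-level reference), in mW
prior = {
    "clock_tree": (12.0, 18.5),
    "memories":   (30.0, 27.0),
    "logic_hvt":  (8.0,   9.2),
    "logic_lvt":  (5.0,   7.5),
}

# Correction factor per category = reference / estimate
factors = {cat: ref / est for cat, (est, ref) in prior.items()}

# New design: raw RTL estimates per category, in mW
new_rtl = {"clock_tree": 15.0, "memories": 22.0, "logic_hvt": 10.0, "logic_lvt": 6.0}

calibrated = {cat: val * factors.get(cat, 1.0) for cat, val in new_rtl.items()}

print(f"raw total:        {sum(new_rtl.values()):.1f} mW")
print(f"calibrated total: {sum(calibrated.values()):.1f} mW")
```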
  • Popularity: 20
    2014-3-4 19:05
    1503 reads
    1 comment
    " Estimation is Evil " is an article recently written by the always-provocative Ron Jeffries. It is part agile manifesto, part screed, and has a lot of decent observations. Unfortunately his conclusions are all wrong. At least in the context of embedded systems, which are, of course, a lot different than most other computer projects. He starts by saying the worst time to elicit requirements is at the start of the project, since that's the point in time when we know the least about what we'll build. The second clause is true, of course. But the conclusion is wrong. Requirements, at least for most firmware projects, have to be nailed down early. We'll never get them all, and we'll never get them all right. Eliciting requirements, at least to around an 80% accuracy level, is not impossible; it's just hard. Engineering the requirements is a fundamental part of all engineering disciplines. There's a lot of data we can use to evaluate how much effort should go into up-front requirements and architecture design as a function of program size (e.g., NASA's Flight Software Complexity briefing). And there are a lot of great tools to aid us. One of the worst expressions used is "RD". There's no such thing. There's R – research – when one explores the truly unknown. And there's D – development – when one has a pretty good idea where we're going. In an R environment there cannot be a schedule and there cannot be even an expectation of success, at least in a fashion that will help the company make a profit. Basic science is a great example of R. I have seen so many projects crash because the science wasn't well understood before development started. The project gets built, but the chemistry, physics, or bio isn't quite understood so things just don't work. If, as many seem to believe, it's impossible to elicit requirements, we're not doing development. We're doing research. Maybe that's true for designing the next Facebook, but it's not true for developing a cell phone, MP3 player, or most of the embedded products that fill the world. Ron says that management typically capriciously shortens estimates, therefore scheduling is a waste of time. He then goes on to say that even a single agile sprint of a couple of weeks will miss that interval's deadline since management too often overloads the team, demanding a "stretch" goal for the iteration. Thus he oddly seems to lump agile outcomes in the same bin as traditional approaches. There will always be a tension between management's schedule needs and the reality provided by engineering. That doesn't make scheduling evil; it makes the culture—well, evil is too strong a word—but the culture is broken. His advice is to just start building something and see how it works out. "Is this a bigger project, possibly taking six months or a year? Then build in two-week iterations for a few cycles. See where you are. If it's good, go ahead. If it's not, stop." That may work in some environments, but most businesses need to operate in a businesslike fashion, and need to make a specific product that can go to market in a reasonable time frame. Ron does cite some reasons why schedules are important but dismisses them. I disagree; there are hard deadlines. Miss a launch window to Mars and you've just lost 20 months till the next one opens. Money is finite; run over budget, especially in a smaller firm, and layoffs are likely. 
The engineering team does not exist in some ivory tower isolation; their product needs an ad campaign, a production line, a distribution network and a lot more that has to be coordinated. Often these things need to be scheduled well in advance. Almost all of the embedded firms I've worked with do struggle with scheduling. It's impossible without pretty darn good requirements, and, as I noted, these are hard to elicit. But that doesn't mean abdicating our responsibility to get them nearly-correct. Management doesn't seem to realise that the word "estimate" does not mean "exactimate." "Estimate" implies a probability distribution function – which is never an infinitely-narrow spike centred on 10:07 AM May 15. We do a terrible job, in most cases, dealing with feature creep. Changes cost money and/or time. Unmanaged scope changes will destroy the most carefully-produced schedule. The agile community has made us think hard about traditional development approaches. Much of what they advocate makes a lot of sense. But the idea of just starting to build something and see how things go does not, at least in the embedded space. Developers sometime complain about the various dysfunctional tendencies of their bosses, especially when it comes to capricious schedules. So my question to them is "How will you behave when you're the boss?" For your boss is being pressured by his boss, and so forth up the chain. Will you have the strength to fight off the pressures from on high and protect your team?  
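To make the "an estimate is a probability distribution, not a single date" point concrete, here is a small Monte Carlo sketch that rolls a few three-point (best/likely/worst) task estimates up into a distribution of completion times. The tasks and durations are invented for illustration.

```python
# Illustrative Monte Carlo roll-up of schedule estimates: each task gets a
# triangular (best/likely/worst) distribution, and the sum over many trials
# yields a distribution of completion times, not a single date.
# Task names and durations are invented for the example.
import random

tasks = {                      # (best, likely, worst) in weeks
    "requirements": (1, 2, 4),
    "firmware":     (4, 6, 12),
    "integration":  (2, 3, 8),
}

N = 100_000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
    for _ in range(N)
)

print(f"50% confidence: {totals[N // 2]:.1f} weeks")
print(f"90% confidence: {totals[int(N * 0.9)]:.1f} weeks")
```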
  • Popularity: 23
    2013-5-10 18:39
    3068 reads
    0 comments
    Call it Hell, Call it Heaven, it's a probable twelve to seven that the guy's only doing it for some doll.
    —Stubby Kaye and Johnny Silver, Guys and Dolls, 1955

In this column, we will tackle parameter estimation. This discipline is based on the fact that our knowledge of the state of any real-world system is limited to the measurements we can take—measurements that are inevitably corrupted by noise. Our challenge, then, is to determine the true state of the system based on these imperfect measurements.

The general idea is to take more measurements—usually many more—than the minimum needed to determine the system state. Then you crank the data through an algorithm that mitigates the effect of noisy data. The method of least squares is inherently a batch-processing sort of method, where you operate on the whole set of data items after they've all been collected.

Of course, the whole point of the method of least squares is to smooth out noisy measurements. But I've never addressed the nature of the noise itself. That has to change. In this column, we're going to look noise in the eye and deal with its nature. We'll discuss the behaviour of random processes, introducing notions like probability and probability distributions. For reasons that will become clear, we'll focus like a laser on a thing variously called the bell curve, Gaussian distribution, or normal distribution.

Now, I've been dealing with problems involving the normal distribution for many decades. But to my recollection, no one ever derived it for me. They just sort of plunked it down with little or no explanation. This would usually be the place where I'd start deriving it for you, but I'm not going to do that either. The reason is the same one my professors had: the classical derivation is pretty horrible, involving power series of binomial coefficients. Instead, I'm going to take a different approach here. I'm going to wave my arms a lot, and give you enough examples to convince you that the normal distribution is not only correct, but inevitable.

The nature of noise
Whenever I measure any real-world parameter, there is one thing I can be sure of: the value I read off my meter is almost certainly not the actual value of that parameter. Some of the error may be due to flaws in the instrument—scale factor and bias errors, linearisation errors, quantisation errors, temperature sensitivities, etc. Given a thorough enough calibration process (and enough patience), I can compensate for those kinds of errors. But there's one kind of error I have no chance of correcting for: the random noise that's always going to be there, in any real-world sensor.

Whenever I try to measure some state variable x, I don't get its true value; I get the measured value

    x̂ = x + n    (1)

where n is the noise, changing randomly with time. If I take only a single measurement, I can say nothing at all about the true value of x, because I have no idea what the instantaneous value of n is. But there is a way we can make an educated guess as to the value of x: measure it more than once. That's the whole idea behind the method of least squares.

Why would we calculate the parameters describing the nature of the noise—like the mean, variance, and standard deviation—if we're trying to make the noise go away? The answer is, we have this nagging suspicion that, by understanding the nature of the noise, we may be able to make a better estimate of x. At the very least, we can use these parameters to calculate error bounds for the measurements.
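Here is a minimal sketch of that idea: take repeated noisy measurements of the same x, then use their mean and standard deviation to form an estimate with error bounds. The true value and noise level are arbitrary choices for the example.

```python
# Hedged illustration: estimate a constant x from repeated noisy measurements.
# The true value and noise level are arbitrary choices for the example.
import random
import statistics

x_true = 3.7      # the state we are trying to measure (unknown in practice)
sigma_n = 0.5     # sensor noise standard deviation (assumed)

samples = [x_true + random.gauss(0.0, sigma_n) for _ in range(100)]

x_hat = statistics.mean(samples)        # least-squares estimate of a constant
s = statistics.stdev(samples)           # sample standard deviation
stderr = s / len(samples) ** 0.5        # standard error of the mean

print(f"estimate = {x_hat:.3f} +/- {2 * stderr:.3f} (2-sigma bound)")
```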
If there's any one thing that sets the Kalman filter apart from all other approaches, it's the fact that it doesn't just maintain a running estimate of the noise parameters; it uses them to get a better estimate—an optimal one, in fact—of the state variables. That being the case, we find ourselves highly motivated to learn more about noise.

Random thoughts
Where does noise come from, anyhow? For our purposes, we'll say that it comes from some random process, which runs independently of the dynamic system. But what does that term mean, exactly? That question has an easy answer: a random process is one whose output is unpredictable. In fact, "unpredictable" is practically a synonym for "random". So in our imagination, at least, we can envision some random process—a physical machine, if you will—that's free-running in our otherwise pristine, state-of-the-art embedded system. The random process has no apparent purpose except to muck up that system.

But since that process is, by definition, unpredictable, how can we get a handle on its behaviour? One thing's for sure: although you might get some insight using a spectrum analyser, you're not likely to learn much by watching white noise marching across the face of your oscilloscope. If we're going to learn anything at all about the noise generated by a random process, we need to be able to study its outputs, one by one. We need the equivalent of a software breakpoint that will let us freeze the process in place. We need a single-step "randomizer" that gives us one, and only one, new value every time we push its "go" button.

Fortunately, suitable candidates are all around us. As a kid, I played board games like Monopoly, Risk, or the Game of Life. Such games depend on the use of "randomizers" like coins, dice, or spinning arrows. Or, come to think of it, spinning bottles. The unpredictability adds to the enjoyment of the game. These primitive game randomizers have three things in common:
- When "activated" by a flip, roll, or spin, they must eventually come to rest in a stable state, so we can read the results.
- There must be more than one possible end state. Multiple values are sort of the point of the whole thing.
- The results must be unpredictable.

I should note that, because they are all mechanical devices, none of these gadgets is truly random. Being mechanical, they all obey Newton's laws of motion. If we could know, in advance, not only the air density, temperature, etc., but also the impulse given by the player's toss, roll, or spin, we could theoretically predict exactly what the end state would be. The thing that makes the gadgets useful as randomizers is that we can't and don't know these things. We trust the devices to be fair because we assume that no ordinary human is so skilled that he can flip a coin and make it come up heads every time. Don't bet the farm on that last assumption. People can do remarkable things, especially if there's money to be made. Even so, we can analyse the nature of the devices by assuming that the results are both fair and random.

Part 2 of this series delves further into the concept of probability.
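A quick preview of why the bell curve keeps turning up: sum a handful of these primitive randomizers and histogram the result, and the bell shape appears on its own. A small sketch, with the number of dice and the number of trials chosen arbitrarily.

```python
# Hedged sketch: sums of simple randomizers (dice rolls) pile up into a
# bell-shaped histogram -- an informal look at why the normal distribution
# keeps showing up. Dice count and trial count are arbitrary.
import random
from collections import Counter

N_DICE, N_TRIALS = 5, 20_000
sums = Counter(sum(random.randint(1, 6) for _ in range(N_DICE))
               for _ in range(N_TRIALS))

for total in range(N_DICE, 6 * N_DICE + 1):
    print(f"{total:2d} {'#' * (sums[total] // 100)}")
```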
Related resources
  • E-coins required: 0
    Date: 2022-8-1 10:34
    Size: 1.68MB
    Limit cycle-based exact estimation of FOPDT process parameters under input/output disturbances - a state-space approach
  • E-coins required: 1
    Date: 2022-6-2 11:00
    Size: 480.52KB
    Improved critical point estimation using a preload relay (陈国强)
  • E-coins required: 0
    Date: 2022-5-30 11:18
    Size: 381.47KB
    Estimation of parameters of under-damped second order plus dead time processes using relay feedback
  • E-coins required: 0
    Date: 2022-5-30 11:18
    Size: 387.75KB
    Estimation of Process Dynamics using Biased Relay Feedback Approach
  • E-coins required: 0
    Date: 2022-5-30 11:19
    Size: 272.21KB
    Estimation of the parameters of sampled-data systems by means of stochastic approximation
  • E-coins required: 1
    Date: 2022-5-30 11:23
    Size: 472.72KB
    EXACT PARAMETER ESTIMATION FROM RELAY AUTOTUNING UNDER STATIC LOAD DISTURBANCES (Derek P. Atherton)
  • E-coins required: 0
    Date: 2022-5-11 10:43
    Size: 231.24KB
    Bias compensation based parameter estimation for dual-rate sampled-data systems
  • E-coins required: 4
    Date: 2019-12-25 21:50
    Size: 124.8KB
    Uploader: givh79_163.com
    The H.264 video coding standard provides considerably higher coding efficiency than previous standards do, whereas its complexity is significantly increased at the same time. In an H.264 encoder, the most time-consuming component is variable block-size motion estimation. To reduce the complexity of motion estimation, an early termination algorithm is proposed in this paper. It predicts the best motion vector by examining only one search point. With the proposed method, some of the motion searches can be stopped early, and then a large number of search points can be skipped. The proposed method can work with any fast motion estimation algorithm. Experiments are carried out with a fast motion estimation algorithm that has been adopted by H.264. Results show that significant complexity reduction is achieved while the degradation in video quality is negligible.……
  • E-coins required: 5
    Date: 2019-12-25 21:43
    Size: 570.3KB
    Uploader: 978461154_qq
    Efficient motion estimation is an important problem because it determines the compression efficiency and complexity of a video encoder. Motion estimation can be formulated as an optimization problem; most motion estimation algorithms use mean squared error (MSE), sum of absolute differences (SAD) or maximum a posteriori probability (MAP) as the optimization criterion and apply search-based techniques (e.g., exhaustive search or three-step search) to find the optimum motion vector. However, most of these algorithms do not effectively utilize the knowledge gained in the search process for future search efforts and hence are computationally inefficient. This paper addresses this inefficiency problem by introducing an adaptive motion estimation scheme that substantially reduces computational complexity while yet providing comparable compression efficiency, as compared to existing fast-search methods.……
  • E-coins required: 5
    Date: 2020-1-14 18:36
    Size: 303.81KB
    Uploader: quw431979_163.com
    An enhanced three step search motion estimation method and its VLSI architecture……
  • E-coins required: 5
    Date: 2020-1-14 18:36
    Size: 361.39KB
    Uploader: givh79_163.com
    A VLSI design for full search block matching motion estimation……
  • E-coins required: 4
    Date: 2020-1-14 18:37
    Size: 438.81KB
    Uploader: 978461154_qq
    VLSI architecture for block-matching motion estimation algorithm……
  • E-coins required: 4
    Date: 2020-1-15 12:21
    Size: 1.87MB
    Uploader: quw431979_163.com
    Estimation with applications to tracking and navigation……