Tag: Kalman

Related posts
Related blog posts
  • Popularity: 18
    2013-6-4 19:49
    1981 reads
    0 comments
    One of my favourite new books is "The Information", James Gleick's new opus. I frequently go back to this book, which is about information theory and Shannon's contributions, among others, to understanding its implications not only for engineering but for any aspect of research into the natural world. While it is sometimes technically rough going, what brings me back again and again is that it is "the biography of an idea," as one reviewer said. Gleick does not spare the reader by dumbing down the complex technical issues, yet he is able to interweave them with the intellectual exploits and personal experiences of those who, over the last several hundred years, have contributed to our understanding.
    Coincidentally, as I have been reading it, I have also been making my way through "Understanding the normal distribution", Jack Crenshaw's most recent Insight blog on the importance of the Kalman algorithm in every aspect of electrical engineering and embedded systems design. According to Wikipedia (and Jack, of course), the Kalman filter algorithm uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone. It operates recursively on streams of noisy input data to produce a statistically optimal estimate of the underlying system state, and is commonly used for guidance, navigation and control of vehicles and in a wide range of digital signal processing applications in wireless networks and MEMS sensor positioning. (For a feel of that recursive predict/update loop, a minimal sketch follows this post.)
    Jack's most recent blog is also tough going. But rewarding: once you have read it, you will know that you have learned something valuable and useful. The usefulness of this algorithm is far from over. As with Gleick's book, each article and blog I read gives me a more nuanced understanding of this powerful idea, and I would like to continue building an online "biography" of this versatile algorithm. For that I need your help: comments on the site, blogs and design articles about your experiences, and pointers to interesting articles and papers you have read on this topic.
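    To make that recursive description concrete, here is a minimal sketch of a one-dimensional Kalman filter in Python. It is my illustration, not something from Jack's column or Wikipedia; the random-walk state model and the noise variances q and r are assumptions chosen just for the example.

    import random

    def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
        """Minimal scalar Kalman filter for an assumed random-walk state.

        q: assumed process-noise variance; r: assumed measurement-noise
        variance; x0, p0: initial state estimate and its variance.
        """
        x, p = x0, p0
        estimates = []
        for z in measurements:
            # Predict: a random-walk state stays put, but uncertainty grows.
            p = p + q
            # Update: blend prediction and measurement via the Kalman gain.
            k = p / (p + r)          # gain near 1 trusts the measurement
            x = x + k * (z - x)      # correct using the innovation (z - x)
            p = (1.0 - k) * p        # the corrected estimate is less uncertain
            estimates.append(x)
        return estimates

    # Noisy readings of a constant true value 1.25 (sigma = 0.2).
    readings = [1.25 + random.gauss(0.0, 0.2) for _ in range(50)]
    print(kalman_1d(readings)[-1])   # settles near 1.25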
  • Popularity: 23
    2013-5-10 18:39
    3065 reads
    0 comments
    "Call it Hell, call it Heaven, it's a probable twelve to seven that the guy's only doing it for some doll" — Stubby Kaye and Johnny Silver, Guys and Dolls, 1955
    In this column, we will tackle parameter estimation. This discipline is based on the fact that our knowledge of the state of any real-world system is limited to the measurements we can take—measurements that are inevitably corrupted by noise. Our challenge, then, is to determine the true state of the system based on these imperfect measurements.
    The general idea is to take more measurements—usually many more—than the minimum needed to determine the system state. Then you crank the data through an algorithm that mitigates the effect of noisy data. The method of least squares is inherently a batch processing sort of method, where you operate on the whole set of data items after they've all been collected. Of course, the whole point of the method of least squares is to smooth out noisy measurements. But I've never addressed the nature of the noise itself. That has to change. In this column, we're going to look noise in the eye, and deal with its nature. We'll discuss the behaviour of random processes, introducing notions like probability and probability distributions. For reasons that will become clear, we'll focus like a laser on a thing variously called the bell curve, Gaussian distribution, or normal distribution.
    Now, I've been dealing with problems involving the normal distribution for many decades. But to my recollection, no one ever derived it for me. They just sort of plunked it down with little or no explanation. This would usually be the place where I'd start deriving it for you, but I'm not going to do that either. The reason is the same one my professors had: the classical derivation is pretty horrible, involving power series of binomial coefficients. Instead, I'm going to take a different approach here. I'm going to wave my arms a lot, and give you enough examples to convince you that the normal distribution is not only correct, but inevitable.
    The nature of noise
    Whenever I measure any real-world parameter, there is one thing I can be sure of: the value I read off my meter is almost certainly not the actual value of that parameter. Some of the error may be due to flaws in the instrument—scale factor and bias errors, linearisation errors, quantisation errors, temperature sensitivities, etc. Given a thorough enough calibration process (and enough patience), I can compensate for those kinds of errors. But there's one kind of error I have no chance of correcting for: the random noise that's always going to be there, in any real-world sensor. Whenever I try to measure some state variable x, I don't get its true value; I get the measured value
    x̃ = x + n    (1)
    where n is the noise, changing randomly with time. If I take only a single measurement, I can say nothing at all about the true value of x, because I have no idea what the instantaneous value of n is. But there is a way we can make an educated guess as to the value of x: measure it more than once. That's the whole idea behind the method of least squares.
    Why would we calculate the parameters describing the nature of the noise—like the mean, variance, and standard deviation—if we're trying to make the noise go away? The answer is, we have this nagging suspicion that, by understanding the nature of the noise, we may be able to make a better estimate of x. At the very least, we can use these parameters to calculate error bounds for the measurements (a small worked example follows this excerpt).
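    As a concrete illustration of those noise parameters, here is a short Python sketch (my addition, not Jack's) that estimates the mean, variance, and standard deviation from repeated noisy measurements, then uses them for a rough error bound on the averaged estimate. The true value 5.0 and the noise sigma 0.3 are made-up numbers for the example.

    import math
    import random

    random.seed(1)
    true_x, sigma = 5.0, 0.3        # assumed values, for illustration only
    samples = [true_x + random.gauss(0.0, sigma) for _ in range(100)]

    n = len(samples)
    mean = sum(samples) / n                                 # sample mean
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)   # sample variance
    std = math.sqrt(var)                                    # standard deviation
    sem = std / math.sqrt(n)        # standard error of the averaged estimate

    # The average is a far better estimate of x than any single reading,
    # and mean +/- 2*sem brackets the true value roughly 95% of the time.
    print(f"estimate = {mean:.3f} +/- {2 * sem:.3f}")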
    If there's any one thing that sets the Kalman filter apart from all other approaches, it's the fact that it doesn't just maintain a running estimate of the noise parameters; it uses them to get a better estimate—an optimal one, in fact—of the state variables. That being the case, we find ourselves highly motivated to learn more about noise.
    Random thoughts
    Where does noise come from, anyhow? For our purposes, we'll say that it comes from some random process, which runs independently of the dynamic system. But what does that term mean, exactly? That question has an easy answer: a random process is one whose output is unpredictable. In fact, "unpredictable" is practically a synonym for "random". So in our imagination, at least, we can envision some random process—a physical machine, if you will—that's free-running in our otherwise pristine, state-of-the-art embedded system. The random process has no apparent purpose except to muck up that system.
    But since that process is, by definition, unpredictable, how can we get a handle on its behaviour? One thing's for sure: although you might get some insight using a spectrum analyser, you're not likely to learn much by watching white noise marching across the face of your oscilloscope. If we're going to learn anything at all about the noise generated by a random process, we need to be able to study its outputs, one by one. We need the equivalent of a software breakpoint that will let us freeze the process in place. We need a single-step "randomizer" that gives us one, and only one, new value every time we push its "go" button.
    Fortunately, suitable candidates are all around us. As a kid, I played board games like Monopoly, Risk, or the Game of Life. Such games depend on the use of "randomizers" like coins, dice, or spinning arrows. Or, come to think of it, spinning bottles. The unpredictability adds to the enjoyment of the game. These primitive game randomizers have three things in common:
    - When "activated" by a flip, roll, or spin, they must eventually come to rest in a stable state, so we can read the results.
    - There must be more than one possible end state. Multiple values are sort of the point of the whole thing.
    - The results must be unpredictable.
    I should note that, because they are all mechanical devices, none of these gadgets is truly random. Being mechanical, they all obey Newton's laws of motion. If we could know, in advance, not only the air density, temperature, etc., but also the impulse given by the player's toss, roll, or spin, we could theoretically predict exactly what the end state would be. The thing that makes the gadgets useful as randomizers is that we can't and don't know these things. We trust the devices to be fair because we assume that no ordinary human is so skilled that he can flip a coin and make it come up heads every time. Don't bet the farm on that last assumption. People can do remarkable things, especially if there's money to be made. Even so, we can analyse the nature of the devices by assuming that the results are both fair and random. (A quick simulation of one such randomizer follows below.) Part 2 of this series delves further into the concept of probability.
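    To hint at why the bell curve keeps turning up, here is a small Python experiment of my own, not part of the column: sum enough independent dice rolls, the crudest randomizer of all, and a histogram of the totals already traces out the familiar bell shape.

    import random
    from collections import Counter

    random.seed(2)

    def dice_total(n_dice):
        """Total shown by n_dice fair six-sided dice: one press of the 'go' button."""
        return sum(random.randint(1, 6) for _ in range(n_dice))

    # 10,000 presses of a ten-dice randomizer.
    totals = Counter(dice_total(10) for _ in range(10_000))

    # Crude text histogram: counts bulge around the mean total of 35,
    # approximating a normal distribution more closely as dice are added.
    for total in sorted(totals):
        print(f"{total:3d} {'#' * (totals[total] // 25)}")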
Related resources
  • E-coins required: 1
    Date: 2023-4-12 12:13
    Size: 656.15KB
    The iterated Kalman filter update as a Gauss-Newton method
  • E-coins required: 1
    Date: 2023-4-4 09:54
    Size: 1.52MB
    Generalized Kalman smoothing: Modeling and algorithms
  • E-coins required: 5
    Date: 2023-2-13 21:37
    Size: 3.31MB
    Uploader: czd886
    Human target following by a mobile robot based on adaptive Kalman filtering
  • E-coins required: 4
    Date: 2023-2-11 14:16
    Size: 1.29MB
    Uploader: ZHUANG
    A satellite network node fault localization method based on Kalman filtering and APNN
  • E-coins required: 0
    Date: 2022-9-1 19:27
    Size: 446.68KB
    Bayesian Kalman Filtering, Regularization and Compressed Sampling
  • E-coins required: 0
    Date: 2022-9-1 19:25
    Size: 10.58MB
    An iterative ensemble Kalman filter in the presence of additive model error
  • E-coins required: 0
    Date: 2022-9-1 19:25
    Size: 872.28KB
    An L1-Laplace Robust Kalman Smoother
  • E-coins required: 0
    Date: 2022-9-1 19:22
    Size: 762.34KB
    A novel multiple-outlier-robust Kalman filter
  • E-coins required: 0
    Date: 2022-8-31 17:15
    Size: 868.17KB
    An Outlier-Robust Kalman Filter
  • E-coins required: 1
    Date: 2022-5-2 11:50
    Size: 341.71KB
    Uploader: ZHUANG
    An attitude control algorithm for a two-wheeled robot based on Kalman filtering
  • E-coins required: 0
    Date: 2022-5-1 16:11
    Size: 1.24MB
    Uploader: ZHUANG
    Research on a cruising strategy for precision agriculture robots based on the Kalman filter algorithm
  • E-coins required: 0
    Date: 2022-5-1 16:11
    Size: 1.4MB
    Uploader: ZHUANG
    Research on a robot on-the-fly imaging method based on KALMAN filtering and deep learning
  • E-coins required: 1
    Date: 2022-5-1 10:44
    Size: 309.79KB
    Uploader: ZHUANG
    Kalman filtering for calibration of a robot multi-axis force sensor
  • E-coins required: 1
    Date: 2022-4-30 17:04
    Size: 651.33KB
    Uploader: ZHUANG
    Application of the Kalman filter algorithm in a self-balancing robot
  • E-coins required: 4
    Date: 2019-12-26 00:32
    Size: 299.37KB
    Uploader: 238112554_qq
    Kalman-based control of camera rotation……
  • E-coins required: 5
    Date: 2020-1-3 18:24
    Size: 229.33KB
    Uploader: wsu_w_hotmail.com
    A target representation method based on block-weighted colour histogram features: the image is divided into partially overlapping blocks, a weighted quantized colour histogram is computed for each block, and the resulting histogram group drives a meanshift-based colour target tracking system, with Kalman filtering used to estimate the target state. "A colour target tracking algorithm with block-based representation", Sun Zhongsen (1,2), Sun Junxi (1,3), Song Jianzhong (1) (1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, Jilin 130033; 2. Graduate School of the Chinese Academy of Sciences, Beijing 100039; 3. Changchun University of Science and Technology, Changchun, Jilin 130022). Keywords: target tracking, histogram group, meanshift algorithm. Target tracking is a branch of computer vision with important applications in video surveillance, compression and transmission, and high-tech weapons systems; a robust tracker must include two parts: a reliable representation of the target's features, and robustness to complex backgrounds and occlusion……
  • E-coins required: 3
    Date: 2020-1-13 19:24
    Size: 183.93KB
    Uploader: rdg1993
    Kalman filter for arbitrage identification in high frequency data: "A Robust Non-linear Multivariate Kalman Filter for Arbitrage Identification in High Frequency Data", P.J. Bolland and J.T. Connor, London Business School, Department of Decision Science, Sussex Place, Regents Park, London NW1 4SA; Phone (+44) 71-262-5050; Fax (+44) 71-724-7875; E-Mail pbolland@lbs.lon.ac.uk, jconnor@lbs.lon.ac.uk. Abstract: We present a methodology for modelling real world high frequency financial data. The methodology copes with the erratic arrival of data and is robust to additive outliers in the data set. Arbitrage pricing relationships are formulated into a linear state space representation. Arbitrage opportunities violate these pricing relationships and are analogous to multivariate additive outliers. Robust identification/filtering of arbitrage opportunities in t……
  • E-coins required: 5
    Date: 2020-1-14 14:11
    Size: 1.39MB
    Uploader: 2iot
    [EBOOK] Tracking and Kalman Filtering Made Easy……
  • E-coins required: 3
    Date: 2020-1-14 14:10
    Size: 3.15MB
    Uploader: 二不过三
    Tracking and Kalman Filtering Made Easy……
  • E-coins required: 5
    Date: 2020-1-15 16:04
    Size: 3.54MB
    Uploader: 238112554_qq
    Wiley - Kalman Filtering: Theory and Practice Using MATLAB (2e). Kalman Filtering: Theory and Practice Using MATLAB, Second Edition, Mohinder S. Grewal (California State University at Fullerton) and Angus P. Andrews (Rockwell Science Center). Copyright 2001 John Wiley & Sons, Inc. ISBNs: 0-471-39254-5 (Hardback); 0-471-26638-8 (Electronic). A Wiley-Interscience Publication, John Wiley & Sons, Inc., New York / Chichester / Weinheim / Brisbane / Singapore / Toronto……