Tag: Algorithms

Related blog posts
  • Popularity 23
    2015-7-31 18:40
    2271 reads
    0 comments
    Autonomous, self-driving cars are gaining a lot of attention and even hype these days, typified by the Google car, which has been undergoing extensive road trials. Depending on who you ask and which pundit you follow, these driverless cars will be a reality in a few years, or delayed far into the future, or somewhere in between. Along with the timeline, their level of presumed capability also covers a wide span, from handling any situation including dense urban traffic, to perhaps only more limited cases such as open-highway driving. What's in the autonomous car and what makes it work has been covered extensively in both the less-technical and the technical media; two of the many examples are here and here. Whatever the eventual reality of autonomous vehicles, one thing is for sure: they will require a lot of electrical power for all those high-profile sensors – radar, sonar, vision, and LIDAR (Light Detection and Ranging) are just a few – and even more for the less-obvious but enormous computational MIPS needed to process the huge amounts of data from them.

    You may be smart, but how much electrical power you need, how you get it, and how you dissipate the resultant heat is a mystery to me.

    Yet despite the many stories on these vehicles, there is one important area where I feel pretty much in the dark. All the coverage I have seen or found via web searches is about the sensors, the signal processing, the algorithms, the user interface, and the control mechanisms – but how the car powers all of the electronics is a technical mystery. I have not seen any credible block diagrams for the power-supply subsystem or even basic numbers on how much electrical power is needed. Even with low-power design, I assume it's in the multi-kilowatt range – but how many kilowatts? There are additional questions, of course. How does the power subsystem in an all-electric vehicle (EV) compare to one in a hybrid (HEV) or a conventional internal-combustion (IC) design? Further, any numbers on the power needed bring the inevitable, closely related question: how do you dissipate all the heat that the supply (even if it is efficient) and the loads generate?

    Given that today's non-autonomous vehicles are straining to supply power to all their new electronics, some automakers are looking to supplement the long-established 12-V basic battery rail with a more-efficient 42-V system, see here (déjà vu flashback: this is actually an idea which has come and gone, but may be coming again, as seen here and here). I have seen press releases about individual components such as MOSFETs used in autonomous vehicles, but that's looking at a tree when I want to see the forest. Do you have any insight into the power subsystem requirements, implementation details, or thermal design of autonomous vehicles?
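For a sense of scale, here is a back-of-envelope sketch of why the rail voltage and converter efficiency matter so much. The 2 kW load and 90% efficiency figures are purely assumed for illustration; the post gives no real numbers.

```cpp
#include <cstdio>

int main() {
    // Assumed electronics load for sensors plus compute; the post gives no
    // real figure, so 2 kW is purely an illustrative guess.
    const double load_watts = 2000.0;
    // Assumed DC-DC conversion efficiency (also a guess).
    const double efficiency = 0.90;

    // Power drawn from the battery rail, and heat dissipated in the converter itself.
    const double input_watts = load_watts / efficiency;
    const double converter_heat_watts = input_watts - load_watts;

    // Current the distribution wiring must carry on a 12-V versus a 42-V rail.
    const double amps_12v = input_watts / 12.0;
    const double amps_42v = input_watts / 42.0;

    std::printf("Input power: %.0f W, converter heat: %.0f W\n",
                input_watts, converter_heat_watts);
    std::printf("Rail current: %.0f A at 12 V vs %.0f A at 42 V\n",
                amps_12v, amps_42v);
    return 0;
}
```

With those assumed numbers the converter alone sheds over 200 W of heat, and a 12-V rail would have to carry roughly 185 A versus about 53 A at 42 V, which is exactly the kind of arithmetic that makes the higher-voltage rail and the thermal design such pressing questions.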
  • Popularity 18
    2014-1-30 18:59
    1674 reads
    0 comments
    I've invited my colleague Brian Dipert, editor-in-chief of the Embedded Vision Alliance, to share his perspective on various face analysis algorithms used in embedded vision. He regularly discovers and reports on interesting embedded vision applications, some of which he discusses here.

    Face recognition—the technology that enables cameras (and the computers behind them) to identify people automatically, rapidly, and accurately—has become a popular topic in movies and television. Consider the 2002 blockbuster Minority Report. If you've seen it (and if you haven't, you definitely should), you might recall the scene where Tom Cruise's character, Chief John Anderton, is traversing a shopping mall. After scanning his face, sales kiosks greet him by name and solicit him with various promotions. Lest you think this is just a futuristic depiction, the British supermarket chain Tesco is now making it a reality.

    Plenty of other real-life face recognition implementations exist. Consider Facebook's tag suggestions, an automated system that identifies Friends' faces each time you upload a photo (a facility likely enhanced by the company's 2012 acquisition of Face.com), or Apple's iPhoto software, which automatically clusters pictures containing the same person. Don't forget the face recognition-based unlock option supported in the last few Android releases (likely enabled by Google's 2011 acquisition of Pittsburgh Pattern Recognition) and available on iOS via third-party applications. And the new Microsoft Xbox One and Sony PlayStation 4 game consoles support face recognition-based user login and interface customisation via their camera accessories (included with the Xbox One, optional with the PS4).

    Face recognition has made substantial progress in recent years, but it's admittedly not yet perfect. Some of its limitations are due to an insufficiently robust database of images. Others are the result of algorithms not yet able to compensate fully for off-centre viewing angles, poor lighting, or subjects who are wearing hats or sunglasses or sporting new facial hair or makeup. Ironically, face recognition's inability to identify people with guaranteed reliability provides privacy advocates with some solace.

    However, other face analysis technologies are arguably more mature, enabling a host of amazing applications, and they are useful for addressing privacy concerns, since they don't attempt to identify individuals. For example, face analysis algorithms can accurately discern a person's gender. This capability is employed by electronic billboards that display varying messages depending on whether a man or a woman is looking at them, as well as by services that deliver dynamically updated reports on meeting-spot demographics. Face analysis techniques can also make a pretty good guess at someone's age bracket. Intel and Kraft harnessed this capability last year in developing a line of vending machines that dispense free pudding samples only to adults. More recently, the Chinese manufacturing subcontractor Pegatron used it to screen job applicants, flagging those who may be less than 15 years old, so it can avoid hiring underage workers.

    The mainstream press tends to latch on to any imperfection as a broad-brush dismissal of a particular technology. As engineers, we know what an oversimplification that is. While R&D and product developers continue to pursue the holy grail of 100% accurate face recognition, other face analysis techniques are sufficiently mature to support numerous compelling uses. How will you leverage them in your next-generation system designs? Visit the Embedded Vision Alliance website for plenty of application ideas, along with implementation details and supplier connections.

    Jeff Bier is the founder of the Embedded Vision Alliance.
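Every application mentioned above, from tag suggestions to console login, begins with the same step: finding where the faces are in the frame before any recognition or attribute analysis runs. As a minimal sketch of that first step only (not the Alliance's or any vendor's actual pipeline), here is what it might look like with OpenCV's stock Haar-cascade detector; the cascade file path and camera index are assumptions about your setup.

```cpp
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <iostream>
#include <vector>

int main() {
    // Stock frontal-face cascade shipped with OpenCV; adjust the path to your install.
    cv::CascadeClassifier face_cascade;
    if (!face_cascade.load("haarcascade_frontalface_default.xml")) {
        std::cerr << "Could not load cascade file\n";
        return 1;
    }

    cv::VideoCapture camera(0);        // default webcam
    cv::Mat frame, gray;
    while (camera.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);  // reduce sensitivity to lighting

        // Each detected rectangle could then be cropped and handed to a
        // recognition or gender/age-analysis stage.
        std::vector<cv::Rect> faces;
        face_cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(60, 60));
        std::cout << "Faces in frame: " << faces.size() << "\n";
    }
    return 0;
}
```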
  • Popularity 23
    2014-1-30 18:57
    2054 reads
    0 comments
    I've invited my colleague Brian Dipert to share his perspective on various face analysis algorithms used in embedded vision. As editor-in-chief of the Embedded Vision Alliance, he regularly discovers and reports on interesting embedded vision applications, some of which he discusses here.

    Face recognition—the technology that enables cameras (and the computers behind them) to identify people automatically, rapidly, and accurately—has become a popular topic in movies and television. Consider the 2002 blockbuster Minority Report. If you've seen it (and if you haven't, you definitely should), you might recall the scene where Tom Cruise's character, Chief John Anderton, is traversing a shopping mall. After scanning his face, sales kiosks greet him by name and solicit him with various promotions. Lest you think this is just a futuristic depiction, the British supermarket chain Tesco is now making it a reality.

    Plenty of other real-life face recognition implementations exist. Consider Facebook's tag suggestions, an automated system that identifies Friends' faces each time you upload a photo (a facility likely enhanced by the company's 2012 acquisition of Face.com), or Apple's iPhoto software, which automatically clusters pictures containing the same person. Don't forget the face recognition-based unlock option supported in the last few Android releases (likely enabled by Google's 2011 acquisition of Pittsburgh Pattern Recognition) and available on iOS via third-party applications. And the new Microsoft Xbox One and Sony PlayStation 4 game consoles support face recognition-based user login and interface customisation via their camera accessories (included with the Xbox One, optional with the PS4).

    Face recognition has made substantial progress in recent years, but it's admittedly not yet perfect. Some of its limitations are due to an insufficiently robust database of images. Others are the result of algorithms not yet able to compensate fully for off-centre viewing angles, poor lighting, or subjects who are wearing hats or sunglasses or sporting new facial hair or makeup. Ironically, face recognition's inability to identify people with guaranteed reliability provides privacy advocates with some solace.

    However, other face analysis technologies are arguably more mature, enabling a host of amazing applications, and they are useful for addressing privacy concerns, since they don't attempt to identify individuals. For example, face analysis algorithms can accurately discern a person's gender. This capability is employed by electronic billboards that display varying messages depending on whether a man or a woman is looking at them, as well as by services that deliver dynamically updated reports on meeting-spot demographics. Face analysis techniques can also make a pretty good guess at someone's age bracket. Intel and Kraft harnessed this capability last year in developing a line of vending machines that dispense free pudding samples only to adults. More recently, the Chinese manufacturing subcontractor Pegatron used it to screen job applicants, flagging those who may be less than 15 years old, so it can avoid hiring underage workers.

    The mainstream press tends to latch on to any imperfection as a broad-brush dismissal of a particular technology. As engineers, we know what an oversimplification that is. While R&D and product developers continue to pursue the holy grail of 100% accurate face recognition, other face analysis techniques are sufficiently mature to support numerous compelling uses. How will you leverage them in your next-generation system designs? Visit the Embedded Vision Alliance website for plenty of application ideas, along with implementation details and supplier connections.

    Jeff Bier is the founder of the Embedded Vision Alliance.
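Once a face has been located, the gender and age-bracket estimates described above typically come from a separate classification stage run on the cropped face. The sketch below shows what such a stage might look like with OpenCV's DNN module; the model file names, the 227x227 input size, the mean values, and the output label order follow a widely used public Caffe gender model and should be treated as assumptions rather than anything specified in the post.

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/core.hpp>
#include <string>

// Classify the apparent gender of a detected face crop. The prototxt/caffemodel
// file names below are assumptions (a commonly used public model), not fixed names.
std::string classifyGender(const cv::Mat& face_crop) {
    static cv::dnn::Net net =
        cv::dnn::readNetFromCaffe("gender_deploy.prototxt", "gender_net.caffemodel");

    // The assumed model expects a 227x227, mean-subtracted BGR blob.
    cv::Mat blob = cv::dnn::blobFromImage(
        face_crop, 1.0, cv::Size(227, 227),
        cv::Scalar(78.4263, 87.7689, 114.8958), /*swapRB=*/false);

    net.setInput(blob);
    cv::Mat scores = net.forward();  // 1x2 row of class scores

    // Label order ("male" first) matches the assumed model's training setup.
    return scores.at<float>(0, 0) > scores.at<float>(0, 1) ? "male" : "female";
}
```

An age-bracket estimator would have exactly the same shape, just with a model whose output row holds one score per bracket.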
  • Popularity 15
    2013-10-23 21:46
    2996 reads
    0 comments
    I am now having a delightful, exuberant, and all-around great time. This is because I've started work on my Arduino-powered robot, which means my head is currently bubbling over with ideas and brimming with questions. In my previous blog on this topic, I started off by looking at traditional 2-wheel and 4-wheel bases. Eventually, however, I decided to go with a 3-wheel base from RobotShop.com as shown below. Why? Well, quite apart from anything else, it looks almost as cool as Doctor Who in a bow tie and a fez!

    Robot base in its usual orientation (top) and flipped over so we can better see the motors (bottom).

    Observe the special "transwheels" attached to the motors. These boast free-turning rollers, mounted perpendicular to the axle, arranged around the periphery of the wheel. Thanks to its transwheels, the exciting thing about my omni-directional robot base is that it can move in any direction. Of course, the tricky thing about controlling this omni-directional base is that it can move in any direction (LOL). This is actually a tremendously interesting topic, which we will consider in more detail below, but first...

    So, with my base on order, the next thing I need is a controller to handle my three DC motors. Of course, I could use my main Arduino board to directly control an H-bridge, but I want to keep the Arduino's computing resources as free as possible. The solution is to use a dedicated motor controller board. Each of my Cytron SPG30-20K motors has a rated current of 410mA. After looking around the Internet, it seemed to me that the most affordable and best-supported option was the Arduino Motor-Stepper-Servo Shield from Adafruit.com.

    This little beauty—which connects to the main Arduino via an I2C interface, thereby consuming only two pins—can control up to four DC motors (more than enough for my three) or two stepper motors. It can handle 1.2A per channel and offers a 3A peak current capability, so it's more than capable of dealing with the motors on my base. The cool thing is that all my main Arduino has to do is send a command to the motor controller saying "Make motor 'n' run in 'x' direction at 'y' speed," after which we can leave the motor controller to perform its magic (see the sketch at the end of this post). Now, this is where things start to get interesting...

    Purely for the sake of discussion, let's assume we have a bird's-eye view of a 2-wheel base. Let's further assume that this base is currently pointing "north," but we wish it to travel some distance "east." We could illustrate this as shown below (these images omit the one or two castors used to balance the base). As we see, we first have to rotate the robot until it's pointing in the direction we wish to go (we'll call this action "rotation"), after which we can move it as far as we wish (we'll call this action "translation").

    Now consider the 3-wheel base we introduced earlier. By varying the speed and direction of rotation of the three wheels, we can make this base move in any direction we please. On the other hand, we may well decide that our 3-wheel base has a "front" (indicated by the arrows in the images below). One reason for this might be that we have a particular sensor—like the Pixy machine vision sensor—that is mounted with a specific orientation. Now, working with a 3-wheel base, let's once again assume that our base starts off pointing "north," but we wish it to travel some distance "east."

    Of course, we could do the same thing we did with our 2-wheel base, which is to first perform the rotation and then perform the translation—that is, to keep these actions completely separate. Alternatively, by carefully controlling and varying the speed of each of our three wheels, we can perform the rotation and the translation simultaneously, as illustrated below.

    As we see, there are two ways in which we could approach this. The first is to arrange things such that the rotation is timed to end with the translation. The second, which I think I prefer, is to make the rotation happen—and get the base pointing in the right direction—as quickly as possible and then continue with the translation. Of course, when I say "as quickly as possible," this needs to be qualified a little. If the base rotates or accelerates too quickly, all sorts of unfortunate things might occur. What we want to do is ramp up to the full rotational and translational speeds at the beginning and then ramp down again at the end.

    This topic, along with more esoteric concepts like Genetic Algorithms, will be the subject of my next blog. Until then, do you have any questions or comments?
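To make the "rotate and translate at the same time" idea concrete, here is a minimal Arduino-style sketch of the wheel-speed mixing, wired up to the Adafruit Motor Shield V2 library mentioned above. The 120-degree wheel layout, motor numbering, axis convention, and the crude mapping of speeds onto the shield's 0-255 PWM range are all my assumptions rather than details from the post.

```cpp
#include <Wire.h>
#include <Adafruit_MotorShield.h>
#include <math.h>

Adafruit_MotorShield AFMS = Adafruit_MotorShield();   // I2C shield, default address
Adafruit_DCMotor *motors[3];

// Assumed layout: three omni-wheels spaced 120 degrees apart ("kiwi drive").
const float WHEEL_ANGLE[3] = {0.0, 2.0 * M_PI / 3.0, 4.0 * M_PI / 3.0};

// Mix a desired body motion (vx, vy = translation components, w = rotation term)
// into three wheel speeds and send them to the motor shield. Units are simply
// whatever maps sensibly onto the 0-255 PWM range -- a deliberate simplification.
void drive(float vx, float vy, float w) {
  for (int i = 0; i < 3; i++) {
    // Standard omni-wheel mixing: the component of the body velocity along each
    // wheel's rolling direction, plus the shared rotation term.
    float wheel = -sin(WHEEL_ANGLE[i]) * vx + cos(WHEEL_ANGLE[i]) * vy + w;
    motors[i]->setSpeed(constrain((int)fabs(wheel), 0, 255));
    motors[i]->run(wheel >= 0 ? FORWARD : BACKWARD);
  }
}

void setup() {
  AFMS.begin();                                       // start talking to the shield
  for (int i = 0; i < 3; i++) motors[i] = AFMS.getMotor(i + 1);
}

void loop() {
  drive(150, 0, 0);    // pure translation (assumed axis convention), no rotation
  delay(2000);
  drive(100, 0, 60);   // translate and rotate at the same time
  delay(2000);
  drive(0, 0, 0);      // stop
  delay(2000);
}
```

The single mixing line is the whole trick: with the rotation term set to zero the base translates in any heading without turning, and making it non-zero while vx and vy are non-zero gives the simultaneous rotate-and-translate behaviour discussed above.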
  • Popularity 17
    2013-10-23 21:34
    1755 reads
    0 comments
    I am currently having a delightful, exuberant, and all-around wonderful time. This is because I've started work on my Arduino-powered robot, which means my head is currently bubbling over with ideas and brimming with questions. In my previous blog on this topic, I started off by looking at traditional 2-wheel and 4-wheel bases. Eventually, however, I decided to go with a 3-wheel base from RobotShop.com as shown below. Why? Well, quite apart from anything else, it looks almost as cool as Doctor Who in a bow tie and a fez!

    Robot base in its usual orientation (top) and flipped over so we can better see the motors (bottom).

    Observe the special "transwheels" attached to the motors. These boast free-turning rollers, mounted perpendicular to the axle, arranged around the periphery of the wheel. Thanks to its transwheels, the exciting thing about my omni-directional robot base is that it can move in any direction. Of course, the tricky thing about controlling this omni-directional base is that it can move in any direction (LOL). This is actually a tremendously interesting topic, which we will consider in more detail below, but first...

    So, with my base on order, the next thing I need is a controller to handle my three DC motors. Of course, I could use my main Arduino board to directly control an H-bridge, but I want to keep the Arduino's computing resources as free as possible. The solution is to use a dedicated motor controller board. Each of my Cytron SPG30-20K motors has a rated current of 410mA. After looking around the Internet, it seemed to me that the most affordable and best-supported option was the Arduino Motor-Stepper-Servo Shield from Adafruit.com.

    This little beauty—which connects to the main Arduino via an I2C interface, thereby consuming only two pins—can control up to four DC motors (more than enough for my three) or two stepper motors. It can handle 1.2A per channel and offers a 3A peak current capability, so it's more than capable of dealing with the motors on my base. The cool thing is that all my main Arduino has to do is send a command to the motor controller saying "Make motor 'n' run in 'x' direction at 'y' speed," after which we can leave the motor controller to perform its magic. Now, this is where things start to get interesting...

    Purely for the sake of discussion, let's assume we have a bird's-eye view of a 2-wheel base. Let's further assume that this base is currently pointing "north," but we wish it to travel some distance "east." We could illustrate this as shown below (these images omit the one or two castors used to balance the base). As we see, we first have to rotate the robot until it's pointing in the direction we wish to go (we'll call this action "rotation"), after which we can move it as far as we wish (we'll call this action "translation").

    Now consider the 3-wheel base we introduced earlier. By varying the speed and direction of rotation of the three wheels, we can make this base move in any direction we please. On the other hand, we may well decide that our 3-wheel base has a "front" (indicated by the arrows in the images below). One reason for this might be that we have a particular sensor—like the Pixy machine vision sensor—that is mounted with a specific orientation. Now, working with a 3-wheel base, let's once again assume that our base starts off pointing "north," but we wish it to travel some distance "east."

    Of course, we could do the same thing we did with our 2-wheel base, which is to first perform the rotation and then perform the translation—that is, to keep these actions completely separate. Alternatively, by carefully controlling and varying the speed of each of our three wheels, we can perform the rotation and the translation simultaneously, as illustrated below.

    As we see, there are two ways in which we could approach this. The first is to arrange things such that the rotation is timed to end with the translation. The second, which I think I prefer, is to make the rotation happen—and get the base pointing in the right direction—as quickly as possible and then continue with the translation. Of course, when I say "as quickly as possible," this needs to be qualified a little. If the base rotates or accelerates too quickly, all sorts of unfortunate things might occur. What we want to do is ramp up to the full rotational and translational speeds at the beginning and then ramp down again at the end.

    This topic, along with more esoteric concepts like Genetic Algorithms, will be the subject of my next blog. Until then, do you have any questions or comments?
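The "ramp up at the start, ramp down at the end" behaviour described in the closing paragraph is commonly implemented as a trapezoidal speed profile. Here is a minimal sketch of that idea only; the acceleration value, cruise speed, update period, and function names are illustrative placeholders rather than anything from the post.

```cpp
// Trapezoidal speed profile: accelerate at a fixed rate, cruise, then decelerate,
// so the base never jerks into or out of a move. All constants are assumed values.
const float MAX_SPEED = 200.0f;   // cruise speed, arbitrary units
const float ACCEL     = 100.0f;   // speed units per second^2
const float DT        = 0.02f;    // 20 ms control-loop period

// Commanded speed at time t (seconds) within a move lasting total_time seconds.
// If the move is too short to reach MAX_SPEED, the profile degrades gracefully
// into a triangle (ramp up, then straight back down).
float trapezoidalSpeed(float t, float total_time) {
  if (t <= 0.0f || t >= total_time) return 0.0f;
  float up   = ACCEL * t;                  // limit imposed by the ramp-up
  float down = ACCEL * (total_time - t);   // limit imposed by the ramp-down
  float v    = (up < down) ? up : down;
  return (v < MAX_SPEED) ? v : MAX_SPEED;  // cap at the cruise speed
}

// Example use: step through a 3-second move at the control-loop rate.
void runMove() {
  for (float t = 0.0f; t <= 3.0f; t += DT) {
    float v = trapezoidalSpeed(t, 3.0f);
    // ...scale the translation (and, if desired, rotation) commands by v here...
  }
}
```

In the robot, the value returned here would simply scale the wheel-speed commands sent to the motor controller, so the whole base speeds up and slows down smoothly rather than lurching.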
Related resources