Creating visionary mobile and embedded designs

2013-2-8 19:38 | Category: Consumer Electronics

In the latter part of the 1990s, just about the time the IPv6 upgrade of the Internet Protocol was made available, Intel Corp. developed OpenCV, a set of software tools and algorithms for creating real-time embedded vision applications. It was subsequently donated to the open source community.


Both are now broadly available and are changing the way we use computers. But each has traveled a different path to its now increasingly wide acceptance among both the developer community and the broader device-using public.


IPv6 was introduced to deal with the rapidly dwindling pool of addresses available under the previous IPv4 scheme. But it was resisted, and took ten years to come into common use, because most organisations were reluctant to invest in the infrastructure needed. IPv4 addresses are now essentially exhausted, and there is no choice but to make the shift.


OpenCV, on the other hand, was quickly adopted by a small coterie of developers in particular embedded market segments, such as factory automation and military/aerospace, where machine vision was critically important.


And now, about ten years later, the pace of its acceptance has rapidly accelerated as a growing number of companies and developers see it as the tool set of choice for making embedded computing platforms more user friendly, not only in mobile devices but in the many new embedded consumer applications in home automation, home networking, lighting, smart TVs, power grid metering and smart appliances.


The charter of this "embedded vision" alliance, spearheaded by companies such as AMD, Analog Devices, BDTI, CEVA, Freescale, Intel, Nvidia, MathWorks, National Instruments, Synopsys, Tensilica and Texas Instruments, among others, is to move beyond the current touch-based interfaces.


While such MEMS- and capacitive-sensor-based interfaces are simpler than the mouse- and GUI-based PC interfaces that preceded them, they still require the user to learn how to operate the computing system. Taking a completely different approach, the aim of such vision-based designs is to create software mechanisms by which the user does not have to learn how to use the computing device.


Instead, the strategy is to build the software infrastructure that will make it possible for computers to understand us, by means of vision algorithms that correctly recognise and interpret many common, innate human gestures, facial expressions, eye movements and other natural cues.
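
As a concrete illustration of the kind of algorithm involved, here is a minimal sketch that uses OpenCV's stock Haar-cascade classifiers to find faces and eyes in a live camera feed. It assumes a modern OpenCV Python package (which exposes the bundled cascade files via cv2.data.haarcascades) and a default camera at index 0; the parameter values and window handling are illustrative choices, not a prescribed design.

```python
import cv2

# Load the pre-trained Haar cascades that ship with OpenCV.
# The cv2.data.haarcascades path is provided by modern OpenCV
# Python packages; older installs keep these XML files elsewhere.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # default camera; the index is an assumption

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces in the whole frame, then search for eyes
    # only inside each detected face region to cut the search cost.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                          minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

    cv2.imshow("embedded vision demo", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Cascade detection of this kind is computationally light enough to run in real time on fairly modest hardware, which is one reason it became a common starting point for face- and gesture-aware embedded interfaces.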


In this rapidly evolving segment of embedded systems development, new resources are constantly becoming available. As I come across them I will do what I can to make you aware of them. And if you come across resources in this area that you think are useful, let me know.


Also, let me know about your experiences in the form of blogs or design and development articles you might wish to share with the embedded developer community.


 
