Tag: CEVA

Related blog posts
  • Popularity 24
    2015-3-20 17:28
    1477 reads
    1 comment
    I truly believe we are on the brink of a major evolution in embedded systems technology involving embedded vision and embedded speech capabilities.

    One example I often use to describe what I'm talking about is that of a humble electric toaster in the kitchen. Actually, that reminds me of a rather amusing story that really happened to me last year...

    Some time ago, I gave my dear 84-year-old mother an iPad. Initially she was a tad trepidatious, but she quickly took to it like a duck to water. On one of my subsequent visits to see her in the UK, I gave her an Amazon gift card and told her she could use it to order something with her iPad.

    "What shall I buy?" she asked. "What do you want?" I replied. Well, it turned out that she was interested in having a matching electric kettle and electric toaster, so I showed her how to search for them on Amazon. She had a great time rooting around all the various offerings and eventually selecting and ordering the toaster-kettle combo of her dreams.

    After she'd placed her order, I asked: "What would grandma {my grandma; her mother} have thought about all of this -- seeing you sitting there ordering an electric kettle and electric toaster over the Internet?"

    Now, you have to remember that my mother didn't even get electricity in her house until 1943, when she was 15 years old -- prior to that, they had a coal fire for heating and gas mantles for lighting in their little terraced home.

    My mother's response really made me think when she said: "Your grandmother wouldn't have understood anything about the iPad or the wireless network or the Internet -- what would have really gotten her excited would have been the thought of an electric toaster and an electric kettle!"

    It makes you think, doesn't it? But we digress...

    Returning to my example of a humble electric toaster in the kitchen, let's suppose the little scamp were equipped with embedded vision and embedded speech capabilities, and let's then envision the following scenario. It starts when I walk up to the toaster and insert two slices of wheat bread. If this is the first time I've used it, the toaster might enquire: "Good morning, it's my pleasure to serve you. Can you please tell me your name?" To which I might reply: "You can call me Max the Magnificent," or something of that ilk. The toaster then asks: "How would you like this to be prepared?" And I might reply: "Reasonably well done, if you please."

    When the toast subsequently emerges, the toaster might say: "How's that for you?" And I might reply: "Pretty good, but perhaps just a tad darker next time."

    Sometime later, my son, Joseph, meanders into the kitchen, drops two slices of bread into the toaster, and an equivalent dialog takes place; similarly for my wife (Gina the Gorgeous).

    It may be that the following day we each wish to toast some bagels, or perhaps some English muffins, and -- over time -- the toaster will become acquainted with our varying preferences for each item.

    The whole point of this is that, in the future, the toaster can use its embedded vision to determine (a) who is doing the toasting and (b) the type of food being toasted. Based on this information, it can give each user the toasting experience of their dreams.

    Furthermore, should the occasion arise that my wife is making me breakfast in bed, for example (hey, it doesn't hurt to dream), then -- as opposed to giving me something inedible as is her wont -- she could leave the tricky part up to the toaster and say something like: "Can you make this just the way Max likes it?"

    I'm sorry... I got carried away dreaming of breakfast in bed in general, and one I actually wanted to eat in particular. O, what a frabjous day that would be! But, once again, we digress...

    The reason I'm waffling on about this here is that the folks at CEVA have just announced the availability of their new CEVA-XM4 imaging and vision processor IP, which will enable real-time 3D depth map and point cloud generation; deep learning and neural network algorithms for object recognition and context awareness; and computational photography for image enhancement, including zoom, image stabilization, noise reduction, and improved low-light capabilities.

    This fourth-generation imaging and vision IP is equipped with the functionality required to solve the most critical challenges associated with implementing energy-efficient, human-like vision and visual perception capabilities in embedded systems.

    The CEVA-XM4 boasts a programmable wide-vector architecture, with fixed- and floating-point processing, multiple simultaneous scalar units, and a vision-oriented low-power instruction set, resulting in tremendous performance coupled with extreme energy efficiency.

    According to CEVA: The new IP's capabilities allow it to support real-time 3D depth map generation and point cloud processing for 3D scanning. In addition, it can analyze scene information using the most processing-intensive object detection and recognition algorithms, ranging from ORB, Haar, and LBP all the way to deep learning algorithms that use neural network technologies such as convolutional neural networks (CNNs). The architecture also features a number of unique mechanisms, such as parallel random memory access and a patented two-dimensional data processing scheme. These enable 4096-bit processing -- in a single cycle -- while keeping the memory bandwidth under 512 bits for optimum energy efficiency. In comparison to today's most advanced GPU cluster, a single CEVA-XM4 core will complete a typical 'object detection and tracking' use-case scenario while consuming approximately 10% of the power and requiring approximately 5% of the die area.

    Of course, having household appliances like toasters equipped with embedded vision and embedded speech capabilities might end up being a bit of a "two-edged sword," as it were. Did you ever see the British science fiction TV comedy series Red Dwarf? There was a classic "Does Anyone Want Any Toast?" scene that will live in my memory forever.

    How about you? What do you think about the prospects of embedded systems that can spot you walking by and bring you up to date with what the washing machine said to the tumble dryer? Are we poised to enter an exciting new world... or a recluse's nightmare?
  • Popularity 23
    2013-5-8 15:17
    2200 reads
    3 comments
    This article was put together from two diary entries, from March and August 2011. I'm sharing it casually; where I get things wrong, please bear with me.

    Many cores, many threads, and vector processor cores: these seem to be the road that today's SDR communications processors are all converging on, by different paths.

    Look around: the processors from CEVA, Tensilica, Cognovo, and Sandbridge that claim to target 4G SDR all share these basic traits. (For intellectual-property reasons I can't paste their system and core architecture diagrams here. Most of these companies have since been acquired, and who knows whether the brands will survive; either way, let's respect their IP and not reveal too many technical details here.)

    Of course, those are the ones whose details can be seen. There are also those whose details you can't easily see, such as Qualcomm's.

    This architectural trend really stems from analysis of the underlying 4G system, that is, of the basic processing flow of an OFDM system. OFDM processing is inherently amenable to a very high degree of parallelism, and that is what has steered SDR core architectures in this direction.

    Company: Cognovo
        System architecture: HW: MCE (ARM, sequencer, dual VSP, turbo, system RAM, HARQ RAM, RFIF); SW: SDM OS, PHY kernel library
        Tools: kernel SDK, system SDK
        Comment: 128 MACs per cycle per VSP

    Company: Tensilica
        System architecture: HW: multiple ConnX DPUs (BBE16, SSP16, BSP3, Turbo16, PIF); customized ISA and user-defined interfaces, easy HW integration; SW: kernel library
        Tools: TIE language, XPRES compiler, processor and software developer toolkit
        Comment: 16 x 3 (BBE16) = 48 MACs per cycle

    Company: CEVA
        System architecture: HW: XC321 single core; SW: LTE software kit
        Tools: optimized C compiler, IDE
        Comment: 32 MACs per cycle

    Company: Sandbridge
        System architecture: HW: SB3500 (3 SBX nodes, ARM9, HAB bridge, peripherals, digRF)
        Tools: (not listed)
        Comment: 48 MACs per cycle

    In the end, having years of engineering background in communications signal processing, plus a shallow acquaintance with chip architecture design picked up at a chip design company, I can feel both the twists and the fun in designing these SDR processors, designing their instruction sets, and writing their algorithms.

    First, look at who is doing this work.

    The chip's core will ultimately be built by the people who do core design, and they are no doubt extremely clever.

    The analysis of the communication system, meanwhile, must be done by the people who do communications signal processing, and they are no slouches either.

    The two fields are not utterly unrelated, but they are still worlds apart; how deep the water runs, only insiders know. Understanding the other side's domain, picking at each other, grinding against each other, thinking, then going back to rethink one's own domain, and cycling through this again and again: that is how each generation of SDR processor we see has evolved.

    And what a price it takes to get this truly right. Take a big company like TI, with experts and masters in every field, yet TI's DSP market share keeps shrinking day by day. Or take the SB3500, whose makers style themselves the founding fathers of the multithreaded processor architecture: its FFT implementation, astonishingly, takes no account of scaling between stages (see the fixed-point sketch at the end of this post). My guess is that their communications experts did not communicate very smoothly with the people building the core, the instruction set, and the algorithm library. Perhaps there were no communications signal-processing experts at all, just a group who believed they had fully grasped the communication system's requirements, amusing themselves.

    A DSP, with its specialized core structure, really does follow from system analysis and is strongly shaped by its target system. Different people understand the system differently, so the detailed designs of the various DSPs diverge widely. Expecting a compiler to automatically generate highly efficient code is, in my view, frankly laughable, and achieving good code portability across different DSPs is likewise quite hard. Things like intrinsics already go a long way toward expressing the compiler masters' wish to minimize users' porting effort, but far too many details still force DSP software engineers to wrestle forever with the particulars of each DSP. Then again, isn't that itself a form of communication between DSP designers and DSP software engineers?

    Speaking of intrinsics, let me ramble a few more sentences.

    If I say intrinsics are a con, the compiler guys will surely jump up and curse me up and down. Heh.

    Intrinsics just look a bit prettier: a DSP program written with them still appears to follow the rules of C. But for a real DSP, say one with a powerful vector engine, expecting the compiler to automatically turn standard C into parallel instructions is just a legend. So DSP engineers must learn to use intrinsics, which are really assembly wrapped in a shell that looks and behaves like C. To truly write good DSP processing algorithms, it is impossible without knowing what kinds of registers the core has and how many; impossible without mastering the DSP's instruction set; impossible without mastering every intrinsic. Expecting the compiler to do everything for you is pure fantasy. (A small sketch of this follows the post.)

    Intrinsics do have their advantages. Because they wrap the assembly instructions in a layer, work such as maintaining the code across instruction-set upgrades becomes a bit more convenient. But with an instruction-set upgrade, the real work stems from why the upgrade happened: right, the DSP core must have changed, say from 8 MACs to 16 MACs, or even 64 or 128 MACs per cycle. In such a case your existing algorithm code will naturally want more parallelism, so you still have to use the new intrinsics. Ha, and so the problem circles right back. This is where all DSP engineers struggle, and where they find their fun: no longer writing assembly code, but writing intrinsics, with an assembly mindset and assembly techniques.
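    To make the point about the SB3500's missing inter-stage scaling concrete, here is a minimal sketch of per-stage scaling in a fixed-point radix-2 transform. This is plain C with hypothetical Q15 (int16_t) data, not any vendor's kernel (the ISAs discussed above are not public), and the twiddle-factor multiplies are omitted so the focus stays on bit growth: each butterfly can double the magnitude of its inputs, so without a shift between stages a 16-bit datapath overflows.

        #include <stdint.h>

        #define N 256  /* transform length, a power of two; value is hypothetical */

        /* One add/subtract butterfly with a >>1 scale on both outputs, so the
         * 17-bit intermediate fits back into int16_t. The cumulative >>1 per
         * stage amounts to an overall 1/N scale that the caller must track. */
        static inline void butterfly_scaled(int16_t *a, int16_t *b)
        {
            int32_t sum  = (int32_t)*a + (int32_t)*b;   /* up to 17 bits wide */
            int32_t diff = (int32_t)*a - (int32_t)*b;
            *a = (int16_t)(sum  >> 1);                  /* the inter-stage scale */
            *b = (int16_t)(diff >> 1);
        }

        /* Walk the log2(N) stages of a decimation-in-frequency structure.
         * Twiddle multiplies are omitted; only the scaling skeleton is shown. */
        void fft_stage_scaling_demo(int16_t re[N], int16_t im[N])
        {
            for (int span = N / 2; span >= 1; span >>= 1) {        /* one stage */
                for (int base = 0; base < N; base += 2 * span) {
                    for (int j = base; j < base + span; j++) {
                        butterfly_scaled(&re[j], &re[j + span]);
                        butterfly_scaled(&im[j], &im[j + span]);
                    }
                }
            }
        }

    Skipping that >>1 (or a smarter block-floating-point equivalent) is exactly the overflow trap described above.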
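    And here is a minimal sketch of the "assembly wrapped in a C-looking shell" point. The vendor DSP intrinsics discussed above are not public, so x86 SSE intrinsics stand in for them; the function names and 4-wide float data are illustrative assumptions, and the array length is assumed to be a multiple of 4. The scalar loop is portable C whose parallelization is left to the compiler's mercy; in the intrinsic version, each line maps to one SSE instruction, so the 4-way multiply-accumulate is guaranteed rather than hoped for.

        #include <xmmintrin.h>  /* x86 SSE intrinsics, standing in for a DSP's set */

        /* Plain C: portable, but whether it vectorizes is up to the compiler. */
        float dot_scalar(const float *x, const float *y, int n)
        {
            float acc = 0.0f;
            for (int i = 0; i < n; i++)
                acc += x[i] * y[i];
            return acc;
        }

        /* Intrinsics: the same math, but the 4-lane parallelism is explicit.
         * C syntax on the surface, one machine instruction per intrinsic. */
        float dot_sse(const float *x, const float *y, int n)
        {
            __m128 vacc = _mm_setzero_ps();
            for (int i = 0; i < n; i += 4) {
                __m128 vx = _mm_loadu_ps(&x[i]);             /* load 4 floats */
                __m128 vy = _mm_loadu_ps(&y[i]);
                vacc = _mm_add_ps(vacc, _mm_mul_ps(vx, vy)); /* 4 MACs at once */
            }
            float lanes[4];
            _mm_storeu_ps(lanes, vacc);                      /* horizontal sum */
            return lanes[0] + lanes[1] + lanes[2] + lanes[3];
        }

    If the core later widens from 4 lanes to 8 (the 8-MAC to 16-MAC case above), dot_sse must be rewritten with new, wider intrinsics while dot_scalar stands still, which is exactly the circle the post describes.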