2015-3-20 17:28
I truly believe we are on the brink of a major evolution in embedded systems technology involving embedded vision and embedded speech capabilities. One example I often use to describe what I'm talking about is that of a humble electric toaster in the kitchen. Actually, that reminds me of a rather amusing story that really happened to me last year...

Some time ago, I gave my dear 84-year-old mother an iPad. Initially she was a tad trepidatious, but she quickly took to it like a duck to water. On one of my subsequent visits to the UK to visit with her, I gave her an Amazon gift card and told her she could use it to order something with her iPad. "What shall I buy?" she asked. "What do you want?" I replied.

Well, it turned out that she was interested in having a matching electric kettle and electric toaster, so I showed her how to search for them on Amazon. She had a great time rooting around all the various offerings and eventually selecting and ordering the toaster-kettle combo of her dreams.

After she'd placed her order, I asked: "What would grandma {my grandma; her mother} have thought about all of this -- seeing you sitting there ordering an electric kettle and electric toaster over the Internet?" Now, you have to remember that my mother didn't even get electricity into her house until 1943, when she was 15 years old -- prior to that, they had a coal fire for heating and gas mantles to light their little terraced home.

My mother's response really made me think when she said: "Your grandmother wouldn't have understood anything about the iPad or the wireless network or the Internet -- what would have really gotten her excited would have been the thought of an electric toaster and an electric kettle!" It makes you think, doesn't it?

But we digress... Returning to my example of a humble electric toaster in the kitchen, let's suppose the little scamp were equipped with embedded vision and embedded speech capabilities, and let's then envision the following scenario.
It starts when I walk up to the toaster and insert two slices of wheat bread. If this is the first time I've used it, the toaster might enquire: "Good morning, it's my pleasure to serve you; can you please tell me your name?" To which I might reply: "You can call me Max the Magnificent," or something of that ilk. The toaster then asks: "How would you like this to be prepared?" And I might reply: "Reasonably well done, if you please." When the toast subsequently emerges, the toaster might say: "How's that for you?" And I might reply: "Pretty good, but perhaps just a tad darker next time."

Sometime later, my son, Joseph, meanders into the kitchen, drops two slices of bread into the toaster, and an equivalent dialog takes place; similarly for my wife (Gina the Gorgeous). It may be that the following day we each wish to toast some bagels, or perhaps some English muffins, and -- over time -- the toaster will become acquainted with our varying preferences for each item.

The whole point of this is that, in the future, the toaster can use its embedded vision to determine (a) who is doing the toasting and (b) the type of food being toasted. Based on this information, it can give each user the toasting experience of their dreams. Furthermore, should the occasion arise that my wife is making me breakfast in bed, for example (hey, it doesn't hurt to dream), then -- as opposed to giving me something inedible, as is her wont -- she could leave the tricky part up to the toaster and say something like: "Can you make this just the way Max likes it?"

I'm sorry... I got carried away dreaming of breakfast in bed in general, and one I actually wanted to eat in particular. O, what a frabjous day that would be! But, once again, we digress...
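Just for fun, here's a minimal sketch of the kind of preference memory our hypothetical toaster might keep: settings keyed by the (user, food) pair its vision system identifies, nudged up or down by spoken feedback like "a tad darker next time." Every name and the 1-to-10 darkness scale here are my own inventions, purely for illustration.

```python
# Hypothetical sketch of a toaster's per-user, per-food preference memory.
# The embedded vision system would supply (user, food); spoken feedback
# adjusts the stored setting. The 1-10 darkness scale is illustrative only.

DEFAULT_DARKNESS = 5  # mid-range starting point for a new (user, food) pair

class ToasterMemory:
    def __init__(self):
        self.prefs = {}  # (user, food) -> darkness on a 1-10 scale

    def setting_for(self, user, food):
        # Use the remembered preference, or the default for first-timers
        return self.prefs.get((user, food), DEFAULT_DARKNESS)

    def feedback(self, user, food, adjustment):
        # adjustment: +1 for "a tad darker", -1 for "a tad lighter"
        current = self.setting_for(user, food)
        self.prefs[(user, food)] = max(1, min(10, current + adjustment))

memory = ToasterMemory()
memory.feedback("Max", "wheat bread", +1)        # "a tad darker next time"
print(memory.setting_for("Max", "wheat bread"))  # -> 6
print(memory.setting_for("Joseph", "bagel"))     # -> 5 (first time, default)
```

Nothing fancy -- but note that each (user, food) pair gets its own entry, which is exactly why the toaster's opinion of my bagels needn't be colored by Joseph's taste in toast.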
The reason I'm waffling on about this here is that the folks at CEVA have just announced the availability of their new CEVA-XM4 imaging and vision processor IP, which will enable real-time 3D depth map and point cloud generation; deep learning and neural network algorithms for object recognition and context awareness; and computational photography for image enhancement, including zoom, image stabilization, noise reduction, and improved low-light capabilities.

This fourth-generation imaging and vision IP is equipped with the functionality required to solve the most critical challenges associated with implementing energy-efficient, human-like vision and visual perception capabilities in embedded systems. The CEVA-XM4 boasts a programmable wide-vector architecture with fixed- and floating-point processing, multiple simultaneous scalar units, and a vision-oriented low-power instruction set, resulting in tremendous performance coupled with extreme energy efficiency. According to CEVA:

"The new IP's capabilities allow it to support real-time 3D depth map generation and point cloud processing for 3D scanning. In addition, it can analyze scene information using the most processing-intensive object detection and recognition algorithms, ranging from ORB, Haar, and LBP all the way to deep learning algorithms that use neural network technologies such as convolutional neural networks (CNNs). The architecture also features a number of unique mechanisms, such as parallel random memory access and a patented two-dimensional data processing scheme. These enable 4096-bit processing -- in a single cycle -- while keeping the memory bandwidth under 512 bits for optimum energy efficiency. In comparison to today's most advanced GPU cluster, a single CEVA-XM4 core will complete a typical 'object detection and tracking' use-case scenario while consuming approximately 10% of the power and requiring approximately 5% of the die area."
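For those who haven't played with CNNs, the workhorse operation is the 2D convolution -- sliding a small kernel of weights across an image and computing a weighted sum at each position -- and it's precisely this sort of wide, regular, data-parallel arithmetic that wide-vector engines like the CEVA-XM4 are designed to chew through. Here's a minimal pure-Python sketch of the operation itself (it makes no claim to reflect CEVA's actual implementation, which vectorizes the inner loops in hardware):

```python
# Minimal 2D ("valid") convolution -- the core operation of a CNN layer.
# A vision DSP would vectorize the inner loops; this pure-Python version
# exists only to show what the hardware is accelerating.

def conv2d(image, kernel):
    """Valid convolution of a 2D image with a 2D kernel (lists of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            acc = 0
            for ky in range(kh):          # weighted sum over the kernel window
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 vertical-edge-detecting kernel applied to a tiny image with an
# edge between dark (0) and bright (10) columns
image = [
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(conv2d(image, kernel))  # -> [[30, 30], [30, 30]]
```

In a real CNN, the kernel weights are learned rather than hand-picked, and hundreds of such kernels run over every frame -- which is why doing this in 4096-bit gulps per cycle, as CEVA describes, matters so much for power.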
Of course, having household appliances like toasters equipped with embedded vision and embedded speech capabilities might end up being a bit of a "two-edged sword," as it were. Did you ever see the British science fiction TV comedy series Red Dwarf? There was a classic "Does anyone want any toast?" scene that will live in my memory forever. How about you? What do you think about the prospects of embedded systems that can spot you walking by and bring you up to date with what the washing machine said to the tumble dryer? Are we poised to enter an exciting new world... or a recluse's nightmare?