2015-10-17 20:44
Get ready -- embedded vision is getting here faster than you might expect. All that's required is the economics to make it affordable and the computing power to make it feasible. It looks like all of the pieces are now in play, and we can expect to see an explosion of embedded vision-enabled systems appearing on the scene as early as 2016.

Can you remember when the first cell phone equipped with a digital camera appeared on the market circa 2000? At that time, many people expressed the opinion that they had no use for such a beast -- all they wanted to do with their cell phone was make phone calls. Today, the same naysayers can't imagine life without camera-equipped smartphones.

Originally, smartphones -- and, later, tablet computers -- had only one camera, on the back, to take pictures of things other than the user. It wasn't long before devices sported one camera on the back and another on the front, the front-facing camera being used to take "selfies" and for video chatting. Now, some smartphones and tablets have three cameras -- one on the back to take pictures of other things, and two on the front to augment traditional capabilities with stereoscopic processing for things like gesture recognition.

Other systems keep one camera on the front but have two on the back. Why? Well, suppose you are in a furniture store. You could take a picture of a piece of furniture using your smartphone, which -- using the stereoscopic capabilities provided by its dual cameras -- could automatically determine the size of the furniture. Later, when you return home, you could use a special app to see how that object would look (and fit) in your home.

Trust me. It won't be long before even a humble smartphone boasts at least four cameras -- two on the back and two on the front -- that it uses to perform tasks like face detection (and, in the future, recognition), people detection, gesture detection, object detection (and, in the future, recognition), motion detection, and... the list goes on.

The thing is that it's hard enough to do this sort of thing with one camera. The amount of computationally intensive processing required to perform even relatively rudimentary machine vision tasks staggers the imagination, so implementing things like stereoscopic gesture detection and object detection with four-plus cameras boggles the mind. The trick is to offload the main application processor and to process the video streams in real time using a specialized vision processor.
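To give a feel for what's going on under the hood with that dual-camera furniture trick, here's a minimal sketch of classic depth-from-disparity using OpenCV's block matcher. The image file names and the camera parameters (focal length, baseline) are placeholders of my own -- a real device would use calibrated values and live camera feeds -- but the per-pixel number crunching in the middle is exactly the sort of workload you'd want to hand off to a dedicated vision processor rather than the application CPU.

```cpp
// Sketch only: depth-from-stereo with OpenCV's block matcher.
// depth = focal_length_px * baseline / disparity for a rectified pair.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Rectified left/right frames from the two rear cameras
    // (assumed here to be pre-calibrated and row-aligned).
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) {
        std::cerr << "Could not load the stereo pair\n";
        return 1;
    }

    // Block-matching stereo: 64 disparity levels, 15x15 matching window.
    cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 15);
    cv::Mat disparity16;                  // fixed-point result, scaled by 16
    matcher->compute(left, right, disparity16);

    cv::Mat disparity;
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);

    // Convert disparity (pixels) to depth (metres) at one sample point.
    const float focal_px   = 700.0f;      // focal length in pixels (placeholder)
    const float baseline_m = 0.04f;       // 4 cm between the two lenses (placeholder)
    float d = disparity.at<float>(disparity.rows / 2, disparity.cols / 2);
    if (d > 0.0f) {
        float depth_m = focal_px * baseline_m / d;
        std::cout << "Depth at image centre: " << depth_m << " m\n";
    }
    return 0;
}
```

Run that over every pixel of every frame, for multiple camera pairs, at 30+ frames per second, and you can see why the main processor needs help.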
All of this explains why Cadence Design Systems has just announced the Tensilica Vision P5 digital signal processor (DSP). According to the press release:

This imaging and vision DSP core offers up to a 13X performance boost, with an average of 5X less energy usage on vision tasks, compared to the previous-generation IVP-EP imaging and video DSP. The Tensilica Vision P5 DSP is built from the ground up for applications requiring ultra-high memory and operation parallelism to support complex vision processing at high resolution and high frame rates. As such, it is ideal for off-loading vision and imaging functions from the main CPU to increase throughput and reduce power. End-user applications that can benefit from the DSP's capabilities include image and video enhancement, stereo and 3D imaging, depth map processing, robotic vision, face detection and authentication, augmented reality, object tracking, object avoidance, and advanced noise reduction.

The Tensilica Vision P5 DSP core includes a significantly expanded and optimized Instruction Set Architecture (ISA) targeting mobile, automotive advanced driver assistance systems (ADAS -- which includes pedestrian detection, traffic sign recognition, lane tracking, adaptive cruise control, and accident avoidance), and Internet of Things (IoT) vision systems. The advances in the Tensilica Vision P5 DSP further improve the ease of software development and porting, with comprehensive support for integer, fixed-point, and floating-point data types and an advanced toolchain with a proven, auto-vectorizing C compiler. The software environment also features complete support for the standard OpenCV and OpenVX libraries -- more than 800 library functions in all -- for fast, high-level migration of existing imaging/vision applications.

I think we are poised to experience some very interesting times. What do you think about all of this? In the meantime, click here for more information on the Tensilica Vision P5 DSP core.
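P.S. For those who like to see things in code, the "existing imaging/vision applications" the press release is talking about tend to look like the generic OpenCV face-detection loop below. This is a plain-vanilla sketch of my own (stock OpenCV Haar cascade, webcam index 0), not anything from Cadence's SDK -- the pitch is simply that per-frame work like this gets ported to, and accelerated by, a core such as the Vision P5 instead of being rewritten from scratch.

```cpp
// A generic OpenCV face-detection loop -- the kind of existing vision code
// a vendor toolchain would aim to migrate onto a dedicated vision DSP.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Stock cascade file shipped with OpenCV (path may vary per install).
    cv::CascadeClassifier face_cascade;
    if (!face_cascade.load("haarcascade_frontalface_default.xml")) {
        std::cerr << "Could not load the face cascade\n";
        return 1;
    }

    cv::VideoCapture cam(0);              // front-facing camera, device 0
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);     // normalise lighting

        // Per-frame sliding-window search -- exactly the sort of work
        // you'd rather not leave on the application processor.
        std::vector<cv::Rect> faces;
        face_cascade.detectMultiScale(gray, faces, 1.1, 3,
                                      0, cv::Size(60, 60));

        for (const cv::Rect& f : faces)
            cv::rectangle(frame, f, cv::Scalar(0, 255, 0), 2);

        cv::imshow("faces", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```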