1. What will Synopsys release?
Synopsys is releasing the DesignWare Embedded Vision (EV) processor family. The EV52 and EV54 are fully programmable, configurable vision processor IP cores that combine the flexibility of software solutions with the low cost and low power consumption of dedicated hardware. The EV processors implement a convolutional neural network (CNN) engine capable of operating at over 1000 GOPS/W, enabling fast, accurate detection of a wide range of targets, such as faces, pedestrians, and gestures, at a fraction of the power consumption of other vision solutions.
2. What is embedded vision?
Embedded vision refers to integrating computer vision into a system-on-chip (SoC): the ability of an SoC to recognize targets and gestures in video frames and to respond appropriately when an object or gesture is detected.
3. What are the key supports for implementing embedded vision?
Computer vision has existed in laboratories for some 60 years, but only in the last few years have microprocessors gained sufficient performance to embed computer vision into SoCs. Another factor enabling embedded vision is reduced power consumption, thanks to advanced process nodes and highly optimized processor architectures such as Synopsys ARC. Higher processor performance coupled with lower power consumption (better energy efficiency) allows designers to integrate embedded vision into an ever-expanding range of designs, including portable applications.
4. What are the target applications of the DesignWareEV embedded vision processor?
The DesignWare EV52 and EV54 processors were designed to meet the growing market demand in video surveillance, security, gesture recognition, and object detection, tracking, and classification. End applications include cameras, wearable devices, home automation, digital televisions, virtual reality, gaming devices, robotics, digital displays, medical electronics, and automotive infotainment systems.
5. Why does visual processing require a special processor?
While vision algorithms can run on most processors, they involve large numbers of complex mathematical operations and data transfers. General-purpose processors (GPPs) can be used for vision processing, but they lack dedicated resources for these operations, so they run vision algorithms very slowly. Graphics processing units (GPUs) have abundant compute resources but lack the ability to move vision data efficiently, resulting in relatively low vision performance and very high power consumption. Vision processors are designed specifically for vision workloads: they provide the complex math capability and sophisticated data-movement features needed to process frame data efficiently, and, to be usable in embedded vision applications, they must also meet tight power budgets. The DesignWare EV processors' high compute throughput, excellent vision-data-transfer performance, and very low power consumption make them an excellent choice for implementing vision processing in SoCs.
6. How does the DesignWare embedded vision processor family differ from other EV solutions?
Existing programmable vision processors on the market are large and power-hungry, especially those based on general-purpose graphics processing units (GPGPUs). Hardwired solutions, meanwhile, offer excellent performance and power efficiency but are inflexible and unprogrammable, which limits them to narrow use cases. The DesignWare EV processor family delivers the best of both in a single product: it is fully programmable and also includes a high-performance object detection engine. This allows users to program the EV processors for their specific applications and benefit from hardware acceleration when needed. For object and gesture recognition applications, the EV processors offer superior performance at up to five times lower power consumption than other vision solutions.
7. What benefits can customers gain from the DesignWare embedded vision processor family?
• High-performance, high-accuracy target detection
• Low power consumption: five times more efficient than existing vision solutions
• Flexible programmability that meets the needs of a wide variety of existing and emerging embedded vision applications
• A high-productivity programming environment based on the industry standards OpenCV and OpenVX
The DesignWare EV embedded vision processors offer the programmability to meet the diverse needs of embedded vision applications and provide seamless hardware acceleration for leading convolutional neural network (CNN) algorithms. They can also be programmed on demand to support other embedded vision algorithms. As a result, the EV processors combine excellent performance and flexibility with some of the lowest area and power consumption among available solutions. The EV processors are supported by a complete tool suite, including MetaWare, the OpenCV library, and an OpenVX runtime and kernels, reducing programming effort.
8. What is CNN?
Convolutional neural networks (CNNs) mimic how our brains process vision: they break an image down into parts and progressively locate the target they were trained to recognize. CNNs have been around for over 20 years, but only recently have these algorithms improved substantially; they now outperform other available algorithms, and even human experts, at object recognition. A CNN is a deep learning algorithm that, much like a brain, is trained using many example images of a target; from these images it builds a model that the algorithm then uses to find the target in a picture or video.
Recent announcements from Nvidia, CEVA, Microsoft, and other companies highlight the migration of embedded vision to CNNs. Microsoft and Google have recently adopted CNNs in high-end applications, achieving accuracy exceeding 95% and surpassing even human experts. CNNs are currently the best vision algorithms for high-quality, high-accuracy results, outperforming alternatives such as Viola-Jones, HOG, SIFT, and SURF.
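The core operation a CNN repeats, layer after layer, is convolving an image with small learned filters and keeping only positive responses. As a minimal sketch of that idea (the filter here is hand-written, whereas a real CNN learns its filter weights from training images):

```python
def conv2d(image, kernel):
    # Naive "valid" 2-D convolution over nested lists: slide the kernel
    # across the image and sum the element-wise products at each position.
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    # Rectified linear unit: negative responses are clamped to zero.
    return [[max(v, 0.0) for v in row] for row in feature_map]

# Hand-written vertical-edge filter; a trained CNN would learn
# such weights automatically from labelled example images.
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]

# Tiny synthetic 5x5 "image": bright left half, dark right half.
image = [[1.0, 1.0, 0.0, 0.0, 0.0] for _ in range(5)]

# Strong responses appear where the bright/dark boundary
# falls under the filter; elsewhere the response is zero.
feature_map = relu(conv2d(image, edge_kernel))
```

Stacking many such filter layers, each feeding the next, is what lets a CNN progressively assemble edges into parts and parts into whole objects.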
9. What are the differences between the EV52 and EV54?
The EV52 features a dual-core ARC processor running at up to 1 GHz in a 28 nm process, while the EV54 is a higher-performance quad-core ARC implementation. Both include a configurable, programmable object detection engine that delivers fast, accurate target detection at up to five times lower power consumption than competing solutions. The object detection engine executes a CNN and consists of 2, 4, or 8 processing elements (PEs). The number of PEs, like the streaming interconnect between them, is configured by the user at build time. The interconnect provides flexible point-to-point connections between all PEs that change dynamically according to the CNN graph executing on the object detection engine.
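The build-time choice described above can be pictured as assigning the layers of a CNN graph to PEs and deriving the point-to-point links the streaming interconnect must provide. This is a hypothetical illustration only; the mapping policy (simple round robin) and the function names are mine, not Synopsys' actual scheme:

```python
def map_cnn_to_pes(layers, num_pes):
    # Illustrative placement of CNN graph layers onto processing
    # elements (PEs); the EV engine is configured with 2, 4, or 8 PEs.
    assert num_pes in (2, 4, 8), "engine supports 2, 4, or 8 PEs"
    placement = {layer: i % num_pes for i, layer in enumerate(layers)}
    # Each producer/consumer pair in the graph implies a point-to-point
    # link that the streaming interconnect must provide.
    links = [(placement[a], placement[b]) for a, b in zip(layers, layers[1:])]
    return placement, links

# A toy five-layer CNN graph mapped onto a 4-PE configuration.
placement, links = map_cnn_to_pes(
    ["conv1", "pool1", "conv2", "pool2", "fc"], 4)
```

Because the link set depends on the graph, a different CNN loaded onto the same PEs yields a different interconnect pattern, which is why the EV interconnect is dynamic rather than fixed.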
10. What types of targets can the DesignWare embedded vision processor monitor?
The DesignWare EV processors can detect any type of target, including landscapes and terrain. The object detection engine is trainable; kernels pre-optimized for face detection, speed-limit-sign detection, and face tracking are provided initially.
11. What is the programming environment for an embedded vision processor solution?
The DesignWare EV embedded vision processors are programmed in C/C++ using the MetaWare toolkit, with support for the widely used open-source OpenCV library and the OpenVX standard.
OpenCV (the open-source computer vision library) is a software library of some 2,500 functions that can be used with MetaWare, providing a software framework for embedded vision applications. OpenCV can be used for target detection and recognition as well as a full range of machine vision functions.
OpenVX is an open standard for accelerating embedded vision algorithms. The DesignWare EV embedded vision processors support the OpenVX framework and its 43 core kernels. Kernels for face detection, speed-sign detection, and face tracking are already available for the EV processors, and users can also create their own proprietary kernels.
OpenCV and OpenVX complement each other and can be used simultaneously in vision applications.
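The two standards divide the work differently: OpenCV offers a library of functions called one at a time, while OpenVX has the application declare a whole graph of kernels up front, verify it once, and then execute it per frame, which is what lets an implementation map nodes onto hardware accelerators. The real OpenVX API is a C API (`vxCreateGraph`, `vxProcessGraph`, and so on); the sketch below mimics only the graph-based execution model in plain Python, with toy kernels of my own invention:

```python
class Graph:
    """Hold an ordered pipeline of kernels, verify once, run per frame."""

    def __init__(self):
        self.nodes = []
        self.verified = False

    def add_node(self, kernel):
        self.nodes.append(kernel)
        return self  # allow chained graph construction

    def verify(self):
        # OpenVX validates the whole graph before execution; an
        # implementation uses this step to plan accelerator mapping.
        self.verified = len(self.nodes) > 0

    def process(self, frame):
        assert self.verified, "graph must be verified before execution"
        for kernel in self.nodes:
            frame = kernel(frame)
        return frame

# Two toy "kernels": greyscale conversion and thresholding.
to_grey = lambda pixels: [sum(c) / 3.0 for c in pixels]
threshold = lambda pixels: [1 if v > 0.5 else 0 for v in pixels]

graph = Graph().add_node(to_grey).add_node(threshold)
graph.verify()                      # build and check once...
result = graph.process([(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)])  # ...run per frame
```

Because the graph is declared before any frame is processed, the same application code can run the pipeline on a CPU or hand it to a vision accelerator without change; that is the property that makes OpenVX a natural fit for an engine like the EV's.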
12. How is the EV processor integrated into the SoC?
The DesignWare EV embedded vision processors are delivered with the ARChitect configuration tool, enabling rapid configuration and output of synthesizable RTL (register-transfer-level) code. The EV processors are designed to work alongside a host processor and support all popular hosts, including ARM, Intel, Imagination MIPS, and PowerPC. They offer numerous features that facilitate control and offload by the host, including host visibility into the EV memory space and the ability to synchronize operations via signaling. The EV processor interfaces with the rest of the SoC through a connection to the AXI bus.
Video frame memory can be connected directly to the DesignWare EV embedded vision processor, or the processor can access it over the AXI bus. The EV processor can be programmed to operate autonomously, independent of the host processor, or designers can divide control and functionality between the EV processor and the host as they choose. To accelerate software development, a virtual prototyping model of the EV processor is available, as well as an FPGA-based HAPS® prototyping solution, enabling hardware/software co-design before chip fabrication.
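The host-control model described above, shared frame memory plus signaling, follows a common mailbox pattern. The sketch below simulates it in software with a queue standing in for shared memory and an event standing in for a completion signal; the names and the toy "detection" rule are illustrative, not Synopsys APIs:

```python
import threading
import queue

frames = queue.Queue()        # stands in for shared frame memory
done = threading.Event()      # stands in for a completion signal/interrupt
results = []

def vision_engine():
    # Simulated offload engine: consume frames until the host sends
    # a shutdown sentinel, signalling completion after each frame.
    while True:
        frame = frames.get()
        if frame is None:              # sentinel: host shut the engine down
            break
        results.append("detected" if max(frame) > 0.9 else "clear")
        done.set()                     # signal the host

engine = threading.Thread(target=vision_engine)
engine.start()

frames.put([0.2, 0.95, 0.1])           # host offloads a frame
done.wait()                            # host blocks until the engine signals
frames.put(None)                       # host shuts the engine down
engine.join()
```

In the autonomous mode the source describes, the host side of this hand-off disappears entirely: the EV processor fetches frames and acts on detections without waiting on host signaling.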
13. Where can I get more information about the family of embedded vision processors, including pricing?
Following the product launch on March 30, information about the EV processor will be available on the Synopsys website at http://www.synopsys.com/dw/ipdir.php?ds=arc-ev52-ev54 .
Synopsys' policy is not to disclose pricing publicly. Because each IP customer's business model differs, we provide customized quotes for each customer.