Xilinx SoCs and the reVISION Stack Accelerate Embedded Vision Integration

SoCs with programmable logic are an essential element of real-time embedded vision systems. Designers can capitalize on the power and efficiency of Xilinx's Zynq UltraScale+ MPSoC devices by implementing their designs with Avnet's Embedded Vision Kits and the Xilinx reVISION stack. This ecosystem offers a straightforward path to integrating deep-learning-based vision features and speeds development by decoupling software work from the first prototype hardware cycle.

Introducing computer vision to an embedded design can be a complex process. Invariably the hardware must be small, lightweight, low power, and low cost. Fast product development cycles make known-good solutions for the underlying functionality essential: every minute spent on the lowest levels of the firmware is a minute not spent on the functionality that differentiates the product. Xilinx addresses this by aggregating a fully integrated solution that engineers can modify and build upon. Using the SDSoC Development Environment, a Zynq UltraScale+ MPSoC development kit, and one of the many complete design examples available, software engineers can start on complex machine-learning-based image processing designs without writing a single line of HDL.

Rapid development of embedded vision products requires use of an existing hardware platform with sufficient interfaces and onboard functionality to meet product requirements. The platform must also provide an easy-to-use, robust firmware and application development environment. For this, Xilinx collaborates with Avnet.

Avnet's years of vision-oriented development cycles have culminated in a complete system approach, including the Avnet Embedded Vision Kit with multiple SoC-based SoM options, video-specific carrier cards, and features like Power over Ethernet (PoE). Designers can use Xilinx's development ecosystem to exploit programmable SoC device families like the Zynq UltraScale+ MPSoC, focusing on fine-tuning and customizing intellectual property rather than porting code.

Today’s embedded vision products require single-device solutions powerful enough to meet real-time task deadlines and mission-critical safety specifications while staying within challenging power efficiency budgets. Video and image processing typically require sophisticated features like object detection and recognition, algorithmic decision-making, and motion path selection. The outputs of these processes must be deterministically bound to control decisions, status analysis, and human-machine interface notification; without such determinism, safety and reliability suffer directly. Devices like the Zynq UltraScale+ MPSoC feature four Arm Cortex-A53 CPUs that support symmetric multiprocessing under image-processing and application-rich operating systems like Linux.

The Zynq UltraScale+ MPSoC further integrates functionality critical to embedded vision products, with two Arm Cortex-R5 real-time processors operating independently of the quad-core application processors and their operating system environment. This enables lockstep monitoring and safety features that continue operating even after a serious software system failure. A separate fault-tolerant platform management unit handles safety and power management functions, while a configuration and security unit provides straightforward configuration and protection against security threats. Finally, a Mali-400 graphics processor provides built-in 2D and 3D rendering, enabling high-quality video display output.

Devices like the Zynq UltraScale+ MPSoC are not simply FPGAs anymore. Xilinx has recognized that a software-centric approach better meets the expectations of the embedded vision marketplace. FPGA design strategy has shifted away from proprietary hardware solutions that demanded considerable investment in HDL implementations to achieve real-time performance. Systems are now software solutions whose hardware acceleration is increasingly provided by tried-and-tested off-the-shelf IP, integrated into software applications through future-proof frameworks like OpenCL. The reVISION stack is Xilinx's way of putting this all together in a complete system environment.

Embedded vision systems need machine learning and computer vision acceleration. Xilinx All Programmable technology uses the software-defined reVISION stack to realize machine learning, sensor fusion, and computer vision. The reVISION stack provides a software-defined environment inside an industry-standard framework, enabling implementation of the most popular neural networks in use today, including AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN, with optimized reference models available for each. The stack also includes all the functional blocks required to build completely custom neural networks. Neural networks are typically layers of convolutional (filter) and non-linear (activation) processing that may interpolate (upsample) or decimate (downsample) information from the previous layer. The reVISION stack accommodates most layer types with hardware-optimized implementations of Conv, ReLU, Pooling, Dilated Conv, Deconv, FC, Detector & Classifier, and SoftMax.
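As a concrete illustration of the layer types listed above, here is a minimal NumPy sketch of a single Conv → ReLU → Pooling stage. This is purely conceptual, not the reVISION hardware implementation; the toy image, kernel, and helper names are invented for the example.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most NN frameworks)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linear activation: clamp negative values to zero."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Down-sample (decimate) by taking the max of each size x size block."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

# A toy 6x6 "image" pushed through one conv -> ReLU -> pool stage
image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # simple edge filter
features = max_pool(relu(conv2d(image, edge_kernel)))
print(features.shape)  # conv gives 5x5, pooling crops to 4x4 then halves to 2x2
```

A real network stacks dozens of such stages; the reVISION stack's value is that each stage maps to a hardware-optimized block rather than a software loop like the one above.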

Designers can choose from an array of image processing IP that integrates seamlessly with neural network capability under frameworks like Caffe and OpenVX. The result is high responsiveness and configurability, with access to a wide development community continually adding to and updating the OpenCV libraries. With Xilinx OpenCV (xfOpenCV), the most critical acceleration functions for applications like drone control, autonomous driving, and machine learning are immediately available.

Software developers can incorporate hardware accelerators like filters, image processing, and motion tracking with a few lines of well-documented code. Input data can be streamed in and out of these instantiations as simple objects referenced like function parameters. Direct streaming is a powerful way to optimize memory use: the compiler can link acceleration modules directly over internal bus structures with minimal memory overhead, avoiding external memory access. This reduces power consumption and leads to significant improvements in processing latency.
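The memory benefit of streaming can be illustrated with a plain-Python analogy using generators: each stage consumes pixels as the previous stage produces them, so no full intermediate frame buffer exists at any stage boundary. This is only a software sketch of the idea, with invented stage names; on the MPSoC the compiler achieves the equivalent effect with hardware links between accelerators.

```python
def pixel_source(frame):
    """Stage 1: emit pixels one at a time, like a camera interface."""
    for px in frame:
        yield px

def threshold(pixels, level):
    """Stage 2: binarise the stream without buffering the whole frame."""
    for px in pixels:
        yield 255 if px >= level else 0

def count_white(pixels):
    """Stage 3: reduce the stream to a single statistic."""
    return sum(1 for px in pixels if px == 255)

frame = [12, 200, 90, 255, 30, 180]
# Stages chain like function parameters; data flows through without an
# intermediate copy of the whole frame existing at any stage boundary.
white = count_white(threshold(pixel_source(frame), level=128))
print(white)  # -> 3
```

Replacing the generators with list-returning functions would work too, but each stage would then materialise a full copy of the frame, which is exactly the external-memory traffic that direct streaming avoids.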

Need a platform to get started with? Try Avnet’s www.zedboard.org online resource, which contains related examples, information, and training using a number of ready-made Zynq SoC System-on-Module (SoM) based kits. Designers can also use the reVISION stack with development platforms like the Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit, using FMC- and USB-interfaced cameras, HDMI sources, and virtual video devices to both train and deploy applications. The neural network-based system is easily customized through software running on the Arm processor system, with no time-consuming recompilation required. Many design examples incorporating both machine learning and vision are available to learn from, including motion detection, face tracking, thermal imaging, and robotics applications.

Multi-camera vision applications are becoming increasingly common. This is especially true in Advanced Driver Assistance Systems (ADAS), where a platform must deliver the processing power required for fast frame rates, high-performance signal processing, sophisticated sensor fusion, and dedicated neural network hardware acceleration. However, the problem goes beyond raw performance to criteria that are frequently overlooked: it does not matter how powerful a device is if it does not meet the approval standards required for automotive applications. Xilinx’s automotive-qualified XA Zynq UltraScale+ MPSoC family meets AEC-Q100 test specifications, enabling use in harsh automotive environments that demand higher temperature grades, high-visibility change management, and high-reliability manufacturing. Beyond the physical and environmental specifications, these devices incorporate a 'safety island' that allows real-time processing to be used in mission-critical safety applications like ADAS, supporting device certification to the ISO 26262 ASIL-C standard.

As the previous example showed, it is important to factor in all the requirements of an embedded vision system before choosing a platform. The programmable logic on Zynq UltraScale+ MPSoC devices enables solutions where a CPU-only approach would be impractical and even dangerous. One example is industrial robotic motor control, which typically requires high-speed PID loops whose error calculations depend on real-world feedback, and therefore on high-speed sampling of analog signals. The programmable logic fabric works well in this role, reducing the need for interrupt-driven software drivers that can undermine system stability and degrade performance. Even when the control algorithm itself is simple, the real-time determinism required to maintain low jitter at these sample rates forces rapid task switching, wasting significant processing power on the switching alone.
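To make that workload concrete, here is a hedged sketch of the kind of discrete PID loop described above, driving an invented first-order plant model. The point is that every iteration must complete within one fixed sample period; holding that period with low jitter is exactly what motivates moving the loop into programmable logic.

```python
def pid_step(error, state, kp=1.0, ki=0.1, kd=0.0, dt=0.001):
    """One iteration of a discrete PID controller.

    state is (integral, previous_error); dt is the fixed sample period
    that a programmable-logic implementation would hold with low jitter.
    Gains here are illustrative, not tuned for any real plant.
    """
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a crude plant toward a setpoint of 1.0 over 5000 samples at 1 ms
setpoint, position, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(5000):
    output, state = pid_step(setpoint - position, state)
    position += 0.001 * output  # toy plant: position integrates the drive
print(round(position, 2))
```

In software, each of those 1 ms iterations costs an interrupt and a context switch; in fabric, the same arithmetic runs every sample period with no scheduler involved.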

Safety-critical embedded vision products, like those in industrial robotics control, require failsafe operation. The Zynq UltraScale+ MPSoC's integrated system monitor includes a multi-channel analog-to-digital converter (ADC) along with on-chip sensors that track on-die operating conditions such as temperature and supply voltages. Fault conditions can therefore be detected independently of the software domain, with status available through external communication ports such as an I2C interface and alarm outputs. An additional high-speed monitor capable of sampling at up to 1 MSPS enables extremely rapid response to fault conditions. Upon fault detection, the robotic control system can park itself in a safe operating state, protecting both equipment and user.
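The latched fault behaviour described above can be sketched in a few lines. The channel names and limits below are invented for illustration and do not correspond to actual system monitor registers or thresholds.

```python
# Hypothetical limits per monitored channel: (low, high)
LIMITS = {"temp_c": (0.0, 100.0), "vcc": (0.85, 0.95)}

def check_sample(channel, value, limits=LIMITS):
    """Return True if a sampled value is within its allowed window."""
    lo, hi = limits[channel]
    return lo <= value <= hi

class SafetyMonitor:
    """Latching fault flag: one bad sample trips it and it stays tripped,
    mirroring alarm outputs that persist until explicitly cleared."""
    def __init__(self):
        self.fault = False

    def sample(self, channel, value):
        if not check_sample(channel, value):
            self.fault = True
        return self.fault

mon = SafetyMonitor()
mon.sample("temp_c", 45.0)        # in range, no fault
tripped = mon.sample("vcc", 1.2)  # over-voltage trips and latches the fault
print(tripped)  # -> True
```

On the real device this comparison happens in dedicated hardware, so the fault asserts even if the application software has hung, which is what allows the system to park safely.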

There now exists a fully integrated embedded vision development system built around a software-centric approach. Xilinx Zynq UltraScale+ MPSoCs are made easy to use by the comprehensive reVISION stack and flexible, vision-oriented hardware development kits. An MPSoC has a clear advantage over an embedded CPU thanks to configurable programmable-logic hardware acceleration, and Xilinx has added functionality for reconfiguration, reliability, monitoring, and safety, eliminating the need to bolt on additional supervisory hardware. Existing examples let designers with limited FPGA logic design knowledge get started quickly. The use of the OpenVX, Caffe, OpenCL, and OpenCV standards, along with an operating system like Linux, opens system development to a large pool of third-party IP, accelerating development and future-proofing applications.

Implementing advanced vision features is possible with the Zynq UltraScale+ MPSoC and reVISION. Solutions from Xilinx and Avnet can cut through the pain of complex system design and bring clarity to projects, whether it’s an autonomous car, a medical imaging device, or the next-generation coffee-stirring, dishwashing robotic super drone. To discover more about how you can realize embedded vision solutions, and to read about other innovative successes in robotics and autonomous driving, please visit www.avnet.com/xilinxev.
