Getting Started in Automotive Smart Vision Design
Advances in embedded vision technology have heightened interest in applying smart vision solutions for automotive safety. Despite past difficulties in overcoming the complexity of embedded vision design, designers today can take advantage of image-processing software, vision processors and specialized development systems to get started in exploring automotive smart vision applications.
Embedded vision technology offers tremendous potential for enhancing automotive safety by simplifying the driving process. Unlike the simple rear-vision cameras commonly available today, automotive embedded vision provides the foundation for advanced driver assistance systems (ADAS) intended to relieve the cognitive load on drivers and enhance overall safety. Providing 360° coverage, automotive vision design seeks to surround the vehicle with vision systems able to recognize and track potential hazards -- helping drivers through lane-departure warning, collision avoidance, pedestrian detection and more.
Until recently, the concept of automated, vision-based hazard identification and tracking remained largely limited to research efforts or highly funded mil/aero projects. The emergence of readily available software and hardware solutions for embedded vision has dramatically lowered the barrier to entry -- and fueled rapid growth in interest in leveraging embedded vision technology for automotive safety.
The maturation of the open-source OpenCV library has enabled designers to deploy highly sophisticated computer vision algorithms without requiring specialized knowledge in image-processing theory. Written in optimized C/C++, the library supports C++, C, Python and Java interfaces for Windows, Linux, Mac OS, iOS and Android platforms and features libraries that span a broad range of computer vision functionality.
For computer vision, object identification is an essential requirement, and OpenCV provides developers with a broad range of object classification routines. For example, the object-detection module implements high-level detection models such as Haar cascade classifiers. For custom object-detection approaches, the machine learning module supports more fundamental classifier methods including Bayes, support vector machine, decision tree and neural network classifiers, among others.
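To illustrate the kind of classifier method OpenCV's machine learning library exposes, the sketch below implements a minimal Gaussian naive Bayes classifier in plain Python. The feature vectors and class labels are invented for the example; a real design would train OpenCV's own Bayes classifier on features extracted from image data.

```python
import math
from collections import defaultdict

# Minimal Gaussian naive Bayes classifier -- the same family of method
# offered by OpenCV's machine learning module. The training data below is
# invented for illustration: 2-D feature vectors (imagine mean intensity
# and edge density of an image patch) labeled "pedestrian" or "background".

def train(samples):
    """samples: list of (label, feature_vector). Returns per-class stats."""
    by_class = defaultdict(list)
    for label, vec in samples:
        by_class[label].append(vec)
    model = {}
    total = len(samples)
    for label, vecs in by_class.items():
        n = len(vecs)
        dims = len(vecs[0])
        means = [sum(v[i] for v in vecs) / n for i in range(dims)]
        # Small epsilon keeps the variance nonzero for tiny training sets.
        varis = [sum((v[i] - means[i]) ** 2 for v in vecs) / n + 1e-6
                 for i in range(dims)]
        model[label] = (n / total, means, varis)
    return model

def classify(model, vec):
    """Pick the class with the highest posterior log-probability."""
    best, best_lp = None, float("-inf")
    for label, (prior, means, varis) in model.items():
        lp = math.log(prior)
        for x, m, v in zip(vec, means, varis):
            lp += -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

training = [
    ("pedestrian", (0.8, 0.7)), ("pedestrian", (0.9, 0.6)),
    ("background", (0.2, 0.1)), ("background", (0.1, 0.3)),
]
model = train(training)
print(classify(model, (0.85, 0.65)))  # close to the pedestrian examples
```

In production, the same train-then-classify flow applies, but the heavy lifting -- feature extraction and the classifier itself -- runs inside the optimized library rather than interpreted Python.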
A particularly compelling feature of OpenCV is its support for associated modules designed to exploit specific hardware features of advanced processors. For example, the library's OpenCL module accelerates OpenCV functionality on OpenCL-compatible hardware including multicore CPUs, graphics processors and FPGAs. The module builds on lower-level OpenCL runtime routines, provided by each hardware manufacturer, that take advantage of the device's specialized hardware features. The emerging OpenVX library extends this approach by not only serving to accelerate packages such as OpenCV, but also providing direct support for applications including ADAS.
Beyond these open-source offerings, designers can also find fully supported commercial software packages such as the MathWorks Computer Vision System Toolbox™. The MathWorks suite provides an extensive array of vision-processing algorithms as MATLAB® functions, System objects™, and Simulink® blocks with automatic C-code generation.
For exploring the design of computer vision systems, designers can turn to open-source hardware such as Arduino. Yet, while offering a platform for evaluating basic vision applications, general-purpose hardware typically lacks the performance required for implementing advanced computer vision applications and deploying capabilities such as real-time object identification.
Indeed, computer vision algorithms present significant computational challenges in both quantity of data and complexity of computation. In the past, commercial developers could only dream of real-time object detection for lack of cost-effective hardware solutions able to execute sophisticated algorithms on streaming data sets.
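A quick back-of-the-envelope calculation suggests the scale of the data problem. The resolution, frame rate and pixel format below are typical figures chosen for illustration, not values from a specific camera:

```python
# Approximate pixel throughput for a single automotive camera stream.
# Figures are typical values chosen for illustration: 720p at 30 frames/s,
# 2 bytes per pixel (e.g., a YUV 4:2:2 format).
width, height = 1280, 720
fps = 30
bytes_per_pixel = 2

pixels_per_second = width * height * fps
bytes_per_second = pixels_per_second * bytes_per_pixel
print(f"{pixels_per_second:,} pixels/s")
print(f"{bytes_per_second / 1e6:.1f} MB/s per camera")
# A 360-degree system multiplies this by the number of cameras, and each
# pixel may pass through several filter stages on every frame.
```

At roughly 55 MB/s per camera before any processing, it is easy to see why sustained real-time analysis across multiple streams exceeds what general-purpose hardware can deliver.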
Today, silicon manufacturers have targeted computer vision applications with specialized processors built with dedicated on-chip image processing capabilities. The availability of specialized image-processing hardware such as the Analog Devices BF609 Blackfin® processor and Xilinx Zynq® 7000 All Programmable SoC stands as a key enabler of the emergence of widely available automotive embedded vision solutions.
The BF609 Blackfin processor combines a pair of Blackfin DSP cores with Analog's specialized PVP (pipelined vision processor) to accelerate image processing. Capable of processing up to four data streams simultaneously, the PVP can be dynamically reconfigured to form different pipeline structures required for different vision processing algorithms. At the same time, the BF609 integrates memory, system and watchpoint protection features needed in functional safety applications such as automotive vision. Analog Devices provides its Eclipse-based CrossCore® Embedded Studio for software design and debug.
The Xilinx Zynq 7000 All Programmable SoC supports the computational demands of vision processing through a combination of ARM Cortex™-A9 cores, NEON DSP/FPU engines and an FPGA fabric. While the cores support application execution, the FPGA fabric enables designers to implement custom high-speed image processing pipelines. For software design and debug, the Xilinx Vivado™ suite offers a comprehensive development environment tuned to the Zynq device.
Vision development kit
The combination of specialized computer vision libraries and dedicated image processors offers the fundamental elements required to build sophisticated automotive embedded vision applications. Yet, combining these separate pieces into a complete automotive-vision development platform can be a daunting task in itself for developers more interested in creating vision solutions than dealing with development platform integration.
Getting started with automotive embedded vision has become dramatically simpler thanks to the emergence of complete automotive-vision development platforms such as the Avnet Blackfin Embedded Vision Starter Kit. The centerpiece of the kit is the FinBoard development board, featuring a full complement of peripherals required to build a complete computer vision system. Based on the Analog Devices BF609 processor, the FinBoard includes a 720p CMOS color sensor, HBLEDs for illumination, HDMI output, 128 MB RAM, 32 Mb Quad SPI flash and a microSD card slot.
Fig. 1: The Avnet Blackfin Embedded Vision Starter Kit builds on the FinBoard, a complete computer-vision system based on the Analog Devices BF609 image processor.(Source: FinBoard)
Along with the FinBoard development board and Analog's CrossCore development suite, the Avnet kit includes the Analog Devices ICE-100B JTAG emulator, which enables developers to communicate with the Blackfin processor from a host PC. Using the debugger, engineers can single-step code or run at full speed with predefined breakpoints, as well as alter register and memory contents as needed.
Beyond its comprehensive hardware and software components, the Avnet kit finds extensive support from the community-based FinBoard.org site. Along with online documentation, training resources and community forums, FinBoard.org offers registered users access to reference designs able to kick-start new projects. For example, a reference design for implementing video pass through provides a step-by-step introduction on using the CrossCore suite to implement a basic video application on the FinBoard. Similar reference designs introduce new users to the FinBoard with a basic power-on self test application or show more experienced FinBoard users how to implement a Canny Edge detector.
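The gradient-computation stage at the heart of a Canny edge detector can be sketched in a few lines. The plain-Python version below is only an illustration of the algorithm's first steps; the FinBoard reference design maps this kind of per-pixel pipeline onto the BF609's hardware, and the test image here is invented for the example:

```python
# Sobel gradient-magnitude stage of a Canny edge detector, written in
# plain Python for illustration only. The FinBoard reference design
# implements the full detector on the BF609's vision-processing hardware.

SOBEL_X = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 1)]
SOBEL_Y = [(-1, -2, -1), (0, 0, 0), (1, 2, 1)]

def gradient_magnitude(img):
    """img: 2-D list of grayscale pixels. Returns |G| for interior pixels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Convolve the 3x3 neighborhood with both Sobel kernels.
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Invented test image: a vertical step edge, dark left half, bright right.
test_img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
mags = gradient_magnitude(test_img)
print(max(mags[2]))  # -> 1020.0, peaking at the columns straddling the step
```

A full Canny implementation adds Gaussian smoothing before this stage and non-maximum suppression plus hysteresis thresholding after it; on the BF609, those stages are exactly the kind of work the PVP's reconfigurable pipeline is designed to absorb.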
In combination, powerful image-processing software and specialized vision processors offer compelling solutions for creating sophisticated automotive embedded vision applications. With the availability of comprehensive starter kits, getting started in this fast-growing segment has never been easier.