Unreliable autonomous driving? Take a look at how difficult vehicle-mounted vision processing is


We all have high expectations for autonomous driving, a fact exemplified by the display of autonomous driving technologies at every major tech expo. Development in this sector has, however, been accompanied by an increase in accidents related to autonomous driving, including fatal accidents involving Tesla and Uber vehicles.

To enable autonomous driving, we first have to build a reliable Advanced Driver Assistance System (ADAS), which is in itself a daunting task. On one hand, ADAS vision processing must respond to increasingly complicated applications and environments while ensuring reliable performance in low light or severe weather conditions; on the other hand, to improve the identification accuracy of the vision system and provide it with capacity for autonomous learning, machine learning, neural networks and other AI algorithms must be incorporated.

These requirements necessarily increase the complexity and load of visual processing tasks and consume more computing resources and time. However, this is in direct conflict with the attributes of the embedded environment of vehicle-mounted applications, an environment that has limited resources and is extremely hardware-demanding. This is the predicament developers of vehicle-mounted applications face on a daily basis.

Figure 1 Typical process for vehicle-mounted vision processing

To achieve a breakthrough, let us first take a look at the typical process for vehicle-mounted vision processing. The process includes four steps:

Step 1

Pre-processing: Pre-processing includes frame handling, color adjustment, white balance, contrast balancing, and image rectification. Processing these graphics primitives involves massive amounts of data; moreover, because each primitive is largely independent of the others, with few data dependencies, this stage requires high bandwidth as well as parallel data processing capacity.
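As a concrete illustration, white balance and contrast stretching reduce to simple per-pixel (or per-channel) arithmetic, which is exactly why this stage parallelizes so well. The sketch below is a minimal NumPy version; the function names and the gray-world assumption are illustrative choices, not the algorithms any particular ISP implements:

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each color channel so its mean matches the global mean
    (the classic 'gray world' assumption)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    global_mean = channel_means.mean()
    balanced = img * (global_mean / channel_means)
    return np.clip(balanced, 0, 255).astype(np.uint8)

def stretch_contrast(img):
    """Linearly stretch pixel intensities to the full 0..255 range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return img.astype(np.uint8)  # flat image: nothing to stretch
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```

Note that every pixel is computed independently of its neighbors; that independence is what lets hardware spread the work across many processing lanes.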

Step 2

Feature extraction: Feature extraction involves the extraction of features from images on the basis of pre-processing, particularly key edges and corners.
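A classic way to extract edge features is to convolve the image with Sobel kernels and take the gradient magnitude. The following is a minimal, deliberately naive NumPy sketch; a production system would use a hardware-accelerated or vectorized convolution rather than explicit Python loops:

```python
import numpy as np

def sobel_edges(gray):
    """Approximate horizontal and vertical gradients with 3x3 Sobel
    kernels and return the gradient magnitude (an edge-strength map)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3].astype(np.float64)
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)
```

Corners can then be found where gradients are strong in more than one direction, which is the intuition behind detectors such as Harris.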

Step 3

Target identification: The objects in the images (people, vehicles, traffic signals, etc.) are identified based on the extracted distinguishing features; this step typically relies on machine learning and neural network algorithms.
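To make the idea concrete, a neural-network classifier at this stage reduces to a forward pass that maps a feature vector to a class label. The toy sketch below shows the mechanics with a single hidden layer; the weights, labels, and layer sizes are purely illustrative and would come from training in any real system:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: a common hidden-layer nonlinearity
    return np.maximum(x, 0.0)

def softmax(z):
    # Convert raw scores to a probability distribution
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(features, w1, b1, w2, b2, labels):
    """One hidden-layer forward pass: feature vector -> class label."""
    hidden = relu(features @ w1 + b1)
    probs = softmax(hidden @ w2 + b2)
    return labels[int(np.argmax(probs))]
```

The compute pattern here (large matrix-vector products) is again highly parallel, which is why this step benefits from dedicated vision accelerators.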

Step 4

Target tracking: Results from each frame are recorded, and multiple frames are accumulated to confirm targets and achieve stable identification and judgment.
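A minimal version of this accumulate-over-frames idea is a nearest-centroid tracker: each new detection is matched to the closest existing track, and a track is only treated as stable once it has been seen for several consecutive frames. The class below is an illustrative sketch, not the scheme any particular ADAS uses:

```python
import math

class CentroidTracker:
    """Toy multi-frame tracker: match each detection to the nearest
    existing track (within max_dist), start a new track otherwise, and
    call a track stable once it has accumulated min_hits sightings."""

    def __init__(self, max_dist=50.0, min_hits=3):
        self.max_dist = max_dist
        self.min_hits = min_hits
        self.tracks = {}      # track id -> (x, y, hits)
        self.next_id = 0

    def update(self, detections):
        for (x, y) in detections:
            best_id, best_d = None, self.max_dist
            for tid, (tx, ty, _) in self.tracks.items():
                d = math.hypot(x - tx, y - ty)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:
                self.tracks[self.next_id] = (x, y, 1)
                self.next_id += 1
            else:
                hits = self.tracks[best_id][2] + 1
                self.tracks[best_id] = (x, y, hits)

    def stable_tracks(self):
        return [tid for tid, (_, _, h) in self.tracks.items()
                if h >= self.min_hits]
```

Unlike the earlier steps, this logic is inherently sequential: frame k+1 cannot be processed until the track state from frame k exists.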


The first three steps are usually considered low- and mid-level processing and lend themselves well to parallel execution, while Step 4 involves logical relationships that require sequential execution and continuous processing. We can therefore conclude that each task in vehicle-mounted vision processing has different requirements and that it is difficult for a hardware platform with a single architecture to satisfy them all. That is why we need to build a more complex and comprehensive heterogeneous system architecture that enables different hardware resources to handle different computation and processing tasks.

The S32V vehicle-mounted vision processor produced by NXP Semiconductors, for example, is equipped with specific computing units corresponding to the different steps within vehicle-mounted vision processing.

Figure 2 Diagram of the NXP Semiconductors S32V vehicle-mounted vision processor (source: NXP)

To pre-process graphics primitives for feature extraction, S32V utilizes a programmable image signal processor (ISP) to accelerate stream processing. The programmable design also gives low-level processing the flexibility to respond to the pre-processing requirements of different applications.

AI algorithms needed to perform tasks from feature extraction to target identification require vision acceleration. For this purpose, S32V incorporates two dedicated APEX-2 coprocessors to facilitate high-speed parallel single instruction multiple data (SIMD) accelerated computing.
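The benefit of SIMD-style execution can be illustrated even in plain Python: a vectorized NumPy expression applies one operation to an entire pixel array at once, much as a SIMD unit applies one instruction to many data elements per cycle, whereas a scalar loop touches one pixel at a time. (This is an analogy only; programming APEX-2 uses NXP's own vision toolchain, not NumPy.)

```python
import numpy as np

def scalar_threshold(img, t):
    """One pixel at a time: the scalar, one-instruction-one-datum style."""
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = 255 if img[i, j] > t else 0
    return out

def simd_style_threshold(img, t):
    """One expression over the whole array: NumPy dispatches the
    comparison to all pixels at once, SIMD-fashion."""
    return np.where(img > t, 255, 0).astype(img.dtype)
```

Both functions produce identical results; the vectorized form simply exposes the per-pixel independence so the hardware can exploit it.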

High-level processing tasks from target identification to target tracking involve serial computing. S32V completes these tasks using a multi-core (up to quad-core) Arm Cortex-A53 processor with a clock speed of 1 GHz. Its processing system also integrates a Cortex-M4 core with a frequency of up to 133 MHz to implement control functions and real-time tasks.


Additional functions such as 3D GPU, hardware security encryption, storage, and peripheral interface are integrated into S32V to form a comprehensive automotive-grade embedded security vision processing platform.

Appropriate hardware is, however, only the first step toward a comprehensive solution or product; software collaboration is also required. The key lies in achieving optimal integration of hardware and software in embedded vision applications, which have limited resources and are highly sensitive to power consumption. This means that software tasks must be allocated to the most suitable hardware unit for processing in order to fully utilize and unleash the hardware's capacity.


To achieve this goal, we need to first analyze and classify typical computing models for applications, identify tasks that require or potentially require acceleration, and assign suitable hardware to perform the acceleration. In practice, there are several methods we can try to accelerate parallel computing:

  • Data parallelism: Assign data that require parallel processing to units with parallel processing capacity, such as dedicated coprocessors like APEX-2. For these workloads, dedicated processors are generally faster than general-purpose processors.
  • Pipeline parallelism: Organize and assign various computing units so all units can operate at full capacity at the same time, ensuring that no units are idle.
  • Task parallelism: Assign different vision processing tasks to be performed concurrently.
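The pipeline-parallelism idea above can be sketched with ordinary threads and queues: each stage runs in its own thread, so while a later stage works on frame k, an earlier stage is already processing frame k+1, keeping every unit busy. This is an illustrative host-side sketch; on S32V the stages would map to the ISP, APEX-2, and Cortex-A53 cores rather than to threads.

```python
import queue
import threading

def run_pipeline(frames, stages):
    """Chain each stage in its own thread via FIFO queues, so all
    stages can work on different frames concurrently."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(stage, qin, qout):
        while True:
            item = qin.get()
            if item is None:      # sentinel: shut down and pass it on
                qout.put(None)
                return
            qout.put(stage(item))

    threads = [threading.Thread(target=worker, args=(s, qs[i], qs[i + 1]))
               for i, s in enumerate(stages)]
    for t in threads:
        t.start()
    for f in frames:              # feed frames into the first stage
        qs[0].put(f)
    qs[0].put(None)
    results = []
    while True:                   # drain the last stage
        item = qs[-1].get()
        if item is None:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results
```

Because each stage is a single thread reading from a FIFO queue, frame order is preserved end to end, which matters for the sequential tracking logic downstream.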

Only after this kind of comprehensive optimization and close integration of software and hardware can vehicle-mounted vision processing speed and overall performance be greatly enhanced.

Figure 3 Process of vehicle-mounted vision processing and corresponding S32V hardware resources (source: NXP)

Vehicle-mounted vision is a challenging sector for embedded vision processing and requires closer synergy and comprehensive integration of different resources. Hardware developers must fully understand the requirements of the target application in order to formulate high-performance hardware acceleration structures. Software developers must also make full use of hardware and allocate resources reasonably to maximize hardware performance. This may be difficult, but it’s a necessary step for us to achieve fully autonomous driving.


how-difficult-vehicle-mounted-vision-processing-is

Display portlet menu
Related Events

No related Events found