How All Programmable Revolutionizes Embedded Vision


Autonomous driving is just the start of EV solutions

In 1982, Knight Rider brought us KITT, an artificially intelligent car that fought crime with high-tech features like embedded vision. More than 35 years later, automakers’ aspirations are racing past driver-only and assisted automation such as blind-spot monitoring or adaptive cruise control. They have their eyes on the prize: fully autonomous driving.

In fact, IHS Automotive predicts autonomous car sales will hit 21 million by 2035.

Embedded vision (EV) replicates the human ability to take in visual information and process it in order to make decisions, except EV does it with cameras, cables and processors, allowing machines like cars to absorb information and act on it as well.

That creates a host of design challenges:

  • High performance demands: Enabling an embedded vision system to perform analytics in real time is a complex task. The higher the resolution and frame rate of the image, the more computational power is required to process the data and extract meaningful information from it. The growing challenge for designers is that this must be done faster, and at lower power, than ever before. The advent of machine learning algorithms will only exacerbate these demands.
  • Complex programming environment: Building a design that is differentiated and responsive, while also able to immediately adapt to the latest algorithms and image sensors, adds layer upon layer of complexity. You’re left with big decisions, such as which tools and emerging techniques can help you build a design that supports that quality bar.
  • Shortened design cycles: Systems must be highly differentiated, extremely responsive and able to immediately adapt to the latest algorithms and image sensors. They must also hit the market faster than the competition. With shortened design cycles, designers are forced to choose between creating next-generation architectures and getting their IP to market on deadline.
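To make the first point concrete, a quick back-of-envelope calculation (sketched here in Python; the function names and the 100-ops-per-pixel figure are illustrative assumptions, not from the article) shows how fast resolution and frame rate inflate the compute budget:

```python
def pixel_rate(width, height, fps):
    """Pixels per second the vision pipeline must ingest."""
    return width * height * fps

def ops_per_second(width, height, fps, ops_per_pixel):
    """Rough compute load for a per-pixel processing kernel."""
    return pixel_rate(width, height, fps) * ops_per_pixel

# A single 1080p camera at 30 fps:
print(pixel_rate(1920, 1080, 30))           # 62,208,000 pixels/s
# Assuming a modest 100-operation-per-pixel filter chain:
print(ops_per_second(1920, 1080, 30, 100))  # 6,220,800,000 ops/s (~6.2 GOPS)
```

Multiply that by the several cameras, higher resolutions and deeper analytics an autonomous vehicle needs, and the appeal of hardware acceleration becomes obvious.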

Let’s take our autonomous driving example. This EV application, which promises to simplify a common task for people globally, is a deeply complex system with multiple interactions among all of its parts:

  • Sensing: processing raw frame-by-frame data via in-vehicle sensors
  • Perception: taking data to do object detection, classification and positioning
  • Mapping: identifying safe driving areas and objects within a mapped area
  • Localizing: pairing information with the vehicle’s high-accuracy GPS
  • Route/path planning: determining short and long-term driving paths—including incident reaction
  • Motion planning: navigating vehicle control strategies appropriate for selected routes
  • Vehicle control: issuing braking, throttling, steering and suspension commands while driving
  • Driver interaction: providing feedback to the driver, sensing driver intent and handing off control

This used to be quite a challenge: whether for a tiny sports car or a large truck, truly autonomous driving needs a network of cameras covering every corner of the vehicle.

But All Programmable SoCs have brought more clarity to this complex process.

Previous generation ADAS systems required an external processor to implement the algorithms for image processing and analytics. Such ASSP-based architectures required proprietary interface protocols and were more challenging to customize for feature differentiation.

With the advent of All Programmable MPSoCs, software bottlenecks can be accelerated in high-performance programmable logic while retaining the reconfigurability required for rapid upgrades. Designers may choose a software-defined development flow within a familiar, Eclipse-based environment using the C and C++ languages, and leverage hardware-optimized image processing libraries such as OpenCV for an optimal partition of embedded vision algorithms between software and hardware.

As the auto industry transitions from ADAS to autonomous driving, ever greater concentrations of sensor fusion will combine visible-light cameras, radars and LIDAR systems distributed across the vehicle, connected over high-speed serial links. Combining multiple sensor interfaces, analytics and vehicle control into one system helps designers create lower-power, higher-efficiency data paths that let a self-driving car prevent a break-in before the first window pane is shattered, or stop dead in its tracks to avoid an impending obstacle.
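One common way such fused sensors are combined is inverse-variance weighting, where more confident sensors count for more. A minimal sketch (the function name, readings and variances are made-up illustrations, not vendor data):

```python
def fuse_distance(estimates):
    """Inverse-variance weighted fusion of per-sensor distance estimates.

    estimates: list of (distance_m, variance) pairs, e.g. from a
    camera, a radar and a LIDAR observing the same obstacle.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(d * w for (d, _), w in zip(estimates, weights))
    return total / sum(weights)

# Camera is noisy at range; radar is tighter; LIDAR is tightest.
readings = [(42.0, 4.0), (40.5, 1.0), (40.2, 0.25)]
print(round(fuse_distance(readings), 2))  # 40.34 — dominated by the LIDAR
```

Running many such fusions per frame, across every sensor on the vehicle, is exactly the kind of parallel workload that maps well onto programmable logic.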

But it does more than just simplify design. It also solves problems for end customers.

Right now, most EV in cars reaches level 0 or level 1 on the autonomous driving spectrum, assisting the person in the driver’s seat with blind-spot monitoring or lane-keeping assistance. Fully driverless cars (level 5) are the hardest version of EV to pull off. But considering that 80% of accidents are a result of distracted driving, according to the NHTSA, self-driving cars are also the key to safer roads for us all. And these high-efficiency, low-power solutions make EV available at accessible price points.

We’re driving toward new innovations in embedded vision, and the future of driving is only the beginning.
