Moving ADAS/AD from R&D labs to the production environment

Author: Kaivan Karimi

Date: November 20, 2018

Last month, I visited Japan and Korea and met with the two largest Tier-1s in Asia. Usually, for my ADAS presentations, I talk about the status of the market, our autonomous prototype vehicle, and our Autonomous Vehicle Innovation Center (AVIC) in Ottawa, Canada. Then I tell our potential customers that although our innovation center and demonstration vehicle are the public face of our collaboration with our partners, we are working with our customers to deliver ADAS production systems today. “How?”, they typically ask. That’s when I get into the bits and bytes of what we provide, with a key emphasis on scalability, safety, security, performance, and low power.

The two customer meetings were remarkably similar in many ways. Those discussions crystallized in my mind what the ADAS market needs most at this juncture, and what our value proposition really means to our customers. First, let’s discuss some of the specific challenges ADAS teams face today, and how these translate into software challenges.

ECU Consolidation For ADAS

We have discussed in the past that automakers are looking to reduce cost, weight, and power consumption by combining multiple ECUs into a few consolidated systems we call “mega-ECUs” or “domain controllers”. In the context of ADAS and automated driving, feature consolidation is taking place: many Level 1 and Level 2 systems, such as lane assist, traffic jam assist, and park assist, are being combined onto a single automated driving controller. As a result, the alphabet soup of ADAS is changing.

As this consolidation occurs and the number of ECUs drops, the complexity of the software on each mega-ECU increases. At the same time, OEMs continue to evolve their ADAS features, introducing additional complexity into the system on an ongoing basis.

The new challenge is to get all this software to run concurrently on a common SoC without the components interfering with each other or adding latency (i.e., functional isolation). This is key to functional safety. As the technology evolves further, the automated driving controller becomes the robot-brain, responsible for a wide range of tasks – everything from sensing and processing sensor data to making safe driving decisions and acting on them.

Breadth of technologies needed for ADAS

No single company has the means, expertise, or time to develop all the technologies needed to build a full ADAS system by itself. A large ecosystem of partners and suppliers must contribute at various levels. The software that implements ADAS functions will be built, bought, and integrated – hence, it is a large software integration challenge. This differs from previous generations of ECU systems, where the vast majority of the software was developed by a single company for a single ECU. Some elements, such as the RTOS and low-level middleware, were outsourced, but they were not a significant portion of the software.

The second challenge to consider is the embedded environment. Combining this software on an embedded processor requires a high level of optimization for the coprocessors and hardware IP that our semiconductor partners have integrated into their chips. For our ADAS software platform, we work with Intel, NVIDIA, NXP, and Renesas. In our experience, these elements are, for the most part, unique to each semiconductor platform. Each one uses different CPUs and GPUs in new architectures, with algorithms running on highly optimized hardware accelerators that handle parallel workloads, delivering very high performance at very low power consumption. Some of these accelerators are architected specifically for algorithms performing functions such as object detection, classification, vision/radar/LiDAR processing, and deep neural networks for AI processing.

Developing the robot-brain algorithms

In a car, the robot-brain mimics what the human brain does. To “see” its environment, it constructs a model of the vehicle’s surroundings by acquiring raw data from sensors such as cameras, radar, LiDAR, ultrasound, and optical sensors (although a human’s sensors are different). This massive amount of data must then be correlated and fused to extract objects of interest and create a complete, accurate view of the vehicle’s surroundings. Constructing that model can include recognizing static objects such as signs, curbs, and lanes and performing precise vehicle localization – this is where high-resolution maps play a role. It may also require recognizing and classifying moving objects, such as other vehicles and pedestrians, leveraging technologies such as vision processing and neural networks. Once the model of the surroundings is constructed, decision making takes place at a variety of levels – everything from high-level destination planning to maneuvering across lanes of traffic. The final step, as with a human responding to a stimulus, is to act on that decision through adjustments to steering, throttle, braking, and so on.

The algorithms behind these techniques are evolving rapidly. In fact, this is where Tier-1s and OEMs will have the opportunity to differentiate with their unique “secret sauce”. Implementing these techniques requires a balance of high-performance computing and fixed-function accelerators that leverage the power-performance efficiency of specialized hardware while remaining flexible enough to allow programmability. This is where our semiconductor partners are differentiating themselves.

The need for safety and security, now more than ever

Finally, these systems must be built with both safety and security in mind. The ADAS system performs mission-critical functions in a car, and with people’s lives depending on its decisions, the stakes couldn’t be higher. Safety and security are closely related – a system cannot be safe unless it is secure. Both require the application of industry best practices: safety standards such as ISO 26262, and cybersecurity practices such as those articulated in our whitepaper titled “BlackBerry’s 7-Pillar Recommendation for Cybersecurity”. In addition, for software quality and security, one should consider BlackBerry Jarvis, a powerful static application security testing (SAST) tool for binaries that helps software developers, Tier-1s, and OEMs alike analyze their binary files without access to source code, and quantify and prioritize areas of risk to mitigate. BlackBerry Jarvis also provides security posture management, so that the effectiveness of a software security strategy can be measured over time.

Moving to the production environment

Today, in most ADAS R&D labs, prototype systems start out using Linux and the Robot Operating System (ROS), leveraging consumer-grade video cards, GPUs with parallel processing, and data-center/server-class processors. These prototypes are not cost sensitive, and the software developers throw compute power at the problem as their primary tool to get the system working. Functional safety, automotive reliability, and power consumption are not primary concerns here. That’s why, if you open the trunk of most autonomous cars, you see racks of computer cards running these prototype ADAS/autonomous algorithms.

As developers move to the production environment, they must deal with the following boundary conditions:

  • Functional safety and security become vital
  • Cost sensitivity increases
  • Automotive-grade reliability becomes a “must-have”
  • Power budgets are constrained
  • Compute power is augmented by co-processors:
    • Image processing units
    • Accelerators for neural networks

Step one is to pick an SoC that matches their cost points, processing, and safety and security requirements. Efficient use of processing resources is the key to getting the most from embedded SoCs.

Step two, the big challenge, is to map the algorithms running on PCs and consumer video/GPU cards to the production hardware platform – that is, to map them to the SoC’s hard-wired algorithm blocks as well as to its specific CPUs, GPUs, and hardware accelerators.

To support this transition to a production platform, the operating system (OS) and software platform that run all of this software must offer the highest levels of integration friendliness. As I mentioned earlier, most prototype algorithms are developed using Linux and ROS, so full Linux compatibility – allowing reuse of the models originally developed on Linux – is extremely important. That’s why the BlackBerry QNX RTOS offers full Linux compatibility, and why we have ported a full implementation of ROS Core and several ROS packages, as well as common open source libraries used by ROS. The BlackBerry QNX RTOS offers this compatibility through the highest level of POSIX compliance, and enables reuse by supporting POSIX real-time profiles such as PSE51 and PSE52. Furthermore, we support a distributed configuration with support for popular networking protocols such as:

–     Connext DDS (from our partner RTI), a full implementation of the Data Distribution Service standard with tools for distributed system debugging, integration and testing

–     FastRTPS, an implementation of the wire protocol of DDS

–     DDS for AUTOSAR

–     SOME/IP for AUTOSAR

–     IEEE 1588 PTP and IEEE 802.1AS for clock synchronization

RTI Connext DDS is valuable for Level 4 and Level 5 autonomous vehicle development. Both the new ROS 2 design and the latest 18-03 AUTOSAR Adaptive standard specify DDS as a connectivity framework option. DDS implements a “data-centric” approach: it supports many languages and platforms, allows migration of software between ECUs, and provides system data to intelligent modules across the vehicle, all of which improves integration friendliness. RTI Connext DDS also supports both safety certification and security.

As with any system, choosing and reusing pre-certified components increases overall system quality, shortens development time, and reduces risk. We have worked with our silicon partners to map their hardware accelerators into our software platform. In addition, we work with multiple partners to support their camera, radar, LiDAR, IMU, and GPS sensors. We also support low-latency sensor data acquisition, sensor data record and playback, data visualization tools, and data forwarding to MathWorks MATLAB. In short, we ensure our platform’s integration friendliness. Based on our firsthand experience, that’s what the automotive industry needs today.

Integration friendliness is what really sold our platform to the customers I met last month. Both gave similar feedback: “With the BlackBerry QNX ADAS platform, we can leverage the software development work our team has done in the lab for the past two years, and we can now bring it into a production environment without throwing away any of that hard work.”

For more information on BlackBerry QNX’s ADAS products, please visit us at http://blackberry.qnx.com/en/products/adas/index.