Addressing the sensor challenges of next-generation autonomous vehicles

Ongoing improvements in sensor technology are helping engineers to enhance safety by connecting autonomous vehicles more closely to their environment.

These sensors generate ever larger quantities of data, which must be processed in real time. The Cadence® Tensilica® family of DSPs supplies processor IP for next-generation designs, providing the performance and flexibility needed for continued algorithm development.

Automotive Industries (AI) asked Neil Robinson, Product Marketing Director at Cadence Design Systems, what sensor challenges are facing the industry.

Robinson: There is a strong need for higher levels of safety: people will only trust autonomous cars once they are at least as good as human drivers. Today there are limited conditions (e.g., certain routes, good weather, daylight hours) where that may be the case, but we as an industry must get to the point where there are no limits before most people will feel confident being driven anywhere at any time.

This will not happen all at once; steps must be taken to improve what is already here and to relax those condition limits. Right now, we are seeing the need for vehicles to better understand the surrounding environment through improved object detection, tracking and classification, so that the vehicle can take safe actions. The camera sensors are already available, but we need more reliable depth information from lidar sensors when the vehicle travels at high speed, and we need radar sensors for object speed information and for bad visibility. Radar sensors are already used in autonomous vehicles today, but their resolution is low compared to cameras and lidar. The trend here is to increase the resolution by adding more antennas, so that the radar can resolve smaller angles, and to add vertical coverage for a 4D point cloud: elevation on top of the azimuth, range and speed data that are calculated today.
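As a rough illustration of that processing load, the sketch below walks through the standard FFT chain that turns raw radar samples into a 4D point cloud. It is written in Python/NumPy for readability rather than as DSP code, it is not drawn from any Cadence implementation, and the frame dimensions and detection threshold are hypothetical.

```python
# Illustrative only: range/Doppler/angle FFTs over one radar frame, then a crude
# detection step. A production design would use CFAR detection and far more
# antennas; all sizes here are hypothetical.
import numpy as np

n_samples, n_chirps, n_rx_az, n_rx_el = 256, 128, 8, 4          # hypothetical frame
adc = np.random.randn(n_samples, n_chirps, n_rx_az, n_rx_el).astype(np.complex64)

range_fft   = np.fft.fft(adc, axis=0)                # fast time           -> range bins
doppler_fft = np.fft.fft(range_fft, axis=1)          # slow time (chirps)  -> speed bins
azimuth_fft = np.fft.fft(doppler_fft, n=32, axis=2)  # horizontal antennas -> azimuth bins
cube_4d     = np.fft.fft(azimuth_fft, n=16, axis=3)  # vertical antennas   -> elevation bins

power = np.abs(cube_4d) ** 2
# A fixed threshold stands in for CFAR; each hit is a (range, speed, azimuth, elevation) bin.
detections = np.argwhere(power > power.mean() + 6 * power.std())
print(f"{len(detections)} raw detections in the 4D point cloud")
```

Adding receive antennas grows the last two FFT dimensions, which is where the order-of-magnitude jump in throughput discussed below comes from.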

Lidar sensors have typically been large and built around expensive spinning mirrors, but the trend now is toward solid-state sensors that are small and cheap enough to be deployed on every car (at least facing forward) so that closely spaced objects can be distinguished when travelling at high speed. The data from these sensors is used to find objects, classify them into known types and track them so that their movement can be monitored and predicted.
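That find/classify/track pipeline can be sketched very compactly. The example below clusters a lidar point cloud into object candidates; DBSCAN is used only as a convenient stand-in for whatever clustering a production stack runs, and the scene, thresholds and scikit-learn dependency are assumptions for illustration rather than anything Cadence-specific.

```python
# Illustrative only: cluster lidar returns into object candidates (the "find
# objects" step); classification and tracking would then run per cluster.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Hypothetical scene: three compact objects plus sparse clutter near the ground plane.
objects = [rng.normal(loc=c, scale=0.4, size=(200, 3))
           for c in ((10.0, 2.0, 0.5), (25.0, -4.0, 0.5), (8.0, -1.0, 0.5))]
clutter = rng.uniform(-50, 50, size=(1000, 3)) * [1.0, 1.0, 0.02]
points  = np.vstack(objects + [clutter])                  # x, y, z in metres

labels = DBSCAN(eps=0.7, min_samples=8).fit_predict(points)
for obj_id in sorted(set(labels) - {-1}):                 # -1 marks unclustered returns
    cluster  = points[labels == obj_id]
    centroid = cluster.mean(axis=0)
    extent   = cluster.max(axis=0) - cluster.min(axis=0)  # rough bounding box
    # A classifier would label each box (car, pedestrian, ...) and a tracker
    # would associate it with boxes from previous frames.
    print(obj_id, centroid.round(1), extent.round(1))
```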

The objects identified by all the sensors are ultimately used by a central unit to determine what actions need to be taken. There are multiple approaches to this "sensor fusion" and to making these action decisions:
- Distributed: the sensors themselves perform the detection, classification and tracking analysis, and pass the resulting object list to the central unit for decision-making.
- Centralized: the raw data from the sensors is passed to the central unit, which performs the detection, classification and tracking of objects itself.
- Hybrid: the sensors perform all or part of the object detection, classification and tracking, but also pass the raw and/or intermediate data to the central unit for duplicate or further analysis; this provides multiple "opinions" on what is out there for more robust decision-making.
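One way to see the difference between the three topologies is in what crosses the link to the central unit. The sketch below uses hypothetical types and a deliberately naive association rule (matching by object ID and keeping the more confident "opinion"); it illustrates the hybrid idea and is not an industry API.

```python
# Illustrative only: in the distributed and hybrid cases the sensors ship
# object lists like this; in the centralized case the central unit builds
# its own list from raw data. Types and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    obj_id: int
    kind: str            # e.g. "car", "pedestrian"
    position: tuple      # (x, y) in vehicle coordinates, metres
    speed: float         # m/s
    confidence: float    # 0..1

def fuse_hybrid(sensor_object_lists, central_object_list):
    """Merge the sensors' own tracks with the central unit's re-analysis,
    keeping whichever opinion is more confident for each object ID.
    Real systems associate objects by position, not by shared IDs."""
    merged = {o.obj_id: o for o in central_object_list}
    for object_list in sensor_object_lists:
        for obj in object_list:
            current = merged.get(obj.obj_id)
            if current is None or obj.confidence > current.confidence:
                merged[obj.obj_id] = obj
    return list(merged.values())

# Distributed: fuse_hybrid(sensor_lists, [])   -- central unit only decides.
# Centralized: fuse_hybrid([], central_list)   -- central unit detects itself.
```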

AI: What do these radar and lidar sensor trends mean for technology providers such as Cadence?

Robinson: For radar sensors, the additional antennas for higher resolution and 4D data cubes can require more than 10X the processing throughput typically seen today. The transmit antenna array needs beamforming to send out the pulse. Each receiving antenna requires processing to extract its raw data, which is then combined with the other antennas' data to determine the direction of any reflections. Finally, the resulting data cube is processed to cluster points into objects and then classify and track them. This leads to the need for more energy-efficient, high-performance processors.

For lidar sensors, the processing algorithms are known today but are being run on large processors that consume too much power and take up too much space. In the migration to solid-state sensors, new laser-steering mechanisms require control, and similar processing needs to be performed on the data itself, but within a smaller space and power budget. This also requires high-performance processors that are very energy efficient.

For both radar and lidar processing, we are seeing a strong need for higher-accuracy calculations in parts of the processing chain: more than 16 bits of fixed-point data close to the sensors in the front end, and more floating point for linear algebra operations in the back end, both single- and half-precision. Further away from the sensor itself, to supply more reliable object detection, classification and tracking to the action decision-making process, we are seeing the deployment of more machine learning at these "edge" devices (whether in the sensor itself or in the central unit).
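The accuracy point can be made concrete with two small numerical examples. The sketch below is illustrative arithmetic only, not DSP code, and the workload sizes are hypothetical: the first half shows a 16-bit fixed-point accumulator overflowing where a wider one stays exact, the second half shows the precision gap between half- and single-precision floating point in a back-end linear algebra step.

```python
# Illustrative numerics only; sizes are hypothetical.
import numpy as np

# Front end: long MAC/accumulation chains overflow a 16-bit accumulator,
# which is why more than 16 bits of fixed point are needed near the sensor.
samples = np.full(4096, 1000, dtype=np.int16)
acc16 = samples.sum(dtype=np.int16)      # wraps around: wrong answer
acc32 = samples.sum(dtype=np.int32)      # exact: 4,096,000
print(acc16, acc32)

# Back end: half precision is cheaper per cycle but less accurate than single
# precision in linear algebra, so both are wanted depending on the stage.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = np.ones(64, dtype=np.float32)
x_fp32 = np.linalg.solve(a, b)
x_fp16 = np.linalg.solve(a.astype(np.float16).astype(np.float32), b)
print(np.max(np.abs(x_fp32 - x_fp16)))   # error the algorithm must tolerate at HP
```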

This, too, requires high-performance AI processing within the small power budgets available in vehicles.
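Edge machine learning deployments typically rely on reduced-precision arithmetic to stay within those power budgets. The sketch below is a toy, hypothetical example of that idea (8-bit weights and activations with a 32-bit accumulator); it is not a description of the Tensilica DNA100 or any Cadence flow.

```python
# Illustrative only: quantized inference for a tiny classifier layer.
import numpy as np

def quantize(x, scale):
    """Map float values to int8 with a fixed, hypothetical scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
features = rng.standard_normal(16).astype(np.float32)       # e.g. per-object descriptors
weights  = rng.standard_normal((4, 16)).astype(np.float32)  # 4-class toy classifier

s_x, s_w = 0.05, 0.02                                        # hypothetical scales
q_x, q_w = quantize(features, s_x), quantize(weights, s_w)

# Integer MACs accumulate at 32 bits, then one rescale per output recovers floats.
logits_int8 = (q_w.astype(np.int32) @ q_x.astype(np.int32)) * (s_x * s_w)
logits_fp32 = weights @ features
print(np.argmax(logits_int8), np.argmax(logits_fp32))        # usually agree
```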

AI: How is Cadence addressing the needs arising from these trends?

Robinson: Cadence Tensilica DSPs are frequently used in today's radar sensor designs. The most popular, the Tensilica ConnX BBE32EP DSP, offers customers configuration options that keep size and power down by including only the acceleration logic they plan to use in their algorithms. Some customers add differentiation quickly by creating custom instructions that make their algorithms execute more efficiently, rather than adding accelerators outside the processor, which usually take much longer to design and verify. For accessing any accelerators in the system, wide point-to-point interfaces can be added to connect directly, with predefined latencies, rather than going through system buses with their limited width and unpredictable latency, saving cycles and power.

We at Cadence have been following these industry trends closely in the design of our newest DSPs. For machine learning, we announced the Tensilica DNA100 processor in September 2018. At the end of February 2019, we announced the ConnX B20 and B10 DSPs as the new high-end additions to our Tensilica ConnX DSP family for sensor processing. Compared to the Tensilica ConnX BBE32EP DSP at just under 1GHz, the ConnX B10 DSP has the same 256b vector and load/store width but runs faster, at over 1.4GHz on a 16nm TSMC process, and has many of the ConnX BBE32EP DSP's configuration options built in. The ConnX B10 DSP optionally provides native 32bx32b MACs with 2X more 16bx16b MACs (32B option), along with vector floating point: single precision (SP), 2X single precision (SPX) and half precision (HPX). The ConnX B20 DSP has twice the vector and load/store width of the ConnX B10 DSP, at 512b, but otherwise has the same clock speed and options, for designs that need the highest throughput.

These new DSPs and their options directly support the sensor trends described above:
- Higher performance: from faster clock speeds and reduced cycle counts (32B and SPX options).
- More energy efficiency: from doing the same functions in fewer cycles (32B, SPX and HPX options).
- More floating point: 1.2X to 3X cycle improvement over SP and new HP support (SPX and HPX options).
- More calculation accuracy: more than 16b fixed point and HP now available (32B and HPX options).
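The throughput relationship between the two new DSPs follows directly from the vector widths quoted above. The sketch below is back-of-the-envelope arithmetic only, using the 256b and 512b widths from the interview and a hypothetical workload size; it is not a cycle-accurate model of either core.

```python
# Illustrative only: lanes per vector operation and inner-loop counts for a
# fixed workload, at the vector widths quoted for the ConnX B10 and B20.
def vector_ops_needed(n_elements, vector_bits, element_bits):
    lanes = vector_bits // element_bits     # elements handled per vector instruction
    return -(-n_elements // lanes)          # ceiling division

N = 4096                                    # hypothetical number of samples
for name, width in (("ConnX B10 (256b)", 256), ("ConnX B20 (512b)", 512)):
    for element_bits in (16, 32):
        ops = vector_ops_needed(N, width, element_bits)
        print(f"{name}, {element_bits}b elements: {ops} vector ops")
```

At the same clock speed, halving the vector-op count is where the B20's doubled throughput for width-limited kernels comes from.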