Synopsys, a USD 2.7 billion company, is actively expanding its diverse portfolio of automotive-specific IC design and verification tools, automotive-grade IP, automotive software cybersecurity and quality solutions, and automotive lighting tools, with the goal of accelerating customers' time to market and enabling the next generation of safe, secure and smarter autonomous cars.
Recently, the company introduced a validated built-in self-test (BIST) and repair IP solution to enable designers to achieve the most stringent levels of functional safety for automotive system-on-chips (SoCs). It also launched its LucidDrive® software product, which includes a new feature for simulating pixel light technology and allows designers to improve the performance of automotive headlamps for night driving. Synopsys works with many automotive OEMs, Tier 1 and Tier 2 suppliers and recently announced a partnership with the French Alternative Energies and Atomic Energy Commission (CEA), a key player in technology research. The new partnership is based on the Synopsys ZeBu® Server-3 emulation solution and aims to advance both organizations' initiatives in automotive SoC and system design.
Automotive Industries (AI) asked Gordon Cooper, Product Marketing Manager at Synopsys, what the value of electronic systems is expected to be by 2022, and what share will be made up of advanced driver assistance systems (ADAS).
Cooper: We presented a webinar with IHS Markit in which they estimated that the average value of electronic systems per car is expected to grow to over US$1600 by 2022—a nearly 35% increase in under 10 years. Advanced driver-assistance systems (ADAS) will represent a significant part of this growth. Certainly, there is a trend toward increasing both the number of cameras that support ADAS in automobiles (1-2 front, 1 rear, 4-6 for a 360-degree view, 1 in-cabin) and the resolution of those cameras, which is expected to grow from 1MP to 3-4MP and eventually to 8MP at 60fps.
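To give a rough feel for what that resolution growth means for the vision subsystem, the short sketch below works through the raw per-camera pixel throughput. The figures are illustrative assumptions drawn only from the resolutions and frame rate mentioned above, not additional data from the interview.

```python
# Rough, illustrative arithmetic: raw pixel throughput per camera as
# resolution grows, ignoring color depth and compression (assumed values).
resolutions_mp = {"today": 1, "near term": 4, "next generation": 8}
fps = 60  # frame rate assumed for all cases

for label, megapixels in resolutions_mp.items():
    mpix_per_second = megapixels * fps
    print(f"{label}: {megapixels} MP x {fps} fps = "
          f"{mpix_per_second} Mpixels/s per camera")
```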
AI: How is artificial intelligence (AI) evolving in the car?
Cooper: AI can be simply defined as human levels of intelligence exhibited by machines. Today, AI is enabling advanced driver assistance systems for lane correction, driver drowsiness alerts, brake assist, forward collision warning, etc. With 94% of accidents caused by human error, the hope is that AI capabilities will evolve to a point where its error rate is significantly better than a human's. Deep learning techniques, which are how this AI is implemented, are already proving to be more accurate than conventional computer/machine vision techniques. AI is making driving safer, and there will be no self-driving cars without AI.
AI: What hardware and software requirements and safety concerns are there in automotive systems with a high degree of AI?
Cooper: Safety is always a concern in the automotive market. Any AI that goes into a vehicle will have to meet stringent safety standards such as ISO 26262 and the appropriate automotive safety integrity levels (ASIL), such as ASIL B or ASIL D. The unique challenge for software engineers with deep learning is that you "train" the neural network that enables object detection; you do not "program" it. Software engineers are not programming algorithms to detect a pedestrian – they are instead training a neural network to learn how to detect one. Training a neural network requires large data sets that must be developed or acquired. Hardware designers are tasked with developing future-proof designs, as this is a dynamic area of research and hardware systems must be able to support the fast turnaround between a research paper and a hardware implementation that the industry expects. Lastly, these software and hardware changes increase the pressure on validation teams to ensure that the deep learning algorithms deliver the expected results.
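To make the train-versus-program distinction concrete, here is a minimal sketch in Python using PyTorch. The tiny network, the random stand-in data and the two-class pedestrian label are illustrative assumptions for this article, not part of Synopsys' flow or the interview; the point is simply that detection behavior is learned from labeled examples rather than coded by hand.

```python
# Minimal, illustrative sketch: the engineer defines a network and a training
# loop; the detection behavior itself is learned from labeled examples --
# nobody hand-codes the pedestrian-detection rules.
import torch
import torch.nn as nn

# Tiny stand-in CNN; real detectors are far larger, shapes here are arbitrary.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),          # two classes: pedestrian / no pedestrian
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a labeled training set of camera crops.
images = torch.randn(16, 3, 64, 64)
labels = torch.randint(0, 2, (16,))

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)   # compare predictions with labels
    loss.backward()                  # compute gradients
    optimizer.step()                 # adjust weights -- this is the "training"
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```

Because the behavior emerges from the data and the loss function, the size and quality of the training set matter as much to safety validation as the code itself.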
AI: Why and how should automotive OEMs and chip designers leverage AI, deep learning, and convolutional neural networks (CNNs)?
Cooper: The "why" is simple. Deep learning algorithms have significantly better accuracy for tasks such as object detection. If you are in a self-driving car, you want the best techniques in play to detect things you might collide with. To increase power efficiency in vision applications, choosing an optimized neural network engine or accelerator will provide a significant power reduction over the use of GPUs. Using a CNN engine that is optimized for a specific task will give you the best balance of performance, power, and area.
AI: What are some of the implementation options for OEMs and Tier 1s to balance cost, power, area, performance, and future-proofing?
Cooper: Implementing deep learning in embedded applications requires a lot of processing power with the lowest possible power consumption. Processing power is needed to execute convolutional neural networks – the current state-of-the-art for embedded vision applications – while low power consumption will extend battery life, improving user experience and competitive differentiation. To achieve the lowest power with the best CNN graph performance in an ASIC or SoC, designers are turning to dedicated CNN engines.
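To give a feel for why the processing requirement is so high, the back-of-the-envelope arithmetic below counts the multiply-accumulate (MAC) operations needed for a single convolutional layer at camera frame rates. The layer dimensions are assumed for illustration and are not Synopsys figures.

```python
# Back-of-the-envelope MAC count for one convolutional layer (assumed shapes).
out_h, out_w = 1080, 1920      # output feature-map size (e.g. a full-HD frame)
in_channels, out_channels = 16, 32
kernel = 3                     # 3x3 convolution
fps = 60

macs_per_frame = out_h * out_w * out_channels * in_channels * kernel * kernel
macs_per_second = macs_per_frame * fps
print(f"{macs_per_frame / 1e9:.1f} GMAC per frame, "
      f"{macs_per_second / 1e12:.2f} TMAC/s at {fps} fps -- for a single layer")
```

A full CNN graph stacks many such layers, which is why embedded designers look beyond general-purpose processors for this workload.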
GPUs helped usher in the era of deep learning computing. The performance improvements gained by shrinking die geometries combined with the computational power of GPUs provide the horsepower needed to execute deep learning algorithms. However, the larger die sizes and higher power consumed by GPUs, which were originally built for graphics and repurposed for deep learning, limit their applicability in power-sensitive embedded applications.
Vector DSPs – very long instruction word (VLIW) SIMD processors – were designed as general-purpose engines to execute conventionally programmed computer vision algorithms. A vector DSP's ability to perform many multiply-accumulate (MAC) operations simultaneously helps it execute the two-dimensional convolutions needed for a CNN graph more efficiently than a GPU. Adding more MAC units to a vector DSP allows it to perform more of a CNN graph's computation per cycle, improving the frame rate. More power and area efficiency can be gained by adding dedicated CNN accelerators to a vector DSP.
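For readers unfamiliar with why convolution maps so directly onto MAC hardware, here is a plain NumPy sketch of a two-dimensional convolution written as explicit multiply-accumulate loops. The array sizes are arbitrary assumptions; the inner statement is the operation a vector DSP or CNN engine parallelizes in hardware.

```python
# Illustrative 2D convolution written as explicit multiply-accumulate loops.
import numpy as np

image = np.random.rand(8, 8)     # single-channel input feature map
kernel = np.random.rand(3, 3)    # 3x3 convolution weights
kh, kw = kernel.shape
out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))

for y in range(out.shape[0]):
    for x in range(out.shape[1]):
        acc = 0.0
        for i in range(kh):
            for j in range(kw):
                # one multiply-accumulate per kernel tap
                acc += image[y + i, x + j] * kernel[i, j]
        out[y, x] = acc
```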
The best efficiency, however, can be achieved by pairing a dedicated yet flexible CNN engine with a vector DSP. A dedicated CNN engine can support all common CNN operations (convolutions, pooling, elementwise) rather than just accelerating convolutions, and will offer the smallest area and power consumption because it is purpose-built for these operations. The vector DSP is still needed for pre- and post-processing of the video images.
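As a concrete picture of the operation mix such an engine has to cover, the small PyTorch fragment below chains a convolution, an elementwise addition (a residual-style skip connection) and a pooling step. The layer sizes are illustrative assumptions only and do not describe any particular Synopsys engine.

```python
# Illustrative CNN fragment covering the three operation classes mentioned:
# convolution, elementwise add, and pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBlock(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = F.relu(self.conv(x))               # convolution (the MAC-heavy part)
        y = y + x                              # elementwise add (skip connection)
        return F.max_pool2d(y, kernel_size=2)  # pooling reduces spatial size

x = torch.randn(1, 16, 32, 32)                 # stand-in feature map
print(TinyBlock()(x).shape)                    # -> torch.Size([1, 16, 16, 16])
```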
AI: What role can Synopsys play in helping OEMs and Tier 1s to meet these challenges?
Cooper: To produce the promised autonomous cars, OEMs need to develop, or at least play a larger role in specifying, the chips that will implement AI in their vehicles. With its broad portfolio of automotive SoC design and verification solutions, its automotive-grade IP, and more than 30 years as an industry leader, Synopsys is in a unique position to help them do that.
For example, to support implementing machine vision and deep learning in automotive applications, we offer the DesignWare® EV6x Embedded Vision Processors. These processors are fully programmable and configurable IP cores that have been optimized for embedded vision applications, combining the flexibility of software solutions with the low cost and low power consumption of hardware. For fast, accurate object detection and recognition, the EV Processors integrate an optional high-performance convolutional neural network (CNN) engine. By integrating this proven embedded vision processor IP in their SoCs, OEMs, Tier 1s and Tier 2s can accelerate development of their automotive systems.