Imagine a processor that not only powers a car’s infotainment center but also enables safety features such as traffic sign recognition, blind-spot detection, lane-departure warnings, driver-alertness monitoring and self-parking.
NVIDIA, known for creating powerful parallel processors that run the world’s fastest supercomputers, has created just such a processor—the NVIDIA Tegra X1 superchip.
The company has long been working with automakers to computerize their cars. It estimates there are over 7.5 million cars with its processors on the road today. The Tegra X1—with 256 supercomputer-class GPU cores, advanced graphics potential, and improved power efficiency—is the basis of the NVIDIA DRIVE car computers, unveiled at the 2015 International Consumer Electronics Show in January.
NVIDIA DRIVE comes in two varieties. NVIDIA DRIVE CX is a cockpit computer capable of driving a range of screens in the car, from head-up displays and instrument clusters to infotainment and passenger-entertainment displays. DRIVE CX can also integrate ADAS features such as Surround Vision.
NVIDIA DRIVE PX is an auto-pilot computer designed to be the foundation of tomorrow’s self-driving cars. DRIVE PX can handle up to 12 camera inputs and process 1.3 gigapixels of information per second. These capabilities are critical for powering computer vision-based deep learning systems that can detect and learn to identify many classes of objects all around the car.
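For a rough sense of what a 1.3-gigapixel-per-second budget implies when split across 12 camera inputs, the back-of-the-envelope sketch below (in Python) works out the frame rate that budget would sustain per camera; the 1.3-megapixel sensor resolution used here is an illustrative assumption, not a published DRIVE PX specification.

# Back-of-the-envelope look at DRIVE PX's stated 1.3 gigapixels/second
# spread across its 12 camera inputs. The sensor resolution below is an
# illustrative assumption, not an NVIDIA specification.
TOTAL_PIXELS_PER_SECOND = 1.3e9      # stated aggregate processing rate
NUM_CAMERAS = 12                     # stated maximum camera inputs
PIXELS_PER_FRAME = 1280 * 1024       # hypothetical 1.3-megapixel camera

per_camera_budget = TOTAL_PIXELS_PER_SECOND / NUM_CAMERAS
sustainable_fps = per_camera_budget / PIXELS_PER_FRAME

print(f"Per-camera budget: {per_camera_budget:.2e} pixels/s")
print(f"Sustainable frame rate per camera: {sustainable_fps:.0f} fps")

Under these assumptions the budget works out to roughly 80 frames per second per camera, which illustrates why the aggregate figure matters for real-time vision.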
The NVIDIA Tegra X1 and its predecessors are believed to be the only mobile chips that support NVIDIA CUDA, the company’s GPU computing platform and programming model. This parallel processing technology, combined with the modular form factor of the NVIDIA Tegra Visual Computing Module, gives automakers a unique platform for building a wide range of infotainment, cluster and ADAS features into their vehicles.
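To give a concrete flavor of the data-parallel model CUDA exposes, here is a minimal sketch, written with the Numba CUDA bindings for Python purely for illustration (production Tegra software would more likely use CUDA C/C++ or NVIDIA’s vision libraries), that assigns one GPU thread to each pixel of a camera frame:

# Minimal sketch of CUDA-style data parallelism: one GPU thread per pixel.
# Illustrative only; not NVIDIA's automotive code.
import numpy as np
from numba import cuda

@cuda.jit
def rgb_to_gray(rgb, gray):
    # Each thread converts exactly one pixel to grayscale.
    y, x = cuda.grid(2)
    if y < gray.shape[0] and x < gray.shape[1]:
        gray[y, x] = (0.299 * rgb[y, x, 0] +
                      0.587 * rgb[y, x, 1] +
                      0.114 * rgb[y, x, 2])

# Random data stands in for a 720p camera frame.
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
gray = np.zeros((720, 1280), dtype=np.float32)

threads_per_block = (16, 16)
blocks = ((720 + 15) // 16, (1280 + 15) // 16)
rgb_to_gray[blocks, threads_per_block](frame, gray)

The same kernel runs unchanged on however many GPU cores a chip provides, which is the property that lets one architecture span supercomputers and a mobile processor like Tegra X1.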
The architecture’s modular design approach gives automakers the flexibility to upgrade hardware in cars already on the road. Over-the-air software updates can also help them close the technology gap between consumer electronics and in-vehicle systems.
According to NVIDIA, the newest Tegra processors are based on the same computing architecture that powers the fastest supercomputer in the U.S., the Titan system at Oak Ridge National Laboratory, as well as the world’s most energy-efficient supercomputers. Moving forward, the company expects more vehicles to use its Tegra processors to handle a wide range of applications in the car.
Automotive Industries (AI) asked Danny Shapiro, Senior Director of Automotive at NVIDIA, how he sees the Tegra processor impacting the development of vehicle infotainment systems and ADAS.
Shapiro: The newest generation of Tegra processors is a breakthrough in mobile computing for the car. Packing the computing power of the world’s fastest supercomputer from the year 2000 onto a chip the size of your thumbnail, we can now power advanced computer vision systems inside the car. The self-driving prototypes with their trunks full of computers are going to be rapidly replaced by compact NVIDIA DRIVE systems. Automakers are turning to these solutions to power their next-generation digital cockpit and deep learning-based auto-pilot systems.
AI: NVIDIA says its latest Tegra processor can run both in-vehicle infotainment and ADAS systems, such as surround vision. What does that mean for automakers in terms of cost savings and system complexity?
Shapiro: NVIDIA’s automotive visual computing solutions are ideal for infotainment and driver assistance systems because high-resolution graphics and fast performance are two keys to delivering a high-quality experience to customers. By using a Tegra processor to manage both systems, automakers will be able to design a more efficient solution overall.
AI: Give us some examples of automotive manufacturers that are using the Tegra processor to develop these systems.
Shapiro: We can only discuss programs that our customers have already announced. One example that will showcase the type of performance our processors deliver is the Audi piloted driving system, dubbed zFAS. Based on our Tegra K1 processor, this system will enable a traffic jam assist system, which allows drivers in heavy highway traffic to remove their hands from the steering wheel and feet from the pedals. Using cameras and other sensors fed into the Tegra-powered system, the car will be able to stay inside the lane markings and maintain a safe distance from the car in front, accelerating or braking as needed.
Another feature of the zFAS system will be automated parking. In some situations, the driver will be able to exit the vehicle and then, using a key fob or smartphone, instruct the car to park itself. Again, using the various sensors on the vehicle, the Tegra processor will act as the brain of the vehicle, interpreting real-world data to determine what is happening around the car. The car will park itself, and then the driver will retrieve the vehicle in the same manner by using the key fob or phone.
AI: What about updates?
Shapiro: Our Tegra automotive solutions are fully programmable. Therefore, software updates from the OEM can be applied throughout the life-cycle of the car, giving customers improved features and performance during the ownership period.
Cars will be able to get better over time as new applications or features are added. Our partner Tesla Motors has been employing this strategy since the Model S began shipping more than two years ago. These over-the-air software updates have added numerous new capabilities to the vehicle, without ever having to bring the car back to the showroom or repair shop.
AI asked Geoff Ballew, Director, Advanced Driver Assistance Systems (ADAS) at NVIDIA, to describe how the ADAS surround vision feature works.
Ballew: Surround vision uses multiple cameras on the car to create a virtual view of the environment around the car. Tegra processors can warp and stitch these images together so you get a view as if a single camera were above your car and looking down. It can even adjust the view if you want to tilt the camera to one side or another to help you park in difficult situations.
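A minimal sketch of the core operation Ballew describes, re-projecting one camera’s view of the ground plane into a top-down image with a perspective warp (here using OpenCV in Python; the point correspondences are placeholders a real system would obtain from camera calibration, and the stitching of all cameras is not shown):

# Sketch of the warp step behind surround vision: re-project one camera
# image to a top-down view. The coordinates are placeholders; a real system
# derives them from camera calibration and then stitches all cameras.
import cv2
import numpy as np

frame = cv2.imread("rear_camera.jpg")            # hypothetical camera frame

# Four ground-plane points as the camera sees them (a trapezoid) ...
src = np.float32([[420, 480], [860, 480], [1230, 720], [50, 720]])
# ... and where they should land in the top-down output (a rectangle).
dst = np.float32([[300, 0], [500, 0], [500, 400], [300, 400]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
top_down = cv2.warpPerspective(frame, H, (800, 400))
cv2.imwrite("top_down.jpg", top_down)

Repeating this warp for each camera and blending the overlapping regions is what produces the single overhead composite Ballew refers to.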
AI: How effective is stitching multiple camera views together in providing a surround view?
Ballew: Surround vision is a huge help if you’re trying to park in a tight space or even just to help you center the car between the parking lines, which you can’t see from the driver’s seat because they are too close to the sides of the car. Advanced image processing is the key difference between an image that looks like a real camera is hovering over your car looking down, versus a distorted image from a fish-eye camera.
AI: How do you see NVIDIA’s Tegra processor changing the face of in-vehicle infotainment and ADAS?
Ballew: Tegra combines advanced camera interfaces with the image-processing horsepower needed to use high-dynamic-range cameras with megapixel resolution. This is like HD for your television, giving you crisp, clear images even in bright sunlight or dark shadows. Tegra’s ISP and GPU can also perform computer vision functions on the video streams to detect objects and provide even more information to the driver.
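As an illustration of the kind of per-frame computer vision Ballew mentions (not NVIDIA’s actual pipeline), the sketch below runs OpenCV’s stock HOG pedestrian detector over frames from a video stream and draws the detections; the video file name is a placeholder:

# Illustrative only: run a stock pedestrian detector over video frames.
# OpenCV's HOG person detector stands in for NVIDIA's own vision stack.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("dashcam.mp4")            # hypothetical video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:                   # boxes are (x, y, w, h)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()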
AI: What are some of the ADAS features that you think will be particularly groundbreaking over the next 12 months?
Ballew: We see a revolution in computer vision and deep learning coming. Surround view today is limited to premium automobiles and even then can add significantly to the price of the car. Costs will come down as some OEMs combine this feature with navigation systems, so customers get both features for a lower total price. Parking assist capabilities will grow over the next 12-24 months with self-parking cars that can finish the parking task even after everyone has gotten out of the car.