Separating audio and acoustics software development from vehicle hardware gives automotive audio designers and engineers complete creative freedom to deliver new and exciting in-vehicle sound experiences.
QNX® Sound enables automakers to consolidate all audio and acoustic functions seamlessly and cost-effectively within the main software stack of the software-defined vehicle architecture, using pre-integrated and pre-tested core technologies.
This includes a library of automotive acoustics functional modules for services such as telephony, safety alerts, noise reduction, sound enhancement, and media playback. The software foundation gives more control over the quality and functionality of these services through advanced tuning tools with a graphical programming interface that, according to the company, unlocks endless signal-processing creativity.
Automotive Industries (AI) asked José María Marín, Global Director, QNX Sound, how software-defined audio (SDA) is set to redefine the in-vehicle acoustic experience for drivers.
Marín: I believe that the cloud is about to transform the way OEMs build and deploy software in the car.
If you look at how software for the car has traditionally been built, the automotive industry is very mechanical in its thinking. Designers think about their mechanical needs and requirements before the software, which is what gives life to the machine.
That paradigm is changing, and designers are starting to look at what software can do using existing hardware. QNX® Sound is the only platform on the market that is 100% ready for that. It is already pre-integrated with the operating system that everybody’s using for virtualization in the cockpit and a variety of other applications, and it already runs in the cloud.
It runs in the cloud on top of QNX, which means it is the exact same stack you are going to bring into production on a given piece of hardware.
Designers do not need to be waiting for hardware. They could be working on the next generation cockpit experience for the OEM from a beach house in the Bahamas. They do not need to be in a lab, or even know which processor the OEM will select.
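The separation Marín describes can be pictured as a thin hardware-abstraction layer: the same signal-processing code runs against a simulated audio device in the cloud and against real hardware in the car. A minimal sketch, assuming nothing about QNX's actual APIs (all names here are invented for illustration):

```python
from abc import ABC, abstractmethod

class AudioBackend(ABC):
    """Abstracts the audio hardware so the DSP code above it never changes."""
    @abstractmethod
    def write(self, samples: list[float]) -> None: ...

class SimulatedBackend(AudioBackend):
    """Cloud or desktop stand-in for real speakers: records what was played."""
    def __init__(self) -> None:
        self.played: list[float] = []
    def write(self, samples: list[float]) -> None:
        self.played.extend(samples)

def apply_gain(samples: list[float], gain: float) -> list[float]:
    """A trivial 'DSP' stage; a real pipeline chains many such modules."""
    return [s * gain for s in samples]

def render(backend: AudioBackend, samples: list[float], gain: float) -> None:
    # The pipeline is identical whether the backend is simulated or real.
    backend.write(apply_gain(samples, gain))

backend = SimulatedBackend()
render(backend, [0.1, 0.2, -0.1], gain=2.0)
```

Swapping `SimulatedBackend` for a driver-backed implementation would be the only change needed to move from a cloud development environment to target silicon.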
AI: How can this change customer experiences of cars?
Marín: Imagine your car is a theater. You wake up one morning, and there is a message inviting you to a new Star Wars experience. You accept the evening experience, starting at, say, six.
It begins when you open your car door and hear the sound of the Millennium Falcon opening and the voice of Han Solo inviting you in. Inside, the car has been transformed by light and sound into a Star Wars theme. Your daily commute has become the vehicle for a completely new entertainment experience.
The GPS shows you the way to a restaurant with a Star Wars themed dinner and drinks (booked for you), and afterwards you attend the premiere of the latest Star Wars movie.
Naturally, there is a return to earth theme on your way home. It’s maybe a bit crazy to think of such a scenario right now, because we think of cars as people movers.
But, already in China we are starting to see the possibilities of what creative people can do when you provide the software capability to create such experiences. Or what if you are an international artist who wants to do something different to launch a new album?
You know that it will sound best in a car, so you make it available only to in-car systems for the first month. An OEM could purchase the rights for a month and allow it to only be played in their own models.
Technically, with QNX Sound, configuring the software is super easy to do. From a technology perspective, all the sound equipment is already in the car. Both scenarios are opportunities for OEMs to diversify their revenue stream.
The challenge is building a business model around it.
AI: What will get the OEMs to use value-added services as a revenue stream?
Marín: Having worked in audio and acoustics for a while, I strongly believe that music and entertainment – and music quality in particular – is one of the main things that people value in their cars.
Studies have found that people listen to music more in their cars than at home or anywhere else. It makes sense when you realize that the car is a unique environment for listening to music because your head is always in the same position, and you probably spend a lot more time in your car than in your living room if you commute.
Advanced audio experiences are being made possible by the capacity of the computing platforms, and the lower cost of speakers and screens.
We showcased the opportunities in a demo car we built for the Audio Engineering Society Conference in Gothenburg, which we then took on a road trip through Europe, and which we’ll have at CES in Las Vegas this January.
Built in collaboration with Dolby Laboratories, Dirac and McIntosh Group, it demonstrates how OEMs can deliver the most advanced audio experiences to customers without having to rely on expensive dedicated smart amplifiers.
All the functions run on the same CPU as the other sound applications, and we use only 2% of its capacity. Once the CPU is programmed, it is like an app. You can activate it, deploy it, or switch it off – depending on what the customer is willing to pay for.
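The "app-like" activation Marín mentions can be sketched as a simple entitlement-and-toggle layer over the processing stages: a feature deploys once with the stack and only switches on if the customer has purchased it. This is a conceptual sketch, not a QNX interface, and the feature names are invented:

```python
class FeatureManager:
    """Treats each sound function like an app: deploy once, toggle on demand."""
    def __init__(self) -> None:
        self._entitled: set[str] = set()  # features the customer has purchased
        self._active: set[str] = set()    # features currently switched on

    def entitle(self, feature: str) -> None:
        self._entitled.add(feature)

    def activate(self, feature: str) -> bool:
        # A feature only turns on if the customer is entitled to it.
        if feature in self._entitled:
            self._active.add(feature)
            return True
        return False

    def deactivate(self, feature: str) -> None:
        self._active.discard(feature)

    def is_active(self, feature: str) -> bool:
        return feature in self._active

mgr = FeatureManager()
mgr.entitle("spatial_audio")
ok = mgr.activate("spatial_audio")     # purchased: switches on
denied = mgr.activate("concert_mode")  # not purchased: stays off
```

The same mechanism supports trials and features on demand: an over-the-air update only has to change the entitlement set, not the deployed software.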
AI: What about the other audio functions?
Marín: QNX Sound is not only audio playback. It includes all four categories of acoustics in the car. Media playback functionality controls everything to do with media sound in terms of music or a movie, from low to high-end setups.
Voice treatment refers to everything that has to do with voice phone calls, in-car communication and, increasingly, voice assistance. Then there is noise control – reducing irritating sounds generated by the likes of the tires on the road, engine, traffic and wind.
OEMs also want your car to alert you to close the door, release the handbrake, change the oil, fasten your seatbelt, warn of overheating, and more. Electric cars need to alert pedestrians to their presence. QNX Sound has integrated all these functions, so you have all functions ready to be tuned and adapted to your vehicle in a single application – and without the need for an amplifier.
Another advantage for OEMs is that most already use QNX, so the procurement process is already in place.
AI: What if OEMs are committed to other platforms?
Marín: We do not lock customers into our own processing platforms because all our libraries are universal and interchangeable. If you don’t want our noise cancelling for handsfree telephony, no problem. Take the one you want, and we integrate it on our platform.
We are working with a number of third-party partners to bring their functionality into our platform, and we have a long list of people waiting to come onto the platform. It is an open platform, and anybody can come and participate.
AI: On which hardware can it run then?
Marín: That brings up another super interesting topic, which is portability. Traditionally, chip suppliers have used proprietary interfaces, which means that you cannot take a piece of software and simply move it to another silicon supplier.
We and our partners at Google are pushing for an open industry standard called VirtIO, which will mean that software is portable between silicon platforms. Having a system which separates hardware and software will free developers to work on applications independent of the silicon platform.
Many of the bigger platforms already support VirtIO, and it is inevitable that the rest will follow.
AI: What are the cost savings?
Marín: I like to look at savings from a holistic perspective. It is not simply how much cabling and hardware you save by consolidating all your audio software in one head unit processor. You save on lost production. Some OEMs are still experiencing a chip shortage.
The next cost reduction opportunity is reducing integration requirements. All the functionality is already built in, so you do not have to run cabling from one box to another and make sure they talk to each other.
Another major cost-saving opportunity is on variants. You can have different levels of amplifiers, noise cancelling, and so on in the same stack. With hardware-based systems you need different pieces of metal, which has its own implications in terms of packaging. So, you save on all the additional inventory and planning that goes into different variants.
With QNX Sound the variants are just software configuration. Of course, you will need different speakers and microphones to deliver higher or lower sound quality, but they all connect to the same central computing platform.
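"Variants as software configuration" can be illustrated as one stack with several trim-level configurations, each enabling a different set of processing modules. The trim names, channel counts, and module names below are invented for the example:

```python
# One software stack; each trim level is just a different configuration.
VARIANTS = {
    "base": {"channels": 4, "modules": ["media_playback", "chimes"]},
    "mid": {"channels": 8, "modules": ["media_playback", "chimes",
                                       "noise_control"]},
    "premium": {"channels": 16, "modules": ["media_playback", "chimes",
                                            "noise_control", "voice_treatment",
                                            "3d_rendering"]},
}

def build_pipeline(variant: str) -> list[str]:
    """Selects the processing modules for a trim level at configuration time."""
    return VARIANTS[variant]["modules"]

base = build_pipeline("base")
premium = build_pipeline("premium")
```

The point is that moving between trim levels changes a lookup key, not the deployed binaries, which is what eliminates the separate "pieces of metal" per variant.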
And then, of course, there are the over-the-air updates. Once you have all the functionality on a single operating platform, pushing updates, making changes, offering features on demand, providing trials, and turning features on and off becomes much easier. We have already done this with an OEM.
So, it is very difficult to quantify savings in numbers. When I start explaining the big picture, it becomes obvious that you can save on cost, effort and complexity, which is the main OEM pain point that we see. Programs are delayed by software and hardware complexity.
We need to reduce complexity, and QNX Sound does exactly that.
AI: How does your licensing model work?
Marín: It is very simple. Our core business is licensing. We have an access fee when we first enter into an agreement, which is a one-time charge for the OEM or Tier 1 to acquire all the necessary software, hardware and tools.
Then we have license fees for every vehicle that runs on our software. We also offer services such as customization. Many OEMs prefer to do the customization themselves because they want to create their own signature sound.