Sonos' very first AI sound mode is a free upgrade for the Arc Ultra soundbar, and it makes speech clearer than ever
- Sonos adds an AI speech-enhancement option to the Arc Ultra
- It's Sonos' first AI sound feature, with four levels of speech boost
- It was developed with a charity to help people with hearing loss
Sonos has launched a new version of its speech-enhancement tools for the Sonos Arc Ultra, which we rate as one of the best soundbars available.
You'll still find these tools on the Now Playing screen in the Sonos app, but instead of just a couple of options, you now have four modes (Low, Medium, High and Max), all powered by the company's first use of an AI sound-processing tool. They should be available to all users today (May 12).
These modes were developed in a year-long collaboration with the Royal National Institute for Deaf People (RNID), the UK's leading charity for people with hearing loss. I spoke with Sonos and the RNID to get the inside story on how the feature was developed – you can read that piece for more details.
The update rolls out today on the Sonos Arc Ultra, but won't be available on other Sonos soundbars because it requires a level of processing power that the chip in the Arc Ultra can offer, but that the older soundbars cannot.
The AI element analyzes the audio passing through the soundbar in real time and separates out the 'speech' elements, so they can be made more prominent in the mix without affecting the rest of the sound too much. I heard it in action during a demo at Sonos' UK product development facility, and it's very impressive.
If you've used speech-enhancement aids before, you're probably familiar with hearing the dynamic range of the sound, and especially the bass, suddenly massively reduced in exchange for the speech elements being pushed further forward.
That's not the case with the new Sonos modes – powerful bass, the overall soundscape and the more immersive Dolby Atmos elements are all maintained much better. That's for two reasons: one is that the speech is enhanced separately from the other parts, and the other is that it's a dynamic system that only activates when it detects that speech is likely to be drowned out by background noise.
It won't activate if the dialogue takes place against a quiet background, or if there's no dialogue in the scene. And it's a system that works in degrees – it applies more processing in the busiest scenes, and less when the audio isn't so chaotic.
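The behavior described above – a boost that only engages when dialogue risks being masked, and that scales with how busy the scene is – can be sketched as a simple gain rule. This is purely a minimal illustration of the concept; the function name, thresholds and dB values are my assumptions, not Sonos' actual implementation:

```python
# Hypothetical sketch of a dynamic dialogue-enhancement gain stage.
# All names and numbers are illustrative assumptions, not Sonos' code.

def enhancement_gain_db(speech_level_db: float,
                        background_level_db: float,
                        mode_max_boost_db: float = 6.0,
                        target_ratio_db: float = 10.0) -> float:
    """Return a speech boost (in dB) that scales with how badly the
    dialogue is masked by the background.

    If speech already sits `target_ratio_db` above the background,
    no boost is applied; the worse the masking, the larger the boost,
    capped at the level set by the chosen mode (Low ... Max).
    """
    deficit = target_ratio_db - (speech_level_db - background_level_db)
    if deficit <= 0:
        return 0.0  # clear dialogue or quiet background: stay inactive
    return min(deficit, mode_max_boost_db)
```

In a sketch like this, the Low-to-Max modes would simply map to larger `mode_max_boost_db` caps, which matches the described behavior of the system doing nothing in quiet scenes and working hardest in chaotic ones.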
How does it sound?
In the two lowest modes, dialogue is picked out more clearly without major damage to the rest of the soundtrack, based on my demo.
In High mode, the background was still maintained very well, but the speech began to sound a little processed; and on Max I heard the background cut back a little, with some more artificiality in the speech – but the speech was picked out extremely well, and this mode is really designed only for the hard of hearing.
As I said, the modes were developed with the RNID: Sonos consulted with hearing-research experts at the charity, but people with different types and levels of hearing loss were also brought in to test the modes and give feedback at different stages of development.
I spoke extensively with the Sonos audio and AI architects who developed the new modes, as well as the RNID, but the most important takeaway is that the collaboration led Sonos to place more emphasis on maintaining the immersive sound effects, and to add four levels of enhancement instead of the originally planned three.
Despite the involvement of the RNID, the new modes aren't designed exclusively for the hard of hearing. The feature is still simply called speech enhancement, as it was before, and it isn't tucked away as an accessibility tool – it's improved sound for everyone, and 'everyone' now includes people with mild to moderate hearing loss. The Low and Medium modes may also simply work for those of us who need a little extra clarity in busy scenes.
This isn't the first use of AI-driven speech separation I've seen – I've experienced it on Samsung TVs, and in a nice showcase from Philips TVs, where it was used to eliminate the commentary during sports while retaining the crowd noise.
But it's interesting that this is Sonos' first use of AI sound processing, and the four-year development process, including a year of refinement with the RNID, shows that Sonos has taken a considered approach that isn't always visible in other AI sound-processing applications. Here's my piece interviewing Sonos' AI and audio developers along with researchers from the RNID.
It's just a shame that it's exclusive to the Sonos Arc Ultra for now – though I'm sure new versions of the Sonos Ray and Sonos Beam Gen 2 will come along before too long with the same upgraded chip to support the feature.