Engineers seek solution to varied volume of satellite-fed programs


A recent NPR study confirmed what many have surmised for years: Public radio shows sent through the Public Radio Satellite System vary widely in loudness.

An NPR working group that has been studying the issue found that roughly 53 percent of the content they examined deviated from standards PRSS recommends to keep volumes consistent. The group is looking at creating new best practices and implementing a software fix that could cheaply curb the problem.

[Chart: Volume levels. Created by NPR engineering staff, the chart shows the widely varying volume levels of shows sent through the Public Radio Satellite System.]

“It’s a big issue in the system,” said Paxton Durham, chief engineer at Virginia’s WVTF-FM and Radio IQ. “I’ve been here 24 years, and as long as I can remember there’s always been a problem.”

Research cited by NPR found that a change of more than 4 decibels can prompt listeners to adjust their volume; a change of more than 6 decibels can cause them to switch to another station.
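Decibels are logarithmic, so those thresholds describe ratios of signal amplitude rather than fixed increments. A minimal sketch of the arithmetic, in Python for illustration (the function name is ours; the 4 dB and 6 dB figures are the ones cited above):

```python
import math

def level_change_db(a1: float, a2: float) -> float:
    """Level change in decibels between two signal amplitudes."""
    return 20 * math.log10(a2 / a1)

# Doubling the amplitude is roughly the 6 dB jump that, per the
# research NPR cited, can drive listeners to another station.
print(level_change_db(1.0, 2.0))    # ~6.02 dB

# About a 1.6x change is the 4 dB shift that prompts listeners
# to reach for the volume knob.
print(level_change_db(1.0, 1.585))  # ~4.00 dB
```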

“If we are not consistent across the system, it’s not going to be a good experience for the listeners,” said Chris Nelson, NPR’s director of digital strategy, during the NPR board’s May meeting.

In a presentation to the board’s distribution and interconnection committee, Nelson described how a group of NPR engineering staff analyzed the loudness of more than 6,000 files, including NPR newsmagazines and underwriting messages; programs from American Public Media and Public Radio International; and additional PRSS content.

“Basically, if it was available, we tested it,” Nelson said. And engineers were surprised at the degree of fluctuation in the sound levels of PRSS content.

“We have a real issue here,” Nelson said.

A mashup of PRSS content presented by Nelson, along with a visual representation of the sound spectrum, backed up his point. Radiolab and On the Media, for example, were significantly louder than Marketplace Tech Report and This American Life. Some of the quietest content was from two syndicated music services, Classical 24 and PubJazz.

The discrepancies were also discussed during the annual Public Radio Engineering Conference in March. Most engineers knew that PRSS volumes were inconsistent but had underestimated the extent of the problem, said Shane Toven, director of engineering for Wyoming Public Media, who attended the conference.

“It is something that needs to be addressed,” Toven said. “It’s all over the map.”

Engineers at the conference seemed to support solutions that would not apply punitive measures such as fines, according to Toven.

NPR’s team examining the problem plans to set a new target loudness level for PRSS content based on the use of loudness meters. It will also compile a list of affordable tools producers can use to measure volume.

Most producers track volume with peak meters, colored bars that rise and fall with audio levels. Loudness meters, by contrast, measure loudness numerically, which NPR’s engineers said increases accuracy. The meters range from about $100 to thousands of dollars, Nelson said.
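For a sense of what a loudness meter reports, the same single-number measurement can also be made in software. A minimal sketch, assuming the open-source pyloudnorm package (an implementation of the ITU-R BS.1770 measurement) and a hypothetical local audio file:

```python
# Sketch: measure integrated loudness in software. Assumes
# `pip install soundfile pyloudnorm`; the file name is hypothetical.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("program_segment.wav")  # hypothetical program audio

meter = pyln.Meter(rate)                     # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # one number, in LUFS

print(f"Integrated loudness: {loudness:.1f} LUFS")
```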

8 thoughts on “Engineers seek solution to varied volume of satellite-fed programs”

  1. Part of the problem is that instead of mixing in a studio, with proper speakers and soundproofing, a lot of content is being mixed on headphones…which gives a completely skewed perspective of “loudness”.

    • What makes the studio speaker experience of a mix any more or less valid than the headphone experience of a mix? I figure you’ve gotta create a mix that sounds good everywhere.

      • Adam, theoretically it’s possible to mix on headphones in a manner that’ll sound good over speakers, but it’s exceedingly hard to do. The sonic characteristics of headphones are vastly different because they’re much closer to your ears (near-field effect) and they’re in a quasi-isolated environment (cups providing partial soundproofing to the outside world).

        The former gets to how a speaker so close to your ears doesn’t have enough physical room for lower-frequency sounds to blend properly. Headphones can, and do, skew the frequency response to compensate and sound “normal” to your ears. But if you mix to that instead of a speaker, it’ll sound weird on the speaker. Especially since human ears don’t have a flat frequency response; we hear certain freqs better than others.

        The latter means you underestimate how detrimental undesired sounds (e.g. road noise when listening in a moving car) are to certain parts of your mix.

        I’m being somewhat picky here. Sure, with really sound-rich pieces it can be noticeable on its own. But with simple ax-n-trax (actualities/sound bites plus a reporter’s commentary/story) the difference in mixing between headphones and proper monitor speakers isn’t big enough to be a deal-killer. The problem is more that things are already pretty screwed up to begin with when it comes to levels, and the “mixing with headphones” problem is just piling one more thing on top of it.

        • Yeah, I’m very familiar with these issues Aaron. What I’m asking you is, why do you think mixes should be mixed to sound good on speakers as opposed to headphones? With the growth of podcasts and mobile livestreaming, a lot of our audience is listening on headphones these days. What I’m arguing (and I’m writing a piece for Current about this right now, that’s why I’m teasing this out with you) is that mixes need to work on both headphones and speakers. They need to work in isolation and they need to work in competition with noise floors (like car noise). They need to work on hifi systems and tiny shower radios. They need to sound good everywhere. When music engineers are mixing and mastering an album, they try it out on all kinds of different systems to make sure it works everywhere. It’s just like coding a website: your design needs to look good on the latest, greatest version of Chrome while also looking good on the crappy old version of IE that everybody has at their office.

          • Less than 7% of NPR content is consumed over the web; over 92% is consumed via AM & FM radio. Presumably most of that is either in the car or on a radio in the office. In either case, it’s not on headphones and it IS in a very noisy environment.

            It’s literally not possible to optimize a single method of mixing for both headphones and speakers. It’s inherently a tradeoff and, as things currently stand, mixing for headphones is trading off in the wrong direction.

            That said, shows can…and arguably should…be mixing differently for content destined for web delivery vs content destined for airing on AM/FM stations. Obviously this is not always a realistic possibility, but it would be closer to ideal.

            FWIW, I personally would not use music engineering/mixing as a baseline for public radio. The vast majority of music tracks are compressed within an inch of their lives with a ton of distortion in the process. You look at the volume bandwidth and it’s practically a horizontal line. Ugh. In general public radio is among the best, if not the best, about letting their content “breathe” with some dynamic range.

            Also FWIW, we’re drifting far off the original point (and I concede my original post itself was guilty of this). While mixing with headphones doesn’t help, the real problem is that individual shows have not invested the time and money in finding a universal standard for loudness and adhering to it. Admittedly, there hasn’t BEEN a good loudness standard to adhere to in the first place, something that really only changed in the last few years. And the rise of digital mixing over the last ten years has made for a misguided but understandable obsession with peak meters to avoid digital distortion, even though peak meters tell you almost nothing about actual loudness (the sketch after this thread illustrates the gap).

            So I’m hopeful that analyses like the one this article (and the session at PREC/NAB) talks about will compel shows to start “getting on the same page” when it comes to loudness.

          • I don’t know about outside of public radio, but there’s a big push going on right now to move all of public radio to measuring audio via LUFS (introduced in EBU R128, as you mention) as per the ITU-R BS.1770-3 standard. PRSS is spearheading it and you can get the details here:

            http://prss.org/loudness
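To illustrate the gap the thread above describes between peak readings and perceived loudness, here is a minimal sketch, again assuming numpy and pyloudnorm; the synthetic signal and the -24 LUFS target are illustrative only, not a published PRSS figure:

```python
# Sketch: a quiet tone with one brief click nearly pins a peak meter
# while the BS.1770 integrated loudness stays very low.
import numpy as np
import pyloudnorm as pyln

rate = 48000
t = np.arange(rate * 10) / rate
audio = 0.05 * np.sin(2 * np.pi * 1000.0 * t)  # quiet 1 kHz tone
audio[rate] = 0.99                             # one near-full-scale click

peak_db = 20 * np.log10(np.max(np.abs(audio)))
meter = pyln.Meter(rate)
lufs = meter.integrated_loudness(audio)

print(f"Peak: {peak_db:.1f} dBFS")             # near 0 dBFS
print(f"Integrated: {lufs:.1f} LUFS")          # far quieter than the peak suggests

# Normalizing to a common target (illustrative -24 LUFS); in practice
# you would also guard against clipping after the gain change.
normalized = pyln.normalize.loudness(audio, lufs, -24.0)
```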
