This episode focuses on a phrase I say often on the show: humans are more sensitive to change than to constants. So far, this series on frequency has been about a single voice, but most podcasts feature at least two speakers. The fastest way to lose podcast listeners is to have a wild difference in loudness between those speakers. However, we need to take this a step further. While the podcast loudness standard is -16 LUFS for stereo tracks (my podcast is always exported in stereo), -16 LUFS can mean many different things. It's an average across all frequencies over an extended period of time.
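To make that concrete, here's a minimal sketch (not from the episode) using NumPy and the open-source pyloudnorm meter. Two signals with very different spectral balance can both be normalized to -16 LUFS, even though one carries most of its energy where a small speaker can't reproduce it. The sine tones are just a crude stand-in for two voices.

```python
import numpy as np
import pyloudnorm as pyln  # ITU-R BS.1770 loudness meter

rate = 48000
t = np.arange(rate * 10) / rate  # 10 seconds of audio

# Two stand-in "voices": one dominated by low end, one by midrange.
bassy_voice = 1.0 * np.sin(2 * np.pi * 120 * t) + 0.1 * np.sin(2 * np.pi * 2000 * t)
middy_voice = 0.1 * np.sin(2 * np.pi * 120 * t) + 1.0 * np.sin(2 * np.pi * 2000 * t)

meter = pyln.Meter(rate)

# Normalize both signals to the same integrated loudness target.
bassy_16 = pyln.normalize.loudness(bassy_voice, meter.integrated_loudness(bassy_voice), -16.0)
middy_16 = pyln.normalize.loudness(middy_voice, meter.integrated_loudness(middy_voice), -16.0)

print(meter.integrated_loudness(bassy_16))  # ~ -16 LUFS
print(meter.integrated_loudness(middy_16))  # ~ -16 LUFS
# Same number on the meter, but on a phone speaker that can't reproduce
# 120 Hz, the midrange-heavy signal will sound far louder than the bassy one.
```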
The entire series on frequency has been leading up to this point: the importance of the midrange and presence frequencies. In the human voice, they're entirely responsible for intelligibility, and when we dig deeper, they're also the most important ranges because ALL output devices can reproduce them regardless of cone size, number of drivers, quality, etc. Every device will (for the most part) accurately reproduce this frequency range, so what we should actually be doing when mixing our podcasts is soloing this range and matching the loudness of the mid and presence range between speakers. Here's a real-life example, which is also described in the episode.
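If you want to sanity-check this outside the DAW, here's a rough sketch of the idea in Python, assuming two exported voice tracks. The file names and the 1–6 kHz band edges are placeholders I chose for illustration, not anything prescribed in the episode: band-pass each voice down to just the mid/presence range, then compare the loudness of that band alone.

```python
import numpy as np
import soundfile as sf                      # reads WAV files into NumPy arrays
from scipy.signal import butter, sosfiltfilt
import pyloudnorm as pyln                   # ITU-R BS.1770 loudness meter

def midrange_loudness(path, low_hz=1000.0, high_hz=6000.0):
    """Measure integrated loudness of only the mid/presence band of a file."""
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)            # fold to mono for a per-voice check
    # 4th-order Butterworth band-pass, applied forward and backward (zero phase)
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=rate, output="sos")
    band = sosfiltfilt(sos, data)
    return pyln.Meter(rate).integrated_loudness(band)

# File names are hypothetical -- point these at your own exported tracks.
host = midrange_loudness("host.wav")
cohost = midrange_loudness("cohost.wav")
print(f"host mid band:   {host:.1f} LUFS")
print(f"cohost mid band: {cohost:.1f} LUFS")
print(f"difference:      {cohost - host:.1f} dB")  # trim/EQ until this is small
```

The exact band edges are a judgment call; the point is that if those two band-limited numbers sit several dB apart, the voices won't feel matched on a phone speaker even when the full-band meter says they're both at -16 LUFS.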
My podcast Reminiscent creates audiograms and posts them on our Instagram for every episode. The majority of Instagram users are listening to the audio through their iPhone speakers. These are very small speakers that can't accurately reproduce lower frequencies, but they CAN rock out some pretty tight midrange. Now, in past episodes of the podcast, my LUFS meter (the WLM meter from Waves) told me that both my voice and my cohost's voice were sitting perfectly at -16 LUFS. However, when listening to the audiograms on my phone, my cohost sounded almost twice as loud as me. I'm embarrassed to say it took me forever to realize why: his midrange and presence frequencies were MUCH louder than mine, but on average, across the entire frequency spectrum, I had more low end in my voice, so the meter said we were about equally loud.
The important thing here is "translation": how a mix sounds across multiple devices. Through my earbuds and over-ear headphones, we sounded fine, because each of those devices sits very close to my ears, forgiving a lot of the discrepancies in loudness, and because they could accurately reproduce the entire frequency spectrum, the extra low-end power in my voice made up for the loudness I was lacking in the upper midrange. However, the mix didn't translate to lower-quality devices that can only accurately reproduce the midrange.
This is an issue not many people think about when mixing or processing their podcast audio, so it's an important lesson I'm hoping to introduce to the industry: how do we make small compromises in each signal in order to bring them closer together? Again, we notice changes more than static information. The constant back and forth of frequency representation between voices can become tiring and force our listeners' ears to constantly recalibrate to the sounds produced in our podcast. It's important to make sure listeners don't have to work to listen to your show. Make it as easy on the ears as possible; we can do that by giving them consistent frequency curves across all the speakers in our podcast.
Find me online!
My Signal Chain
Hardware:
Audio Interface: Apogee Ensemble
Microphone: Shure SM7b
Headphones: Audio-Technica ATH-M50x
Earbuds: Klipsch R6i II
Studio Monitors: Yamaha HS7
Mic Stand: Rode PS1A Boom Arm
Software:
iZotope RX6 Mouth De-Click
iZotope RX6 Voice De-Noise
FabFilter Pro-Q 3
Waves Vocal Rider
Waves CLA-2A
Waves L2 Limiter
Waves WLM Meter
Waves Dorrough Meter
*most of these links are affiliate links
Midroll Song: Road Trip by Joakim Karud
Closing Song: That Day by Joakim Karud
For more info, or to ask any questions, check out my website and reach out to hello@cleancutaudio.com