When it comes to audio and creating a good sound, everyone will tell you "just trust your ears". It's a blanket statement that really means "spend the next 30 years figuring this out, because this rabbit hole is deeper than you could possibly imagine". This becomes exponentially more difficult when we realize our audio is lying to us, ALWAYS!
Building on what we learned about frequency last week, we're now exploring all the things that affect how we perceive those frequencies in everyday life: the shape of our ears and how they perceive loudness across the frequency spectrum, the way our microphones capture sound, the way our speakers translate that sound at different decibel levels, and how our rooms color the sound. We can never be sure that what we're hearing is actually real! It's a huge problem, and one that takes a long, long time to understand, diagnose, and properly handle.
Let's look at some of the things that impact your sound...
The Fletcher Munson Curve
The first thing we need to understand is that our ears hear different frequencies at different volumes. The graph above may be difficult to wrap your head around at first, but each red line shows the output level required for every frequency along it to be perceived as equally loud. For example, for a 100Hz tone to be perceived as being just as loud as a 1kHz tone played at 60dB, you need to play the 100Hz tone back at ~72dB for both tones to sound equally loud to your ear. This shows that our ears are very insensitive to very high and very low frequencies, and EXTREMELY sensitive to frequencies between 4-6kHz. The other important thing to notice is that the louder we play all the tones, the flatter the curves get.
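If you want to put rough numbers on this, the standard A-weighting curve (which is derived from an equal-loudness contour at moderate listening levels) is a handy approximation of the ear's relative sensitivity. The short Python sketch below isn't the Fletcher Munson data itself, just that approximation, and the exact numbers shift as the overall level changes:

```python
# Rough illustration of why low frequencies need more level to sound as loud:
# the standard A-weighting curve approximates the ear's relative sensitivity.
# This is an approximation, NOT the Fletcher Munson measurements themselves.
import math

def a_weighting_db(f: float) -> float:
    """A-weighting in dB relative to 1 kHz."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 dB at 1 kHz

for freq in (50, 100, 1000, 3000, 10000):
    print(f"{freq:>6} Hz: {a_weighting_db(freq):6.1f} dB relative sensitivity")
# 100 Hz comes out around -19 dB: roughly the extra level a low tone needs
# before the ear judges it as loud as a 1 kHz tone at moderate volumes.
```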
Microphone Frequency Response
Every microphone will capture a source differently and represent the frequency spectrum uniquely. As I've mentioned in countless podcasts and videos, you must properly pair your unique voice to the unique frequency response of a microphone. Each person's voice is unique, and the frequencies represented in it are unique. For example, I have an overabundance of 7-8kHz in my voice, so I was careful to choose a microphone that doesn't add any more of that range to the signal. In fact, I went with the Shure SM7b because it has a dip in that range!
In the episode, I demoed five cardioid dynamic microphones ranging from $100-$400. My voice sounds insanely different on each one, demonstrating how important it is to pair the frequencies in your voice with the frequency response of your microphone.
Monitor/Headphone Frequency Response
In the same way that microphones have unique sonic characteristics, every pair of earbuds, headphones, and studio monitors has a unique frequency response curve. There's a lot to say about mixing on headphones v. monitors, but that's an episode and article for a different time. The main point that's made in the episode is the difference between consumer and reference devices.
"Modern" output devices like Beats, Bose, or JBL headphones and speakers tend to accentuate the low end and high end of the frequency spectrum. While we might like the bump the music in our car at nauseating levels that cause our chest to ache, this isn't the sound we want when we're mixing or processing out audio. It gives us an "untrue" idea about what our audio actually sounds like. If our speakers add a ton of bass, we can improperly EQ our signal to remove a lot of the muddiness that comes with "enhanced low end response". What then happens is when a listener is using an output device without the enhanced bass response, there's suddenly a massive lack of low end and depth.
Professional reference headphones and monitors strive for a truly flat response, leaving the audio uncolored and unaffected by the speakers themselves, so the person mixing the signal knows they are hearing the audio as it is, not how consumer grade equipment wants them to think it sounds.
Monitoring Loudness
As if the unique sound of every set of monitors or headphones weren't enough, the frequency response of each of these devices changes as you increase the volume! Look back at the Fletcher Munson Curve: the louder you play the tones, the flatter the curves get. Not even mentioning harmonic distortion and phasing... the sound changes with the volume. So what do we do about that? The best thing we can do is compare apples to apples, meaning every time you mix audio, mix at the same volume.
We see a rather flat response at 90dB SPL, but that is MUCH too loud to listen to comfortably for any length of time, so something like 75-80dB is a pretty decent compromise. Here's how we calibrate for loudness...
You can download a free decibel meter app for any smartphone. Stream a podcast that's known to be very well produced, something like NPR or a Gimlet show, and turn up the volume on your monitors until it's in that 75-80dB range, prioritizing your comfort. Stick a piece of tape on your interface's output knob and another on the interface next to the knob, then draw a straight line off the knob and onto the interface. Whenever you're mixing audio, make sure both segments are lined up. This ensures that any time you mix, you're always listening at the same output level, assuming the signal level in your DAW is hitting its target, which should be around -19 LUFS for mono and -16 LUFS for stereo.
(these output settings were chosen at random and are most likely not going to give you the perfect output levels)
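If you want to sanity-check where your own mix lands against those loudness targets, here's a minimal sketch using the third-party soundfile and pyloudnorm Python libraries (neither is mentioned in the episode, and the file name is a placeholder):

```python
# Minimal LUFS check using third-party libraries:
#   pip install soundfile pyloudnorm
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_podcast_mix.wav")   # hypothetical file path

meter = pyln.Meter(rate)                      # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)    # integrated loudness in LUFS

channels = 1 if data.ndim == 1 else data.shape[1]
target = -19.0 if channels == 1 else -16.0    # targets quoted in the article
print(f"Integrated loudness: {loudness:.1f} LUFS (target ~{target} LUFS)")
```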
Your Environment
How many times can I say it? Sound takes the shape of its container. If a room is small, reflective, and untreated, the sound coming from your studio monitors will be severely colored by that room. As the signal bounces from wall to wall to wall thousands of times per second, the waves cross through each other and their reflections, summing together and causing artificial boosts or dips at certain frequencies. An untreated room will make it hard to hear the audio over the sound of the room! You can often find yourself fighting the sound of your own space on both the input and output ends of your recordings.
Reverberation will cause recordings to sound boomy, and mixing the signal in that same room will make it sound boomy and muddy all over again. That can lead to EQing out too much of the low end, leaving the voice sounding thin and lacking when heard in a better treated room.
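To picture why those boosts and dips happen, here's a deliberately simplified sketch of a single reflection summing with the direct sound (comb filtering). Real rooms involve thousands of reflections, and the 1.7m path difference below is an arbitrary example, not a measurement:

```python
# Simplified single-reflection model: when a delayed copy of a sound sums with
# the direct sound, frequencies whose half-wavelength matches the delay cancel.
SPEED_OF_SOUND = 343.0          # m/s at roughly room temperature

def notch_frequencies(extra_path_m: float, count: int = 5) -> list[float]:
    """First few frequencies cancelled by a reflection that travels
    `extra_path_m` metres further than the direct sound."""
    delay = extra_path_m / SPEED_OF_SOUND              # seconds
    return [(2 * k + 1) / (2 * delay) for k in range(count)]

for f in notch_frequencies(1.7):
    print(f"dip near {f:7.0f} Hz")
# With a 1.7 m longer reflected path, dips land near 101, 303, 504, ... Hz --
# exactly the kind of low/mid unevenness an untreated room bakes into a mix.
```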
YOU!
This is what this whole thing has been working up to! YOU are in control of how your audio sounds! We can use an equalizer to adjust the amplitude of specific frequencies and frequency ranges, but we can't do that accurately unless the devices and environment we're mixing in are accurate!
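For the curious, this is roughly what a single EQ band is doing under the hood. The sketch below uses the well-known "Audio EQ Cookbook" peaking-filter recipe; the frequency, gain, and Q values are arbitrary examples, not recommendations for any particular voice:

```python
# One peaking EQ band: boost or cut a range around a centre frequency.
# Coefficients follow the widely used "Audio EQ Cookbook" (RBJ) recipe.
import math

def peaking_eq_coeffs(fs: float, f0: float, gain_db: float, q: float):
    """Return (b, a) biquad coefficients for a peaking EQ."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b = [1 + alpha * A, -2 * cos_w0, 1 - alpha * A]
    a = [1 + alpha / A, -2 * cos_w0, 1 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]  # normalize a[0] to 1

# Example only: cut 3 dB around 7.5 kHz at a 48 kHz sample rate
b, a = peaking_eq_coeffs(fs=48000, f0=7500, gain_db=-3.0, q=2.0)
print("b =", [round(x, 6) for x in b])
print("a =", [round(x, 6) for x in a])
```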
Next week we'll dig a little deeper into what these frequency ranges sound like on my voice, and we'll learn about the importance of the midrange frequencies, so make sure you're subscribed to the show!
Find me online!
My Signal Chain

Hardware:
Audio Interface: Apogee Ensemble
Microphone: Shure SM7b
Headphones: Audio-Technica ATH-M50x
Earbuds: Klipsch R6i II
Studio Monitors: Yamaha HS7
Mic Stand: Rode PS1A Boom Arm

Software:
iZotope RX6 Mouth De-Click
iZotope RX6 Voice De-Noise
FabFilter Pro-Q 3
Waves Vocal Rider
Waves CLA-2A
Waves L2 Limiter
Waves WLM Meter
Waves Dorrough Meter

Save 10% off the plugins above with this affiliate link from Waves!
*Most of these links are affiliate links.

Midroll Song: Road Trip by Joakim Karud
Closing Song: Future Funk by Joakim Karud
http://www.joakimkarud.com
For more info, or to ask any questions, check out my website and reach out to hello@cleancutaudio.com