Frequency Primer

Hey Everyone!

I’d like to share with you a section from my upcoming book Musician’s Survival Guide to a Killer Record. Frequencies are the building blocks of music, and when it comes to implementing EQ, you first need to understand frequency and how to think about it in terms of the music itself. I spend quite a bit of time discussing frequency in this book. Here’s your primer on the subject.

Frequency

As you all will recall from physical science class, frequency is produced through vibration. Pluck a string and it vibrates. Hit a drum, it vibrates. The faster the vibration, the higher the frequency, the higher the note.

Now, we measure frequency in cycles per second, a unit we call Hertz, named after Heinrich Hertz, who proved the existence of electromagnetic waves. We abbreviate it as Hz, and for multiples of 1,000 Hz we use kHz, or kilohertz. Sometimes we get lazy and write the letter K to indicate kHz. I’ll stick to the proper terminology for the purposes of this Guide.

The human range of hearing extends from 20 Hz to 20 kHz, but if you’re an adult, good luck hearing anything over 18 kHz. That doesn’t mean there isn’t useful information extending well above our range of hearing; we just can’t actually hear it directly.

The low E string on a bass guitar sounds at 41 Hz, which means the vibration of that string cycles 41 times every second. The A above middle C on a piano cycles 440 times every second, giving us 440 Hz. And the top note on a violin, which is E7, cycles 2637 times per second, which would be expressed as 2.6 kHz (with rounding). That’s right, the top note on a violin is 2.6 kHz. No wonder it’s so annoying. That note lives in the most present range of our hearing, right around the fundamental frequency of a crying baby.

An octave above any note is twice its frequency. So, an octave above A4 at 440 Hz is A5 at 880 Hz. An octave above that is A6 at 1760 Hz. All the notes, and thereby frequencies, in between the octaves define the scale by dividing the octave into roughly equal parts. I say roughly, because we compromise a little on certain intervals for purposes of tuning, and that’s called tempering. The modern Western scale uses 12 steps per octave.
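
For those of you who like to check the math, here’s a minimal sketch of that relationship in Python. The function name and semitone offsets are my own for illustration; the idea is simply that every semitone multiplies the frequency by the twelfth root of two, so twelve semitones doubles it.

```python
# 12-tone equal temperament: every semitone multiplies the frequency
# by 2 ** (1 / 12), so 12 semitones doubles it (one octave).
A4_HZ = 440.0  # concert pitch reference

def note_frequency(semitones_from_a4: int) -> float:
    """Frequency of the note that many semitones above (or below) A4."""
    return A4_HZ * 2 ** (semitones_from_a4 / 12)

print(round(note_frequency(-41), 1))  # E1, low E on a bass: 41.2 Hz
print(round(note_frequency(0)))       # A4: 440 Hz
print(round(note_frequency(12)))      # A5, one octave up: 880 Hz
print(round(note_frequency(31)))      # E7, top note on a violin: 2637 Hz
```

You can use that same function to check any of the note-to-frequency conversions in this primer.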

All instruments have a fundamental frequency range, and anyone who composes for an orchestra is acutely aware of these ranges, particularly in terms of notes. For instance, a cello has a four-octave range that extends from C2 to C6. In terms of frequency, that translates to a range of 65 Hz to about 1 kHz. And while the overtones will extend far above 1 kHz, the fundamental is the frequency we perceive loudest by far when C6 is bowed.

Some instruments, like pianos and keyboards, acoustic guitars, and even drums, fill an enormous swath of the frequency range. As such, they tend to occupy considerable space in a production. You can only get away with so many parts living in the same frequency range before you get masking, which is exactly what it sounds like: you’re masking certain frequencies of one part with the common frequencies of a louder part. In other words, you can combat some masking issues partly with how you balance the parts.

The further away in frequency two instruments are, the less masking will be an issue. An egg shaker is never going to mask an 808 kik drum, or vice versa, because their frequencies don’t cross. The 808 kik lives down around 60 Hz, and an egg shaker’s fundamental lives at about 6 kHz. That’s a differential of nearly seven octaves.
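
If you want to verify that differential yourself: since each octave is a doubling, the distance between two frequencies in octaves is just the base-2 logarithm of their ratio. A one-liner, assuming the 60 Hz and 6 kHz figures above:

```python
import math

# Octave distance = log2 of the frequency ratio (each octave is a doubling).
print(round(math.log2(6000 / 60), 2))  # 6.64 octaves between 60 Hz and 6 kHz
```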

Where the problems arise is when you combine parts that predominantly occupy the same ranges. The most common example of this would be the kik and the bass. Those two instruments require some attention in order to derive clarity between them. And if you combine a sub-frequency synth bass with that 808 kik drum? Not only will you get masking, you’ll likely get beating too.

Beating occurs when two notes are ever so slightly out of tune with each other. If you’ve ever tuned a guitar, you’re familiar with the sound. In fact, that’s how you know two strings are in tune with each other: the beating stops. The lower the frequency, the slower and more violent the beating. So, if the 808’s low-end bloom occurs at a similar frequency to the synth bass, you could very well get some obvious beating artifacts. This is a tuning issue. Not an EQ issue.
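
Here’s a quick sketch of the phenomenon, assuming Python with NumPy. The 55 Hz and 56.5 Hz figures are my own illustration, not numbers from the book; the point is that the pulsing you hear happens at the difference between the two frequencies.

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 2.0, 1 / sample_rate)  # two seconds of time

f1, f2 = 55.0, 56.5  # two low notes, slightly out of tune with each other
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trig identity: sin(a) + sin(b) = 2 * sin((a + b) / 2) * cos((a - b) / 2),
# so the combined signal swells and dips at the difference frequency.
beat_hz = abs(f2 - f1)
print(f"beat rate: {beat_hz} Hz (one pulse every {1 / beat_hz:.2f} seconds)")
```

Note that the same amount of musical detuning produces a wider gap in Hz, and therefore faster beating, at higher pitches, which is why low-frequency beating is so slow.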

As much as it’s good to avoid too much masking, you can’t avoid it entirely, nor would you want to. Too much clarity can actually make us feel uncomfortable. Parts will generally cross frequencies in all but the simplest of arrangements. But the more information you cram into the same frequency range, the more difficulty you’ll have with clarity, and the more EQ you’ll require to aggressively carve out and shape the parts such that they can all be heard.

Just to be clear, I have no issue with aggressive use of EQ. But if you consider frequency as you arrange your record, you’ll have fewer instances in which you need aggressive EQ. Which means there will be fewer times when you’re dealing with sound rather than the music.

Note duration makes a big difference where masking is concerned. The kik drum and a bass cross frequencies, but the kik has a short duration, which provides us the space that we need to derive some clarity between the two parts. This is true even if the bass is playing whole notes. But if that kik sustained for a few seconds like a long 808 kik, combining that with whole notes from the bass could be considerably more challenging.

A great arranger is judicious in her instrumentation, as well as her chord voicings, and will consider frequency as readily as she will the rhythmic, melodic, or harmonic functions. Of course, we don’t seek to fill the full frequency range at all times. There would be no contrast if we did.

I mean, if you want a heavy feeling of foreboding, one way to do it is to drop your arrangement down to just your low-end instruments. If you want a lighter feeling, dump the low-end instruments and add some lilting flutes in the upper midrange. We can directly affect how the listener feels by how we employ frequency in our production.

We choose our instrumentation for many reasons, including availability and musical function, but it’s critical to also take frequency into account. A tambourine doesn’t seem the best candidate for a hard rock track with a wash of aggressive cymbals and shredding guitars. Why would you need more high-end information there? It would make far more sense to place your rhythmic overdub in a frequency range that has some space available.

Instruments like the B3 organ can take up an enormous amount of space, or just a little. You don’t have to play an organ with all stops at full, nor do you have to use both hands. And if you have a glut in the lower midrange, you would do well to voice your chords accordingly. This is why it’s so important to view frequency as space. Because it is.

You can’t get nearly as much low end out of a dense track as you can out of a sparse one. In fact, if you put too much low end on a dense track, it will sound like pure mud. Conversely, it’s far more difficult to overshoot the low end on a sparse track. Frankly, you’d be downright foolish not to exploit that.

Enjoy, #mixerman

Get your copy of Musician’s Survival Guide to a Killer Record

Order the paperback at Amazon US

Order the eBook at the Kindle store for just $9.99 (Read from any device or computer)

Order the paperback at Indiegogo (Only $25 for all International buyers)

