Planes of Space in Mixing – Part 1

The following excerpt is pulled unabridged from Zen and the Art of Mixing:
© All Rights Reserved

Planes of Space (Part 1 of 2)

In a stereo image there are planes of space that we use to create a four-dimensional image (the fourth dimension being time). All in all, I’ve determined that there are five basic planes of space replicated by two speakers: panning—left to right; frequency—up to down; balance—front to back; reflectivity—far to near; and contrast (dynamics)—sparse to dense.

Panning—Left to Right

It never ceases to amaze me how many people get wigged out by wide mixes. This is especially so in relation to their own music and despite an overabundant precedent to the contrary. There’s only so much space available to us in our mixing palette. Using the full spectrum of left and right is just as important as using the full spectrum of the frequency range. As far as I’m concerned, anyone who would consider using less than the full width of panning available to them might as well also consider using filters to cut off the very top and bottom frequencies of the mix.

Equally befuddling to me are those who believe that we should all be mixing in surround sound. The large preponderance of music is listened to in stereo, and until that changes dramatically I’m not mixing for four speakers (okay, five and a subwoofer) any time soon. For starters, I can’t tell you how often I walk into a store and end up switching wires on one of the speakers in order to put them in phase. Most people can’t get two speakers wired in phase. What then is the likelihood of your average punter successfully wiring four speakers in phase? At least with stereo they have a 50-50 shot at getting their speakers wired properly. With four speakers, they have to win that 50-50 shot twice, which leaves only a one-in-four chance of getting everything right; left to chance, there’s a high likelihood that at least one of the four speakers will be wired out of phase. Besides, it’s difficult enough to find a household with an actual established center position in a proper stereo field—we’re going to erect four speakers in perfect symmetry how?

Let’s face it: Most people don’t sit down in a listening room to play their favorite music. They do, however, given the preponderance of boom boxes, computer speakers, ear buds, and cars, often find themselves in some reasonable vicinity of the stereo field. Therefore, there’s not much argument for using anything less than the entire field.

Panning is the most underutilized plane of space in a mix, particularly by neophyte mixers. It’s a mistake to not use the entire width of the stereo field, because you’re abandoning valuable space. For many professional mixers, myself included, the pan knobs are rarely anywhere but left, center or right, and there’s even a name for mixing like this—it’s called LCR mixing (which stands for left/center/right—go figure). This isn’t to say that there aren’t times to soft-pan parts, but in my experience, this should be a rare occurrence in most modern music mixing. Wide mixes are considerably more exciting since you’re using the entire width of your palette.

My recommendation is this: When in doubt, pan hard or don’t pan at all. Over time, you’ll figure out when to use internal panning positions. Background vocals are often a good candidate for this. Soft-panning a mono Rhodes in a predominantly guitar-driven mix can be a reasonable solution, particularly if the guitars are panned hard (as they should be). A single acoustic guitar is certainly best soft-panned in a guitar/vocal production. So there are certainly times to soft pan, it’s just that overall, hard panning is usually the better option.

Now, you might be thinking of several modern rock tracks that clearly don’t make use of the entire stereo field. This kind of mixing was in vogue for a period of time, but there’s really only one reason not to use the full stereo field, and that’s loudness.

Basically, loudness has to do with the amount of dynamic range that’s being used in a mix. The less dynamic range, the more the audio information is pressed up against digital zero, and the louder the mix will sound compared with other mixes at an identical monitoring volume. Anyone with a diverse collection of music in their iTunes library or CD tray has noticed that some productions are inherently louder than others. When you find a mix that isn’t using the stereo field to the fullest, odds are that this was done purely for the purposes of loudness. The less width the mixer uses, the louder the mastering engineer can make the track.
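
To make “pressed up against digital zero” concrete: at matched peak levels, a heavily limited mix has a higher average (RMS) level than a dynamic one, so it sounds louder. Here’s a minimal Python sketch of that relationship (my illustration, not from the book; the sine wave and the tanh “limiter” are crude stand-ins for real program material and real processing):

```python
import numpy as np

def crest_factor_db(signal: np.ndarray) -> float:
    """Peak-to-RMS ratio in dB; a smaller value means less dynamic range."""
    peak = np.max(np.abs(signal))
    rms = np.sqrt(np.mean(signal ** 2))
    return 20 * np.log10(peak / rms)

sr = 48_000
t = np.linspace(0, 1, sr, endpoint=False)
mix = 0.5 * np.sin(2 * np.pi * 220 * t)   # stand-in for a dynamic mix
squashed = np.tanh(4 * mix)               # crude stand-in for heavy limiting
squashed *= np.max(np.abs(mix)) / np.max(np.abs(squashed))  # match peak levels

# Same peak level, but the squashed version has a higher RMS (a lower
# crest factor), so it sounds louder at an identical monitoring volume.
print(f"dynamic mix crest factor:  {crest_factor_db(mix):.1f} dB")
print(f"squashed mix crest factor: {crest_factor_db(squashed):.1f} dB")
```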

So let’s think about this for a second. In the case of the ultra-loud modern rock track, not only has the mixer managed to avoid using the entire stereo field, but he’s also reduced his dynamic range down to nearly nothing. We only have so many tools for making an exciting mix. In this case, two of them have been removed right from the get-go. That better be one hell of a vocal and song.

Frequency—Up and Down

Just like we have width in a mix, we also have height, and this is an illusion that’s created by frequency.

Frequency is somewhat technical in nature, given that it’s basically explained with physics. I don’t really want to get bogged down too heavily in the science, but we need to touch on some basic information, particularly since EQ is the most used and abused tool of a mixer. Besides, if you’re going to think in terms of frequency, you need to have a basic understanding of it.

The human range of hearing, and I’m being exceptionally charitable here, extends from 20 Hz to 20 kHz. In my experience, most people can’t hear above 18 kHz, and even if they can, that ability will drop considerably over time. Anything below 20 Hz is nothing more than rumble and anything above 20 kHz tends to be nothing more than noise (hiss or spitting, take your pick). That doesn’t mean there isn’t useful information extending well above 20 kHz (many of us believe there is); we just can’t actually hear it directly, and so we can’t properly manipulate frequencies this high. The rumble caused by 20 Hz only serves to shake our woofers violently, which I can assure you isn’t great for their overall longevity.

High-frequency sound waves require less space to fully develop than long, drawn-out low-frequency sound waves. Consequently, higher frequencies are far more directional in nature. Consider a garden hose for a moment. When set to “jet,” the spray from the hose is focused and highly directional in nature. Very little spray goes to the left or the right. When the jet stream hits a hard object, it immediately reflects. When it hits a soft object (like a towel), it’s absorbed. This is precisely how high-frequency waves react.

When we adjust our hose to a gentler, wider setting, the spray goes everywhere. It’s not directional in nature, and it doesn’t reflect much, as it generally gets everything in the area wet. This is precisely how low-frequency sound waves react. This is why you can put a subwoofer just about anywhere in the room. The low end goes everywhere, traveling easily along (and through) walls and floors. High-frequency waves are directional in nature, and you have to actually be in the line of fire of the tweeters or horns to get their full brilliance.

Frequency also relates directly to music. All musical notes correspond to a fundamental frequency. For instance, the open E string on a bass has a fundamental frequency of 41 Hz. This is the loudest frequency that can be heard from a plucked E string. The tones we deal with from musical instruments are caused by vibration, which produces both a fundamental tone and overtones (or harmonics). The first overtone (which is the second harmonic) is always an octave higher than the fundamental. If the fundamental is 41 Hz, then the octave above it is double that, in this case 82 Hz. All harmonic octaves extending up are doubled in frequency. The next harmonic is a fifth higher than the first octave, and the next a fourth higher than that (which together equals an octave, and thus the third overtone on that low E is 164 Hz). The harmonics continue infinitely, and they can be mapped out in what we call the Harmonic Series. Since the fundamental is the loudest frequency on any given note, our brain perceives this as the actual note, but the harmonics are what provide us with the timbre of an instrument. Without the interaction of harmonics in sound, a piano would sound just like a guitar, which would sound just like a sine wave.
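
To make the arithmetic concrete, here’s a quick Python sketch (my illustration, not from the book) that walks up the harmonic series of that low E:

```python
# Harmonics are whole-number multiples of the fundamental, here the open
# E on a bass at roughly 41 Hz (the figure used above).
fundamental = 41.0  # Hz

for n in range(1, 9):
    label = "fundamental" if n == 1 else f"overtone {n - 1}"
    print(f"harmonic {n} ({label}): {n * fundamental:.0f} Hz")

# harmonic 2 = 82 Hz (an octave up), harmonic 3 = 123 Hz (a fifth above
# that), harmonic 4 = 164 Hz (a fourth above that, and two octaves above
# the fundamental) -- matching the 41/82/164 Hz figures in the text.
```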

Great skill is required for a musician to properly make an instrument sing. Everything from how a string is plucked or struck to how a horn is blown will affect amplitude (loudness), which in turn affects timbre (tone), which affects how we hear the harmonics, which affects how we perceive the note. Further complicating matters are the acoustics of the room in which the player is performing and how we perceive the reflections of sound from any given location.

Then of course, there’s recording technique, although if the player is generating beautiful sound in a great acoustic space, recording technique is greatly reduced as a factor. Put simply, the better the player’s technique, the less important the recordist’s technique needs to be. A great player performing in a great-sounding room will be recorded easily and with little need for processing (barring some desired effect). A lousy player will tend to lack evenness in tone and balance, and will therefore require more processing in both the recording and the mix.

Equalizers (a.k.a. EQs)

As mixers and recordists we use EQ as our main tool for manipulating frequency; that is, as long as the frequency we’re seeking to manipulate actually exists. For instance, barring some proximity effect, there is little to no low-end information on a violin played in its uppermost register. Conversely, there is minimal high-frequency information on the low B from a five-string bass. Boosting 40 Hz on that ultra-high violin is going to give you nothing more than unwanted low-end ambience from the room. Boosting 16 kHz on the bass guitar is generally going to bring up unwanted line and string noise.

Much like a musical note, parametric EQs (digital or analog) deal in a fundamental frequency. There are two types of EQ adjustments on a parametric EQ—bell curve and shelf. On a bell curve EQ, the selected frequency is the center frequency within a specified range. The width of this bell curve is called the Q, and the wider the bell (which, counterintuitively, corresponds to a lower Q value), the more frequencies will be affected by a cut or boost.

Shelving EQ, of which there are only two—high shelf and low shelf—also affects a range of frequencies, but the selected frequency is the starting point, not the middle point. For example, a high-frequency shelf set to 10 kHz will affect all frequencies from 10 kHz up, regardless of whether you’re applying a cut or boost. Conversely, a low-frequency shelf set to 100 Hz will affect all frequencies from 100 Hz down. Most analog EQs will have a high and low shelf (usually also selectable as a bell) with one or two midrange bell curves. Many plug-ins, aside from those that supposedly emulate an analog piece of gear, will include a low and high shelf with four or five bell curve EQs.
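
For the curious, here’s one common way a digital bell curve is computed under the hood: the widely used RBJ “Audio EQ Cookbook” peaking biquad, sketched in Python. This is a general illustration (my addition), not the internals of any particular console or plug-in:

```python
import math

def peaking_biquad(fs: float, f0: float, gain_db: float, q: float):
    """Biquad coefficients for a bell curve centered at f0 (RBJ cookbook).
    A wider bell corresponds to a lower Q value, affecting more frequencies."""
    a = 10 ** (gain_db / 40)          # amplitude from the dB gain setting
    w0 = 2 * math.pi * f0 / fs        # center frequency in radians/sample
    alpha = math.sin(w0) / (2 * q)    # bandwidth term

    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    a_coef = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    # Normalize so the first feedback coefficient is 1
    return [x / a_coef[0] for x in b], [x / a_coef[0] for x in a_coef]

# e.g., a 3 dB bell boost centered at 1 kHz, fairly wide Q, 48 kHz sample rate
b, a = peaking_biquad(fs=48_000, f0=1_000, gain_db=3.0, q=0.7)
print("feedforward:", b)
print("feedback:   ", a)
```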

Along with shelving EQ, we have filters at our disposal. There are two kinds of EQ filters: the high-pass filter (HPF), which allows the high frequencies to pass unabated, and the low-pass filter (LPF), which allows the low frequencies to pass unabated. An HPF set to 100 Hz will filter out all frequencies from 100 Hz down. An LPF set to 10 kHz will filter out all frequencies from 10 kHz up. I’ve seen some engineers, particularly outside of the United States, put HPFs on all channels as a matter of course. I would recommend against this unless there’s some low-end artifact from the room that you need to remove. Until you have the entire arrangement at your disposal, you generally shouldn’t be making a decision as to how far you want the bottom end to extend unless you’re absolutely sure you’re filtering out totally unwanted information.
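
And here’s what putting an HPF on a channel might look like in code, assuming SciPy is available; the 100 Hz cutoff and second-order Butterworth response are illustrative choices, not a recommendation:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000  # sample rate in Hz
# High-pass filter: everything below ~100 Hz is progressively attenuated,
# everything above passes essentially unabated.
sos = butter(2, 100, btype="highpass", fs=fs, output="sos")

t = np.linspace(0, 1, fs, endpoint=False)
rumble = 0.3 * np.sin(2 * np.pi * 30 * t)   # unwanted low-end artifact
note = 0.5 * np.sin(2 * np.pi * 440 * t)    # musical content to keep

filtered = sosfilt(sos, rumble + note)      # the 30 Hz rumble drops away
```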

Since any given note includes the harmonics above the fundamental, and since any given instrument will sound within a range of notes determined by the physical characteristics of the instrument—and the physical abilities of the player—an EQ will affect far more than a given note or even a range of notes. You can actually locate and boost particular harmonics on any given part.

DAW EQs usually provide the mixer with a visual representation of the frequency manipulation. This visual representation also represents how wide your Q is, and you can actually see the range of frequencies you’re affecting. While the visual modeling is useless for making your EQ decisions (those are made purely by ear), it’s quite useful in accelerating your understanding of EQ in general.

While every instrument has a range, and while a good arranger will select instruments that fit in certain ranges to achieve a certain frequency balance, it’s not so important for you as the mixer to have these ranges memorized. You don’t actually have to know what the top note of a cello is, as you can only deal with what’s provided to you. It’s pretty easy to tell that a cello fundamentally occupies low-end space while simultaneously offering the high-frequency grind of the bow against the string. Your EQ decisions will be made based on how a part works within the track, particularly where frequency is concerned.

For our purposes in mixing, we can break down frequency into four basic ranges: low end, lower midrange, upper midrange, and high end. I’ve compiled a list of some basic frequencies, how we hear them, and how they might affect your mix.

Low End

20–30 Hz: This is mostly rumble and/or subs. These are not frequencies you want to be actively adjusting unless there’s a problem that you wish to actually filter out, like air conditioner rumble. In general this is not a good range for applying an EQ boost.

30–60 Hz: These frequencies are quite low and “boomy” in nature, and will not replicate in most small speakers. This range should be considered when compromising between types of playback systems (i.e., a boom box or full-blown stereo system with subs). While this frequency range is particularly useful in making a mix sound big, it can easily overpower the mix in general if too abundant.

100 Hz: This frequency is low but punchy and is easily replicated in a six-inch speaker. It’s a far more focused low frequency than the subs, although it’s still not high enough to be considered directional in nature.

Lower Midrange

250 Hz: This is the start of the lower midrange. It can be described as “woofy,” as it’s not a very clean low end. Too much 250 Hz can cause a mix to sound thick, dark, and muddy. Too little of this frequency and your mix will sound scooped out and lacking in power. This frequency replicates well in a two- to four-inch speaker, and can be quite useful for making the bass audible in boom boxes.

500 Hz: This frequency is the middle of the lower midrange. It is often accused of sounding “boxy,” and for good reason—it sounds boxy!

750 Hz: This is getting toward the upper end of the lower midrange. This frequency is also boxy in tone and tends to reduce clarity in a mix; however, it can also add presence to a part in the right situation. If you find yourself cutting or boosting this frequency often, you either have a monitoring problem or your console has a natural buildup in this range.

Upper Midrange

1 kHz: This is the beginning of the upper midrange. It’s an exceptionally “present” frequency as it’s getting close to the peak of our hearing. This can be a very handy frequency for bringing out presence, but can also sound boxy if used too liberally.

2 kHz: This happens to be the basic frequency of a crying baby, which might explain why it’s our most easily heard frequency as humans. Too much of this frequency and “harsh” will be an adjective you’ll often hear when someone describes your mix. This frequency is at the upper end of the presence frequencies.

3–4 kHz: This frequency range, much like 2 kHz, is helpful in adding or removing bite from a recording.

6–9 kHz: This is the tail end of the upper midrange. This is where we exit the “bite” range and enter into dentist drill territory. This range of frequencies can give you a nasty headache, and quick.

High End

10–12 kHz: This is the lower end of the high-end frequencies (say that three times fast). The addition of this frequency range can be helpful in opening up a sound and/or offsetting the coloration of a microphone or of processing.

16 kHz: This is extreme high end. It will often add artifacts as quickly as it will open up a sound, but it can still be quite useful, depending on the overall quality of the EQ.

18–20 kHz: This range is beyond most of our actual hearing, and while there is definitely information that extends far beyond our hearing, bringing this range up with EQ only adds audible spitting and noise. Even if you can hear frequencies this high, you don’t want to aggressively boost this information.

Since different instruments occupy particular ranges of notes, by default, they also occupy a particular range of frequencies. For instance, a bass covers the low to lower midrange. Yes, we can boost 5 kHz on a bass and bring out some upper harmonics and string noise, allowing the note to cut more, but when playing in its normal register, the bass doesn’t fundamentally occupy the upper midrange space. Kik drums also cover the low to lower midrange, and although we’ve all heard plenty of clicky kik drums, those particular upper frequencies are transitory in nature.

Note duration has much to do with how much frequency space an instrument occupies. Since attacks on the bass and kik have a short duration, that momentary burst of high-frequency information occupies very little space in your mix. Of course, if you have a hundred double-kik hits per minute, like on some death metal tracks, then the aggregate of all that attack will take up quite a bit of space in your mix.

We are really dealing with only about eight octaves in music and mixing. Some instruments, like violins, have a limited range within those eight octaves. Other instruments, like piano, guitar, or Hammond B3 organ, work within that full range. As a result, pianos and organs can take up an enormous amount of frequency space within a mix. While full-spectrum instruments like keyboards tend to excel at filling in a mix, they also tend to eat up space. Some rock mixers actually choose to mute keyboard parts on a guitar-driven track as a matter of course. As I’ve said before, and will say again, I don’t recommend doing anything as a matter of course.

I always find it handy when explaining frequency and how it relates to a mix to discuss Beethoven. Clearly, Beethoven didn’t have a console. If he wanted more low end in his mix, he had to write more low-end parts and/or direct the instrumentalists within that range to play louder. Beethoven used instrumentation and direction to give himself more bass. We also have instruments that fit within certain octave ranges at our disposal; we just control their balances electronically.

As important as it is to think along frequency lines in a mix, that’s only one consideration. If you feel your mix needs more high-frequency information, but the rhythm of the tambourine is clashing with that of the picking guitar, one of them has to go. Believe me, you often have to make choices like this in a mix. When you come across a part that’s causing you problems from both a frequency and performance perspective, the mute button just might be your best option.

The more buildup there is in a certain frequency range, the more difficult your job is as a mixer. For instance, if a song has a Farfisa organ, multiple guitars, and piano all playing in the same narrow middle range, you’re going to have a considerably harder time fitting them all into that space than if they’re spread out across the full spectrum. If there are two basses, a djun-djun, a low tom beat, and a cello, you’re going to have a hell of a time carving out enough space in the low end to avoid a muddy, undefined mess in the bottom of your mix. And if there’s too much high-frequency information in the mix, the mixer and subsequently the listener are going to find themselves exhausted in short order.

The power that frequency has over the listener should not be underestimated. How you use frequency information in a mix can have a direct bearing on how that mix makes the listener feel. How your parts fill the frequency spectrum in an arrangement can be just as important as how they fill their role musically. Keep this in mind as you mix.

Take me to Planes of Space in Mixing Part 2!

Zen and the Art of Mixing is the first book in my ongoing Zen and the Art of series. The digital versions (Kindle, eBook) contain loads of supplemental videos. You can purchase any and all of my books HERE.

If you’d like to discuss these concepts further, join me and my knowledgeable friends at Mixermania and The Womb Forums.

Be sure to read my newest book! #Mixerman and the Billionheir Apparent – a satire of the Modern Music Business through the prism of US Politics and vice versa.

Comments

  • Peter Longfield

    Dear E.
    I am in the middle of reading “Zen…Mixing” and enjoying it very much. I really appreciate how your advice is delivered through storytelling and philosophy rather than through numbered sections and subsections like in many ‘competing’ books.
    I am struggling, though, with one piece of advice. In several instances about panning and stereo field placement, you advise going either center or fully to one side (LCR), and generally avoiding the “soft pan.” You mention acoustic guitar as an example in several of these iterations.
    I’m listening as I write to “Fight for Your Mind,” which I have been a fan of since its release :-) . In “By My Side,” for instance, which you discuss particularly in regards to the B3 balance, the B3 is on the left and the guitar on the right, but neither of them is completely left or right – to my ear they are both soft-panned. Similarly on “Burn One Down,” this time with a very dense djembe on the left and guitar on the right, again neither is panned completely to one side. And so on for all the other tracks; I can’t really find any part that’s hard-panned.

    Could you please clarify further?
    Thx in advance.

    • Hey Peter,

      Yes, there are times that one might decide to use internal pan positions, and I even say so in the following passage, quoted from the article above:

      “My recommendation is this: When in doubt, pan hard or don’t pan at all. Over time, you’ll figure out when to use internal panning positions. Background vocals are often a good candidate for this. Soft-panning a mono Rhodes in a predominantly guitar-driven mix can be a reasonable solution, particularly if the guitars are panned hard (as they should be). A single acoustic guitar is certainly best soft-panned in a guitar/vocal production. So there are certainly times to soft pan, it’s just that overall, hard panning is usually the better option.”

      The reason that I recommend LCR mixing is not because I think that it should be treated as some sort of religion in which there are no exceptions. I recommend it because it gets one over the fear of hard panning. Once you’re over that fear, once you have no issues with putting anything anywhere in the stereo field, then your decisions as to using internal pan positions are based on reasons, not fear.

      In the case of the acoustic guitars on Fight For Your Mind, the vocal was performed at the same time. There was one mic placed by Ben’s mouth, another mic placed on the guitar, and there was also a Sun pickup. (I didn’t record it, but I do know how it was recorded.) So, as the mixer, I had three signals. The vocal mic and the acoustic mic interact. In the case of Ben, who is an exceptionally animated performer (even while seated), the interaction is more prominent than usual.

      Of course, the Sun pickup doesn’t interact with the vocal mic at all, and so I did mix some of that in so as to reduce the amount of interaction between the mics, but despite being one of the better pickups on the market at the time, it still had the usual plucky distortion properties of an acoustic direct signal, and wasn’t ideal.

      Given these realities, when I panned the guitar hard, the interaction caused the vocal to pull a bit too drastically toward the guitar side (the left, in this case), and the comb filtering and tonal shifts that were occurring made soft panning the acoustic a better decision. I even tried it mono, but decided that it seemed more natural panned slightly.

      There are also times when I receive a guitar/vocal production in which the vocal was performed as an overdub, and I’ll still pan the guitar soft, not out of fear, but so as to keep the acoustic instrument attached to the singer.

      So there are all sorts of times and reasons to use internal pan positions, and as you progress through your career, you may choose to use internal pan positions more often than I might. So long as you’re not operating out of fear of the hard pan, there’s nothing wrong with that. Those are your artistic (and sometimes technical) decisions to be made.

      I hope that helps,

      Enjoy, #mixerman