Mixing with Avid Pro Tools is just like mixing in any other medium… except when it’s not. This is to say that although the process is in most ways akin to how you’d work with any other storage and signal processing technologies, certain distinctive characteristics of Pro Tools lend themselves to somewhat different approaches than you may have previously encountered—and a few aspects of digital mixing may actually have no precedent in the traditional lexicon of production techniques.
This may sound perfectly reasonable, but entering the world of Pro Tools mixing can quickly become an unexpected mélange of contradiction, confusion, and outright frustration. Ask 10 engineers how best to go about mixing in the box, and you may get as many differing responses. Worse, it’s likely that much of what you’ll be told will neither be backed up with clear explanations nor hold up to thorough and thoughtful analysis. What’s a budding Pro Tools mixer to do?
The Dark Side of Music Production
The audio industry has always had a murky side to it with regards to matters like sonic quality, preferences about gear, and accepted practices for recording and mixing. Mythology has reigned supreme, and many “truths” are held as gospel for reasons that are largely unclear. An SM-57 is the best snare mic, period. Mixes come out better when monitoring with NS-10s. Analog sounds more natural than digital. Tube mics are superior to all others. Oh, and here’s another for you—the Pro Tools mix bus is inferior to an analog mix bus.
Where do these stories come from? There are no doubt myriad sources: the many real, legitimate experiences of intelligent and capable professionals; hasty conclusions based on partial or flawed observations by wide-eyed neophytes hoping to break into the business; a fair amount of marketing hype from audio equipment manufacturers; technical commentary made by individuals with no background to support such statements; years of an industry mindset that valued secrecy over sharing for fear of giving away personal tricks and techniques… the list goes on and on. What’s clear from observing these forces at work, and the resulting music industry zeitgeist, is that there’s good information out there alongside a large number of shady beliefs. For the uninitiated, it’s hard to know what to think.
The whole thing is quite a slippery slope, because the final arbiter is hearing, and there’s no way to measure or compare what different people hear. Furthermore, numerous related factors—often unknown to the listener—might support a different conclusion about the basis of some phenomenon that otherwise seems to have a simple explanation. What does it mean if a golden-eared engineer claims to hear a subtle artifact that you do not, and offers an accompanying explanation? It could certainly be that he/she truly has exceptional ears that are “better” (or more finely tuned) than yours, and has built a reasonable analysis from that observation. But it could also mean that he/she merely thinks there’s something there, or wants to hear it. It could also be that though there’s something going on, the explanation itself is off base. It’s very easy to fool your ears, and just as easy to jump to shaky conclusions even with the best intent.
It’s tempting to offer up the seemingly sage advice to just trust what you hear rather than blindly accept what you’re told. Sounds reasonable, right? But wait—this is exactly the sort of approach that’s caused such rampant confusion in the first place! When it comes to evaluating audio quality and understanding psychoacoustic phenomena, there’s only one way to develop meaningful conclusions—conduct double-blind tests in neutral, controlled environments, such that neither the listener nor the tester knows which options are being heard at any time. Only under these circumstances can you honestly and legitimately reach conclusions about subtle sonic issues. Otherwise, you’ll unfortunately have to be skeptical about both what you hear as well as what you’re told…
Analog Vs. Digital
One of the hot-button issues discussed by people who like to talk about this sort of stuff is the difference between using the “mix bus” in a DAW versus using the summing network in an analog mixer (or, more recently, an external summing device designed strictly for this purpose). Strictly speaking, there’s no such thing as a bus in the world of digital audio; a true bus is a hardware path with a common conductor fed by all channels, as in an analog console. Pro Tools and other DAWs instead use a mathematical algorithm to accumulate the values of the multiple signals feeding a mix. But the algorithm does the same thing, so we’ll succumb to peer pressure and call it a mix bus.
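To make the idea concrete, here’s a minimal sketch of what “summing” means in the digital domain: per-sample addition of every channel’s values, typically in a wide accumulator. The function name and sample values are hypothetical illustrations, not Pro Tools’ actual implementation.

```python
import numpy as np

def mix_bus(channels: list[np.ndarray], gains: list[float]) -> np.ndarray:
    """Sum equal-length channels into one bus, applying a per-channel gain."""
    bus = np.zeros_like(channels[0], dtype=np.float64)  # wide accumulator
    for signal, gain in zip(channels, gains):
        bus += gain * signal.astype(np.float64)
    return bus

guitar = np.array([0.5, -0.25, 0.1])
vocal  = np.array([0.2,  0.05, -0.3])
mix = mix_bus([guitar, vocal], [1.0, 1.0])  # element-wise sum of the two channels
```

Addition like this is exact up to floating-point precision, which is part of why claims that digital summing audibly “degrades” a mix are hard to support.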
But does a digital mix bus behave the same way as an analog version? Some users are convinced that otherwise identical mixes sound different—and better—when routed through the individual channels of a console or dedicated summing network. Many of these folks blame the DAWs’ mix busses, claiming some sort of inadequacy in the algorithm’s ability to accurately sum audio signals.
As the debate rages on, it can be very difficult to separate fact from fiction, and truth from myth. Since it’s a logistical challenge to create an accurate, unbiased test that compares mixes differing only in their summing methods, you’ll have a hard time researching this issue yourself. Fortunately, it’s possible to distill some simple conclusions amidst all of the chatter:
- There is no evidence that the summing mechanism in Pro Tools—or any other current professional DAW—degrades or otherwise modifies the quality and character of mixes.
- There can be audible differences between the sound of a mix created via analog versus digital summing. Depending upon the circumstances and the listener, these differences might be characterized as anything from negligible to significant. Typically, the difference tends towards subtle. In some—but not all—cases, producers and engineers prefer the results derived via analog methods.
If the digital audio mix bus is not responsible, what is? This is not understood definitively, but the explanation may be similar to why many other aspects of analog audio technology have a distinctive sound—the artifacts of analog audio that are inevitable byproducts of storage, transmission, and signal processing often act like sonic enhancers, injecting mixes with subtle flavors that sound good to many listeners. These manifestations of analog audio essentially reveal fundamental shortcomings of the technology, so it’s ironic that the effects can be pleasing. This is most certainly the primary justification for hanging on to more or less antiquated technologies such as analog tape machines, which by now are a complete hassle to maintain and operate but still yield desirable results under the right circumstances.
More specifically, how do listeners describe the differences between digital and analog summing? Some have commented on sonic characteristics involving tone, warmth, and detail. However, these are more likely based on related phenomena such as distortion caused by overdriving analog components. Others have noted differences in the width and depth of the soundstage. The actual foundation of such a distinction is unclear.
What’s the bottom line on how all of this affects you? Honestly, I wouldn’t give any of it a second thought. The fact is, until you can master the many other challenges of production—putting together great songs and arrangements, working with amazing musicians playing beautiful instruments, doing all of this in superior sounding recording environments using quality microphones and preamps, and building mixes with inspired balance, tone and depth—worrying about the nuances of digital vs. analog summing will probably distract you from far more important issues…
Latency Issues When Mixing
In general, latency in digital audio is a time delay observed for data at different locations of a system. Unlike traditional analog setups, in which everything occurs more or less instantaneously, various processes in a digital audio signal chain require small but detectable amounts of time to complete. The primary culprits are tasks such as conversion, disk access, and signal processing. Since each of these can itself contribute a measurable delay in handling audio, latency is cumulative when a signal passes through multiple processes in series.
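The cumulative nature of latency is simple arithmetic: delays in a serial chain add. The sketch below uses purely illustrative stage values, not measured Pro Tools figures.

```python
# Illustrative latency budget for a signal passing through several serial
# stages. Every value here is a made-up example, not a measured figure.
stage_latency_ms = {
    "A/D conversion": 0.7,
    "hardware buffer": 2.9,      # e.g. roughly a 128-sample buffer at 44.1 kHz
    "plug-in processing": 1.5,
    "D/A conversion": 0.7,
}

# Serial stages simply add: total latency is the sum of the parts.
total_ms = sum(stage_latency_ms.values())
print(f"Total round-trip latency: {total_ms:.1f} ms")
```

The point of the sketch is only the summation: remove or shrink any stage and the total drops by exactly that stage’s contribution.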
If you have ever recorded through a DAW, you have likely dealt with latency in the recording process. Delays can wreak havoc on a musician monitoring a live performance. We’ve seen that reducing the size of the hardware buffer and forgoing signal processing on live signals can improve monitoring latency to an acceptable level. Though converter latency cannot be eliminated, using higher sample rates does help.
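Why do smaller buffers and higher sample rates help? A buffer’s contribution to latency is its length in samples divided by the sample rate, as this small sketch shows (the function name is my own, and the buffer sizes are illustrative):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Delay introduced by a buffer: its length divided by the sample rate."""
    return 1000.0 * buffer_samples / sample_rate_hz

# Same buffer size, but the delay shrinks as the rate rises;
# same rate, but the delay shrinks as the buffer shrinks.
print(buffer_latency_ms(256, 44_100))  # roughly 5.8 ms
print(buffer_latency_ms(64, 44_100))   # smaller buffer, roughly 1.5 ms
print(buffer_latency_ms(256, 96_000))  # higher rate, roughly 2.7 ms
```

This is also why higher sample rates reduce (but never eliminate) converter latency: the converter’s internal pipeline is a fixed number of samples, and each sample takes less time at a higher rate.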
Latency can also be an issue when mixing. However, in this case the problem isn’t that audio is delayed between disk playback and monitoring. Though this does occur, the only time it could matter is when the amount of latency varies on different channels. As long as latency is the same for all channels, the only repercussion will be a (typically) imperceptible lag when entering playback.
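The reason uniform latency is harmless can be sketched in a few lines: delaying every channel by the same amount and then summing gives exactly the same result as delaying the finished mix, whereas unequal delays misalign the channels and change the mix itself. The channel names and sample values below are hypothetical.

```python
import numpy as np

def delay(signal: np.ndarray, samples: int) -> np.ndarray:
    """Model a fixed latency by prepending `samples` zeros."""
    return np.concatenate([np.zeros(samples), signal])

kick  = np.array([1.0, 0.0, -0.5])
snare = np.array([0.0, 1.0, -0.2])

# Equal latency on all channels: delaying each channel then summing is
# identical to delaying the summed mix, so the mix itself is untouched.
aligned = delay(kick + snare, 2)
assert np.allclose(delay(kick, 2) + delay(snare, 2), aligned)

# Unequal latency (here a one-sample skew on the snare): the channels no
# longer line up, and the summed mix is genuinely different.
misaligned = np.append(delay(kick, 2), 0.0) + delay(snare, 3)
assert not np.allclose(np.append(aligned, 0.0), misaligned)
```

In practice this is why DAWs offer automatic delay compensation: it equalizes latency across channels so only the harmless uniform shift remains.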
Jim Kerr is a tech entrepreneur who operates several businesses including Team Convergence, Assure Flight, Passion Highway and GRIDJunciton. He is an airplane pilot, PADI SCUBA divemaster, music producer and adventure traveler. He and his wife Lisa travel North America in their 2020 Grand Design Momentum 397TH Toy Hauler with their cat Dexter.