Mixing with Avid Pro Tools is just like mixing in any other medium… except when it’s not. That is to say, although the process is in most ways akin to how you’d work with any other storage and signal processing technologies, certain distinctive characteristics of Pro Tools lend themselves to somewhat different approaches than you may have previously encountered—and a few aspects of digital mixing may actually have no precedent in the traditional lexicon of production techniques.

This may sound perfectly reasonable, but entering the world of Pro Tools mixing can quickly turn into an unexpected mélange of contradiction, confusion, and outright frustration. Ask 10 engineers how to best go about mixing in the box, and you may get as many differing responses. Worse, it’s likely that much of what you’ll be told will neither be backed up with clear explanations nor hold up to thorough and thoughtful analysis. What’s a budding Pro Tools mixer to do?

The Dark Side of Music Production

The audio industry has always had a murky side to it with regards to matters like sonic quality, preferences about gear, and accepted practices for recording and mixing. Mythology has reigned supreme, and many “truths” are held as gospel for reasons that are largely unclear. An SM-57 is the best snare mic, period. Mixes come out better when monitoring with NS-10s. Analog sounds more natural than digital. Tube mics are superior to all others. Oh, and here’s another for you—the Pro Tools mix bus is inferior to an analog mix bus.

Where do these stories come from? There are no doubt a myriad of sources: the many real, legitimate experiences of intelligent and capable professionals; hasty conclusions based on partial or flawed observations by wide-eyed neophytes hoping to break into the business; a fair amount of marketing hype from audio equipment manufacturers; technical commentary made by individuals with no background to support such statements; years of an industry mindset among some that valued secrecy over sharing for fear of giving away personal tricks and techniques… the list goes on and on. What’s clear from observing these forces at work, and the resulting music industry zeitgeist, is that there’s both good information out there and a large number of shady beliefs. For the uninitiated, it’s hard to know what to think.

The whole thing is quite a slippery slope, because the final arbiter is hearing, and there’s no way to measure or compare what different people hear. Furthermore, numerous related factors—often unknown to the listener—might support a different conclusion about the basis of a phenomenon that otherwise seems to have a simple explanation. What does it mean if a golden-eared engineer claims to hear a subtle artifact that you do not, and offers an accompanying explanation? It could certainly be that he/she truly has exceptional ears that are “better” (or more finely tuned) than yours, and has built a reasonable analysis from that observation. But it could also mean that he/she merely thinks there’s something there, or wants to hear it. It could also be that though there’s something going on, the explanation itself is off base. It’s very easy to fool your ears, and just as easy to jump to shaky conclusions even with the best intent.

It’s tempting to offer up the seemingly sage advice to just trust what you hear rather than blindly accept what you’re told. Sounds reasonable, right? But wait—this is exactly the sort of approach that’s caused such rampant confusion in the first place! When it comes to evaluating audio quality and understanding psychoacoustic phenomena, there’s only one way to develop meaningful conclusions—conduct double-blind tests in neutral, controlled environments, such that neither the listener nor the tester knows which options are being heard at any time. Only under these circumstances can you honestly and legitimately reach conclusions about subtle sonic issues. Otherwise, you’ll unfortunately have to be skeptical about both what you hear as well as what you’re told…


Analog vs. Digital

One of the hot button issues discussed by people who like to talk about this sort of stuff is the difference between using the “mix bus” in a DAW versus using the summing network in an analog mixer (or, more recently, an external summing device designed strictly for this purpose). Strictly speaking, there’s no such thing as a mix bus in the world of digital audio; in an analog console, a bus is a piece of hardware, a common conductor fed by all of the channels. Pro Tools and other DAWs instead use a mathematical algorithm for accumulating the values of multiple signals feeding a mix. But the algorithm does the same thing, so we’ll succumb to peer pressure and call it a mix bus.
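Conceptually, that algorithm is nothing more exotic than sample-by-sample addition of gain-scaled channels. Here is a minimal Python sketch of the idea; the track contents and gain values are invented for illustration, and a real DAW of course works on streaming buffers and also handles panning, automation, and metering:

```python
import numpy as np

def sum_to_bus(channels, gains):
    """Accumulate several channels into a single 'bus' by sample-wise addition.

    channels: list of equal-length NumPy arrays of floating-point samples
    gains:    one linear fader gain per channel
    """
    bus = np.zeros_like(channels[0], dtype=np.float64)  # double-precision accumulator
    for samples, gain in zip(channels, gains):
        bus += gain * samples        # apply the channel gain, then add into the bus
    return bus

# Hypothetical example: three one-second sine-wave "tracks" at 48 kHz
sr = 48000
t = np.arange(sr) / sr
tracks = [np.sin(2 * np.pi * f * t) for f in (110.0, 220.0, 440.0)]
mix = sum_to_bus(tracks, gains=[0.5, 0.3, 0.2])
print("Mix peak level:", np.max(np.abs(mix)))
```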

But does a digital mix bus behave the same way as an analog version? Some users are convinced that otherwise identical mixes sound different—and better—when routed through the individual channels of a console or dedicated summing network. Many of these folks blame the DAWs’ mix busses, claiming some sort of inadequacy in the algorithm’s ability to accurately sum audio signals.

As the debate rages on, it can be very difficult to separate fact from fiction, and truth from myth. Since it’s a logistical challenge to create an accurate, unbiased test that compares mixes differing only in their summing methods, you’ll have a hard time researching this issue yourself. Fortunately, it’s possible to distill some simple conclusions amidst all of the chatter:

  • There is no evidence that the summing mechanism in Pro Tools—or any other current professional DAW—degrades or otherwise modifies the quality and character of mixes.

  • There can be audible differences between the sound of a mix created via analog versus digital summing. Depending upon the circumstances and the listener, these differences might be characterized as anything from negligible to significant. Typically, the difference tends towards subtle. In some—but not all—cases, producers and engineers prefer the results derived via analog methods.

If the digital audio mix bus is not responsible, what is? This is not understood definitively, but the explanation may be similar to why many other aspects of analog audio technology have a distinctive sound—the artifacts of analog audio that are inevitable byproducts of storage, transmission, and signal processing often act like sonic enhancers, injecting mixes with subtle flavors that, to many ears, sound good. Interestingly, these manifestations of analog audio essentially reveal fundamental shortcomings of the technology, so it’s ironic that the effects can be pleasing. This is most certainly the primary justification for hanging on to more or less antiquated technologies such as analog tape machines, which at this point are a complete hassle to maintain and operate except for the fact that they yield desirable results under the right circumstances.

More specifically, how do listeners describe the differences between digital and analog summing? Some have commented on sonic characteristics involving tone, warmth, and detail. However, these are more likely based on related phenomena such as distortion caused by overdriving analog components. Others have noted differences in the width and depth of the soundstage. The actual foundation of such a distinction is unclear.

What’s the bottom line on how all of this affects you? Honestly, I wouldn’t give any of it a second thought. The fact is, until you can master the many other challenges of production—putting together great songs and arrangements, working with amazing musicians playing beautiful instruments, doing all of this in superior sounding recording environments using quality microphones and preamps, and building mixes with inspired balance, tone and depth—worrying about the nuances of digital vs. analog summing will probably distract you from far more important issues…

Latency Issues When Mixing

In general, latency in digital audio is the time delay a signal accumulates as it passes through different points in a system. Unlike traditional analog setups, in which everything occurs more or less instantaneously, various processes in a digital audio signal chain require small but detectable amounts of time to complete. The primary culprits are tasks such as conversion, disk access, and signal processing. Since each of these can contribute a measurable delay in handling audio, latency is cumulative when a signal is subject to multiple processes in series.

If you have ever recorded through a DAW, you have likely dealt with latency in the recording process. Delays can wreak havoc on a musician monitoring a live performance. We’ve seen that reducing the size of the hardware buffer and forgoing signal processing on live signals can improve monitoring latency to an acceptable level. Though converter latency cannot be eliminated, using higher sample rates does help.
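To put rough numbers on that, here’s a small Python sketch that estimates round-trip monitoring latency from the buffer size and sample rate. The fixed converter-delay figure is a placeholder assumption, not a measurement of any particular interface:

```python
def monitoring_latency_ms(buffer_samples, sample_rate, converter_delay_ms=1.5):
    """Rough round-trip latency: one buffer in, one buffer out, plus converters.

    converter_delay_ms is an assumed placeholder; real converter latency
    varies by interface and (slightly) by sample rate.
    """
    buffer_ms = buffer_samples / sample_rate * 1000.0
    return 2 * buffer_ms + converter_delay_ms

# Smaller buffers and higher sample rates both shrink the buffer-related portion:
for sr in (44100, 96000):
    for buf in (1024, 256, 64):
        print(f"{sr:>5} Hz, {buf:>4}-sample buffer: "
              f"~{monitoring_latency_ms(buf, sr):.1f} ms")
```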

Latency can also be an issue when mixing. However, in this case the problem isn’t that audio is delayed between disk playback and monitoring. Though this does occur, the only time it could matter is when the amount of latency varies on different channels. As long as latency is the same for all channels, the only repercussion will be a (typically) imperceptible lag when entering playback.

Without inserts, there’s nothing in the signal flow of an Audio track that introduces latency.

When latency is the same on multiple tracks, signals arrive at the mix bus simultaneously, and will not interfere with one another.

Having seen that it’s not unusual for audio signal paths to exhibit latency, when is the amount of channel latency different? When mixing, there are two possible scenarios:

  1. Different signal processors on channels. In some cases—though not always—latency will be introduced due to channel inserts:

      • DSP plug-ins always introduce a small (and occasionally not so small) amount of latency.

      • Native plug-ins sometimes introduce latency, if their algorithms utilize look-ahead processing.

      • Analog hardware inserts always introduce latency due to the conversions necessary to route the signal to the external gear and back to Pro Tools.

      • Digital hardware inserts sometimes introduce latency, if the external hardware’s algorithms utilize look-ahead processing.

  2. Different routing used for channels. When sending the output of certain tracks through a bus and subgroup (as we’ll demonstrate next week), those tracks will be slightly delayed.

      In this example, bass signals enter the mix almost a millisecond later than simultaneous snare signals. However, it’s highly unlikely you’ll hear this difference.
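For a sense of scale, a delay in that range amounts to only a small number of samples. The figures below are purely illustrative, assuming a nominal 1 ms of extra routing latency rather than measurements from any specific session:

```python
def delay_in_samples(delay_ms, sample_rate):
    """Convert a delay expressed in milliseconds to the equivalent sample count."""
    return delay_ms / 1000.0 * sample_rate

# An assumed ~1 ms of extra latency on a track routed through a bus and subgroup:
for sr in (44100, 48000, 96000):
    print(f"At {sr} Hz, a 1 ms routing delay is about "
          f"{delay_in_samples(1.0, sr):.0f} samples.")
```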

Here’s the tricky part: the only time this matters is when the relevant tracks contain, in part or in whole, the same material. This would be the case if:

  1. You’re summing together a processed and unprocessed version of the same signal. This can be a nice technique when you want the sound of (typically) extreme processing but also want to maintain some of the original signal characteristic.

  2. You’re processing a track for an instrument that was recorded with multiple microphones. Let’s say you’re compressing a snare mic. Since the snare sound is also picked up by the overheads and other drum mics, the phase relationship between these tracks changes if the compressor exhibits latency.

    Here, snare signals enter the mix later than the drum overheads. Since the overheads also pick up the snare, phase cancellation can occur.

In these scenarios, the combination of a processed track with latency and similar tracks without such delays generally results in some sort of phase cancellation. That’s rarely desirable, but all is not lost. Fortunately, it’s possible to compensate for latency so that you can implement any of the above setups without interference problems.
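To see why even a few dozen samples of offset matter when two tracks share material, here’s a short Python sketch that sums a tone with a delayed copy of itself. The sample rate, delay, and test frequency are arbitrary values chosen so that the delay lands at exactly half a cycle, the worst case:

```python
import numpy as np

sr = 48000                        # sample rate
delay = 32                        # assumed plug-in latency on one track, in samples
freq = sr / (2 * delay)           # frequency whose half-period equals the delay (750 Hz)

t = np.arange(sr) / sr
direct = np.sin(2 * np.pi * freq * t)      # stand-in for snare bleed in the overheads

# The same content on the processed snare channel arrives `delay` samples late:
late = np.concatenate([np.zeros(delay), direct[:-delay]])
summed = direct + late

# Past the first few samples, the two copies sit 180 degrees apart and cancel:
print("Peak of one track:", np.max(np.abs(direct)))
print("Peak of the sum  :", np.max(np.abs(summed[delay:])))
```

At other frequencies the two copies reinforce or partially cancel instead, which is the familiar comb-filter coloration you hear when nearly identical tracks are offset in time.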

If you’d like to explore the issue of latency further, here’s an informative primer on the topic written by Digidesign: Latency and Delay Compensation with Host-Based Pro Tools Systems

Floating- vs. Fixed-Point Mathematics

Another topic of some dispute is the computational approach used in various DAWs to crunch numbers for signal processing. It turns out that software developers might implement different methods, depending upon factors such as hardware support, ease of coding, and portability. Some DAWs utilize floating-point math, in which data is represented in a manner resembling scientific notation. Others do fixed-point math, using a prescribed number of integral and fractional digits. There are those who feel that floating-point math is superior since it is more flexible and can convey a wider range of values than fixed-point computation given the same number of digits. However, the resolution of floating-point representation decreases as the values increase, and the noise floor also varies. Ultimately, proper coding should make these issues insignificant. If you don’t believe me, maybe you’ll listen to Colin McDowell, founder and president of McDSP, one of the leading developers of third-party plug-ins for Pro Tools and other DAWs:
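You can see that trade-off directly. The Python sketch below prints the gap between adjacent representable values in 32-bit float at several signal levels, next to the constant step size of a hypothetical 24-bit fixed-point format scaled to a range of -1.0 to +1.0:

```python
import numpy as np

fixed_step = 2.0 ** -23    # step size of 24-bit fixed point scaled to -1.0..+1.0

for level in (0.001, 0.01, 0.1, 1.0):
    float_step = np.spacing(np.float32(level))   # gap to the next float32 value
    print(f"level {level:>6}: float32 step {float_step:.3e}, "
          f"fixed-point step {fixed_step:.3e}")
```

Near full scale the two formats have comparable resolution; at lower levels the floating-point step size shrinks, which is why its quantization noise floor tracks the signal level rather than staying fixed.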

“If the fixed vs. floating point question is in regards to algorithm quality, the difference should be negligible. The code can be created on either platform to equivalent specifications. As long as the input and output bit depth are equal to (or greater than) the bit depth of the files being processed, and the arithmetic processing is double that bit depth (i.e. double precision), output signal noise level can be made to be lower than the smallest value representable in the output format.”

In the past, Pro Tools HD required TDM hardware whose DSP chips utilized 24-bit inputs and outputs, while RTAS signal processing supported by host-based systems was based on 32-bit floating point arithmetic. Both were capable of producing high-quality results, but some users felt that subtle differences could be identified between the two methods. What could have explained these perceptions? Was this another example of mind over matter, or were there differences in coding methods that could yield audible disparities?

Fortunately, as of Pro Tools 10 the above issue is of academic interest only, since all computation is now performed via 32-bit floating-point math. By design, the DSP chips used in HDX hardware do floating-point arithmetic just like the signal processing in host-based systems, so (essentially) the same algorithms can be used for both native as well as DSP processing. It’s nice to know that any given plug-in should sound the same regardless of the platform, and the use of 32-bit floating-point math also provides a mammoth amount of headroom that makes it virtually impossible for users to overload computation engines.
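As a rough illustration of that headroom, the sketch below sums a hundred full-scale tracks on a 32-bit float “bus” and then pulls the level back down afterward; nothing is clipped along the way. The track count and test signal are arbitrary choices for the example:

```python
import numpy as np

num_tracks = 100
t = np.arange(48000) / 48000.0
track = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)   # a full-scale test tone

# Sum 100 full-scale tracks: the bus peak sails roughly 40 dB past 0 dBFS...
bus = np.zeros_like(track)
for _ in range(num_tracks):
    bus += track
print("Bus peak before the master fader:", float(bus.max()))   # about 100

# ...yet lowering the level afterward recovers the signal without clipping.
master = bus / np.float32(num_tracks)
print("Max deviation from the original:", float(np.max(np.abs(master - track))))
# The deviation is just float32 rounding error, far below anything audible.
```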
