My Parallel Processed Bass Rig Part 1: Why Parallel Processing?

I’ve wanted to write this post for a long time. It’s a sort of reply to, or alternate take on, Johnny Ragin’s (AKA Worship Sound Guy) YouTube video, “My Super Weird Trick for HUGE BASS.” I’d like to approach it from a bass player’s point of view. If you’d like, you can watch it below. If you don’t want to watch it, keep scrolling.

After looking at the date, I realised Johnny posted this video a year ago. Where did the time go? It’s a cool video that explains a lot, and it does a great job of putting things in terms a layperson could understand. (A layperson is a sort of kinder, gentler way to describe a novice.) Johnny shows how to do this in Pro Tools, then he turns around and does it on a Behringer X32. The cool thing is that this trick is pretty easy to pull off on any digital console, and it can be super simple in the analog world too.

So What is Parallel Processing?

I would define parallel processing as splitting an audio signal into two or more paths, with separate processing applied to each path. If that sounds scary, chances are you’ve done this before without even thinking about it. On an audio console, any time we use an auxiliary send to add reverb to a vocalist, we are adding a parallel processing path. The vocalist’s voice goes straight into the console’s channel strip and splits at an aux send. The voice continues through the channel strip, with equalisation applied and the level controlled via the fader. Meanwhile, the aux send feeds the signal to a reverb processor, and that reverberated signal gets mixed back into the console on another channel, an aux return, a stereo return, or something like that. Usually, if you mute the reverb, you can still hear the vocalist.
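To make that signal flow concrete, here is a minimal sketch in Python (using NumPy). Everything in it is hypothetical: the function names, the levels, and especially the “reverb,” which is just a few decaying echoes standing in for a real processor. The point is the routing: the dry channel keeps running through the strip while the aux send feeds a copy to the effect, and both sum at the mix bus.

```python
import numpy as np

def toy_reverb(signal, sr=48000, delay_ms=80.0, decay=0.5, taps=4):
    """Wet-only stand-in for a reverb processor: a few decaying echoes.
    (Hypothetical, just to have something on the end of the aux send.)"""
    step = int(sr * delay_ms / 1000)
    wet = np.zeros_like(signal)
    for k in range(1, taps + 1):
        offset = step * k
        if offset >= len(signal):
            break
        wet[offset:] += signal[:len(signal) - offset] * (decay ** k)
    return wet

def mix_with_aux_send(dry, send_level=0.3, fader=1.0):
    """The parallel path: the dry channel keeps its fader while the
    aux send feeds a copy to the 'reverb'; both sum at the mix bus."""
    wet = toy_reverb(dry * send_level)
    return dry * fader + wet

sr = 48000
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 220 * t) * np.exp(-4 * t)  # a decaying "vocal" note

mixed = mix_with_aux_send(vocal)
muted_reverb = mix_with_aux_send(vocal, send_level=0.0)  # mute the reverb send
```

Zeroing the send leaves the dry vocal completely untouched, which matches the “mute the reverb and you can still hear the vocalist” behaviour.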

But Why Though?

Why not parallel process? Everybody else is doing it. (Peer pressure is not a good reason to do something, by the way.) Chances are, every single bass you’ve heard on a recording was parallel processed. I don’t have stats for that, but if I had to guess, I would say at least 95%. I would also venture that if you’ve been to a concert, especially a nationally touring act, the bass was processed in parallel there too. Here are a few reasons why.

Why It Works for Clean Bass

Clean bass tone is the anchor of a lot of music. Several bassists, such as Marcus Miller, Tony Levin, and Nathan East, have built entire careers on clean tones and virtuosic playing. Some people would describe this as a “hi-fi” tone. You can get a lot of low end out of a clean rig. So you may be asking yourself, “What’s the problem? Don’t we want low end?” The answer is a resounding yes.

The problem with a clean bass tone is that, believe it or not, it can get lost in the mix. The fundamental note of the bass drum can compete with it. In modern music, basslines played by synthesisers or coming from backing tracks fight with the bass guitar for space in the mix.

We can work around this: add a second bass channel with a distorted amp or pedal, and mix the two to taste. Distortion adds harmonics, and those harmonics move up into the midrange frequencies, which can make the bass easier to hear and differentiate from the synth or backing tracks.
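If you want to see the “distortion adds harmonics” claim for yourself, here is a toy experiment in Python with NumPy: soft-clip a pure 110 Hz tone and look at one FFT bin. The drive amount and frequencies are arbitrary, chosen only for illustration, and `np.tanh` is a very crude distortion, not any particular pedal.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr              # one second, so FFT bins land on whole Hz
bass = np.sin(2 * np.pi * 110 * t)  # a clean 110 Hz "bass" note

dirty = np.tanh(8.0 * bass)         # soft clipping: a very crude distortion

def level_at(signal, freq, sr=48000):
    """Magnitude of a single FFT bin."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[int(round(freq * len(signal) / sr))]

# The clean tone has essentially no energy at the 3rd harmonic (330 Hz);
# the clipped copy does, and that's the energy that pokes into the midrange.
clean_h3 = level_at(bass, 330)
dirty_h3 = level_at(dirty, 330)
```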

Why It Works for Dirty Bass

Distorted or fuzzy bass can sound really cool. You’ve heard it: pretty much every rock song of the 1970s has a gritty-sounding bass. Muse is an excellent example of a band that uses distorted bass well. They’re a three-piece, so sometimes the bass player adds distortion to sort of take over what a rhythm guitar player would do. By taking over the rhythm guitar parts, the bass player frees the guitarist/vocalist up to play leads and sing.

Unfortunately, just adding distortion or fuzz to bass can sometimes eat up the low-frequency content that bass usually provides. It can thin out the tone and make it sound almost like another electric guitar. That’s great if that is your goal. If not, then splitting the bass signal and running it through both a clean amp and a distorted amp can bring that missing “oomph” back. If you need more “oomph,” simply blend in more of the clean amp.
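Here is a rough numerical sketch of that idea, again in Python with NumPy. The “dirty amp” is modelled as a simple high-pass filter followed by saturation, which is only a crude stand-in for a real pedal or guitar-voiced amp, but it shows the fundamental getting thinned out and the parallel clean path bringing it back. All the cutoffs, gains, and blend ratios are made up for the demo.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 55 * t)  # a low 55 Hz fundamental, one second long

def one_pole_highpass(x, cutoff=400.0, sr=48000):
    """First-order high-pass, standing in for the low-end roll-off
    that many dirt pedals and guitar-voiced amps apply."""
    rc = 1.0 / (2 * np.pi * cutoff)
    alpha = rc / (rc + 1.0 / sr)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

# The "dirty amp" path: roll off the lows, then saturate.
fuzz = np.tanh(4.0 * one_pole_highpass(clean))

def level_at(signal, freq, sr=48000):
    """Magnitude of a single FFT bin (one-second signal, so bins line up)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[int(round(freq * len(signal) / sr))]

# Blend the clean path back in to restore the fundamental.
blend = 0.6 * clean + 0.4 * fuzz
```

Measuring the 55 Hz bin of each signal shows the fuzz path alone has lost fundamental energy relative to the clean path, and the blend recovers a good chunk of it; turning up the clean share recovers more.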

Why It Works for Other Effects

The reason parallel processing works for basses with effects on them is the same reason it works so well for distorted or fuzzed bass. Any time we add an effect like a chorus, flanger, phaser, or envelope filter, we can lose some of that oomph our instrument has, especially in the low end. By splitting our signal and always keeping a straight, “clean” tone available, we can make up for that loss by mixing back in what we need.

“My Parallel Processed Bass Rig Part 2: The Gear” is up next, where I give a little information about the gear in my bass rig.

Ghost in the Machine: Me Vs. a Dante Network

A few months ago, I think it was early March, I went to help a church sort out some problems with their Dante audio network. (Yes, I need to write more often; don’t worry, Mom says the same thing.) They have a Yamaha M7CL with two Dante-MY16-AUD cards connected to a Cisco switch, which is connected to another switch in their production room. The two Dante cards handle 16 channels of audio each, so they can get all 32 channels to their recording room. Their production machine is a Mac Pro running Apple’s Logic software.

In full disclosure, let me make the following statements:

  1. I should have paid more attention in NET 125 Networking Basics. But the material was super dry, and let’s face it: that’s not the sexy side of playing with all of this awesome audio gear. Am I right?
  2. I know people who have fairly complex Dante networks that work beautifully. For example, there is a local college that is cramming audio through its regular infrastructure, meaning multi-channel streams of audio swim alongside college students streaming Downton Abbey, The Walking Dead, Mad Men, Breaking Bad and Walker, Texas Ranger. It works.
  3. Dante works beautifully, and is almost completely "plug and play," on consoles that run Dante natively (that don't need an expansion card), like Yamaha's CL series consoles and Rio stageboxes.
  4. I typically dislike things that take a lot of effort to set up. Let me get to what I came to do, which is usually mixing, as quickly as possible.

The Problem

In this particular case, the problem was that the same audio data would show up on channels 1 & 17, 2 & 18, 3 & 19, and so on inside Logic.

So I went through a few quick troubleshooting steps:

  1. I looked at the direct output routing on the M7CL. Everything was patched one-to-one, just as it should be: Direct Out 1 was patched to Card Slot 2, Output 1; Direct Out 2 to Card Slot 2, Output 2. (The Dante cards were in slots 2 & 3.) That was all good.
  2. I looked at the matrix in Dante Controller on the Mac. Again, everything was patched correctly.
  3. I looked at the patching in Logic. 1 to 1, 2 to 2.

In theory, everything should have worked. It was time to dig deeper. We fired up Dante Controller on the Mac and took a look at the device info and network status. This is where things got crazy.

Both of the Dante-MY16-AUD cards showed up in the device list, but only one had an IP address. So we unplugged the cards from the network one at a time, and each one showed up just fine on its own. Then we plugged the second card back in, and the network assigned it the same IP address as the first. That *might* explain the duplicated audio.
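For what it’s worth, the check we were doing by eye in Dante Controller boils down to something like this sketch. The device names and addresses are made up for illustration (Dante devices typically fall back to the 169.254.x.x link-local range); the idea is simply: group devices by IP and flag any address claimed more than once.

```python
from collections import defaultdict

def find_duplicate_ips(devices):
    """Given a hypothetical {device_name: ip} table (e.g. copied by hand
    from Dante Controller's device view), return every IP address that
    more than one device claims."""
    by_ip = defaultdict(list)
    for name, ip in devices.items():
        by_ip[ip].append(name)
    return {ip: names for ip, names in by_ip.items() if len(names) > 1}

# The situation we saw: both MY16-AUD cards landed on one address.
devices = {
    "M7CL-slot2": "169.254.10.20",
    "M7CL-slot3": "169.254.10.20",
    "DVS-MacPro": "169.254.10.31",
}
conflicts = find_duplicate_ips(devices)
```

Here `conflicts` comes back non-empty, pointing straight at the two console cards fighting over one address.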

We took a quick look at Yamaha's and Audinate's (Dante's parent company) websites to check the current firmware and software versions for the Dante cards. We were a few versions behind, so we updated Dante Virtual Soundcard on the production machine and my laptop, Dante Controller, and the firmware on the cards in the M7.

We connected everything back up aaaannnnd... (insert drum roll here) problem not solved. We were still getting duplicate data, and if we manually assigned IP addresses, one card wouldn't show up. It was 3:30 in the afternoon, and I had been at the church since 9:30. I had exhausted all of my options except one: update the firmware on the M7CL itself. Unfortunately, the church had a big production coming up and didn't want to risk losing all of their scene data, so I had to concede defeat and return home.

~Andy


2014: A Few Things I'm Excited About

I pay my bills by working at SE Systems in Greensboro, NC. It's a pretty cool place to work, especially if you're an audio geek. Not only are we a live events production company and a pro audio, recording and lighting sales company, but we also have an on-site class/presentation room. We are going to be utilizing that room a lot this spring! I'm excited about that.

Rational Acoustics Smaart Training:

On February 25-27, 2014, Jamie Anderson from Rational Acoustics (makers of Smaart, the audio and acoustic measurement software) will be at SE Systems teaching users how to get the most out of the software. You can register here: http://www.rationalacoustics.com/events/greensboro-nc/

Smaart is amazing software. In short, it allows you to "see" the sound. You can look at the frequency response of audio systems, the reverb time in a room, or the phase relationship between two different audio sources. There's really a lot you can do.

Worship Technology Information & Education Series

This spring we're offering a series of classes for all of the volunteer audio folks out there. Sound tech, sound person, techie, sound guy, A/V tech, worship leader: whatever this person is called at the church, this is for the person who wants or needs to know a little bit more. Here's the basic layout:

Feb 15: Audio Mixing and Multitracking.

March 15: Worship Band Monitor Mixing and Personal Monitor Mixer Techniques.

March 22: Loudspeakers, Wedge Monitors and Open Architecture Signal Processors.

April 19: Microphone Techniques.

I hope to have more details soon. In the meantime, you can keep an eye on www.sesystems.com.

~Andy