Talking technique with Tony Maserati - Mixing in the Fast Lane

Grammy-winning engineer and producer Tony Maserati has worked with countless high-profile artists including Beyoncé, Lady Gaga, Alicia Keys, John Legend, Jay-Z, and Jennifer Lopez, to name just a few.

He got his initial training at Berklee College of Music, where he studied production and engineering. These days, he’s known for his pop and hip-hop mixes, and the songs he’s mixed have sold over 100 million units worldwide.

Maserati has also been involved on the software-development side of music production, collaborating with Waves on the Tony Maserati Signature Series Plug-in Bundle, which features seven plug-ins that emulate his processor chains for a variety of mixing tasks. He has a hybrid digital/analog system in his studio, which allows him to access analog processors from Pro Tools and utilize analog summing.

Maserati spoke with Audiofanzine by phone recently and was kind enough to share some of his mixing techniques with us.

What percentage of your work would you say is producing and what is strictly mixing?

Tony Maserati (Photo by Brian A Petersen)

I would say about 90 percent of the work that I do is strictly mixing.

But you do some production?

I do. These days I usually do co-productions. Keeping up with the technology of programming and software necessary to keep that whole process going is a little more than I really have time for, so I bring in the younger team members as co-producers.

Right. When you’re on a project as a producer do you usually mix the project, too, or do you like to keep those roles separate?

Normally I like to keep them separate. But the last project I did, I hired a younger team member to engineer, and he did a great job of setting it up and getting it ready for me to mix. So the mix was pretty simple. Generally, I will hire an engineer to do the recording and the overdubbing and then I tend to mix the projects.

I’ve heard you talk about how you often use different processor settings between verses and choruses in your mixes.

I think it’s necessary to create a lift in the chorus. If the singer is squeezing harder on his or her pipes in the chorus, I’ll alter the EQ based on that. I’ll also automate an EQ change for the synths, keyboards and occasionally the drums. I might just push a little top in the choruses so you get that feeling of a lift.
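
Maserati doesn’t give exact settings here, but the basic move of automating a top-end lift into the chorus can be sketched in a few lines of Python. Everything below is hypothetical: the chorus times, the shelf frequency, and the boost amount are invented, and the "shelf" is approximated by blending in a high-passed copy of the signal rather than modeling any particular EQ.

```python
# Minimal sketch: automate a gentle top-end lift into the choruses.
# Section times, shelf frequency, and boost amount are hypothetical.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
song = np.random.randn(sr * 30) * 0.1           # stand-in for a mixed stem

# Crude "high shelf": add a scaled high-passed copy above ~8 kHz.
sos = butter(1, 8000, btype="highpass", fs=sr, output="sos")
highs = sosfilt(sos, song)

boost_db = 2.0                                  # how much lift in the chorus
extra = 10 ** (boost_db / 20) - 1.0             # linear gain added to the highs

# Automation envelope: 0 in verses, 1 in choruses, with short ramps.
choruses = [(8.0, 16.0), (24.0, 30.0)]          # hypothetical chorus regions (s)
env = np.zeros_like(song)
for start, end in choruses:
    env[int(start * sr):int(end * sr)] = 1.0
ramp = int(0.05 * sr)
env = np.convolve(env, np.ones(ramp) / ramp, mode="same")

lifted = song + env * extra * highs             # lift only where env is up
```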

Can you give some examples of songs where you used that technique?

Any Nick Jonas song that I’ve done recently; “Find You” is the latest one. Or any of those big pop mixes. Shawn Mendes’ “Stitches” and “Mercy” are other examples—those kinds of records. Demi Lovato. That level of pop is always going to get that push in the top end. And there are some that don’t get that. I’m working on a new artist on Republic, Joseph Angel, who’s more of an R&B-and-soul kind of guy. I wouldn’t necessarily use that technique with him, although his vocal might need an EQ change because he pushes harder in the choruses. I’ll just immediately duplicate his vocal and have a slightly different EQ and compression system for that.

Notice how the processing changes between song sections

So you would have it on a separate track and then just mute the verse part or the chorus one or vice versa?

Yes, I’d mute the other part.

Rather than having to automate the settings on one track, you just have two tracks with separate settings.

Correct. I’ll have the audio tracks separated, and then I’ll have a lead vocal aux master for him. I’ll do rides, individual EQing, and a small amount of compression on the audio track. And then I’ll send it to an aux track which will have similar effects, and maybe a bit of limiting to really make sure that he still sounds like the same person from one section to the next.
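
As a rough illustration of that routing, here is a sketch, not of Maserati’s actual chains, in which the vocal is duplicated, each copy is muted outside its own sections and given its own EQ, and both are summed into a lead-vocal "aux" with a crude peak limiter on it. The section times, EQ, and levels are all made up.

```python
# Sketch of the verse/chorus split: two copies of the same vocal, each muted
# outside its own sections, then summed into a lead-vocal "aux".
# Settings and section times are hypothetical; the limiter is a crude stand-in.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
vocal = np.random.randn(sr * 20) * 0.1          # stand-in for the vocal track
choruses = [(5.0, 10.0), (15.0, 20.0)]          # hypothetical chorus regions (s)

chorus_mask = np.zeros_like(vocal)
for start, end in choruses:
    chorus_mask[int(start * sr):int(end * sr)] = 1.0
verse_mask = 1.0 - chorus_mask

# Each copy gets its own EQ: the chorus copy is brightened a touch.
bright = butter(1, 6000, btype="highpass", fs=sr, output="sos")
verse_track = vocal * verse_mask
chorus_track = (vocal + 0.2 * sosfilt(bright, vocal)) * chorus_mask

# Lead-vocal aux: sum both copies, then tame peaks so the voice stays consistent.
aux = verse_track + chorus_track
ceiling = 0.5
aux = np.clip(aux, -ceiling, ceiling)           # crude "limiting" stand-in
```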

When I was going through your Facebook page, I found a link to another song, “Dear World” by Echosmith, which is very different from the pop stuff but has a wonderful vocal sound and reverb treatment. Do you remember what you did, settings-wise, on that one?

I’ve got several plug-ins, as well as a de-esser, on her voice. I’m automating a FabFilter Pro-Q 2. I’ve actually got a GML EQ on her as well, a digital one. I’ve got a FabFilter Pro-MB and I’m also using an analog ITI EQ. Effects-wise, I’ve got a little EP-34 Tape Echo from UAD that I used to create a bouncy stereo effect. I’m also using Altiverb, and a Lexicon 480L simulation, the Relab LX-480. I’ve also got a UAD AMS RMX-16 reverb plug-in on her.

Do you often use multiple reverbs on one voice?

I do. I tend to use a variety of reverbs to add that layered richness. I don’t know how to describe it other than it gives it more depth. I might use one reverb, in this case the AMS, for sort of a darker reverb sound. And that would cover some of the distant sound that I might want. And then the LX-480 is a shorter, thinner plate-type setting. It adds a little bit of airy brightness to a voice. I’m also using a Chandler Curve Bender EQ. I summed it completely analog. I have a hybrid system so I can bounce between internal and external summing quite easily, and my system allows me to have 32 hardware inserts as well as 32 channels of analog summing.

Maserati layered multiple reverbs to get the lush lead vocal sound
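
Here is a rough Python sketch of that layered-reverb idea, using two synthetic impulse responses, one long and dark and one short and brighter, purely as stand-ins for the AMS- and 480-style units he names. Decay times, tone shaping, and send levels are all invented.

```python
# Sketch of layering two reverbs: one long and dark, one short and bright.
# The impulse responses are synthetic stand-ins, not models of the units named.
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

sr = 44100
rng = np.random.default_rng(0)
vocal = rng.standard_normal(sr * 5) * 0.1       # stand-in for the vocal

def decaying_noise(seconds, rt60):
    """Exponentially decaying noise as a toy reverb impulse response."""
    t = np.arange(int(seconds * sr)) / sr
    return rng.standard_normal(t.size) * 10 ** (-3 * t / rt60)

# Long, dark reverb: 2.5 s tail, rolled off above ~3 kHz.
ir_dark = decaying_noise(2.5, rt60=2.5)
ir_dark = sosfilt(butter(2, 3000, btype="lowpass", fs=sr, output="sos"), ir_dark)

# Short, brighter "plate": 0.6 s tail, low end thinned below ~500 Hz.
ir_plate = decaying_noise(0.6, rt60=0.6)
ir_plate = sosfilt(butter(2, 500, btype="highpass", fs=sr, output="sos"), ir_plate)

# Two sends at different levels, summed back with the dry vocal.
wet_dark = fftconvolve(vocal, ir_dark)[: vocal.size] * 0.15
wet_plate = fftconvolve(vocal, ir_plate)[: vocal.size] * 0.10
mix = vocal + wet_dark + wet_plate
```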

What kind of summing?

I’ve got an old Neve sidecar from the ’70s and I’ve got the Chandler mini-mixer as well. And then, as far as EQ and compression, I’m using the Chandler Curve Bender and a Shadow Hills Mastering Compressor. And then I bring that all back through a Lavry A/D converter. I also have a Black Lion converter. I go back and forth between those.

You’ve said in the past that you prefer subtractive EQ. Is that pretty much a general rule for you or is it just for getting a certain kind of sound?

Always. I was trained that way. So, it goes all the way back to my education at Berklee School of Music. One of my first teachers talked about how you can gain headroom by removing the things that you don’t like as opposed to only focusing on adding. So, I was working on a mix yesterday for an Australian band called Dream. Their rough mix was in good shape, but it was a bit muddy. I just spent probably an hour removing some of the frequencies that were causing too much mud and getting in the way. And not only did that give me a bit more bounce, because I got more headroom and depth, it was just clearer overall.

If you had an acoustic guitar sound that was kind of dull, wouldn’t you want to go in and push it a little bit in the upper mids to get a little more brightness?

Well, the first thing I would do is probably pick the places that I wanted to reduce. And generally, I’ll use one kind of EQ for reduction and a different EQ for boosting.

Interesting.

Most of the time, I’m using a fully parametric and sometimes automatable EQ for my subtractive equalization, because quite often in the case of a vocal, as the singer moves through the notes—or in my world, the frequencies—I may automate some of that. If somebody is low in their range and they’re just welling up in the lower frequencies, I’ll move around the frequencies that are problematic by automating that cut. And that still allows me to have a separate EQ to push, and a different sound, so I might go analog for the pushing, but I’ll use a digital parametric that’s automatable for my subtractive.

So you’re not exclusively using subtractive EQ; it’s just the first thing you go to?

Exactly. Quite often I can accomplish everything I need just by subtracting and then bringing level up on that particular instrument, but quite often, especially in pop music, I’m going to need some top end. I might get some of that top end from my bus EQ. And some of that I’ll get from individual audio tracks. I’ll either go outboard, or I’ll use a different character EQ for my boosting. Lately, I’ve been using Softube Tube-Tech PE EQ for my boosting, but I’ll bounce around. I’ll use the GML; I’ll use the Chandler, too.
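
The subtract-first, boost-with-a-different-EQ habit can be sketched like this: a narrow parametric cut in the mud, followed by a broad, gentler top-end shelf built from a different curve. The filters are generic textbook biquads, not models of the GML, Chandler, or Tube-Tech units he mentions, and the frequencies and gains are hypothetical.

```python
# Sketch of "subtract first, boost with a different EQ":
# a narrow parametric cut in the mud, then a broad, gentler top-end shelf.
# Frequencies and gains are hypothetical; the filters are generic biquads,
# not models of the GML, Chandler, or Tube-Tech units mentioned above.
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

sr = 44100
guitar = np.random.randn(sr * 10) * 0.1         # stand-in for an acoustic guitar

def peaking_cut(x, f0, gain_db, q, fs):
    """Standard cookbook peaking EQ (negative gain_db = cut)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# Step 1: surgical subtractive move -- pull 3 dB out of the mud around 250 Hz.
cleaned = peaking_cut(guitar, f0=250, gain_db=-3.0, q=2.0, fs=sr)

# Step 2: boost with a different, broader curve -- a gentle shelf above ~6 kHz.
sos = butter(1, 6000, btype="highpass", fs=sr, output="sos")
boosted = cleaned + 0.15 * sosfilt(sos, cleaned)
```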

I guess you probably do a lot of high-pass filtering too as part of your subtraction of frequencies?

There are obvious things that need some high pass. Hi-hats don’t need to have any low frequencies. And certainly, basses and kicks. I do need to make sure that I’m focused on where I want to control the low frequency. I’ll do a high pass on that, and I might set it at –6 dB per octave or –12.
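
In filter-design terms, those slopes correspond to first- and second-order high-pass filters. Here is a minimal sketch using Butterworth filters; the 80 Hz corner frequency is just an example value, not something Maserati specifies.

```python
# A 6 dB/octave high pass is a first-order filter; 12 dB/octave is second-order.
# The 80 Hz corner frequency below is just an example value.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
track = np.random.randn(sr * 10) * 0.1          # stand-in for a hi-hat or guitar

hp_6db = butter(1, 80, btype="highpass", fs=sr, output="sos")    # -6 dB/oct
hp_12db = butter(2, 80, btype="highpass", fs=sr, output="sos")   # -12 dB/oct

gentle = sosfilt(hp_6db, track)
steeper = sosfilt(hp_12db, track)
```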

Would you high-pass guitars? They tend to have a lot of boominess at the bottom that you don’t need.

Quite often I’ll use either a Waves C4 or a FabFilter Pro-MB. I’ll use those things on vocals, keyboards and guitars. If it’s welling up around 200 Hz on only those notes, I’ll just remove it from those spots and set the threshold so it grabs it where I want it to grab. I don’t like hearing EQ when I’m listening to a record. I don’t really want to hear a seriously EQ’d record.
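
What he describes is essentially one band of a multiband or dynamic EQ: the low-mid region is only turned down when it crosses a threshold. Here is a toy Python version of that idea, not a model of the C4 or Pro-MB, with made-up band edges, threshold, ratio, and smoothing time.

```python
# Toy single-band dynamic EQ: attenuate the low-mid band only when it
# crosses a threshold, the way one band of a multiband processor would.
# Band edges, threshold, ratio, and smoothing time are all hypothetical.
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

sr = 44100
vocal = np.random.randn(sr * 10) * 0.1          # stand-in for a vocal or guitar

# Isolate the band that tends to well up (roughly 150-300 Hz here).
band_sos = butter(2, [150, 300], btype="bandpass", fs=sr, output="sos")
band = sosfilt(band_sos, vocal)

# Envelope follower: rectify and smooth with a one-pole filter (~20 ms).
a = np.exp(-1.0 / (0.02 * sr))
env = lfilter([1 - a], [1, -a], np.abs(band))

# Gain computer: above the threshold, pull the band down at a 3:1 ratio.
threshold = 0.05
ratio = 3.0
over = np.maximum(env / threshold, 1.0)
gain = over ** (1.0 / ratio - 1.0)              # 1.0 below threshold, <1.0 above

# Reassemble: full signal minus whatever was shaved off the band.
output = vocal - band * (1.0 - gain)
```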

Let’s talk about parallel compression. When would you use it, as opposed to just inserting a compressor on a track or channel? 

It depends on how the phase works out and on the plug-ins and inserts that I’m using. Let’s say all I’ve done on the audio track is a bit of subtractive equalization and some de-essing. Then I’ll send that track to my lead vocal aux master, and I do all my business on there: effects, some more EQ, some limiting, etc. I can send from that initial audio track—we’ll call it the raw audio track. I can send to my parallel compressor and do much more drastic compression. If I’m looking to increase the energy of the vocal, I might put an 1176, either analog or digital, with all buttons in. And that makes it jump out, and I can ride that. I quite often will ride that track for the moments that I want to bring that vocal closer to the listener, and then I’ll ride it down for the moments when I want it a little further away from the listener.
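
As a rough illustration, here is a parallel-compression sketch in Python: a heavily squashed copy of the raw vocal is blended back in under the main vocal on its own fader. The compressor is a generic feed-forward design with invented settings, not an emulation of an 1176 or of Maserati’s chain.

```python
# Parallel compression sketch: crush a copy of the raw vocal hard, then ride
# it in underneath the main vocal. Generic compressor, invented settings --
# not an 1176 model.
import numpy as np
from scipy.signal import lfilter

sr = 44100
raw_vocal = np.random.randn(sr * 10) * 0.1      # the "raw audio track" send

# Envelope follower with a fast smoothing time (~5 ms).
a = np.exp(-1.0 / (0.005 * sr))
env = lfilter([1 - a], [1, -a], np.abs(raw_vocal))

# Aggressive gain computer: 20:1 above a low threshold, then make-up gain.
threshold = 0.02
ratio = 20.0
over = np.maximum(env / threshold, 1.0)
gain = over ** (1.0 / ratio - 1.0)
crushed = raw_vocal * gain * 4.0                # heavy make-up gain

# Blend: the parallel fader gets ridden up when the vocal should feel closer.
parallel_level = 0.3
mix = raw_vocal + parallel_level * crushed
```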

So you use the compression for moving elements front-to-back in the mix?

I use parallel compression for that.

Okay. Do you also use ambience for that?

Sure. We’ll often set the size of the room with delays, and then depending on how I want to EQ those delays, the room is either dark or bright. I’ll set the size of the room with those for particular sections. Every mix and every section is different. I’m constantly using all the tools that I have to control that space and that position of the vocal. I just did a mix for Beyoncé and Ed Sheeran for a song called “Perfect.” That was one of those mixes where I had to really work hard to find the space that the vocalists were going to sit in, depending on what section and sometimes what phrases. That’s because they were singing together on some of the phrases and on others they were singing alone. Some of the phrases were carried by a really strong and powerful interpretation of the lyric and some of them were more intimate and styled, so how those sections move helps the song feel more captivating to the listener. Of course, doing that, I’ve got to make sure it never sounds faked; in every way it’s got to sound natural.

Maserati changed room sizes on the ambience between song sections
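
A bare-bones version of "setting the size of the room with delays" might look like this: a few early-reflection-style taps whose times scale with a room-size parameter, darkened with a low-pass filter when a duller, more distant room is wanted. The tap times, levels, and tone filter are all invented for illustration.

```python
# Sketch of setting a "room size" with delays: a few early-reflection taps
# whose spacing scales with the size parameter, darkened with a low-pass
# when a duller room is wanted. All values are invented for illustration.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
vocal = np.random.randn(sr * 5) * 0.1           # stand-in for the lead vocal

def room_delays(x, size, dark, fs):
    """Add a handful of delay taps; larger size = later taps = bigger room."""
    taps_ms = np.array([13.0, 29.0, 47.0, 71.0]) * size   # tap times scale with size
    gains = np.array([0.40, 0.30, 0.22, 0.15])
    wet = np.zeros_like(x)
    for t_ms, g in zip(taps_ms, gains):
        d = int(t_ms * 1e-3 * fs)
        wet[d:] += g * x[: x.size - d]
    if dark:                                     # duller, more distant-sounding room
        sos = butter(1, 4000, btype="lowpass", fs=fs, output="sos")
        wet = sosfilt(sos, wet)
    return x + wet

verse_vocal = room_delays(vocal, size=0.7, dark=True, fs=sr)    # small, dark room
chorus_vocal = room_delays(vocal, size=1.5, dark=False, fs=sr)  # bigger, brighter
```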

Since you’re working on a lot of projects as the mix engineer rather than the producer, what happens if a song just doesn’t have enough going on in the arrangement to make it exciting and make it build? Do you go to the producer and suggest things you can do in the mix to help?

Yeah, it does happen, and quite often. I’m currently working with a young act, two sisters named Chloe and Halle, who are signed to Parkwood, Beyoncé's label. They’ve produced or co-produced every song and recorded most of the vocals themselves. They’ve programmed or co-written, and they’re only 19 and 17 years old. They are damn near musical geniuses already. But that being said, this is their first record. And they don’t have experience making records or making records do a particular thing in a marketplace. They’ll bring me a song, and I’ll start mixing it, and I might send a note saying, “Hey, we need a little lift in the outro chorus. Could you come up with something?” I never try to tell them what it should be. I just tell them we need something, and let them determine what it is.

Right.

Or we need something to make a transition better. I’ll ask them if they can come up with something, whether it’s a drum fill or a vocal part or whatever. And they’re just as open as I am, so they’ll just say, “Sure, okay we’ll give it a try.” As long as the label is cool with it, it’s fine. I’m very transparent. I’ll bring it up with everyone and make sure that everyone gets on the same page because that’s my job. I just did something for an artist and I wrote her a note pretty early on and said, “It seems you’ve got some tuning issues here and there, do you want me to take care of that?” And she was like “No, I really like it. I want it to be raw. I want it to sound like I did this in my bedroom.” And that’s the answer that I needed.

You have a bundle of plug-ins that you collaborated on with Waves, The Tony Maserati Signature Series. How did that come about?

It started out where I just reached out to them, because I had been using, at that time, the new API plug-in they had created. I said, “Look, I love these things. They’re really helping me out and I’d like to do some presets.” So, that’s how the relationship started. And soon after we did some stuff with the release of the SSL bundle and we got to talking about doing some plug-ins for me. They asked me what kind of plug-ins I would create and I was very frank with them because I didn’t really feel there was a need to recreate an 1176 or the classics. I didn’t feel like that was interesting.

Waves Maserati ACG plug-in
ACG is one of 7 plug-ins in the Waves Tony Maserati Signature Series

What did you tell them that you wanted to do?

I said what I’d like to do is recreate some of the chains of analog and digital processing that I use, and I’d like to try to recreate them as closely as possible, and that’s how it all started with the Signature Series. It required them to engineer basically a sum of things behind the GUI that included reverb and parallel compression and various EQs going in different directions and automatable things. If you’ve seen my plug-ins, they have modes for different vocal sounds and different performances, and then modes for different kinds of basses or types of acoustic guitars—sizes of guitars. We made the acoustic guitar plug-in so that it has a setting for a D-size [dreadnought] guitar and one setting for a smaller one, settings for different grades of strings and things like that. I was always trying to create ways of compensating for those things, in the same way that I do with my chains. That’s really what I tried to accomplish, and it took us eight months to do that.

That’s a lot of work.

It was a long process. And they were just starting out at that point with the Signature Series concept.

Was yours the first Signature Series?

It was. Mine was the first release, and they used that format all across the Signature Series line. That sort of summed network behind the GUI is happening in all the Signature Series bundles, and we created that together. There was a lot of trial and error in the process of developing it. I would send them audio samples and the exact settings of the equipment I was using. I would do a Q-Clone through the equipment so that they could actually see the waveform. They would then send it back to me after their tests and suggest some options. It was a long process and was very, very collaborative. It was great and I love it. I’m really happy about it, and there’s actually a new thing that I’m interested in creating with them as well, and we’ll see if that’s going to come to fruition.

Thanks, Tony!

You’re welcome!
