
The Andrew Scheps Interview

Mixing in Parallel (Part 1)

You’d probably be surprised to discover that one of the top mixers in the world works on just a single pair of monitors, in a non-treated room, from a home studio. That engineer is Andrew Scheps, one of the foremost practitioners of the art of parallel processing. The innovative Scheps has a credit list that’s beyond impressive, including Adele, Red Hot Chili Peppers, Metallica, Jay-Z, U2, Michael Jackson, The Rolling Stones, and Justin Timberlake, to name just a few.


Formerly L.A.-based, Scheps now lives in England, where he handles a busy mixing schedule from his home in the country. Audiofanzine talked to him there via Skype, in part one of this two-part interview.

[Photo: Andrew Scheps]

So you mostly mix from a home studio now?

Yeah. I need very, very little. I’ve moved completely into the box, so I just need a pair of speakers and a place to put them. So that’s what I’ve got. I’m going to build myself a little more of a “room” room, at some point, but I’m just set up with the speakers and it’s working really well.

It’s not even an acoustically treated space?

No. My space in L.A. that I had for years and years, even when I was mixing on the Neves, was a converted garage, but it was treated like it was just a large living room. For mixing, for control rooms, I just like a dead room that’s big enough to get out of the way. I just want to hear my speakers; I really don’t want to hear the room at all. So for me, any dead room, even if it’s small, can work. I know my speakers well enough that I seem to acclimatize very quickly whenever I go anywhere.

Is the space you have now deadened at all?

There are some curtains. [Laughs]

People who mix in untreated rooms are going to be really happy to hear this. [Laughs]

It’s a room that doesn’t have any problems. I’m lucky. It’s got a slightly curved front wall, which helps. It has a relatively low ceiling, but it’s carpeted. It has some furniture in it that’s padded, and that breaks it up. I think really, reflections are the things that usually kill people.

And what do you use for monitors?

I use these old Tannoys: SRM 10Bs. I’ve used them for years. I had a pair, probably 20 years ago, and now I own four pairs, just in case they blow up.

Passive?

Yeah. They’re from the late '70s.

How big are the woofers?

They have a 10" woofer and a concentric tweeter in the middle.

But you don’t have a sub?

No, but they get a huge amount of low end.

What would you say is unique about your mixing style? Is there a signature aspect to an Andrew Scheps mix?

In a lot of ways, I hope not. Because I don’t like to be someone whose mix you can recognize immediately, because that means it’s not the artist anymore. I suppose, to be honest, my signature is that my mixes are really loud. [Laughs] To a point where a lot of mastering people don’t like me.

Do you do a lot of 2-bus processing?

Not a huge amount. It changes up, but usually there’s a compressor and at the very end there’s a limiter. But the limiter is set to just shave off the red lights. It’s not doing too much. But there’s a bunch of EQ and some coloration and things like that. There are a lot of things that are doing a little bit, and that adds up to quite a bit. But it’s also because my mixes are built up with tons of parallel compression, and that just brings up the RMS level quite a lot, as opposed to bringing the peaks down. So, they’re just dense, loud mixes.
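
[Ed. note: Here’s a rough NumPy sketch of the distinction Scheps is drawing; it is not his actual chain, and the test signal, threshold, and ratio are made-up values. A compressor used as an insert pulls the peaks down, while a parallel blend leaves the dry peaks alone and adds a squashed copy underneath, which raises the RMS level and lowers the crest factor.]

```python
import numpy as np

def simple_compressor(x, threshold=0.3, ratio=4.0):
    """Static compressor: reduce gain on everything above the threshold."""
    mag = np.abs(x)
    gain = np.ones_like(x)
    over = mag > threshold
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return x * gain

rng = np.random.default_rng(0)
dry = np.clip(rng.normal(0.0, 0.2, 48000), -1.0, 1.0)  # stand-in for one second of a dense track

serial = simple_compressor(dry)                # insert compression: the peaks come down
parallel = dry + 0.7 * simple_compressor(dry)  # parallel blend: dry stays, squashed copy is added

def report(name, x):
    peak, rms = np.max(np.abs(x)), np.sqrt(np.mean(x ** 2))
    print(f"{name:9s} peak {peak:.2f}  rms {rms:.3f}  crest {peak / rms:.1f}")

for name, sig in (("dry", dry), ("serial", serial), ("parallel", parallel)):
    report(name, sig)
```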

I interviewed Michael Brauer a little while back, and he also uses a lot of parallel compression. He was sending tracks to compressors through aux sends.

That’s the way I do it, too. It’s by far the easiest way to do it, and especially in Pro Tools, if you have the sends default to being post-fader at zero and following the pan, then whatever you decide to send to that compressor, you’re basically just picking up a copy of what’s in the mix bus, and sending that to the compressor. And then you can blend that back in. The thing that I do a lot, that other people might not do quite as much, is to send multiple sources to the same compressor. I don’t have a parallel kick, a parallel snare, and a parallel overhead; I just have a bunch of parallel drum compressors, and I’ll mix and match.
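
[Ed. note: A hedged sketch of the routing Scheps describes: several drum tracks send post-fader copies of themselves to one shared compressor, and the squashed return is blended back under the dry sum. The track names, levels, and the tanh stand-in for a compressor are illustrative only.]

```python
import numpy as np

def drum_bus_with_shared_parallel_comp(tracks, faders, sends, return_level=0.5):
    """tracks, faders, sends are dicts keyed by track name; all signals are the same length."""
    length = len(next(iter(tracks.values())))
    dry_sum = np.zeros(length)
    comp_feed = np.zeros(length)
    for name, sig in tracks.items():
        post_fader = sig * faders[name]
        dry_sum += post_fader                  # what actually sits in the mix
        comp_feed += post_fader * sends[name]  # post-fader send: a copy of the mix balance
    squashed = np.tanh(6.0 * comp_feed) / 6.0  # crude stand-in for the one shared compressor
    return dry_sum + return_level * squashed   # blend the parallel return back under the kit

rng = np.random.default_rng(1)
tracks = {name: rng.normal(0.0, 0.15, 48000) for name in ("kick", "snare", "overheads")}
faders = {"kick": 0.9, "snare": 0.85, "overheads": 0.6}
sends = {"kick": 1.0, "snare": 1.0, "overheads": 0.4}  # mix and match what feeds the compressor
drum_bus = drum_bus_with_shared_parallel_comp(tracks, faders, sends)
```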

Do the aux tracks with the compressors that you use for the parallel compression also have EQ and other processors in them?

No, usually the chains are actually really simple. It’s usually just a compressor. Every once in a while there will be EQ before or after to take care of some of the artifacts that usually come with it. But the chains tend to be relatively simple. Michael’s chains tend to be relatively complicated; there’s lots of gear on each chain. Whereas I think I probably have more individual chains, but each one of them has many fewer things on it.

Let’s talk about panning. Do you consider yourself to be conservative as a panner, or do you try crazy stuff sometimes? Do you like LCR panning?

[Photo: Scheps at his previous studio, Punkerpad West, where he mixed with a Neve console.]

When I was on the console — I own an old Neve — it was left-center-right. Because otherwise you have to switch in the pan circuit, and it drops the level and your balance changes. Even though there are some resistors, and I’m sure it sounds fine, it always sounded worse to me, because it was quieter. So I avoided the pan pots at all costs. Now that I’m in the box, I’m not nearly as much LCR. With certain stuff I am: overheads get hard panned. Doubled rhythm guitars are going to get hard panned — that’s how to make mixes wide, and I like nice wide mixes. But I definitely will try to find places in between, or split something out of the middle just a tiny bit, because that will get it away from the vocal and the snare.

So how would you describe your overall panning strategy?

I like to feel that I know where the band is. I want to know where all the musicians are. So I don’t like it when all of a sudden something comes out of nowhere and it’s like, “Who’s that guy?” Just because I find it distracting, and I’m always trying to find ways to not call your attention to anything, so that you get to the end and you go, “Wow, what a great song.” “Oh how exciting.” I don’t want anyone saying like, “Cool panning, man.”

So it’s kind of like an umpire in baseball: if you’re not noticed, you’re doing a good job.

Exactly.

When you listen to mixes that were done by people in home studios — not pro mixers, but recording musicians — what are the kinds of problems you tend to hear?

I think the first thing, and it’s a bit hard for me to say because my mixes are so loud, is that usually they’re just over-smashed. And I think it’s because people are having a hard time getting the excitement they want, and the easiest way to do it is to pop on a limiter or even a multi-tool thing like [iZotope] Ozone, and it can be immediately impressive, but the problem is it will be fatiguing later on. So it’s best not to rely too much on the finishing until you’ve built your own chain. I’ve got a chain that’s more complicated than what goes on in Ozone, but I built it piece by piece and those pieces keep getting swapped out, and I feel like I’ve got a good handle on what every single piece is doing. Because of that, as I’m building a mix, I can immediately recognize like, “Ooh, that thing’s no good for the song,” and bypass it, and part of my chain goes away. So even if you’re using a tool like Ozone, make sure you know what all the individual pieces are doing and make sure they’re all set to be the best for that particular mix.
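
[Ed. note: The point about knowing and bypassing every piece of a finishing chain can be sketched like this; the stages below are trivial placeholders, not Scheps’ actual processing.]

```python
import numpy as np

def make_finishing_chain(stages):
    """stages: list of (name, process_fn, enabled) tuples; disabled stages are skipped."""
    def run(x):
        for name, fn, enabled in stages:
            if enabled:
                x = fn(x)
        return x
    return run

# Each stage does very little on its own, and any one can be bypassed per song.
chain = make_finishing_chain([
    ("bus_comp", lambda x: np.tanh(1.3 * x) / 1.3,  True),
    ("tone",     lambda x: 1.02 * x,                True),   # stand-in for a gentle EQ tilt
    ("color",    lambda x: x + 0.02 * x ** 3,       False),  # "no good for the song" -> bypassed
    ("limiter",  lambda x: np.clip(x, -0.98, 0.98), True),   # just shaves off the red lights
])
mix_bus = chain(np.zeros(48000))
```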

What else have you noticed?

I think that when I hear a lot of people’s stuff, even if it starts off sounding great, you get to the chorus and nothing happens. And it’s not about the mix, it’s about the dynamics of the song. And I’m not saying there’s a problem with the arrangement, but people manage to get the verse to sound so great and so huge that they forget that they actually need to leave some room for the chorus. It could be as simple as turning the chorus up in relation to the verse, but it might be that you shouldn’t have the drums sound so big in the verse, because that way, when you bring it in like that in the chorus, it won’t feel like a big shift. The chorus will just be bigger, because now the drums keep up with the guitars instead of getting left behind, or something like that. And that’s the major thing that I hear; it’s sort of a lack of musicality. The plug-ins are available to everybody, and people are getting really good at using them, and there are so many great tutorials online that I think people are really pretty competent. I think in a lot of ways, they just use too many tools, and they overuse them. Just use a little bit less, and they’ll work better.

Do you sometimes take out some tracks on the verse so that you can put them back in the chorus for contrast?

Yeah, it can be a musical arrangement thing. But what I was talking about was with mixing. Let’s say you’ve got two parallel compressors on the drums, and one of them is kind of dirty, and that’s really making them tough — maybe only use that one in the chorus. You don’t necessarily need it in the verse. If you do need it in the verse, well then you might need another one in the chorus. But you have to build the dynamics back into the song, because once you put all your hundreds of plug-ins on something, they’re all taking away a little bit of what might have been in the performance. You’ve got to add it back. Also, just automate stuff. Do rides. Push the downbeat of every chorus for every instrument. You create this kind of pop at the downbeat, and then it can all come back down or stay up or whatever you want.
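
[Ed. note: A minimal sketch of the “push the downbeat of every chorus” ride, assuming a mono group bus; the sample rate, chorus times, boost depth, and release length here are arbitrary placeholders.]

```python
import numpy as np

def push_downbeats(group_bus, sr, chorus_starts, boost_db=1.5, hold=0.5, release=1.5):
    """Apply a short gain push at each chorus downbeat, then ease back to unity gain."""
    gain = np.ones(len(group_bus))
    boost = 10.0 ** (boost_db / 20.0)
    for t in chorus_starts:
        start = int(t * sr)
        hold_end = min(start + int(hold * sr), len(gain))
        rel_end = min(hold_end + int(release * sr), len(gain))
        gain[start:hold_end] = boost
        gain[hold_end:rel_end] = np.linspace(boost, 1.0, rel_end - hold_end)
    return group_bus * gain

sr = 44100
rhythm_guitars = np.zeros(sr * 60)                    # placeholder: a 60-second mono group bus
ridden = push_downbeats(rhythm_guitars, sr, chorus_starts=[14.0, 44.0])
```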

Do you do that automation on the master track?

You can do it on the master, but I tend to ride groups of instruments. Like all the rhythm guitars up at the downbeat of a chorus, because it sounds like they leaned in and played a little louder, even though the tone is so distorted that they couldn’t get louder or quieter if they tried. But you push it up a little bit, and all of a sudden there’s some of that excitement back.

You just collaborated with Waves on a new plug-in, your second with them. Tell us about it.

[Image: Parallel Particles, Scheps’ new Waves plug-in, lets you easily add parallel processing to your tracks.]

It’s called Scheps Parallel Particles. Basically, it’s four knobs and that’s it. There are a couple of buttons, but it’s just four knobs. And it’s four distinct parallel processes that are designed to make your stuff just sound better. So, it’s not like the One-Knob series, or even like the Greg Wells stuff, which is a very simple interface but with tons of things going on underneath. These are four relatively simple chains, but they split into parallel, and then they sort of meet up again to get paralleled again with the next things, so that the routing inside is a little bit complicated. But basically, it just gives you four knobs to do four totally different things.

Like what?

There are two that are sort of harmonic synthesis things, one called Air and one called Sub — so just a top-end thing and a bottom-end thing to help you create stuff that isn’t necessarily there. And then there are two chains called Bite and Thick, which are two totally different parallel compression chains. So you just have a knob for each. And your audio is always passing through, and you just start to blend these four processes in, and they’re not interdependent at all. The way the plug-in is set up, you can have any one of the four or all four, or whatever. You don’t have to remember, “Oh, I have to turn this one up to get that one,” to do stuff. So the point of it is, rather than even go into what the chains are, it’s just four different characteristics of the sound that aren’t even that well defined, and it’s supposed to make you do absolutely nothing but listen.
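
[Ed. note: Waves hasn’t published the internals of Scheps Parallel Particles, so the sketch below only illustrates the routing idea described here: the dry signal always passes through, and four independent wet paths are blended in by their own knobs. The filter and saturation stand-ins are assumptions, not the plug-in’s DSP.]

```python
import numpy as np

def one_pole_lowpass(x, coeff=0.995):
    """Very simple one-pole lowpass, used only to split the signal for this sketch."""
    y = np.empty_like(x)
    state = 0.0
    for i, sample in enumerate(x):
        state = coeff * state + (1.0 - coeff) * sample
        y[i] = state
    return y

def four_knob_parallel(x, air=0.0, sub=0.0, bite=0.0, thick=0.0):
    low = one_pole_lowpass(x)
    wet = {
        "air":   x - low,                    # stand-in for top-end synthesis
        "sub":   one_pole_lowpass(low),      # stand-in for bottom-end synthesis
        "bite":  np.tanh(10.0 * x) / 10.0,   # stand-in for an aggressive parallel squash
        "thick": np.tanh(2.0 * x) / 2.0,     # stand-in for a gentler parallel density
    }
    out = x.copy()                           # the dry signal always passes through
    for amount, name in ((air, "air"), (sub, "sub"), (bite, "bite"), (thick, "thick")):
        out = out + amount * wet[name]       # each knob blends in independently
    return out

rng = np.random.default_rng(2)
vocal = rng.normal(0.0, 0.1, 4410)
processed = four_knob_parallel(vocal, air=0.3, sub=0.2, bite=0.4, thick=0.25)
```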

What’s a typical scenario for using the plug-in?

You don’t go to this plug-in because, “I need to do a thing.” You just go to the plug-in because you’ve got something that sounds a little limp or, I don’t know, you just want to see what you can do to it, so you start messing with it. And I found I tend to blend stuff in and I’m like, “It’s not making that much of a difference,” but then I bypass it, and it’s like, “Holy shit!” It’s all subtle, but it really adds up to making it sound like you were in a better room with a better microphone and a better preamp, and you made some better choices about compression. It’s just everything all at once, and the way you can blend it.

Talk about the GUI.

The interface is super cool. It looks sort of like a particle accelerator, animated with some color. But it’s really just about making something feel different, but not having to worry, “Am I compressing, or am I EQing, or am I synthesizing?” It doesn’t matter. And as strange as it might look at first, there’s actually a lot of visual feedback about how your sound is being processed. Once you get used to it, I think it’s a really helpful, intuitive interface. We really tried hard to make it something that you could get information from without ever having to be detailed about what the four processes are.

Let’s talk about monitoring. Do you do anything to change up the way you’re hearing things during a mix session? Like I was recently talking to someone who said sometimes he’ll stand up so that the tweeters are hitting him at a different angle. And some people go to the back of the room and listen.

You need to change things up a little bit, because otherwise you stop hearing what’s going on. But I do it just by changing level. I listen really loud for a bit, then listen really quietly. Or I just turn around in my chair — turn sideways. Because at some point you’ve got to stop worrying about the little things and hear the big picture. The other thing that does it for me, more than anything to do with monitoring itself, is that since I started working in the box, I’m never mixing one song; I’m always mixing lots of songs. And I will sometimes only have a song open for 10 minutes. I’ll open it up, hit play, I know the stuff I’ve got to do, but I don’t feel like doing it right now. Or, it’s not done, but I really can’t figure out what’s next, so I just close it, and open up another song, and then I’m completely fresh on the one I just opened. And maybe I’ll dig into that for four hours, or maybe I’ll be on that for 20 minutes. Then eventually, I open it up and say, “Alright, I guess this one’s done.” And then it’s time to send it.

So that’s how you avoid losing your perspective, and the ear-fatigue thing that happens so much to mixers.

Yeah. Like today, I’m cycling five songs from two different projects.

It sounds, from what you were saying, like you just realize at some point that a mix is done. You don’t have a specific workflow thing like, “I leave it overnight and then come back to it the next day” or something like that.

I almost never finish a mix the same day that I start it, but it’s also because I’m working on five mixes, let’s say. So over the course of three days, I’ll work on all five of those. And then three of them will all of a sudden be done. And the cool part about it, too, is that I’m not really aware of how far into it I am. I kind of get tired of working on it, and it’s a split-second decision: “OK, that’s it, close it. Move on to the next one.” I’m not keeping lists of things or whatever. So sometimes I open it up and I think like, “I’ve barely even started this one.” Or, “Oh, this one’s actually almost done. Let me just sort out the vocal,” or whatever is left. It’s a bizarre way of working, but it works.

That sounds like a good way of working. Because it’s so easy to get off track on a mix after you’ve been working on it for hours and hours. For those of us who are not at your level, we work on a mix for a long time and then — I know with me at least — listen the next day and go, “What was I thinking with that hi-hat?” Or something like that.

I do it too. I get notes back. I’ll send a mix and get a note back like, “Hey, so, with the hi-hat, was that on purpose or is something broken?” And then I listen and go, “Oh my god, I can’t believe I sent you that. So sorry, I’ll send you a new one.” It happens.

[End of part 1]

Next article in this series:
Andrew Scheps talks compression, plug-ins, panning, EQ and more →
