For bands on the alternative metal scene who have an album to record, Joey Sturgis is the closest thing to a one-stop shop. He’s a producer, songwriter and guitarist, as well as a tracking, mixing and mastering engineer. Sturgis has worked his magic on Asking Alexandria’s million-selling album "Stand Up and Scream," as well as for groups like We Came as Romans, Of Mice & Men, and many others.
But that’s not all. The multi-talented Sturgis is also a plug-in designer, whose company, Joey Sturgis Tones, puts out a compressor called Gain Reduction that Sturgis created based on his own mixing experience – and uses on all his productions – as well as numerous other production plug-ins.
On Wednesday February 4th and Thursday February 5th, Sturgis will be the focus of a free recording workshop, “Studio Pass: Joey Sturgis,” which can be viewed live on CreativeLive.com from 2PM to 9PM EST. Click here for more info. “You’re going to have cameras on me for two days, eight hours a day. I’m going to be sitting in a studio and I’m going to be telling you from start to finish, how I approach the song, how to record the guitars, how to edit them, how to work with the artist, and how to mix it,” says Sturgis. “I’m basically breaking down the entire process.”
In the following interview, Sturgis talks about his studio, his unusual but effective production and mastering techniques, and the plug-ins he creates.
So where is your studio located?
I’m in Manchester, Michigan. We’ve got a log cabin out in the country.
We have a bunch of land — it’s very private. We’ve got the whole setup in the basement, and a setup upstairs where I do all my mixing and post-production. And then downstairs is the main tracking room where we do guitars, bass and vocals, and the band lives down there.
Do you work on Pro Tools?
I work on a Windows PC with [Steinberg] Cubase. I’ve been using it my entire career. I bought Pro Tools at one point when I thought I was going to be working with a bunch of people that used it…So I got it, learned it, and used it for like three months. At the end of it I realized this wasn’t going to work for me. As much as I needed to try to figure it out, it was crashing, it wouldn’t work, it was slow. And I had a decent system, I bought an HD3, with a crazy Mac. I spent like $4,000 on a Mac. It just wasn’t for me.
So what did you do?
I just took the loss and went back to Cubase. I was actually working on a project when all this happened, so we had recorded everything and gotten to the point where we were doing keyboards, and every time I tried to open a session it would crash, or certain keyboard parts would get deleted or wouldn’t work. Or I’d have to reload the instrument and figure out which setting I’d used and how I’d turned that knob, and I couldn’t remember, and I had to re-instantiate it. So I took everything else that was audio, put it into Cubase, and recreated the entire record from scratch.
Wow [Laughs]
So yeah. I’m on Cubase. And I’ve got two engineers plus me, so we’ve got three people who are attacking the song at all times. We split all of our projects into three sessions, so one guy could be editing while another guy’s recording guitar while another guy’s recording vocals.
What audio interface do you use?
An RME Fireface 800. I’ve been using that for a pretty long time, because it just never has any problems. It sounds great and it works great. And I’ve got some APIs and I’ve got a Great River, which is a Neve clone, and that’s what I use for my pres. If we ever have to do real drums, we’d just go to a studio and use whatever they have there. There’s a local studio around here called Pearl Sound that I like to go to, that has a Neve board and a bunch of outboard gear.
So talk about your production style.
We record records in a very interesting way. Not the typical, classic way because I feel like I’ve come across a lot of different problems with it. Everyone typically starts with drums and then after you’ve recorded the drums, you can’t change them. So if you get all the way to the end and you’re recording this melody line, and you thought of a really cool timing for one of the lines and it clashes with what the drums are doing, it’s such a pain in the ass to go fix the drums. You have to either go re-record, punch it in, or do a whole bunch of really weird edits to make it work. So we actually do the whole thing backwards.
So you record the drums late in the process instead of early?
We start with vocals and we record the vocals over demo stems that the band has already made, because everybody records on a laptop now. And then, from there, I take all that stuff that they recorded, and I manipulate it until I get to a point where I feel the song is improving, and I’m tweaking it. I’m cutting little parts. Sometimes I’m punching in parts with my guitar. Whatever it takes to make the song really awesome. Then we go in and shift drums around. We’ve got a programmed drum in there so we can hear “what will this fill sound like, what will that fill sound like.” And once we’ve tried all that stuff and we have the song kind of where we want it, then we start deleting the guitars and re-recording them to make them sound better and punching in the bass. And if the vocalist can sing the chorus better next week, we’ll give it a shot and see if it sounds better that time. And if not, then we’ll keep the old one. The last step of the whole process, really, is just to punch in the drummer over top of the final production, or at the very least, to record the drummer to the pre-production stems that we tweaked. If the drummer takes longer to record, we’ll start him early, simultaneously with the vocals and guitars, in different rooms. I like to attack the album from multiple fronts, all at the same time. It never becomes boring that way.
Does everyone work to a click?
Yeah, of course. Everything’s to a click. This wouldn’t work for all genres of music, but it works for the kind of stuff that I work on, which is the more Alternative, or like the Metalcore and the Hardcore scene, because everything is pretty precision-based.
I was listening to some of your stuff. The rhythm guitars sound very big. Can you talk about the type of layering that you do and how you get those sounds?
It all comes down to the simple fact of just using your ears and making sure that everything that goes into that recording is absolutely 100 percent perfect for every single note that’s being played. And that just takes a lot of attention to detail, a lot of patience, and a lot of good listening techniques: Knowing when something’s flat, when something’s sharp, and not accepting it. We spend a lot of time making sure the guitar performs properly. Sometimes we have to teach people, “There’s a way you play onstage and a way you play your guitar when you’re sitting down, but there’s a certain way that you’ve got to play when you’re in the studio in order to get those chords and those parts to sound in harmony with each other.”
I assume that most of the parts are at least doubled, if not more than that?
Typically, I’ll have one track on the left side and one track on the right side. Most of the time they’re playing the same thing, but sometimes one side will start to break out into harmony while the other continues doing the same thing. Or sometimes, if you have a good writer, they’ll be writing counter rhythms and counter melodies on each side and will play off of that and make it cool. I wouldn’t record two guitars playing the same thing on each side — and I know there’s a lot of people that do that, and they call it “quad tracking.” I don’t like quad tracking because you can’t get your guitars as focused or as loud as you could if you just do two. When you do four, you’ve got so much extra content frequency-wise that you can’t really push them as loud as you want into the mix, because there’s so much more density to it.
I was interested looking at your credits that you not only do engineering and mixing, but mastering as well. That’s pretty unusual.
Yeah, I do the mastering and the mixing at the same time, so I don’t really know what my song would sound like unmastered. I always have it on. I have it set up so there’s kind of a startup chain that I pretty much use all the time, and then as I’m working on the song, if I’ve got too much low end overall, I can reach into my mastering settings and make that little change. But also, when I’m adjusting the mix settings, what you hear is what you get, so I don’t have to wonder whether it will stay at the same volume when it goes through mastering. I don’t have those guessing games. I’m mixing through my mastering chain, so what I hear is exactly what I’m going to get.
So what do you use for a mastering chain? Is it software or hardware based?
I’m all software based. That’s what’s a little bit unique about me: it’s 100-percent in the box. I use very little outboard gear.
Do you use iZotope Ozone?
I use Ozone and a combination of different Waves plug-ins. I’ve got the Waves Horizon Bundle, which has a lot of their really good plug-ins, and I’ve got Studio Classics, which comes with the SSL and API models and the Vintech and all that stuff.
So you have a basic mastering setting that you’re monitoring through as you’re mixing, so you’re hearing it as it’s going to sound?
Yeah. Once you get your gain staging down, you’re pretty much going to end up with the same volume and the same amount of headroom in a mix every time. So I load the preset, kind of. It’s not really a preset, it’s like a starting point. Honestly, the starting point doesn’t really do much; it just loads the plug-ins in the order that I want them to be in, and that’s it. So they all go into the limiter and I’ll bring down the threshold, right, and that’s the first thing I do. So now the song’s pretty loud. Now I’m like, “OK, I can see that now that the song is loud, my bass is jumping out and swallowing stuff.” So then I have the option of going in and messing with the bass, or going back into the compressor settings, or putting in a multiband compressor. It’s just an organic kind of process of molding this clay into something.
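The “bring down the threshold on the limiter” step he describes can be sketched in a few lines of code. This is a deliberately crude illustration, not how Sturgis’s chain (or any real mastering limiter) is implemented: real limiters use look-ahead and smoothed gain envelopes rather than hard clipping, and the function name and numbers here are made up for the example.

```python
import numpy as np

def brickwall_limit(samples, ceiling_db=-1.0, makeup_db=6.0):
    """Crude sketch of limiting for loudness: apply make-up gain
    to push the mix louder, then hard-clip anything that exceeds
    the output ceiling. Illustrative only."""
    gain = 10 ** (makeup_db / 20.0)      # make-up gain, dB -> linear
    ceiling = 10 ** (ceiling_db / 20.0)  # output ceiling, dB -> linear
    boosted = samples * gain
    return np.clip(boosted, -ceiling, ceiling)

# Quiet samples just get louder; the loud peak hits the ceiling.
mix = np.array([0.05, 0.4, -0.9])
out = brickwall_limit(mix)
```

The trade-off he goes on to describe falls out of this directly: the more make-up gain you add, the more material slams into the ceiling, which is why the bass can suddenly “jump out and swallow stuff” once the song is loud.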
I definitely see the point, although there’s a good counterargument that if you’ve been deeply involved in the project all the way through, you may have no perspective left by the time you get to the mastering stage. And also, a mastering engineer theoretically has a better monitoring setup, because it’s dedicated to mastering and they can really hear everything…
I get that, and I think that makes sense for a lot of different kinds of music. But for the kind of music I work on, it seems like everything is more centered around having one guy that can handle it all.
Do you do pretty heavy limiting, and are there “loudness wars” in metal, as well?
Oh, of course. The biggest problem with metal, technically, is that basically everything is a wall of sound: the scream is pretty much a brickwall square wave, the guitar is a brickwall square wave, and then your drums are getting crushed in order to punch through all that crap, and turned into almost a square wave, too. So you’ve got basically a full-on wall of sound, and everything is fighting for a place over top of everything else. And then you’re mastering that, pretty much removing the dynamics from the combination of those things, and coming out with pretty much a block of audio. But that’s the game, man! That’s what people want. You’ll know, because you’ll send it to your client, and if it’s quieter than the others, and has a little more dynamics, they’ll come back and say, “It’s not loud enough.” So it is what it is, and at the end of the day, the person is hiring you to do something, and they’re paying you money, so you’ve got to make them happy. Much as I’d love to end the loudness wars, it’s just not realistic.
So tell us about your plug-in company, Joey Sturgis Tones.
We make audio plug-ins for musicians and producers. We’re kind of the crossover company for people who aren’t necessarily technically inclined, or don’t have any background in recording. So we create products to make their lives easier. But also, they’re really good for people who do this as a profession. It’s not like we’re trying to sell the Easy Bake Oven to hardcore dudes. We’re creating products that cross over between those two. And the next one that’s coming out is called Tone Forge Menace: Tone Forge is the platform, and Menace is the model name. There’s going to be a whole bunch of guitar plug-ins that come out under the Tone Forge name, and Menace is the first of them.
What do you have on the market now?
Gain Reduction is our biggest product and our first product that we put out. It’s a vocal compressor that I designed and it kind of hides all the math and the mumbo jumbo that you get tied up in with other compressors. It’s kind of got like a secret sauce built into it that I designed over a long period of time. I’ve tested it on tons and tons of actual music and songs. I worked with vocalists in the studio over a long period of time, testing out those algorithms and things, and that’s the result. It’s done very well. And we’re actually going to be putting out Gain Reduction Deluxe in the next month or two.
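The idea of a compressor that “hides all the math” can be illustrated with the textbook gain computer that products like this wrap in simpler controls. To be clear, this is a generic static compressor curve, not the actual Gain Reduction algorithm — Sturgis describes his as secret — and the threshold and ratio values are arbitrary examples.

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static gain computer of a generic feed-forward compressor:
    below the threshold the signal passes unchanged; above it,
    output level rises at only 1/ratio of the input rate.
    Returns the gain reduction to apply, in dB (<= 0)."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: no gain reduction
    over_db = level_db - threshold_db
    return -(over_db - over_db / ratio)

# A vocal peak at -6 dB is 12 dB over a -18 dB threshold;
# at 4:1 it should come out 3 dB over, i.e. 9 dB of reduction.
reduction = compressor_gain_db(-6.0)  # -> -9.0
```

A real vocal compressor adds attack/release smoothing, knee shaping and make-up gain on top of this curve; the “secret sauce” in a commercial plug-in is typically in those details, which is exactly the math a simplified interface hides from the user.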
I see that Gain Reduction is only $49. That’s very inexpensive for that type of plug-in.
I feel like we’re trying to come in at a really good price point. We want to make it easy for the guy who only has GarageBand, and wants to show people his songs that he’s writing on his laptop, and he wants it to sound good. I think he can go with us and get there more cost effectively. I’m also using Gain Reduction on all my work. I’m putting out albums that are selling a lot. So I feel like we’re going from the bottom of the barrel to the top.
Do you code your plug-ins yourself?
I did program Gain Reduction myself, and then I brought it to other programmers to make it compatible with Mac and to make it work with Pro Tools, etc., because I work with Windows and Cubase. But now that we’re a lot bigger, I don’t do it myself anymore.
Did you have a programming background, or did you just teach yourself?
When I was in high school, I did a whole bunch of programming on the side. I went to vocational school for it, but they didn’t really teach us much. So it was just kind of like, if you’re into this, here’s a class where you can play around, but in reality, you’re going to have to go home and put some more energy into it. That’s what I did. I made my own little video games and experimented with creating my own Windows applications to make my life easier, like batch renaming files and stuff like that. And then I got into music production and kind of forgot about all that stuff for a number of years, but then when I came back to playing around with computer science type things, my friends were like, “You used to make your own programs, I bet you can do it.” I was like, “No, it’s going to be too hard. Making something compatible with all those different formats and different computers would be a nightmare.” Until one day I was like, “You know, I’m going to try and figure it out.” And that was the start of everything.
[Opening photo by Michael Palaez]