David Tolomei is one of the top mix engineers and producers on the indie and underground scenes. He cut his teeth as an assistant at New York's Avatar studios, which, at the time, was one of the most prestigious commercial facilities in the country. He later opened a studio in NYC's East Village, where he recorded, produced, and mixed client projects for almost eight years. These days he mainly works at his own studio in Los Angeles. Tolomei’s client list includes artists such as Dirty Projectors, Beach House, Future Islands, John Cale, Miya Folick, PIXX, Torres and Half Waif.
Although his large-studio background gave him quite a bit of experience with outboard gear, he now mixes almost entirely in the box, and is known for his prodigious use of processing and effects on his projects. Audiofanzine talked to him recently about that and much more.
Most of your work is with indie and underground artists, right?
Yeah. I mean, that’s where I’m at, and a lot of what I listen to. And I think that moving forward, there’s a lot more potential in the industry for some of these things to cross over. Certain things, like Half Waif, I feel can fall into that territory. They’re not pop to the point of not having the respect of people that are looking for something unique and challenging. It’s really original music. But at the same time, I don’t feel that it falls into a particular age group or demographic. I think a lot of people are really going to enjoy it.
You did the recent Dirty Projectors album?
The previous album that came out, the self-titled album from this year, and that’s one of those situations where Dave Longstreth [Dirty Projectors front person] is extremely hands-on. He’s an absolute genius and has a very clear idea of what he wants. We spent a couple of weeks mixing, and he had his hands in it quite a bit.
You’re currently mixing Half Waif’s album?
Are you technically the producer on that project, or just mixing?
I feel like that’s becoming more of a complicated question as time goes on. These days, projects are happening more and more often as kind of an amalgam of different roles. What ended up happening is the band worked quite a bit on their own, doing what I consider electronic production, probably months of writing and putting things together in Ableton. And then when we got in the studio, I wore the producer hat, because I have the studio experience. And we collaborated on it from that point on. So I would consider it a co-production type role. And then when it comes to mixing, I tend to have a lot of ideas. Mixing is probably the arena where I feel most comfortable and creative. I really enjoy how far you can take the track in that stage with today’s technology. I learned from a lot of old-school guys that in the '60s and '70s, your job was to not mess it up. You’d balance things, pull up a plate, do some rides, and you were done. Mixing wasn’t at all what it is today.
How do you make your mixes memorable? What kinds of things do you do that are unusual?
I’m a believer that while times change and technology changes, if you aren’t making the most of those changes, then you’ll have a habit of falling behind. So, mixing a lot in the box, and the speed of computers today, has allowed us to be able to automate absolutely everything to our heart’s content. And so I have a mixing style where I like for there to be a lot of contrast within a song, between changes in the arrangement.
Do you mean changes from one song section to the next?
Exactly. Depending on the genre, if it works well for the composition of the track, I often have an incredible amount of automation changing between parts. The vocal sound completely changes, the compression changes, the EQ on the drums changes, the reverb changes. Everything changes. Sometimes it’s gradual, sometimes it’s abrupt. When I used to work on consoles, that was very difficult to do and limited. You could certainly turn a reverb up or down, but even that’s limited depending on the console. I worked on a G Series for seven or eight years, and you can’t automate the sends, you can only automate the returns, so to automate a send you’d have to duplicate your track to another channel and devote that large fader just as a send, but then you’re eating up another whole channel on the console. These days, in the box, it’s so easy and so much faster. I ended up deciding, instead of doing quicker mixes, why not devote that new time to doing more complex mixes. Why not still spend the same amount of time, but see if I can take the song to a different place. I’m finding that more and more, people are starting to expect that from me, and I really enjoy that challenge. It makes every mix different. You can’t really just dial in a drum sound and be like, “OK, drums are done.” Then if you change the vocal the drums don’t work, you’ve got to change the drums, and before you know it, you’re essentially doing like five mixes within every mix.
Because you’re doing so much changing from section to section?
And what about the transitions between sections? I’ve talked to mixers who are very focused on those.
Yes. And also in those transitions, you’re making the judgment about whether you want that to be a big moment that grabs the listener’s attention, or whether you want it to just kind of wash over you. I think that’s part of where my sensibilities as a producer come into play as a mixer. Often, I get these projects that were self-produced, and I’m like, “Wow, the basic melody is a hit song, but when the chorus happens it’s just kind of an ‘is this it?’ situation.” And I feel like now, in the mix stage, if you’ve got enough time and energy, you can do a lot of the same stuff that you would otherwise do if you were tracking a band and were like, “Alright, we’ve got to make this a big moment. We’re gonna quadruple-track the guitars, and we’re going to change up the drum sound for the chorus,” and so forth. I feel like now, in the mix stage, if you’re willing to put the time in, you can take things very far.
Do you work in Pro Tools?
Yes, I work in Pro Tools. Right now, I’m in the box, though there are times when I use a hybrid setup, which is certainly my preferred method.
Your hybrid setup includes a summing amp?
Yeah, like outboard gear and summing amp, still doing a lot of the heavy lifting in the box. I definitely work faster completely in the box, but it’s really nice to have the option of using equipment. Certain sounds are harder to get in the box. I wouldn’t say impossible, anymore. There’s certain gear where you just patch it in, and you’re like, “There it is.” Whereas in the box it’s like, “Let me run these eight plug-ins and tweak them all to make them sound a little bit like this analog box, particularly tubes.”
What do you think about the emulation plug-ins? Since you started your career working at a big studio like Avatar, you’re probably familiar with the original hardware that’s now getting modeled.
Yeah. I’ve been around a lot of the original hardware, and own quite a bit of it, and my take is that where plug-ins are now, the emulations are missing the magic of some of that equipment. A great example would be a Neve 31102 plug-in. It’s going to have a very similar curve. It will have the same frequency points, but there’s a certain magic about going through those transformers that I don’t think you get out of the plug-in. Honestly, I think for a lot of these people, some of it is just to make things interesting. You get tired of looking at the same plug-ins, and it’s fun to have all these different, nice-looking pictures. But, for me, working in the box, I use a lot of surgical EQs, and I don’t find myself using EQs to color things, because there’s already a whole bundle of plug-ins for dealing with harmonic distortion and modulation. So I find myself compartmentalizing and being a little more technical with it, where I’m like, “OK, stage one: I’m going to EQ this thing to be exactly what I want it to be, but I’ll use a surgical, sterile EQ.” If I want to color it, that’s the next step. That’s something else that I’ll do with a different plug-in. I like the coloration that I get from those emulators, especially when I first try them out. But then, in a blind test against my mixes from before I got that plug-in, I don’t really notice much difference. With hardware, if you buy yourself, say, an AMS DMX delay or some other piece of hardware that sounds very specific, you can literally go back in your catalog and be like, “Oh, that’s the day I got the AMS.” Your mixes sound really different. But with plug-ins, I don’t know if I can say that. Maybe with reverb. I think that plug-in reverbs and delays make a night-and-day difference, but not compressors and EQs; those seem more subtle to me, at least in the way I implement them.
I gather you use quite a few plug-ins.
I think a lot of newer people that would see my sessions, like young kids that grew up in the digital era, would not be at all surprised by how many plug-ins I’m running. But I think some of the older-school folks that transitioned into being in the box would be like, “Oh, you don’t need that many plug-ins, that’s too complicated.” Take my vocal setup, for example: I have ten different sends that are dedicated just to vocals. I have two different chambers, one hall, two different plates, a ping-pong delay, and then a couple of really crazy throws, you know, things to throw a word to get a really weird effect. I don’t necessarily use them all at once, but I like having them as options, ready to go. Maybe I have a bridge and I want it to sound different; instead of having to create this new thing, I already have my palette ready to go.
What reverb plug-ins do you use a lot?
I really like the Valhalla reverbs. I really like Waves H-Reverb, which is a real hog for processing, but it’s a really good one. I like Ultra-Reverb by Eventide.
What about UAD?
I do love UAD. And I do think that UAD stuff sounds incredible. But I don’t like being tethered to hardware for my software; that’s just frustrating. If I’m in a studio that has UAD, I absolutely use it and love it. But I try not to depend too heavily on it because of my love of being in the box and being able to do it anywhere. If I get a call for a revision and I’m on a plane, I can open up my laptop, and I’m just like, “No problem, let me pull it up.” The more hardware you depend on, the more you’re getting into the territory of, “Why are you in the box?”
What about compressors? Do you have a favorite plug-in compressor, or do you use a lot of them?
It’s definitely a lot, but believe it or not I rely really heavily on R-Comp [Waves Renaissance Compressor]. I feel like it’s really flexible and there’s something about it that kind of sounds pop.
The way that the automatic release works allows you to get quite a bit of compression and makes things sound really hyped, and I’ve just been working with it for a lot of years. I kind of feel like when a new thing comes out, I end up using that also, but it doesn’t end up replacing R-Comp for me. I use a lot of hard limiting, too, which is another one of those things that is a somewhat more modern concept. The mixers of the '70s didn’t think about it as much. Are you trying to sculpt the sound? Or are you trying to make it louder? Or are you trying to affect the depth from front to rear? Or all three of those things? These days, a lot of times, you get these files that are like, “This sounds great, but I’ve got to pull it forward.” And limiting is so great for that, where you don’t really want it to sound that different, but you want to be able to put reverb on it and still have it in your face. And limiting the hell out of something and throwing effects on it is a very specific-sounding thing that I play with a lot.
You’re not using a brick-wall limiter to avoid going over?
No, I might be limiting something 12 dB, but it’s quiet in the mix. I’m limiting it just to get the sound, just to get it to do what I’m trying to get it to do. Maybe I’ve got the limiter at the very end of the chain. I might have seven or eight plug-ins behind it. Sometimes I’ve got ten plug-ins running on a track and then I’ll bus to an aux to put a couple more plug-ins on it. [Laughs]
You must have a serious computer to handle all this.
Yes, I do. And there is talk of computers really coming a long way next year. I think they resolved some thermal issues that prevented computers from going beyond current processing levels, but they’re talking about 28-core machines now.
And they think that it’s about a year away from being on the consumer market.
Great, now we’re all going to have to buy new computers.
Thanks a lot. [Laughs]
But I think that’s kind of an interesting dialogue to be having: how technology changes a creative market. You know, over time, faster computers create faster technology, and then that faster technology ends up creating different art. So, something that seems kind of geeky and not that interesting, like, “Dude, 28 cores,” over a period of five to ten years, can really impact a generation.
What would having a super-duper fast computer do? I guess it would allow people to do a lot more manipulation.
Right, I think we’d be ready for a different audio format, probably higher bit depths. You know, there was all this hype around higher sample rates, and I don’t think that the difference is night and day.
It’s definitely not.
I do think going into higher bit depths is significant. You know, when someone sends me files in 16-bit, just because they didn’t know any better, it’s really a struggle to get top mixes out of everything running at 16. And 32-bit floating-point isn’t really that impressive to me, either, but I think if we had some new audio formats, that could be a productive change, maybe just higher-resolution plug-ins in general. A lot of these plug-ins, like a really complex reverb, for example, can use up 30% of my CPU. I think it’s possible that a lot of these plug-in manufacturers have these big ideas, but they realize it’s not time, yet. They can’t be implemented with the technology we have today. I think the most interesting stuff is the stuff that I can’t even imagine because it’s not my field, for example the people that spend the same amount of time that I devote to mixing coming up with these incredible new ideas in tech. What will they be able to do with computers that fast?
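As a rough, back-of-the-envelope illustration of why bit depth matters (the figures below are standard fixed-point PCM math, not from the interview): each bit adds roughly 6 dB of theoretical dynamic range, which is why 16-bit files leave so much less room to work with than 24-bit ones.

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of fixed-point PCM at a given bit depth."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB for 16-bit
print(round(dynamic_range_db(24), 1))  # ~144.5 dB for 24-bit
```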
Besides limiting, what else do you do to make things move in that direction, or forward or back?
EQ is definitely a big part of it and automating things. For example, pre-delay on reverb. Depending on how long the reverb is, it can make a really noticeable difference in perceived depth.
Explain that a little, because the whole pre-delay thing is one of those things that a lot of people, myself included, find a little mystifying. I mean, I understand how it works, but I don’t understand how to use it. What is your technique for applying pre-delay in your reverb?
Right, it has to do with the way our minds work. Let’s say you’re standing in a gigantic church, and the nearest wall is 100 feet away. If you belt out a big note, in that instant before the first reflection comes back, your brain has no idea how big the room is. And by the time that first reflection comes back, your brain does all these calculations and says, “Okay, not only does the sound have this long tail of reflections, but it also took this long to get back to me, so I must be in a pretty large space.” It’s very confusing, then, to have fake reverb with this very long tail that’s like a cavern, maybe an 8-second tail, but that starts immediately, the second the sound starts. So I do use that as a creative tool sometimes. But there are also times where I want the vocal to fit in the mix, but sound dry. If you actually take a dry studio vocal done in a booth, and crank it in a mix, to try and get that dry vocal sound, it just sounds like you messed up. Like you muted the effects or something. But if you swim it in effects to try and get it to fit, it ends up sounding really wet, and maybe far away. If you kind of play with delays, and play with sending those delays to your reverbs, and then play with pre-delay, and tweak it all just right, then you get this vocal sound where, if you solo it, you think, “Wow, that’s a wet vocal!” But if you un-solo it, it just fits.
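The room-size intuition described above can be put into rough numbers. A minimal sketch, assuming a speed of sound of about 1125 feet per second and the singer and listener standing at the same spot (the function name is just for illustration):

```python
# Estimate the pre-delay that corresponds to a given room size.
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approximate, at room temperature

def predelay_ms(wall_distance_ft: float) -> float:
    """Time in ms for a first reflection to reach the nearest wall and
    come back, with source and listener at the same position."""
    round_trip_ft = 2 * wall_distance_ft
    return round_trip_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

# The 100-foot church wall from the example: the first reflection
# arrives roughly 178 ms after the direct sound, which is the gap a
# reverb's pre-delay control is modeling.
print(round(predelay_ms(100), 1))  # ~177.8
```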
It sounds really in-your-face and forward. And then also the relationship of that with your limiting and your compression. Sometimes little kisses of distortion can bring things further up in the mix, as well. So that’s something that, depending on the genre, sometimes I play with. A great example is a bass. A really smooth, warm bass can sound kind of far away like you’re bathing in this rich low end. But maybe in a particular section you want to notice that bass, you know, you want to be able to pick it out in the soundstage and have it right in your face. Just a little bit of distortion is all it takes sometimes. It’s not like you’re going, “Oh, that’s a distorted bass.” It’s there subtly.
Right, yeah, just a little overdrive.
In the analog days, you may not have known that you were doing it, but anytime you were lighting up those meters, that’s what you were doing. One of the changes in workflow in the box is that pushing your headroom is not an option to color your sound. Because, when you start to do so, it’s ones and zeros. It’s good, and then it’s bad. So you have to use plug-ins to get that saturation. And that’s also part of how you get in these big plug-in counts. Maybe I want to hit this plug-in really lightly, to get a really sterile, clean sound. But maybe there’s another plug-in that I want to hit really hard, all on the same channel. Gain-staging is just as important in the digital domain as it was in the analog world. You might have very different reasons for doing it, but it’s still essential.
With a compressor, when you hit the input harder, the compression will be more pronounced. Do you find that with other types of plug-ins as well?
Yeah. So many of these plug-ins today are modeled to saturate in one way or another. A great example would be [Soundtoys] EchoBoy, which is something I use a lot. If you’re using EchoBoy with the return fader at zero, and you do your whole mix that way, it sounds very different than if you get those same sounds but with your return turned down 6 or 12 dB. Then you start playing with the saturation knob as well; it’s all gain-staging. The delay may be coming back at the same level, but you’re hitting the input differently to make up for the fact that your return fader is turned down.
And that works on a per-plug-in basis, as well. You know, some of these plug-ins are just clean, clean, clean, and you can play with gain-staging all you want, and it’s not going to go anywhere. But a lot of them have some kind of saturation written into them, particularly the harmonic plug-ins, for example Decapitator and Radiator from Soundtoys. FabFilter Saturn is another cool one, as is [Crane Song] Phoenix. If you stick one of those plug-ins right in the middle of your chain, minor changes that you make before and after it can have a big effect.
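The fader/input trade described here is just decibel arithmetic. A minimal sketch (the helper is hypothetical, not a plug-in API): turning a return down 6 dB roughly halves the amplitude, and driving the input up by the same amount keeps the net level constant while the saturation stage gets hit harder.

```python
def db_to_gain(db: float) -> float:
    """Convert a dB change to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

fader_cut = db_to_gain(-6.0)    # ~0.5: return fader pulled down 6 dB
input_boost = db_to_gain(+6.0)  # ~2.0: compensating input drive

# Net level is unchanged, but the signal inside the plug-in is ~2x hotter,
# so any saturation stage reacts very differently.
print(round(fader_cut, 3), round(input_boost, 3))
```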
Do you use a lot of plug-ins on the master bus, too?
I do, I do, a lot.
Why is that not a surprise? [Laughs]
I find that it’s not mandatory, you know. In fact, I would say to those new mixers out there: You’re better off not running a 2-bus chain if you don’t know exactly what it’s doing for you. You need to be able to turn on a plug-in on your 2-bus chain, and know exactly why you have it on. With that said, if you’re mindful about what you’re putting on it, a 2-bus chain can significantly cut down on your hours. Let’s say you’ve got a rough mix up, and you’re thinking that you want the whole thing to be compressed more, to be limited more, to have some parallel stuff going on, to be saturated a little bit, and be a little bit brighter. You can just slap all that right on the 2-bus, and you just saved yourself two hours.
Thanks very much, David. Very cool stuff!