Talk about an impressive career—Ed Cherney has produced or engineered a veritable who’s who of iconic musicians including Bonnie Raitt, Eric Clapton, Iggy Pop, The Rolling Stones, Buddy Guy, Bob Dylan, Ry Cooder and countless others.
Cherney has also engineered and mixed music for movies like The Hobbit, Coming to America and The Bourne Legacy. He’s earned a shelf full of awards over the years, including three Grammys, an Emmy, and five TEC awards.
Recently, he was featured in a Mix with the Masters video tutorial series, sharing the techniques he used mixing the recent album by The Rides, featuring Stephen Stills, Kenny Wayne Shepherd, and Barry Goldberg.
Audiofanzine was fortunate enough to get a chance to speak with Cherney recently. The interview covered topics such as how he got his start, his mixing techniques, his thoughts on plug-ins that emulate hardware and much more.
You’ve been doing this a long time.
My whole adult life.
How did you get your start?
I got out of college and I thought I was going to go to law school. I’d been a musician, but never thought of being a professional one. But I had some friends that had a band, and they were going on the road, and they asked me if I would drive the truck and roadie for them. And I said, “Hell yeah.” And I’d always been an AV guy, and always had a great deal of interest in stereos and gear and that stuff. But they asked me if I wanted to go on the road, and I said I did. And I started driving their truck and humping their gear, and then a little while into it, the soundman didn’t show up, and I ended up mixing the band. Not really well, [laughs] but I enjoyed it, and I realized I had a natural ability to balance music, for some reason, especially vocals. They went into a studio later in the summer and invited me down, I’d never been in a studio. I walked in and a bell went off down in the middle of my soul. “Oh, oh wait a minute—OH THIS!”
To hell with law school! [Laughs]
To hell with everything, this is what I need to do. This is what I’m going to do. And eventually, it took three years, but instead of going to law school I went to electronics school. I didn’t want to be a tech or anything like that, but I certainly wanted to know how things work. And I eventually got a job as an apprentice at a studio in Chicago. And it was a real apprenticeship.
What they’d call an “intern” today?
I got paid. I got like $2 an hour, and I don’t think I went home for three years. It started out cleaning bathrooms, cleaning the headphones, running for food, and more cleaning. But they started bringing me along. I started assisting the assistant, setting up, and learning how to make dubs. That’s how they did it, a lot of studios were doing jingles during the day and records at night. I was learning how to use the dub machine and make transfers. And how to listen critically. And the guys who were the engineers, a lot of times were the guys that invented the gear you were working on—they built the consoles and helped design the tape machines. So, I started sitting behind these guys, and it was a true apprenticeship. I remember we had classes, small classes. And I remember, if you couldn’t get a lunch order right, you weren’t moving up to the next thing. I spent years doing that, and then worked my way up to being an assistant. Then I heard a Boston record, and my girlfriend moved out to California. So, I went through the Billboard directory backwards, and started at Westlake Audio and got a job at Westlake Studios in Los Angeles, as an assistant. And I think one of my first gigs was assisting Bruce Swedien and Quincy Jones on Michael Jackson’s Off the Wall record. And Bruce had been a mentor of mine and a friend in Chicago, and I ended up working with Bruce and Quincy for about eight years.
Not a bad pair to learn from.
Ya think! [Laughs] Yeah, that’s like getting a PhD. [Laughs] And you get to learn about a lot of things, but you still got to spend your time sitting behind the console being the “cat” and not the assistant, and making stuff work. Theoretically, you know what’s supposed to happen, what a great record is supposed to sound like and feel like, but actually doing it is another thing.
Do you think the technical background you got from going to school was helpful?
In a way.
Do you think that’s true today? Does it help having a tech background?
Yeah. You should know algebra. You should know the speed of sound, I guess. [Laughs] I don’t know, there are a lot of skills that you need, and it’s absolutely a problem-solving gig. First of all, you need good mental health if you’re going to be successful at it. Unless you’re in a room by yourself doing things, then you can be as crazy as you want to be. But if you’re going to collaborate and work with other people, you need good mental health. You need to be able to relate to other people. You need to know how to make people feel good, how to negotiate, how to compromise, things like that. And how to inspire other people, and get your own ego out of the way. You need those kinds of people skills. And then you need computer skills obviously, and musical skills, and then there are things like taste and talent. And there’s no guarantee that if you go through all the steps and go to all the schools and do everything right, you’re going to be successful at it—that you’re going to have a career at it.
Talk a little about your recent tutorial video series.
Mix with the Masters has a studio called La Fabrique in Provence in France, where they give seminars. Weeklong seminars. They’ve been doing them for probably ten years. And they bring in guys like Manny Marroquin, and Al Schmitt, Joe Chiccarelli, Tony Maserati, and Michael Brauer. They bring in like 10, 15, or 20 people, who come in from all over the world and spend a week doing the seminar. They record a band and mix it. And there are a lot of components to it, and it’s a pretty good hang. I was approached a while ago to do one of those, but for some reason I never did. But in the meantime, they’ve been doing a series of videos, and they asked me to do one.
How many parts is your video broken into?
I think it’s seven segments. They chopped it up into seven segments. We did a whole day. It took us about eight or nine hours to do it.
Did you have to script it out at all, or did you do it off the top of your head?
I kind of did it off the top of my head. I kind of knew what I was going to do, and put the song up, and started doing it. It was “jazz.” [Laughs]
One of the techniques you talked about, which was very cool, was how you had a guitar to one side of the mix, and you put a mono reverb and panned it to the other side to sort of get the guitar to feel more natural in the room. Is that a pretty common technique that you use?
You know what, I think it is. It goes back a long way and I stole it from a guy named Val Garay. Val was Peter Asher’s engineer, who did all these great records for Linda Ronstadt and James Taylor. And I remember hearing those records and I loved how they sounded. And that was something he did years ago, and it was probably a pretty common way of doing things. But over the ages, it kind of got lost a little bit, though it was something I kept doing. Sometimes you can hear things too well, and if I had a guitar or instrument panned all the way to one side, it was kind of cool, but sometimes it didn’t blend with the rest of the music. I just try to place everybody so they exist in a space together, so that it’s happening together and everyone is playing in the same kind of space. I think these days, when I use reverb, I’m using more discrete things. And you change techniques over the years. Actually, you change your techniques from record to record and even song to song.
When using this technique, do you try to match the mono reverb’s positioning to the track on the opposite side?
It seems to work best when they’re panned hard.
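The idea is simple enough to sketch in code. Here’s a minimal Python illustration, not anything from the interview: the single feedback comb filter stands in crudely for a real reverb, and all the numbers are arbitrary. The dry guitar goes hard left, a mono reverb return of it goes hard right, and the two are summed into the stereo mix.

```python
# Sketch: dry guitar panned hard left, a mono "reverb" return panned hard right.
# The feedback comb filter below is a crude stand-in for a real reverb.

def comb_reverb(mono, delay_samples=2205, feedback=0.5):
    """Very rough mono 'reverb': a single feedback comb filter."""
    out = [0.0] * len(mono)
    for i, x in enumerate(mono):
        echo = out[i - delay_samples] if i >= delay_samples else 0.0
        out[i] = x + feedback * echo
    return out

def pan_hard(mono, side):
    """Return (left, right) with the mono signal panned fully to one side."""
    silence = [0.0] * len(mono)
    return (mono, silence) if side == "L" else (silence, mono)

def mix(*stereo_tracks):
    """Sum stereo (left, right) pairs into one stereo mix."""
    n = len(stereo_tracks[0][0])
    left = [sum(t[0][i] for t in stereo_tracks) for i in range(n)]
    right = [sum(t[1][i] for t in stereo_tracks) for i in range(n)]
    return left, right

guitar = [1.0] + [0.0] * 9999          # a click, standing in for the dry track
wet = comb_reverb(guitar)

dry_lr = pan_hard(guitar, "L")         # guitar hard left...
wet_lr = pan_hard(wet, "R")            # ...its mono reverb hard right
left, right = mix(dry_lr, wet_lr)
```

The point of the sketch is the routing, not the reverb: the dry signal lives entirely in one channel and its ambience entirely in the other, which is what lets the instrument sit wide without feeling detached from the room.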
I assume you’ve done that with delay, too?
Absolutely. That’s a great way to get depth and to get things to stand out. And the amount of delay has everything to do with it. You can get things more 3D, almost kind of surround, using two speakers and using delays being thrown to different sides.
You keep them kind of subtle, so the listener is not going, “Oh, delay,” they’re just feeling the space.
Unless it’s a real effect you’re trying to do. If I’m trying to create a sense of space and a soundstage, I’ll be fairly subtle with it. You’re not necessarily hearing it as a delay, but you’re feeling it as a sense of depth. That they’re in a space, that there’s height and there’s width and there’s depth.
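The delay version can be sketched the same way. In this Python illustration the times and levels are my own arbitrary choices, not numbers from the interview: a single quiet, short echo panned opposite the dry signal tends to read as depth rather than as an audible repeat.

```python
# Sketch: dry signal on one side, a single quiet echo on the other.
# A short delay at low level tends to read as space rather than as a
# distinct "slap"; the exact values are to taste, not a recipe.

SAMPLE_RATE = 44100

def echo(mono, delay_ms, level):
    """The input delayed by delay_ms and scaled by level, same length out."""
    n = int(SAMPLE_RATE * delay_ms / 1000)
    delayed = [0.0] * n + [level * x for x in mono]
    return delayed[:len(mono)]

dry = [1.0] + [0.0] * (SAMPLE_RATE - 1)     # a click standing in for the track

left = dry                                  # dry signal hard left
right = echo(dry, delay_ms=80, level=0.35)  # quiet echo hard right
```

Varying `delay_ms` per instrument is one way to place sources at different apparent distances, which is the “almost kind of surround” effect described above.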
These days, you’re mixing for so many different types of systems. Do you ever check mixes in MP3 format, or by trying to hear how things are going to translate to different listening environments?
Yeah. I never thought I’d do this, but I’ll email myself a mix and play it back off my iPhone. Sitting in the car or whatever. And I’ll listen through the iPhone speakers, and I’ll listen through some shitty transducers, and I’ll listen through some good headphones, and I’ll even plug the iPhone through some big speakers and listen to the MP3. So absolutely, it’s your fiduciary responsibility to listen in all those kinds of ways, and find a way that it works in every format.
It’s interesting though, a really good mix seems to sound good everywhere.
That’s exactly right—it sounds good in mono, it sounds good everywhere.
When you think about it theoretically, that doesn’t totally make sense, because iPhone speakers or earbuds are going to have a very different frequency response than good quality speakers or fancy headphones. Yet somehow there’s this magical thing where, when you get a mix really right, it does translate.
Yeah. That’s always been the case. For example, trying to mix a tambourine or a cowbell or a triangle or some percussion instrument—a shaker, or the bass guitar, or like a Minimoog in some places. The bass is too loud, the tambo’s too loud, the tambo’s too soft. I can’t hear the triangle. It’s too loud. It’s still that kind of thing, and those balances will change depending on the quality of the speakers you’re listening to. But still, if it’s a great mix, it’s a great mix and it stands up. I think there’s an element of magic involved. A great performance transcends just about everything. On the other hand, a great recording and a great mix of a shitty performance is still a shitty performance. But if it’s a great performance and a great song, you can hear it through a kazoo, and it’s going to sound great. And it’s going to move you, the music is going to talk to you.
What do you notice is the most problematic aspect of the home-recorded tracks you hear?
I have my own mix room at the Village [in Los Angeles]. It’s digital-based, but I’ve got a console and a lot of good gear. A lot of times I get stuff to mix that’s been recorded at home. I had a vision of what hell was: sitting up in this room for perpetuity, for all time, while people stick drives under the door and I put them up. And there’s a style of recording that I call—and it’s a paradox—hard and dark.
A lot of times, people record the instruments individually, rather than at the same time. Maybe not even a whole drum kit. Maybe it starts with a kick and a snare. Or some kind of percussion. Or some kind of loop. But because it’s being done separately, a lot of times they’re trying to make every instrument too loud, and too compressed and too EQed. And in the act of doing it, it sounds all muffled and at the same time it’s so bright that it curdles your eyeballs.
Are you talking about how the tracks were recorded or how they were trying to mix them?
I’m talking about the whole thing. Hard and dark. It’s muffled and too bright at the same time, and that’s always the result of being over processed, way over processed.
I would assume you ask people not to print compression and EQ on tracks they’re sending you to mix?
Not so much. I don’t know what I ask them to do. Especially with so many people working on it, everything doesn’t have to be loud all the time. Pop music is different, but with country and country rock and rock music, in particular, when everyone is playing in the same room, the way to record it is to leave room for everyone, and everyone plays with dynamics. They get louder, they get softer, they get out of the way of the vocal in the verse, they come up in the chorus.
Like during a live performance.
Exactly. That dynamic exists in the recording. And when you’re recording everyone separately at a different time, it’s hard to get those things. As a result, the recording, a lot of times is “just make it loud.” Because if I’m hearing just this guitar, then this is going to be the greatest guitar sound in the world. I’m compressing the hell out of it. I’m over EQing it, I’m double compressing it. So, boy, it’s loud, it’s in your face, but it doesn’t make any sense in context with the rest of the song and the rest of the arrangement.
What would you recommend that people do if they don’t have the facility for recording multiple musicians together?
Let things breathe. You don’t have to use every piece of gear and every plug-in on every instrument that you record.
The subject of plug-ins emulating vintage hardware processors is one in which there are a lot of varying opinions. What’s yours?
I think they’re doing a really great job of it, emulating Fairchilds and 1176s, and Neve modules. Universal Audio in particular, is doing a spectacular job with that.
It’s funny, because for a lot of people, myself included, we never had the original hardware, or heard it very much or at all, so it’s hard to judge whether the plug-ins are doing an accurate emulation.
I understand, and I’m wondering for a generation of people, does it even matter?
It does when someone tells you, “Oh, you’ve got to use this plug-in on that.”
The 1176—because Phil Spector couldn’t make a record without twenty 1176s, and actually, they were really great tools. And you could smooth out a performance, get a bass to stand up, to just sit there just right. Same with guitars and vocals. And they brought us something. A certain kind of harmonic content, even a little bit of harmonic distortion. It was very pleasing.
Do you ever use harmonic distortion plug-ins to give a little extra to a track?
All the time.
What are some of your favorites?
SPL does a great job. They’ve got Transient Designer and they’ve got Twin Tube, which I really like and use all the time.
It’s interesting how just a little bit of subtle distortion can really change the way something feels.
Yeah. It goes a long way to just fill the speaker and make it sound pleasing.
When you’re doing a mix, do you always know when it’s done, or is it one of those things where you have to listen the next day and decide?
Or I just abandon it. [Laughs] Wasn’t it Picasso who said, when someone asked him, “How do you know when you’re done with a piece of art?”—“I’m never done, I just abandon it.” [Laughs] It’s hard to know when you’re done. And especially with mixing, you may be born with the desire and the innate ability to hear and perceive music in a way that lets you do it, but actually doing it takes a lot of trial and error, and a lot of time to figure out how. You make a lot of mistakes to find out what works. You’ve got to mix a thousand songs before you start getting the hang of it. And it also helps having some success, making some things that are hits and get on the radio, and get on people’s playlists and people’s streams. But a big part of the process of mixing, and it was always that way, is “going by it.”
You mean when you continue on with a mix, not realizing you’ve already done your best version?
Yeah. Going and mixing and mixing. Mix 12, mix 13, mix 14. You think you’ve got it and then the next day you come in and you listen to mix 3 and go, “Oh, mix 3 is really good.” [Laughs] Because we start working so hard to, “You know that little guitar lick I played over there, I want to hear that.” “And that little bass line.” “And that little drum paradiddle that I did over there.” And you start hearing all those things, but what happens is the mix starts getting smoother and smoother and more homogenized. And any part of what I call “God damn joy” disappears. So absolutely, part of mixing is going by the great mix. A lot of times in performance too, you can go by it. Guitar parts in a solo. “Oh, go back to that first one that you played. That’s the one that sounds great.”
As a guitar player, I typically think I can record a better solo if I keep doing takes, but usually it’s one of the early ones that’s best. It’s hard to realize that at the time, though.
It’s funny, I’ve heard musicians who really know say, when the producer asks, “Can you play that again?”: “Yeah, I could, but then I’ll have to learn the song and it won’t be getting better.” And a lot of times, when musicians came in and played something before they really learned the song and the changes, it was just better, and there was just more goddamned joy there.
You feel like the same thing happens with mixing.
Do you have a system for saving in a certain way as you go along so that you have those other mixes?
I do, I save everything. I just call them Print 1, Print 2, Print 3, or whatever it is. And then usually, I’ve got a yellow pad with lots of notes on it, what I did. But it’s important to label the mixes you send out clearly, because you’re going to get into a thing, especially when you have a band: “Oh, I like that mix you sent me.” “Which one?” “Well, uh—you sent it two days ago.” Or, “Yeah, the one you sent me a week ago, that’s the one I really like!” It’s really important that all the mixes you send are clearly named, in the file and in the mix.
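That labeling discipline is easy to automate. A small Python sketch—the naming convention here is my own illustration, not Cherney’s workflow—stamps every bounce with the song, the print number, and the date, so “the one you sent a week ago” is identifiable at a glance:

```python
# Sketch: unambiguous names for mix prints. The song_printNN_date.wav
# convention is illustrative, not a quoted workflow.
from datetime import date

def print_name(song, print_number, when=None):
    """Build a filename like My_Song_print03_2016-05-09.wav."""
    when = when or date.today()
    song_part = song.replace(" ", "_")
    return f"{song_part}_print{print_number:02d}_{when.isoformat()}.wav"

name = print_name("My Song", 3, date(2016, 5, 9))
# → "My_Song_print03_2016-05-09.wav"
```

Embedding the date in the name, rather than relying on file timestamps, keeps the label intact when mixes are emailed or copied between drives.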
What was the hardest mix that you ever had to deal with?
They’re all hard, although not as much for me now. Here’s the paradox of it: It’s easier to become a great engineer when you’re working with great musicians that sound great and play great parts. And writers and artists that are doing great songs. And when you’re starting out, you don’t get to work with those people. So, you’re working with people who aren’t as good. A bass player whose E-string is 10 dB louder than everything else, and you’re getting more clicks and fret noise than you are song. A guitar player who sometimes sounds like he’s wrestling a cat, and the cat’s winning. And a drummer who’s always on the cymbals, not leaving an ounce of room for anything else, anywhere.
Right, I hear you.
So, with a lot of mixes that I was trying to get to sound like Michael Jackson, or the other records I’d been an assistant on, there was no way in hell that I was going to do it. I’d sit there for a week trying to wrestle this thing. How do I get this bass player to sound like Louis Johnson or Abe Laboriel? You know. How do I get this drummer to sound like Jim Keltner?
In other words: Garbage in, garbage out.
That’s right. And how do I get these shitty songs to make me want to weep or make me feel something? That’s always hard. But there’s a song Bonnie Raitt recorded called “I Can’t Make You Love Me.” I remember that being a particularly hard thing for me to get, and I tried to mix it a bunch of times until I finally nailed it, and I think I got it at like 4:00 AM, and it was raining. And it’s one of the saddest goddamn songs in the world, anyway. And one of the greatest ballad performances that you’ll ever hear an artist do.
Thanks very much for taking the time to talk to us.