Whether it's producing, tracking, mixing or mastering—Bob Power has done it. He's been around long enough to have seen the changeover from analog to digital, the demise of the record industry, the rise of the internet—you name it. He made his mark in the business with his engineering and mixing for the influential hip-hop group A Tribe Called Quest, as well as his production and mixing work with R&B artists like Erykah Badu, D'Angelo, Me'Shell Ndegéocello, The Roots, and Chaka Khan; and he's stayed busy ever since. Nowadays he also teaches music production at the prestigious Clive Davis Institute of Recorded Music at NYU.
Power has never been shy about stating his opinions. When we caught up with him recently, he had plenty of insightful observations about music production and music technology.
You got a lot of recognition for your recording and mixing work with A Tribe Called Quest in the 1990s. What was it about their music that was so groundbreaking and unusual?
What was so special about their music is the same thing that everybody hears: very sophisticated musical constructions, for that time. They were mostly using samples, which was not that common back then, in part because the technology wouldn’t support it. Sampling technology was not that developed at that point.
Was that still the era of the Akai S900? That was 12-bit, right?
And the early SP-12s were 12-bit.
From an artistic standpoint, what was it that made Tribe Called Quest so compelling?
First of all, like any record, you have to have a great song. And they’re a great example of how great songs can exist in a lot of different forms. It doesn’t always have to be a Billy Joel song. The other thing is they had a unique way of putting the music together. Again, at the time, a sophisticated use of samples where instead of just having one little loop play all the way through, they made these elaborate reconstructions with many different elements that were never meant to go together in the first place. Also, the MCs, [Q-]Tip and Phife, both had a lot of character. It was a great sort of Mutt and Jeff team, and their styles and what they were saying were interesting to listen to. I’ve been remastering some of the stuff for 25th-anniversary releases, and it’s fascinating when I listen over and over again to stuff that I heard a whole lot 25 years ago. It still has a great deal of charm and appeal.
Had the MPC come out by that point?
Not at the beginning. Early on, it was just the E-mu SP-12 and the S900.
I had an S950, and I remember everything was saved on floppy disks, there were no hard drives then.
Yeah. And if you had something that was longer than the total sampling time, which wasn’t very long, you had to sample it in two pieces and lay it down to multitrack one at a time.
I think it’s pretty interesting the way the technology drives the development of musical styles.
Definitely. There’s a fascinating parallel between the development of music and instrument technology and how popular music is made and how it sounds. And it’s nowhere more obvious and evident than in hip-hop, where, as sampling time became extended, the construction of tracks became more and more complicated.
Right. I always think about dub reggae with the delays becoming such a huge part of the sound.
And those were analog and tape delays.
So how do you think the technology has impacted hip-hop between when you were working back then and now?
Well, it’s a different universe. Everything back then was a big struggle, technically. In the early days of MIDI, the interfaces were really dodgy. Samplers were extremely primitive; sequencers were extremely primitive. There were no in-the-box virtual instruments; everything was outboard. You had to sync to 2” tape. You had to record to tape because there was no digital multitrack at the time. The earliest version of Pro Tools was Sound Tools, and it was stereo, and it sounded horrible. We were working between the 24-track, the synth, and the sampler, with extremely primitive sequencing technology. If you compare then and now, where you pretty much boot up your DAW and hit the space bar and everything comes out, it’s an entirely different technical universe.
It’s interesting that what’s happening now is similar to what happened with MIDI at the beginning: it has democratized music making. It has democratized recorded music, where anybody can buy a laptop and in half an hour have something that approximates what we would call a “track”—recorded music. That said, it’s really interesting how after all of this time everything comes full circle; the market is flooded, there’s a lot of stuff out there, most of which isn’t very interesting. So, I guess it comes back down to a compelling performance of a great song. If those key things are your North Star, you realize that even though everything’s changed, it’s really the same on that level.
Now the haystack is so big that the needle needs a lot of marketing savvy to get noticed. At least online marketing savvy. But you’re right. It’s a different world, and the labels don’t have the impact that they used to have.
Yes, and they’re much more careful, cagey, and strategic about how they use their resources. They sign fewer acts, whereas 20 years ago, you never ended up hearing about 80 percent of the people who got signed. They can’t afford to do that anymore. The economies of scale sort of caught up with them.
Do you mainly use Logic or Pro Tools these days?
Both. Almost everything I get to mix comes in on Pro Tools. I would say maybe eight percent of what comes to me to mix comes in Logic. Ever since Audio Units were developed, I have my synths and plug-ins available on both platforms. For the most part, the same plug-ins are available on both platforms, so it’s really all the same. And in truth, since Pro Tools rewrote its audio code, everything sounds so good. They’re both floating-point; I believe they have 32-bit internal processing.
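Power's aside about floating-point engines is worth unpacking. A 32-bit float mix bus has enormous headroom, so internal levels that shoot past digital full scale aren't destroyed the way they are in fixed point. The NumPy sketch below is purely illustrative and isn't tied to either DAW's actual engine:

```python
import numpy as np

# A signal whose peaks land well above digital full scale (1.0).
t = np.linspace(0, 1, 48000, dtype=np.float32)
x = np.float32(2.5) * np.sin(2 * np.pi * 100 * t)

# Fixed-point storage has no values past full scale: the overshoot
# is hard-clipped and permanently lost.
fixed_point = np.clip(x, -1.0, 1.0)

# A 32-bit float bus simply stores the overshoot; pulling the level
# down afterward recovers the waveform intact.
recovered = np.float32(0.4) * x
```

In other words, as long as the level is brought back under full scale before the converters, an internally "over" float mix is recoverable, which is one reason gain staging inside a modern DAW is far more forgiving than it was on fixed-point systems.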
Yeah, so do you prefer working in one or the other?
No. I don’t prefer one or the other. While I usually use Logic for MIDI and many productions, most of my mixing for clients is in Pro Tools. I’ve been using Logic since it was created on the Atari. At first it was called Creator, then Notator, and my first experience with it was on an Atari 1040 because I couldn’t afford a Mac. That said, it was an amazingly rock-solid platform. Primitive, as everything was at that point, but it had a built-in MIDI port. It was pretty amazing.
Let’s talk about mixing. You now mix in the box, right? That must be a big change?
Yeah. Since I stopped mixing in studios, which is about the last seven or eight years, I was mostly in the box anyway. I don’t use analog inserts on individual channels; I only used an analog 2-bus. And my 2-bus is nice. There’s a Pendulum Vari-Mu, an API 2500, Tube-Tech EQs, GML EQs, a Prism Maselec EQ—there’s some really nice stuff.
Do you still use that?
Not really. I use it for mastering, because I haven’t been able to get the same results mastering in the box as I can if I come out and then head back in. Although I hear some of the big mastering guys are starting to stay digital now.
Yeah. I mean, it’s heresy, but I’d be interested in researching it a little more.
Right, right. Would you say you do a lot of mastering, or is that just a small side of your work?
It’s probably about 15 percent of my work.
And what do you think of programs like Ozone and things where you can kind of pull up a preset and get close?
I think Ozone is a really fine piece of software, and they’re way ahead of the curve on a lot of stuff. Presets are presets, what can I say; they’re for knuckleheads.
[Laughs] And have you checked out the latest Ozone that has the Master Assistant feature that analyzes your music and comes up with a “starting point” for you?
I haven’t upgraded yet.
I’ll have to do so because I’m interested to hear what they did. Ozone was my limiter of choice for a long time, and it was nice that it was all in one box. But the one thing that was difficult about their limiter, which I think sounds fantastic, is that it didn’t have separate attack and release times. It was just one knob, and I was never sure which it was affecting more.
I believe that’s still the case, but they have come out with new limiter algorithms pretty frequently.
Yeah, I think their algorithm III was my go-to for a long time. It really keeps great depth, in a beautiful way. Not having a separate release time is big, though, because as you probably know, release time in limiting is 50 percent of the issue with distortion.
Well, talk about that a little, because I think people would be interested to know a little more about how that works.
Well, this is me and it’s just my experience, but release time is where much of the distortion happens. Of course, it can happen on the attack also: if you’re hitting it really, really hard, too-fast attack times can completely emasculate the transients. But the release time is important because if it’s too slow, you take all of the punch out of the music. If it’s too fast, that’s often where a lot of the really crunchy distortion takes place—because the limiter’s releasing in a very unmusical way. You often have to time it to the underlying dynamic of the music.
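Power's point about release-time distortion is easy to demonstrate in code. Below is a deliberately simplified, hypothetical peak limiter (one-pole gain smoothing, no lookahead; not how Ozone or any commercial limiter is actually implemented). When the release constant is much shorter than the period of the material, the gain recovers between waveform peaks, modulating the signal at audio rate and producing exactly the crunchy harmonics he describes:

```python
import numpy as np

def limiter(x, threshold=0.5, attack_ms=1.0, release_ms=80.0, sr=48000):
    """Toy peak limiter: one-pole smoothing of a gain signal with
    separate attack and release time constants (illustrative only)."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))   # fast coefficient
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))  # slow coefficient
    gain, out = 1.0, np.empty_like(x)
    for i, s in enumerate(x):
        peak = abs(s)
        # Gain that would keep this sample at or below the threshold.
        target = min(1.0, threshold / peak) if peak > 1e-12 else 1.0
        # Reduce gain quickly (attack), recover it slowly (release).
        coeff = att if target < gain else rel
        gain = coeff * gain + (1.0 - coeff) * target
        out[i] = s * gain
    return out

# A 100 Hz sine well above the 0.5 threshold.
sr = 48000
t = np.arange(int(0.5 * sr)) / sr
x = 0.9 * np.sin(2 * np.pi * 100 * t)

# Musical release: gain stays roughly constant, output is a clean,
# quieter sine. Too-fast release: gain pumps back up inside each
# half-cycle and the waveform gets bent (harmonic distortion).
y_slow = limiter(x, release_ms=200.0)
y_fast = limiter(x, release_ms=1.0)
```

Measuring how far each output deviates from a scaled copy of the input confirms the point: the fast-release version carries far more distortion energy, even though both keep the peaks under control.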
Right. And don’t a lot of limiters have an auto-release setting?
Yes, they do. And in theory, it will read the waveform and adjust the release accordingly.
Do you find those kinds work pretty well?
I always turn it off and adjust the release times manually, so I don’t know. You know, it’s funny: many of the old-school Waves compressors—the C4, the C6, and the Renaissance Compressor—all default to ARC, that little button on the upper left. I believe that’s automatic release time [Auto Release Control], and nobody knows that. I find my students messing with the release time and stuff, and that button is still checked. I’m like, “No, no. You’ve got to uncheck that first.”
If you were advising someone who is putting a studio together today, and they had the budget to get one piece of nice hardware, would you say get a great interface with really good converters or a really nice microphone?
Three things are really important. For general work, your interface and speakers are critical. And with speakers, there’s no such thing as “flat.” Technically speaking, human hearing is not even flat. What you want, number one, is that they reveal everything to you that’s there, which most two-way speakers do not; there’s often a hole in the midrange response, and the extension on the bottom is often not there. And number two, you want them to reveal everything in a fairly even way across the frequency spectrum. Different speakers do that to different degrees and to different people’s taste. We also want them to give us an idea of how the music is going to translate. But yeah, an interface is very, very important, both for the accuracy of the monitoring and the conversion.
Everyone thinks microphones are real sexy, and they are. We’re in a sort of second golden age of professional audio right now, because they’re making emulations that in many ways sound better than the originals (or at least they work all the time). But on the microphone end, people get into whether it’s tube and large diaphragm or small diaphragm and all that. The most important part of your recording rig, in addition to your interface, is your mic pre-amp, and anybody who’s been doing this for a long period of time will usually say that, as well. I’ve made great recordings with a [Shure SM]57 with API mic preamps because they’re a really great match; they happen to sound excellent together. You can have the best mic in the world, and a lousy mic-pre will undoubtedly muck it up, but a really good mic pre-amp will make a moderately priced mic sound so much better.
You’re teaching at NYU now, right?
I’m teaching production at the Clive Davis Institute of Recorded Music at NYU. Nick Sansano, one of the founders of the department, runs the production area. He’s the associate chair, and he developed the production curriculum. I’ve been there seven or eight years, first as an adjunct, now as a professor. Most of what I teach is a year-long sophomore production class. It covers everything from arranging to mic pre-amps to musicianship to microphones to production psychology. I also teach an arrangement class that I developed. It’s really arranging for the studio; often, things that work live and things that work in the studio are not exactly the same.
Usually, people say that if it’s a well-arranged song it will record well, or it will be easier to mix, and that it should translate live, too. But why would you say that? Are there arrangement attributes that are more critical in a recording situation, as opposed to live?
Well, it’s like a Venn diagram. There’s about a 50-percent overlap. You can get away with some stuff live, but it doesn’t always work so well in the studio. It goes all the way down to the choice of the instruments that you use. For example, a P-bass versus a J-bass, a Strat versus a double-coil pickup for arpeggios and stuff. But live, it’s sort of this big mush where the definition of different instruments declines, and the definition is not really that big a deal. The overall energy is what’s most important. In the studio, however, the definition of the individual instruments becomes much more critical. And in the modern world, your choice of instrument and the tonal characteristics of that instrument are just as important as the part that you’re playing. Yet a lot of the techniques we use in the studio, if you look back historically, were really born out of limitations in the technology or economics. Doubling the horn section wasn’t done because someone said, “Oh, this sounds better.” They couldn’t afford eight guys, so they used four, twice. So there’s a lot of stuff like that that is just interesting to think about, as well.
Certainly, the whole frequency-overlapping-muddiness issue is a bigger deal in the studio.
Yeah, in a modern mix, that just becomes much, much more important. On a live gig, you can’t hear that stuff that well anyway, the resolution isn’t that great, so it doesn’t make that much difference. But for example, if you’re doing arpeggios on a track where there are other thick chordal instruments playing, anybody who’s been doing this for a period of time is probably going to grab a Strat or a Tele because the tonal characteristics of that instrument work better for that particular part.
If you have rhythm guitars and a keyboard, you know, I guess you end up doing a lot of EQing to keep them out of each other’s way.
Yes, if you don’t do your pre-production right. You know, when I produce a live band, we usually do at least three, four, five, six weeks of rehearsals before we record, just to make sure that they’re playing the right parts with the right timbre.
Right. So, when you’re doing pre-production, are you recording them as you go, just to hear what it sounds like, at least with a two-track or something?
By the end of preproduction, you should be able to put up a phone in the middle of the room or a 57 or something, and despite the fact that it’s kind of lo-fi, you should be able to hear everything. That means you’re playing the right parts and using the right timbre on your instrument to provide room for us to hear the other ones.
Back to the teaching for a second: do you notice any difference now from when you first started, in terms of the tech savvy of the students? Are they all a lot more literate with production since they’ve grown up with computers?
Yes. The students now are so advanced. Many of them when they come in are self-taught but pretty incredible. And, currently, mainstream pop music is fundamentally electronica with top lines. Because of the nature of the electronic music-making process, you can do that yourself, you can do it on your laptop, and you can start doing that when you’re ten years old. In that way, the production prowess of the students is so much greater every year. They still need to learn about how to use a microphone effectively, how to use air and space around an instrument. The ones that are real sharp with the technology seem to get sharper every year.