Over the course of his impressive 20-plus-year career, James David Redding III has worked as an ADR recordist, Foley recordist, sound effects editor, sound supervisor, re-recording mixer and sound designer. His work can be heard in some of the most popular TV series and movies of recent years, such as The Patient, Mr & Mrs Smith, and City On A Hill, to name just a few. In 2021, he was awarded a prestigious Primetime Emmy Award for Outstanding Sound Editing on the Netflix hit series The Queen's Gambit.
We heard James is a big fan of our intelligent reverb matching plugin Chameleon, and has also relied on dxRevive to overcome the challenges of poor quality audio, so we caught up with him to find out more…
They were the exact opposite. My mother was a librarian and my father was a professional engineer in the States; he designed radar systems. I was musically inclined, though I wouldn't say I'm very musical. I went to school as an English major for one semester and then transitioned over to audio and making noise.
It wasn’t until my junior year of college that I did my internship. I was interested in audio post production; I didn’t quite understand it, but I just applied to Danetracks Studios and interned with Dane Davis on The Matrix. It was really awesome, he had done Boogie Nights by then and The Abyss and a bunch of great projects. I walked into the studio for the first time and the first shot I saw was a green-screen shot of Keanu hanging upside down, being pulled out of the goo, and I’m like, ‘what the heck is going on?’ because none of the visuals were done yet – the Wachowskis were still shooting in Australia. It was just really cool. I got to learn some of the best techniques, some of the best ways of working from a master right away.
My internship ran through to the first temp mix. I got to sit with the Wachowskis when they first got back to the States after shooting and it was great. We blew up televisions, we destroyed a bathroom and recorded it, we came up with 5,000 different ways of making whoosh sounds, it was cool.
The break moment was after I graduated. One of my mentors at Ithaca College pointed me to Ron Bochar, who was running C5 in New York City, which is a great sound editorial house in post production. His buddy, Danny Caccavo, was working in a studio called Sync Sound and he’d also graduated from Ithaca College and was like, ‘hey, yeah, I’ll get you a job’. I started as a night studio assistant at Sync Sound, which was where they were doing Oz and a bunch of MTV animation, like Daria and Beavis and Butt-Head and Celebrity Deathmatch.
I worked under Ken Hahn for a number of years and within six months of me being there, I got a little antsy, as I think most kids do. Ken saw the potential in me and moved me up to be a mix assistant and I just kept going up from there. I was there for 13 years working on a lot of different things, and that’s where my ‘renaissance man’ quality comes from. I bounced around at the studio – some days I was recording Foley, some days I was editing dialogue, some days I was mixing and it really gave me a good taste of everything. I learned from a bunch of masters there – Ken Hahn’s dialogue mixing is amazing.
I do. I like stretching creatively and I find a challenge or a knowledge path in every part. I like it when people offer me dialogue to edit so I get to run my dialogue chops and my dialogue noise reduction ways, and then there’ll be a documentary to work on and I’ll consider what the challenge is in that. So, yeah, I like having variety because each one gives you a different way of looking at audio, so I take all the aspects of each one and apply them to the next, which is really fun. That’s how I came across Accentize. People were telling me about Chameleon and I had a dialogue editing job, a documentary about Lincoln, and they had these recreations in it and they wanted to cut off sentences at certain points. I was just like, ‘oh crap’. They’re in an old wooden church making this big proclamation and then they cut the sentence and, of course, it had this natural verb, and I’m trying everything to match it. Then I remembered somebody had mentioned this thing called Chameleon. I downloaded it, demoed it and bought it right away. It was so freaking amazing how it tailed off the sentences perfectly. Then I started applying it to ADR, which just made things so much easier between EQ matching and then the reverb matching.
There was another time when I heard this crazy verb in a video of somebody in a tunnel and Tom Fleischman, who is mixing royalty, said ‘I wish I could get an impulse response for this’. I was like, ‘hey Tom, I had them install Chameleon at Soundtrack, I sampled it for you and here’s a Chameleon impulse response for it’. Being able to take these amazing sounds that are already out there in nature and match them verb-wise was just fricking fantastic.
You can apply Chameleon to so many things. You can put it on your dialogue. You can put it on your Foley. You can put it on your effects. I used it on Mr & Mrs Smith – I had a great sound of a door, but it was getting really hard to make it feel like it fit in the scene, so I went and sampled the production dialogue and then applied that to the door sound effect.
The thing that I love about Chameleon is just the ease. I remember when Space (Avid’s reverb plugin) first came out, and I remember making impulse responses for it by going out and recording transients and stuff. Then trying to get those to load in and get Space to work properly and easily was a pain in the butt for matching stuff, but with Chameleon it’s two button clicks and there you go. I also love the fact that you have the organization of the whole library, so I can hear all the ones that I’ve sampled throughout my career, which is awesome, and it’s searchable – as long as you can remember how you name things, you’re golden!
Yeah, that’s what we did with The Patient, we used Chameleon on that a lot. We used it on Alaska Daily. I had the dialogue mixer get it so that he could match all the ADR for Alaska Daily, which was a one-season ABC show that we did last winter.
I find I hardly have to do it. Usually the only time I have to adjust it is when the original sound was maybe too much, like when you’re trying to match ADR and then you’re like, ‘oh wait, we’re backing that off with other tools somehow’. Most of the time it’s a case of learn, apply, and that’s one of the things I love about it; I don’t have to reach for the other controls to make it feel right. I’ve played with them just because they’re there, but otherwise no, not much at all.
Speed is nothing if the quality sucks, but the fact that you get high quality out of it and you can do it so quickly—it's a game changer
It’s the speed and the quality. Speed is nothing if the quality sucks, but the fact that you get high quality out of it and you can do it so quickly is a game changer. Even the learning part of it, you think it’s got to learn a lot, but no, you just highlight a section, hit learn and have it loop. I’ve had to do it on small sections and been able to match really well. For me, it’s that efficiency thing.
The common challenges are just overall bad recording techniques, even on the sound effects side. I use noise reduction on my sound effects too. Sometimes I’m out and hear this really cool thing, and I have a little Zoom mic that goes on my phone that I love, but when you listen back there’s all this other noise on top of it, so I use noise reduction on it. So, it’s usually just bad recording techniques that are the biggest challenge.
I’ve worked on a feature film where the production mixer, for some reason, had compressors across the mics; they shot in New York and it was horrible. I was able to get it so that you could hear it, but then you’d have one actor who, for dramatic effect, is talking low in volume and amplitude, and you’ve got to try to dig that out and sometimes you just can’t. So, the biggest challenge I find is just people not knowing enough about how audio works and then not giving enough attention to it when they are working with it in recording.
I thought it was great. It was really interesting what I heard it doing. It did something that none of the other tools were doing as far as the synthesis goes. One of the biggest problems we have with noise reduction is that you suck out qualities that you don’t realize you’re sucking out. You’re sucking out these things that you think aren’t important because it’s almost like you’re just throwing a high-pass, low-pass filter across your dialogue, but you’re sucking all this life out of it. What I loved about Accentize was that it put it back in. My first demo of it was for a game show pilot; they recorded the game show announcer, but it was really a producer on his iPhone and it sounded like ass. They were fine with it because it was a pilot and they were just trying to sell it. I was like, let me demo dxRevive. I put it in Studio mode and holy crap, besides the fact that the enunciation wasn’t perfect, it sounded almost like a booth recording.
The best piece of advice is to realize that you never stop learning and that you never know it all...
The best piece of advice is to realize that you never stop learning and that you never know it all, and, as much as you can have confidence in yourself, don’t use that to stonewall people. When I first started in the industry, I had a Bachelor of Science degree in audio – who knew more than me at that time?! I remember the first time that I was working as a studio assistant, one of the jobs was making sure that every studio had sharpened pencils and enough coffee cups for the clients that were coming in that day. I just grabbed some cups and put them on the tray and the studio manager was like, ‘that’s the wrong answer, you didn’t follow our directions’. You learn from that, and you get pushed down in your place and it can hurt your ego badly. Sometimes you need that, but you can also avoid that if you just realize that you’re going to learn no matter what the task is.