Fairlight flex bus and automation


cschneider

  • Posts: 26
  • Joined: Sat Apr 18, 2020 9:52 pm
  • Real Name: Christian Schneider

Fairlight flex bus and automation

Posted: Fri Apr 09, 2021 7:04 am

Dear all,

I switched from Premiere Pro to DR 16 and have now upgraded to 17. Since switching, I have been trying to recreate the workflow I normally used in Premiere for audio dubbing, or something equivalent to it. For illustration, see the attached image.
[Attachment: audio.png]

I have several audio tracks - for example, one stereo track containing the audio from the clip, one stereo track for music, and so on. In Premiere, I grouped these tracks into one submix track (which also works in DR by using an additional bus). Additionally, I added a voice-over track that is not included in the submix. That way, at least in Premiere, I was able to reduce the volume of the whole track group (the submix) wherever needed for the dubbing.

In DR, it is "only" possible to automate the fader levels, which creates hard "jumps" in the audio volume. What I am looking for is a sort of hull (envelope) curve (see the second audio track) where I can set four points and create a smooth transition.

I know there are automation features that allow smoothing this "jump", but it is a lengthy process that creates many keyframes on the automation curve and is therefore a bit inconvenient.

So, to make it short: is it possible to have a hull (envelope) curve to manage the volume of a group of tracks?
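
Just to illustrate what I mean (purely conceptual, not anything Resolve exposes): a hull/envelope curve is basically a handful of (time, gain) breakpoints interpolated into a smooth gain ramp applied on top of the group's level. A rough Python sketch, with made-up breakpoint values:

[code]
import numpy as np

# Purely illustrative sketch (not Resolve's API): a "hull"/envelope curve as
# four (time in seconds, gain in dB) breakpoints, interpolated into a smooth
# gain ramp and applied to the submix signal.

def apply_envelope(samples, sample_rate, breakpoints):
    times = np.arange(len(samples)) / sample_rate
    t_pts = [t for t, _ in breakpoints]
    g_pts = [g for _, g in breakpoints]
    gain_db = np.interp(times, t_pts, g_pts)   # linear ramps between points
    return samples * 10.0 ** (gain_db / 20.0)

# Example: stay at 0 dB, fade to -12 dB over one second before the voice-over,
# hold, then fade back up (breakpoint values are made up).
sr = 48000
submix = np.random.randn(10 * sr) * 0.1        # stand-in for the grouped tracks
ducked = apply_envelope(submix, sr, [(2.0, 0.0), (3.0, -12.0), (7.0, -12.0), (8.0, 0.0)])
[/code]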

Thank you in advance, best
Christian
Resolve 18.5 (Studio)
Mac Studio

Reynaud Venter

  • Posts: 5023
  • Joined: Sun May 28, 2017 9:34 am

Re: Fairlight flex bus and automation

Posted: Fri Apr 09, 2021 3:18 pm

The most efficient (and fastest) method of automating VCAs (or any touched control) across scenes is to use Preview mode and Fill Range to write the current parameter's value across the specified Range.

I suggest assigning custom keybindings to the Fairlight automation commands if you are working exclusively in Resolve's UI.

Phil999

  • Posts: 406
  • Joined: Tue Jun 11, 2019 11:12 am
  • Real Name: Philipp Straehl

Re: Fairlight flex bus and automation

Posted: Fri Apr 09, 2021 4:13 pm

Automation may create many keyframes, but deleting the unnecessary keyframes and adjusting the ones you need is a good workflow.

For narrator or dialogue audio there is the option of 'ducking' the other audio. I had to do this recently, and it works fine in Fairlight.

cschneider

  • Posts: 26
  • Joined: Sat Apr 18, 2020 9:52 pm
  • Real Name: Christian Schneider

Re: Fairlight flex bus and automation

Posted: Sat Apr 10, 2021 8:39 pm

Thank you both for your valuable input! Sorry, I see now that I mixed up dubbing and ducking ;) ... so ducking is what I was looking for. I know there is an automated process, and I will try whether it is sufficient for what I am trying to achieve. As far as I understand the process, it is not possible to fade out the original sound a second or so before the dialogue starts...

Can you point me in the right direction regarding what you mean by "custom keybindings"? Do you mean keyboard shortcuts?

Thank you again, best
Christian
Resolve 18.5 (Studio)
Mac Studio

Jason Conrad

  • Posts: 797
  • Joined: Wed Aug 16, 2017 3:23 pm

Re: Fairlight flex bus and automation

Posted: Sun Apr 11, 2021 1:45 am

Page 3256 of the Aug 2020 user manual explains how to use sidechain compression to automatically "duck" one track based on another. For the longest time, that was one I had to look up over and over until it finally sunk in. I still don't really get what a "sidechain" is. I think it's mostly the terminology that throws me. "Aux?" "Bus?" "Send?" Such bizarre nomenclature in the audio world, and WHY? You don't see Fusion artists naming every noodle. Sorry. [/rant]
-MacBook Pro (14,3) i7 2.9 GHz 16 GB, Intel 630, AMD 560 x1
-[DR 17.0 Beta9]

Reynaud Venter

  • Posts: 5023
  • Joined: Sun May 28, 2017 9:34 am

Re: Fairlight flex bus and automation

Posted: Sun Apr 11, 2021 7:51 am

cschneider wrote:Can you point me in the right direction regarding what you mean by "custom keybindings"? Do you mean keyboard shortcuts?
Correct - assigning custom shortcuts.

Reynaud Venter

  • Posts: 5023
  • Joined: Sun May 28, 2017 9:34 am

Re: Fairlight flex bus and automation

Posted: Sun Apr 11, 2021 7:51 am

Jason Conrad wrote:I still don't really get what a "sidechain" is. I think it's mostly the terminology that throws me.
Also known as a Key Input or Detector circuit which controls attenuation.

All dynamics processors feature a sidechain circuit. For example, a "Feedback" compressor uses its own output to compute the required gain reduction, whereas a "Feed-Forward" compressor places the detector prior to the Voltage Controlled Amplifier, where the input signal is analysed in order to control gain reduction. Not all dynamics processors provide access to this sidechain circuit.
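
A rough feed-forward sketch (conceptual only, not Fairlight's implementation; parameter names and values are illustrative): the detector follows the level of the key/sidechain signal and the computed gain reduction is applied to the main input.

[code]
import numpy as np

# Conceptual feed-forward compressor with an external key (sidechain) input.
# Illustrative only - not Fairlight's implementation.

def compress(main, key, sr=48000, threshold_db=-30.0, ratio=4.0,
             attack_s=0.010, release_s=0.200):
    atk = np.exp(-1.0 / (attack_s * sr))
    rel = np.exp(-1.0 / (release_s * sr))
    env = 0.0
    out = np.empty(len(main))
    for n in range(len(main)):
        level = abs(key[n])                       # detector reads the KEY signal
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)  # gain computer
        out[n] = main[n] * 10.0 ** (gain_db / 20.0)
    return out
[/code]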

Blockchain also uses the term “sidechain” to describe a secondary parallel blockchain which is connected to the primary blockchain.

"Bus?"
Audio Production Consoles often use the term “Sum” - a “carrier” channel into which multiple source signals are routed and summed, providing overall control and routing of those signals.

"Aux?"
Auxiliary - sometimes termed Effects Send or Effects Return.

Essentially a secondary output of a source channel that is routed to a subsidiary channel or destination, often to assist the source. Literally an Auxiliary.

"Send?"
Sending or routing a signal from a particular point in the path to additional destinations, with control over that signal independent of the source channel's output.

The “Send”, for example, controls the signal to the Auxiliary Buss (with gain, mute, pan, and pre/post-fader selection independent of the source channel).

The “Return” can be the fader on the Auxiliary Buss: when the Auxiliary Send routes signal to an external device and the processed signal comes back on an Auxiliary Buss, that fader controls the returned signal.
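
As a numeric illustration of the pre/post-fader distinction (names and values are made up, not Fairlight's API):

[code]
# Illustrative only: a channel with a main (post-fader) output and one aux send
# that can be tapped pre- or post-fader, each with its own level.

def db_to_lin(db):
    return 10.0 ** (db / 20.0)

def channel_outputs(sample, fader_db, send_db, pre_fader=False):
    to_main_bus = sample * db_to_lin(fader_db)   # channel fader -> main/sum bus
    tap = sample if pre_fader else to_main_bus   # where the send is tapped
    to_aux_bus = tap * db_to_lin(send_db)        # independent send level
    return to_main_bus, to_aux_bus

# With a pre-fader send, pulling the channel fader down does not change what
# the Auxiliary Buss (e.g. a reverb) receives; with a post-fader send it does.
main_out, aux_out = channel_outputs(0.5, fader_db=-6.0, send_db=-12.0)
[/code]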

Jason Conrad

  • Posts: 797
  • Joined: Wed Aug 16, 2017 3:23 pm

Re: Fairlight flex bus and automation

Posted: Tue Apr 27, 2021 12:28 am

Reynaud Venter wrote: […]
Ha! Thanks for trying, but my brain will just "route" that information in one ear and out the other.

I mean, it seems to me like shorthand for structures identified by graph theory, but ones whose practical usage is common in the audio world. If that's the case, then I wish we'd use the mathematical nomenclature, which is usually fairly consistent, or at least tries to be.

Could graph theory nomenclature and notation sufficiently describe audio signal routing in a meaningful way? I honestly don't know enough about either to say for sure.

When you start talking about carrier signals and modulation, you're talking about two signals traveling over the same connection. I *suspect* a graph theorist writes and thinks about this as two different connections between vertices, but an audio engineer thinks about it as a single, physical signal path -- in a way. I mean, I know the audio engineer *understands* that there are two signals, implicitly, but I suspect that the language he uses is biased towards physicality, if that makes sense.
-MacBook Pro (14,3) i7 2.9 GHz 16 GB, Intel 630, AMD 560 x1
-[DR 17.0 Beta9]

Mattias Murhagen

  • Posts: 298
  • Joined: Mon Nov 27, 2017 3:09 am
  • Location: New York

Re: Fairlight flex bus and automation

Posted: Tue Apr 27, 2021 3:22 pm

Jason Conrad wrote:Page 3256 of the Aug 2020 user manual explains how to use sidechain compression to automatically "duck" one track based on another. For the longest time, that was one I had to look up over and over until it finally sunk in. I still don't really get what a "sidechain" is.


In addition to what Reynaud wrote, a sidechain can come from a source that is external to the processor. "Normally", the compressor has an input signal that is processed and then output, and the threshold that determines at what level the compressor starts working is measured against that same input signal in one way or another.

By instead using a "sidechain", you can have a different source trigger the compression. So you could place a compressor on a channel that contains music/effects and, instead of letting it trigger based on that input, choose dialog/narration as the source, received via the sidechain function. Now the level of the sidechain (the dialog/narration) triggers compression of the music whenever it rises above the threshold. In other words: if you set the threshold relatively low, all dialog/narration will duck the music/fx.
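
A stripped-down sketch of that behaviour (illustrative numbers only, not Fairlight's actual processing): whenever the dialog level is above the threshold, the music gain is eased down by a fixed amount.

[code]
import numpy as np

# Illustrative sidechain ducking: attenuate music/fx whenever the
# dialog/narration level exceeds the threshold, smoothing the gain
# change to avoid audible jumps. Values are made up.

def duck(music, dialog, threshold_db=-40.0, duck_db=-12.0, smooth=0.999):
    thr = 10.0 ** (threshold_db / 20.0)
    ducked_gain = 10.0 ** (duck_db / 20.0)
    gain = 1.0
    out = np.empty(len(music))
    for n in range(len(music)):
        target = ducked_gain if abs(dialog[n]) > thr else 1.0
        gain = smooth * gain + (1.0 - smooth) * target
        out[n] = music[n] * gain
    return out
[/code]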

Jason Conrad wrote:I think it's mostly the terminology that throws me. "Aux?" "Bus?" "Send?" Such bizarre nomenclature in the audio world, and WHY? You don't see Fusion artists naming every noodle. Sorry. [/rant]


Those things need to be labeled differently because they all perform different functions. I'm sure there are subtle differences in how you can 'combine' two images in Fusion - straight-up addition, perhaps multiplication, subtraction, etc. The point is you have two sources and one output. Just calling all of that "addition" would trip you up the second you tried to explain to people that they should multiply instead of add... or whatever... clearly I'm not a video/fx guy.
Jason Conrad wrote:it seems to me like shorthand for structures identified by graph theory, but ones whose practical usage is common in the audio world. If that's the case, then I wish we'd use the mathematical nomenclature, which is usually fairly consistent, or at least tries to be.

Could graph theory nomenclature and notation sufficiently describe audio signal routing in a meaningful way? I honestly don't know enough about either to say for sure.


There's really zero reason to do it. While it would make sense to some people, it would just confuse every audio engineer out there. Ultimately, we still treat the vast majority of audio work we do within NLEs and DAWs as if it were happening on an analog, physical console, and for good reason: we started out there, and we still have to abide by physics that makes it logical to keep that 'analogous' way of representing information. It doesn't make much sense to revise the nomenclature at this point.

Jason Conrad wrote:When you start talking about carrier signals and modulation, you're talking about two signals traveling over the same connection. I *suspect* a graph theorist writes and thinks about this as two different connections between vertices, but an audio engineer thinks about it as a single, physical signal path -- in a way. I mean, I know the audio engineer *understands* that there are two signals, implicitly, but I suspect that the language he uses is biased towards physicality, if that makes sense.


A carrier and its modulation result in one signal, but I think it might be productive to think of it as an encoded signal: the modulation is what gets encoded onto the carrier, and you decode it at the receiving end.
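
A tiny example of that encoding idea, using plain amplitude modulation (numbers are arbitrary):

[code]
import numpy as np

# Amplitude modulation as a toy example: the message is "encoded" onto a
# carrier, the result is a single signal, and the message can be recovered
# (decoded) at the receiving end.
sr = 48000
t = np.arange(sr) / sr
message = np.sin(2 * np.pi * 5 * t)          # slow modulating signal
carrier = np.sin(2 * np.pi * 1000 * t)       # 1 kHz carrier
modulated = (1.0 + 0.5 * message) * carrier  # one combined signal on the wire
[/code]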

But that's probably a bit off topic.
