Jason Conrad wrote: Page 3256 of the Aug 2020 user manual explains how to use sidechain compression to automatically "duck" one track based on another. For the longest time, that was one I had to look up over and over until it finally sank in. I still don't really get what a "sidechain" is.
In addition to what Reynaud wrote, a sidechain can come from a source that is external to the processor. "Normally", a compressor has one input signal that is processed and then output. The threshold, which determines at what level the compressor starts working, is compared against that same input signal one way or another.
By using a "sidechain" instead, you can have a different source trigger the compression. So you could place a compressor on a channel that contains music/effects, and instead of letting it trigger based on that input, you would choose dialog/narration as the source, received via the sidechain function. Now the level of the sidechain (dialog/narration) triggers compression of the music whenever it rises above the threshold. In other words: if you set the threshold relatively low, all dialog/narration will duck the music/fx.
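That ducking behavior can be sketched in a few lines of code. This is a toy model, not anything from the manual; the function name and the threshold/ratio/smoothing values are all made up for illustration:

```python
import math

def duck(music, sidechain, threshold=0.1, ratio=4.0, smooth=0.99):
    """Duck `music` based on the level of `sidechain` (sidechain compression).

    All signals are lists of float samples in [-1, 1]. Hypothetical toy
    parameters: `threshold` is the sidechain level above which compression
    kicks in, `ratio` the compression ratio, `smooth` the release smoothing.
    """
    env = 0.0
    out = []
    for m, s in zip(music, sidechain):
        # simple peak follower with exponential release on the sidechain
        env = max(abs(s), env * smooth)
        if env > threshold:
            # gain reduction: amount over threshold (in dB) scaled by ratio
            over_db = 20 * math.log10(env / threshold)
            gain_db = -over_db * (1 - 1 / ratio)
            gain = 10 ** (gain_db / 20)
        else:
            gain = 1.0  # sidechain below threshold: music passes untouched
        out.append(m * gain)
    return out
```

Note that the compressor's gain is applied to `music`, but the decision to compress is driven entirely by `sidechain`. That separation is the whole point of the feature.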
drkfuture1 wrote: I think it's mostly the terminology that throws me. "Aux?" "Bus?" "Send?" Such bizarre nomenclature in the audio world, and WHY? You don't see Fusion artists naming every noodle. Sorry. [/rant]
Those things need to be labeled differently because they all perform different functions. I'm sure there are subtle differences in how you can 'combine' two images in Fusion: straight-up addition, perhaps multiplication, subtraction, and so on. The point is you have two sources and one output. Just calling all of that "addition" would trip you up the second you tried to explain to people that they should multiply instead of add... or whatever... clearly I'm not a video/fx guy.
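For what it's worth, the image analogy can be made concrete. A minimal sketch, assuming grayscale pixel values in [0, 1]; the function names are invented for this example, not Fusion's actual operators:

```python
def add_blend(a, b):
    """Combine two 'images' by addition, clipped to the valid range."""
    return [min(1.0, x + y) for x, y in zip(a, b)]

def multiply_blend(a, b):
    """Combine two 'images' by multiplication (darkens, never clips)."""
    return [x * y for x, y in zip(a, b)]
```

Same two sources, same single output, yet the results differ completely: adding two mid-gray pixels brightens toward white, while multiplying them darkens toward black. That's why the operations carry distinct names, and the same logic applies to aux/bus/send in audio.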
Jason Conrad wrote: It seems to me like shorthand for structures identified by graph theory, but ones whose practical usage is common in the audio world. If that's the case, then I wish we'd use the mathematic nomenclature, which is usually fairly consistent, or at least tries to be.
Could graph theory nomenclature and notation sufficiently describe audio signal routing in a meaningful way? I honestly don't know enough about either to say for sure.
There's really zero reason for doing it. While it might make sense to some people, it'd just confuse every single audio engineer out there. Ultimately we still treat the vast majority of audio work we do within NLEs and DAWs as if it were happening inside a physical analog workstation, and for good reason. We started out there, and we still have to abide by physics that makes it logical to continue that 'analogous' way of representing information. It doesn't make much sense to revise the nomenclature at this point.
Jason Conrad wrote: When you start talking about carrier signals and modulation, you're talking about two signals traveling over the same connection. I *suspect* a graph theorist writes and thinks about this as two different connections between vertices, but an audio engineer thinks about it as a single, physical signal path -- in a way. I mean, I know the audio engineer *understands* that there are two signals, implicitly, but I suspect that the language he uses is biased towards physicality, if that makes sense.
Carrier and modulation result in one signal, but I think it might be productive to think of it as an encoded signal. The modulation is what gets encoded onto the carrier; you then decode it at the receiving end.
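That encode/decode view can be sketched with amplitude modulation. This is a toy illustration with made-up sample rate and carrier frequency, not how any DAW actually does it:

```python
import math

RATE = 8000.0        # samples per second (assumed toy value)
CARRIER_HZ = 1000.0  # carrier frequency (assumed toy value)

def am_encode(message):
    """Encode a slowly varying message onto the carrier: one combined signal."""
    return [(1.0 + m) * math.cos(2 * math.pi * CARRIER_HZ * n / RATE)
            for n, m in enumerate(message)]

def am_decode(modulated, window=40):
    """Synchronous detection: multiply by the carrier again, then average."""
    product = [x * math.cos(2 * math.pi * CARRIER_HZ * n / RATE)
               for n, x in enumerate(modulated)]
    out = []
    for n in range(len(product)):
        chunk = product[max(0, n - window + 1):n + 1]
        # the mean of (1 + m) * cos^2 over whole carrier cycles is (1 + m) / 2
        out.append(2 * sum(chunk) / len(chunk) - 1.0)
    return out
```

Only one signal ever travels down the "wire" (the list returned by `am_encode`), yet two signals are conceptually present in it, which is exactly the graph-theorist-vs-engineer tension described above.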
But that's probably a bit off topic.