
Is Fusion graph evaluation not directed?


Hendrik Proosa

  • Posts: 3015
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Is Fusion graph evaluation not directed?

Posted: Tue Jul 09, 2019 8:43 am

In one recent thread I found a curious thing: the Merge node does not break transform concatenation. Inspired by this, and unable to reason how that is possible in a directed graph where, by definition, a node's result is the product of its inputs only (nodes downstream must not change upstream behavior), I made some tests to find out what is actually happening. I will prep the test results as a quiz show soon, because I found quite a few interesting behaviors, but as a warmup, an introductory question:

Is it outlined anywhere in the Fu docs how node graph evaluation is actually done technically? Meaning, how do the actual data flow and evaluation order work? For some time I have had an eerie feeling every time I try to do something in Fu, and I think it is because it isn't actually operating in a consistent DAG manner.
I do stuff.

Bryan Ray

  • Posts: 2478
  • Joined: Mon Nov 28, 2016 5:32 am
  • Location: Los Angeles, CA, USA

Re: Is Fusion graph evaluation not directed?

Posted: Tue Jul 09, 2019 2:44 pm

As far as I know, it's not described in any user-facing documentation. There may be something in the SDK that describes how the request works, but if so, I haven't dug deeply enough to run across it.

Based purely on my own intuition and experience (which, of course, might be wildly inaccurate), evaluation begins with the tool that is asked to render (either a Saver or a tool put into the Viewer). For each input, this tool generates a request that is passed upstream to the output it is connected to. That tool then generates requests for each of its own inputs, and so forth until a Loader or generator tool is found. That tool calculates its output(s) and passes a raster, along with any auxiliary data, to the tool that requested it. That tool performs its operation, and on back down the chain until the tool that created the original request can render its own output.
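As a sketch of that intuition (a minimal toy in Python; Tool, request() and the operation callbacks are names I've made up for illustration, not Fusion's actual API):

Code:
class Tool:
    def __init__(self, name, operation, inputs=()):
        self.name = name
        self.operation = operation  # callable: list of input images -> image
        self.inputs = list(inputs)  # upstream tools this tool requests from

    def request(self):
        # Requests travel upstream first...
        upstream = [tool.request() for tool in self.inputs]
        # ...then each tool computes and results travel back downstream.
        return self.operation(upstream)

# Rendering starts at the viewed tool (or a Saver) and recurses until a
# generator with no inputs is reached:
noise = Tool("FastNoise", lambda ins: "noise")
blur = Tool("Blur", lambda ins: "blur(%s)" % ins[0], [noise])
print(blur.request())  # -> blur(noise)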

In the case of concatenating Transforms, I assume that rather than passing along the transformed pixels, upstream tools instead pass their image input directly to the output along with the transform matrix. When the first non-Transform node is reached, the tool at the end of the concatenating chain applies the matrix and calculates the resultant image.
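If that assumption holds, a concatenating chain can be modeled as a matrix riding along with untouched pixels until something forces a resample. A minimal sketch, with DeferredImage, transform() and flatten() invented for illustration (numpy for the matrix math):

Code:
import numpy as np

class DeferredImage:
    def __init__(self, pixels, matrix=None):
        self.pixels = pixels                                   # untouched source raster
        self.matrix = np.eye(3) if matrix is None else matrix  # 2D homogeneous transform

def transform(img, matrix):
    # A concatenating Transform: fold the matrix in, don't resample yet.
    return DeferredImage(img.pixels, matrix @ img.matrix)

def flatten(img):
    # The first non-concatenating consumer applies the accumulated matrix
    # in a single resample, so filtering is only applied once.
    return "resample(%s, %s)" % (img.pixels, np.round(img.matrix, 3).tolist())

# Scale up and back down: the matrices cancel, so the one real resample is
# (near) identity and no double softening occurs.
up = np.diag([2.0, 2.0, 1.0])
img = transform(transform(DeferredImage("noise"), up), np.linalg.inv(up))
print(flatten(img))  # composite matrix is the identity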

Regarding the concatenation of the Background's transform through a Merge, here is a small test graph that I used to verify the assertion for myself:

Code:
{
   Tools = ordered() {
      Transform1 = Transform {
         Inputs = {
            Center = Input { Value = { 0.500130208333333, 0.5 }, },
            Input = Input {
               SourceOp = "BrightnessContrast1",
               Source = "Output",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 674, 216 } },
      },
      BrightnessContrast1 = BrightnessContrast {
         Inputs = {
            ClipBlack = Input { Value = 1, },
            ClipWhite = Input { Value = 1, },
            Input = Input {
               SourceOp = "FastNoise1",
               Source = "Output",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 564, 216 } },
      },
      FastNoise1 = FastNoise {
         Inputs = {
            Width = Input { Value = 1920, },
            Height = Input { Value = 1080, },
            ["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2" }, },
            Detail = Input { Value = 5, },
            Contrast = Input { Value = 512, },
            XScale = Input { Value = 8.63, },
            Color1Alpha = Input { Value = 1, },
         },
         ViewInfo = OperatorInfo { Pos = { 454, 216 } },
      },
      Text1 = TextPlus {
         CtrlWZoom = false,
         Inputs = {
            Width = Input { Value = 1920, },
            Height = Input { Value = 1080, },
            ["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2" }, },
            Font = Input { Value = "Open Sans", },
            StyledText = Input { Value = "text", },
            Style = Input { Value = "Bold", },
            ManualFontKerningPlacement = Input {
               Value = StyledText {
                  Array = {
                  },
                  Value = ""
               },
            },
         },
         ViewInfo = OperatorInfo { Pos = { 782, 140 } },
      },
      Merge1 = Merge {
         Inputs = {
            Background = Input {
               SourceOp = "Transform1",
               Source = "Output",
            },
            Foreground = Input {
               SourceOp = "Text1",
               Source = "Output",
            },
            PerformDepthMerge = Input { Value = 0, },
         },
         ViewInfo = OperatorInfo { Pos = { 784, 216 } },
      },
      Transform1_1 = Transform {
         Inputs = {
            Center = Input { Value = { 0.500130208333333, 0.5 }, },
            InvertTransform = Input { Value = 1, },
            Input = Input {
               SourceOp = "Merge1",
               Source = "Output",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 954, 216 } },
      },
      BrightnessContrast2 = BrightnessContrast {
         Inputs = {
            Gamma = Input { Value = 5, },
            Input = Input {
               SourceOp = "Merge2",
               Source = "Output",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 993, 322 } },
      },
      Merge2 = Merge {
         Inputs = {
            Background = Input {
               SourceOp = "BrightnessContrast1",
               Source = "Output",
            },
            Foreground = Input {
               SourceOp = "Transform1_1",
               Source = "Output",
            },
            ApplyMode = Input { Value = FuID { "Difference" }, },
            PerformDepthMerge = Input { Value = 0, },
         },
         ViewInfo = OperatorInfo { Pos = { 993, 289 } },
      }
   }
}


Zoomed in to 400%, I can see that after the first Transform there has definitely been some filtering that has softened the edge of my thresholded noise:

[Attachment: concat1.jpg]


After the Merge, I used the same Transform with the Invert Transform switch checked, and I can see that the filtering no longer exists, indicating that the background is no longer being filtered (although my Text tool is, as it receives only the second Transform):

[Attachment: concat2.jpg]


To prove that Fusion is, indeed, doing what I think it's doing, I'll add a Color Correct in between the Merge and the second Transform to break concatenation:

[Attachment: concat3.jpg]


As you can see, the softening is back.

Now again, I don't have any inside knowledge of how Fusion's evaluation works, but I can offer a hypothesis:

Transform1_1 (the inverted copy) generates a request. The request includes a tag that indicates "I'm a Transform; please concatenate with me!" Each upstream tool that is also a Transform, including the Merge, passes this tag along with its own request. As the evaluation comes back down the chain, the tool with a concatenate tag passes its inputs to its output, along with its transformation matrix. When we get to the Merge, instead of performing its function, it sees that it has a concat tag in the request, and it passes along two transform matrices and two images, with additional information about the Merge operation that is to be called at the end of the chain. Transform1_1 receives this information and, after evaluating the matrix for the BG image, spawns a new Transform operator that evaluates the FG matrix, and finally spawns a new Merge operator that calculates the image and returns the resultant image.
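Here's a toy version of that hypothesis (all names invented; this is speculation about the mechanism, not Fusion internals). The Merge defers its work, handing both (image, matrix) pairs plus the blend operation down the chain, and the final Transform rasterizes each layer once before running the postponed Merge:

Code:
import numpy as np

def merge_deferred(bg, fg, blend):
    # bg and fg are (pixels, matrix) pairs; nothing is composited yet.
    return {"layers": [bg, fg], "blend": blend}

def rasterize(pixels, matrix):
    # Stand-in for the single real filtering pass.
    return "resample(%s, %s)" % (pixels, np.round(matrix, 3).tolist())

def end_of_chain(deferred, own_matrix):
    # The last Transform stacks its own matrix onto each layer's matrix,
    # rasterizes each layer exactly once, then runs the postponed Merge.
    rasters = [rasterize(px, own_matrix @ m) for px, m in deferred["layers"]]
    return deferred["blend"](rasters[0], rasters[1])

up = np.diag([2.0, 2.0, 1.0])
bg = ("noise", up)        # Transform1's deferred matrix rides with the BG
fg = ("text", np.eye(3))  # Text1 has no transform of its own
deferred = merge_deferred(bg, fg, lambda a, b: "over(%s, %s)" % (b, a))
# Transform1_1 applies the inverse, so the BG nets out to identity while
# the text only ever receives the second (inverse) transform:
print(end_of_chain(deferred, np.linalg.inv(up)))

In this toy, the BG's matrix cancels to identity and only the Text picks up the inverse transform, which matches the behavior in the screenshots.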

So that's one possible method by which the concatenation could happen within the limits of the Fusion DAG, as I understand it. But, as I said, this is all purely theoretical, though an interesting and fun thought experiment.
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

Hendrik Proosa

  • Posts: 3015
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: Is Fusion graph evaluation not directed?

Posted: Tue Jul 09, 2019 4:48 pm

Thank you, Bryan, for a very thoughtful comment! What you describe is what I would presume happens in normal DAG evaluation, and afaik this is exactly how it happens in Nuke, for example. Format (domain, channel, etc.) requests are propagated upstream and results are passed downstream. The transform matrix is passed downstream and combined until a node that breaks concatenation is hit, and the image is rasterized using the final matrix.

The Merge node being transform-aware is somewhat understandable if taken as a product of inverted transform processing (the first transform node is the rasterizer, not the last, as in Nuke). In this case the Merge would actually comp together both inputs in their final, transformed state, which is partially a product of the transform after the Merge. And rasterization to that final state would happen in the first transform, before the Merge. I doubt the Merge node could pass the merge operation as-is downstream for the last transform to do; that would mean the transform tool would also have to contain the whole Merge tool code, which doesn't sound plausible.

But in Fusion I see some practical effects that seem to defy this straightforward logic. I'll be behind my desk again tomorrow and will prep some test images that demonstrate those curious things. It seems to be related to how the Fu compositing engine itself is built, which differs considerably from Nuke as far as I know.
I do stuff.

Bryan Ray

  • Posts: 2478
  • Joined: Mon Nov 28, 2016 5:32 am
  • Location: Los Angeles, CA, USA

Re: Is Fusion graph evaluation not directed?

Posted: Tue Jul 09, 2019 5:15 pm

Looking forward to seeing your tests!
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

Chad Capeland

  • Posts: 3017
  • Joined: Mon Nov 10, 2014 9:40 pm

Re: Is Fusion graph evaluation not directed?

Posted: Tue Jul 09, 2019 6:44 pm

The evaluation of the tools isn't directed at all, and isn't even deterministic (not sure if that's 100% related or not). Other applications like Nuke and Shake (and maybe other scanline rendering tools) run up the graph for each rendered segment. Fusion doesn't, but it does check for dependencies and loops when inputs are connected (including when settings are loaded). That allows you to build your graph with completely broken or null tools. The tools themselves have no say in how the connections are limited.

This leads to some odd situations, like you can connect the output of a macro to the input of the same macro, but you can't insert a piperouter in the middle. ;)
Chad Capeland
Indicated, LLC
www.floweffects.com

Hendrik Proosa

  • Posts: 3015
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: Is Fusion graph evaluation not directed?

Posted: Wed Jul 10, 2019 10:39 am

This explains it :roll: By nondeterministic you mean that the same comp does not always produce the same output, I presume?

And for a little entertainment, my quiz show. Please try to guess before trying it out; a comper should know the consequences of his choices, shouldn't he :D

We have a graph that looks like this:
[Attachment: CropperCapture[356]_t.jpg]

A series of transforms, where Ta, Tb, Tab and Tcd scale and rotate the source image, the Merges comp them together, and T1 and T2 scale them back to the original size.

Original image viewed through T2 with filter method set to linear in all transforms and merges looks like this:
[Attachment: CropperCapture[357]_t.jpg]


Question 1: which images (A, B, C, D, or a combination of them) change when the filter is set to nearest neighbor in each of the following nodes, while viewing through T2:
Ta
Tb
M1
T1
Tab
Tcd
M2
T2

Answers could look like: Ta - AB; Tb - C, etc.

Question 2: which images (A, B, or a combination of them) change when the filter is set to nearest neighbor in each of the following nodes, while viewing through T1:
Ta
Tb
M1
T1
I do stuff.

Chad Capeland

  • Posts: 3017
  • Joined: Mon Nov 10, 2014 9:40 pm

Re: Is Fusion graph evaluation not directed?

Posted: Wed Jul 10, 2019 6:47 pm

Hendrik Proosa wrote: This explains it :roll: By nondeterministic you mean that the same comp does not always produce the same output, I presume?


Not intentionally. :D

But Fusion doesn't say "Oh, so USER wants a frame, let's see... we'll queue up these tools to process in this order so we get a certain result". Instead, the order in which requests get queued is, from our perspective, random. If ToolA and ToolB are both rendering simultaneously and ToolA gets done 1 ms earlier than ToolB, the tool that is requesting ToolA's output will render next; if ToolA gets done 1 ms later, then the tool that is requesting ToolB's output renders next. There can be consequences to this, as in the case of masks, or badly written scripts that set variables in ways the user did not anticipate.
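A trivial illustration of why that can bite (plain Python, nothing Fusion-specific): if per-tool callbacks mutate shared state, the answer depends on which tool happened to finish first:

Code:
shared = {}

def on_tool_done(name):
    # A badly written script might stash state like this on every render.
    shared["last_rendered"] = name

# Frame 1: ToolA finishes 1 ms earlier; frame 2: ToolB does.
for finish_order in (["ToolA", "ToolB"], ["ToolB", "ToolA"]):
    for tool in finish_order:
        on_tool_done(tool)
    print(shared["last_rendered"])  # prints ToolB, then ToolA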

There's of course floating point nondeterminism, where A+B+C does not equal B+C+A. But that's usually quite minor in comparison to, say, a mask that renders at the wrong resolution on one frame because it was requested in a different order.
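The floating point part is easy to demonstrate: addition isn't associative, so regrouping the same three numbers can change the result:

Code:
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- the 1.0 is swallowed by -1e16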

How that affects transforms, however, I haven't thought much about.
Chad Capeland
Indicated, LLC
www.floweffects.com

Igor Riđanović

  • Posts: 1596
  • Joined: Thu Jul 02, 2015 5:11 am
  • Location: Los Angeles, Calif.

Re: Is Fusion graph evaluation not directed?

Posted: Wed Jul 10, 2019 9:28 pm

Fascinating! I was unaware of this.
www.metafide.com - DaVinci Resolve™ Apps

Peter Loveday

Blackmagic Design

  • Posts: 24
  • Joined: Tue Sep 30, 2014 6:23 am

Re: Is Fusion graph evaluation not directed?

Posted: Thu Jul 11, 2019 2:54 pm

I'll attempt to lend some insight where I can... but this all becomes a little bit hypothetical, as more or less no software really adheres to a DAG in the pure mathematical graph-theory sense.

So if we're talking in pure terms, then no, there is no part that can be considered 100% directed. A given node requests a certain time from upstream (let's say a TimeSpeed), and that affects the result of an upstream node. This breaks the one-way data contract already: the upstream node is producing a differing result based on the downstream node. If I view a node, then view its downstream TimeSpeed, I might see totally differing results, even though the ultimate result is the same.
This is basically the same as what happens when viewing intermediate results of concatenated transforms. It's not really that different: if you view something before a downstream tool has had the option to override upstream behaviour, it will look different from the downstream tool's result.
Equally, DoD/RoI can do the same: a downstream node can alter an upstream result, again breaking that contract.

So if we break away from that pure definition (that is pretty much never used in software), there are a couple of other cases that come to mind where things are a bit different in Fusion.

1) Mask processing. Masks size themselves to their target node, hence having dependencies on the resolution of what they affect.
2) Particles. This is definitely not Acyclic. It's cyclic by design, in fact... the node layout might be linear(ish), but the evaluation is definitely not.

Aside from these, and without taking an overly pure definition, it is pretty much a DAG. Chad's evaluation of nondeterminism in order of operations is valid, of course, but DAGs do not guarantee that... any differences due to order could be considered bugs, or at the least, less than ideal implementations.

As an aside, on the transform/merge concatenation... no, the first node does not evaluate the concatenation. In fact no node does. The entire chain is part of the TransformMatrix datatype parameter. That is to say, Transform, Merge, Tracker etc. simply add their particular operation to the data passed down, just like other nodes add theirs to an image. At some point the overall transform is applied, but that's up to the TransformMatrix. The nodes need not understand the overall complexities.
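A loose sketch of that arrangement (invented names; the real TransformMatrix datatype is of course richer): the accumulation lives in the parameter's datatype, and the nodes merely contribute to it:

Code:
import numpy as np

class TransformMatrixParam:
    # The parameter carries the whole chain; tools just contribute to it.
    def __init__(self):
        self.ops = []

    def add(self, matrix):
        # Transform, Merge, Tracker etc. each append their own operation.
        self.ops.append(matrix)

    def apply(self, pixels):
        # Only here, when pixels are finally needed, is the composite
        # transform evaluated and the one real resample performed.
        composite = np.eye(3)
        for m in self.ops:
            composite = m @ composite
        return "resample(%s, %s)" % (pixels, np.round(composite, 3).tolist())

up = np.diag([2.0, 2.0, 1.0])
param = TransformMatrixParam()
param.add(up)                 # a Transform's contribution
param.add(np.linalg.inv(up))  # an inverted Transform later in the chain
print(param.apply("noise"))   # one resample with the composite (identity)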

I realise this is all fairly broad, but hopefully it assists in understanding, and taking advantage of, the way things operate.
Love, Light and Peace,
- Peter Loveday

Pieter Van Houte

  • Posts: 631
  • Joined: Wed Nov 05, 2014 1:04 am

Re: Is Fusion graph evaluation not directed?

Posted: Thu Jul 11, 2019 8:52 pm

Hendrik Proosa wrote: In one recent thread I found a curious thing: the Merge node does not break transform concatenation. Inspired by this, and unable to reason how that is possible in a directed graph where, by definition, a node's result is the product of its inputs only (nodes downstream must not change upstream behavior)


Hendrik Proosa wrote: ...normal DAG evaluation and afaik this is exactly how it happens in Nuke, for example.


For one thing, this (my emphasis above) is equally untrue for Nuke. The simplest case: in a concatenated transform chain, Nuke will change the filtering behaviour of upstream nodes based on a downstream input. I cannot transform with one filter and then do another transform with a different filter without breaking concatenation with another node. At least in Fusion I can control the concatenation chain with the Flatten Transform control (though it would be nice to be able to see in the Flow view where that happens).

Regardless, concatenation through Merge is incredibly useful. You can do one of those 'infinite zoom' setups (you know, zoom into Earth, into a city, to a rooftop, to a pigeon on the roof, to a tick on the pigeon,...) in a mere couple of nodes.

Once you get used to having this as a feature, it's really hard to go back to an environment where it is not offered. Another one of those features is fully concatenating masked Transforms, by the way.
Support We Suck Less on Patreon -> https://www.patreon.com/wesuckless

https://www.steakunderwater.com/wesuckless

Sander de Regt

  • Posts: 3500
  • Joined: Thu Nov 13, 2014 10:09 pm

Re: Is Fusion graph evaluation not directed?

Posted: Thu Jul 11, 2019 9:03 pm

Pieter Van Houte wrote: You can do one of those 'infinite zoom' setups (you know, zoom into Earth, into a city, to a rooftop, to a pigeon on the roof, to a tick on the pigeon,...) in a mere couple of nodes.

I should do one of those as a tutorial to demonstrate concatenation (if it doesn't already exist).
Sander de Regt

ShadowMaker SdR
The Netherlands

Pieter Van Houte

  • Posts: 631
  • Joined: Wed Nov 05, 2014 1:04 am

Re: Is Fusion graph evaluation not directed?

Posted: Thu Jul 11, 2019 9:10 pm

You absolutely should :)
Support We Suck Less on Patreon -> https://www.patreon.com/wesuckless

https://www.steakunderwater.com/wesuckless

Hendrik Proosa

  • Posts: 3015
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: Is Fusion graph evaluation not directed?

Posted: Fri Jul 12, 2019 7:13 am

Some very good information here, thank you all!

I'm looking at graph evaluation from a practical perspective, so it's true that I'm probably not after a mathematically pure DAG. My interest lies in the actual implementation and usage scenarios. It's also true that Nuke's transform concatenation does not lend itself to this, mostly visible in the filtering and motion blur handling through transform nodes. The thing that makes me scratch my head is predicting what will happen if I do something, and vice versa, what I have to do to make something happen. For Nuke there is a pretty elaborate description of concatenation on Nukepedia; maybe it is of interest here: http://www.nukepedia.com/written-tutori ... ms-in-nuke

Coming back to predicting what will happen when I add nodes. What I meant by nodes only generating output based on their upstream nodes is this: the node I am viewing through (or saving output from) must create the same image independent of what the nodes downstream from it are doing. I'm not sure that is actually the case in Fu (I must do some tests), but I think I have seen situations where this constraint is not followed. I'm not taking into account any expression linking or clones etc. here, which add another layer of complexity; just pure nodes.

In my transform example, the biggest practical impact for me is that there is no predictable way to set the filtering for the whole final image. In Nuke I would change the filter and mblur settings in the last transform and it would affect everything; in Fu, setting the filtering in node T2 (see illustration above) does not do that. Instead it changes the filtering on image A only. I'm not sure anyone would make that guess correctly. The only way to predictably change the filtering on the final image is to set it in ALL transform and merge nodes, which obviously makes changes a major pita.

I thought a bit about why node T2 would change only image A, and combined with some other nuggets of info from earlier threads and sources, my impression is that the Fusion engine operates in an environment somewhat analogous to OpenGL. Images are treated as separate texture buffers (analogous to textures in OpenGL), and the node graph can change the state of those textures. In a graph constructed like the one in my illustration, nodes pass information upstream following some set logic (a Merge passes downstream info to its BG, its own settings to its FG, etc.) and that information is attached to the texture. The transform matrix itself changes the transform matrix of the image plane used to render that texture, which also explains the love for relative coordinates.

Now, in the case of my test, the filter setting propagates upstream to the image A texture buffer, and in the final rasterizing/rendering stage that image is sampled using its state (which includes the filtering setting) at the moment rasterization happens. I'm not sure the Fu engine actually uses OpenGL as the basis for its rasterization core, but something very similar seems to be going on. Maybe I'm 100% on the wrong track, but this is my current impression :)
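As a sketch of that mental model (pure speculation on my part, with made-up names): the image would be a buffer carrying sampler state, and a downstream node could reach back and flip the filter flag on buffer A before the final pass samples it:

Code:
class TextureBuffer:
    def __init__(self, pixels):
        self.pixels = pixels
        self.filter = "linear"  # sampler state, mutable by the graph
        self.plane_ops = []     # accumulated image-plane transforms

def set_filter(buffer, mode):
    # In this model, changing the filter in T2 mutates the state of the
    # upstream buffer (image A) instead of affecting the composited result.
    buffer.filter = mode

def rasterize(buffer):
    # Sampling uses whatever state the buffer holds at this moment.
    return "sample(%s, filter=%s)" % (buffer.pixels, buffer.filter)

image_a = TextureBuffer("A")
set_filter(image_a, "nearest")  # done by T2, far downstream
print(rasterize(image_a))       # -> sample(A, filter=nearest)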

PS. In that Nukepedia article about Nuke concatenation, there is an illustration of the MergeMat node not breaking concatenation. This aligns very nicely with what I wrote above, as MergeMat also operates on texture buffers.
I do stuff.

Hendrik Proosa

  • Posts: 3015
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: Is Fusion graph evaluation not directed?

Posted: Fri Jul 12, 2019 7:22 am

My two cents on the infinite zoom as well. In Nuke, an infinite zoom setup needs reordering of nodes: instead of doing the final transform after the merge, it must be done for each element before it. Those transforms are usually linked together through expressions or cloning, whichever the user feels more comfortable with. This adds n-1 new nodes for n elements in the zoom setup, which is not that much. A rough sketch of the arithmetic is below.
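In this toy model (illustrative names and numbers only), element i is merged in at stage i, so its pre-merge transform has to equal the product of the zoom-stage matrices from its own stage onward; that is what the expression links would compute:

Code:
import numpy as np

def scale(s):
    # 2D homogeneous scaling matrix
    return np.diag([s, s, 1.0])

zoom_stages = [scale(10.0), scale(10.0), scale(10.0)]  # one zoom per element

for i in range(len(zoom_stages)):
    # Composite of stages i..n-1, applied to element i before its merge.
    net = np.eye(3)
    for m in zoom_stages[i:]:
        net = m @ net
    print("element %d: net scale %g" % (i, net[0, 0]))
# -> element 0: net scale 1000; element 1: 100; element 2: 10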
I do stuff.
