- Posts: 3033
- Joined: Wed Aug 22, 2012 6:53 am
- Location: Estonia
Based purely on my own intuition and experience (which, of course, might be wildly inaccurate), evaluation begins with the tool that is asked to render (either a Saver or a tool put into the Viewer). For each input, this tool generates a request that is passed upstream to the output it is connected to. That tool then generates requests for each of its own inputs, and so forth until a Loader or generator tool is found. That tool calculates its output(s) and passes a raster, along with any auxiliary data, to the tool that requested it. That tool performs its operation, and on back down the chain until the tool that created the original request can render its own output.
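To make that request-up / render-down idea concrete, here's a tiny sketch of pull-based evaluation. The `Tool` class and its methods are my own invention for illustration, not anything from Fusion's internals:

```python
class Tool:
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)  # upstream tools this one pulls from

    def render(self):
        # Generate a request for each input and pass it upstream; the
        # recursion bottoms out at a Loader/generator with no inputs.
        upstream = [tool.render() for tool in self.inputs]
        return self.process(upstream)

    def process(self, images):
        if not images:  # Loader or generator: produces a raster directly
            return f"<raster:{self.name}>"
        return f"{self.name}({', '.join(images)})"


# Saver asks Merge, Merge asks Loader and Text; rasters flow back down.
loader = Tool("Loader1")
text = Tool("Text1")
merge = Tool("Merge1", [loader, text])
saver = Tool("Saver1", [merge])
print(saver.render())  # Saver1(Merge1(<raster:Loader1>, <raster:Text1>))
```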
In the case of concatenating Transforms, I assume that rather than passing along the transformed pixels, upstream tools instead pass their image input directly to the output along with the transform matrix. When the first non-Transform node is reached, the tool at the end of the concatenating chain applies the matrix and calculates the resultant image.
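A quick bit of matrix math shows why deferring to a single resample matters. In my test graph below, a transform followed by its exact inverse concatenates to the identity, so a chain that multiplies matrices and filters once has literally nothing to do, whereas filtering after each Transform would soften the image twice (the 0.2-radian rotation here is just an illustrative stand-in):

```python
import math

def rot(theta):
    """2x2 rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def compose(a, b):
    """Matrix product a @ b: apply b first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A rotation followed by its inverse concatenates to the identity, so the
# single resample at the end of the chain leaves the pixels untouched.
net = compose(rot(-0.2), rot(0.2))
is_identity = all(abs(net[i][j] - (1.0 if i == j else 0.0)) < 1e-12
                  for i in range(2) for j in range(2))
print(is_identity)  # True
```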
Regarding the concatenation of the Background's transform through a Merge, here is a small test graph that I used to verify the assertion for myself:
Code:
{
	Tools = ordered() {
		Transform1 = Transform {
			Inputs = {
				Center = Input { Value = { 0.500130208333333, 0.5 }, },
				Input = Input {
					SourceOp = "BrightnessContrast1",
					Source = "Output",
				},
			},
			ViewInfo = OperatorInfo { Pos = { 674, 216 } },
		},
		BrightnessContrast1 = BrightnessContrast {
			Inputs = {
				ClipBlack = Input { Value = 1, },
				ClipWhite = Input { Value = 1, },
				Input = Input {
					SourceOp = "FastNoise1",
					Source = "Output",
				},
			},
			ViewInfo = OperatorInfo { Pos = { 564, 216 } },
		},
		FastNoise1 = FastNoise {
			Inputs = {
				Width = Input { Value = 1920, },
				Height = Input { Value = 1080, },
				["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2" }, },
				Detail = Input { Value = 5, },
				Contrast = Input { Value = 512, },
				XScale = Input { Value = 8.63, },
				Color1Alpha = Input { Value = 1, },
			},
			ViewInfo = OperatorInfo { Pos = { 454, 216 } },
		},
		Text1 = TextPlus {
			CtrlWZoom = false,
			Inputs = {
				Width = Input { Value = 1920, },
				Height = Input { Value = 1080, },
				["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2" }, },
				Font = Input { Value = "Open Sans", },
				StyledText = Input { Value = "text", },
				Style = Input { Value = "Bold", },
				ManualFontKerningPlacement = Input {
					Value = StyledText {
						Array = {
						},
						Value = ""
					},
				},
			},
			ViewInfo = OperatorInfo { Pos = { 782, 140 } },
		},
		Merge1 = Merge {
			Inputs = {
				Background = Input {
					SourceOp = "Transform1",
					Source = "Output",
				},
				Foreground = Input {
					SourceOp = "Text1",
					Source = "Output",
				},
				PerformDepthMerge = Input { Value = 0, },
			},
			ViewInfo = OperatorInfo { Pos = { 784, 216 } },
		},
		Transform1_1 = Transform {
			Inputs = {
				Center = Input { Value = { 0.500130208333333, 0.5 }, },
				InvertTransform = Input { Value = 1, },
				Input = Input {
					SourceOp = "Merge1",
					Source = "Output",
				},
			},
			ViewInfo = OperatorInfo { Pos = { 954, 216 } },
		},
		BrightnessContrast2 = BrightnessContrast {
			Inputs = {
				Gamma = Input { Value = 5, },
				Input = Input {
					SourceOp = "Merge2",
					Source = "Output",
				},
			},
			ViewInfo = OperatorInfo { Pos = { 993, 322 } },
		},
		Merge2 = Merge {
			Inputs = {
				Background = Input {
					SourceOp = "BrightnessContrast1",
					Source = "Output",
				},
				Foreground = Input {
					SourceOp = "Transform1_1",
					Source = "Output",
				},
				ApplyMode = Input { Value = FuID { "Difference" }, },
				PerformDepthMerge = Input { Value = 0, },
			},
			ViewInfo = OperatorInfo { Pos = { 993, 289 } },
		}
	}
}
Zoomed in to 400%, I can see that after the first Transform there has definitely been some filtering that softens the edge of my thresholded noise:
After the Merge, I used the same Transform with the Invert Transform switch checked, and I can see that the filtering is gone, indicating that the background is no longer being filtered (although my Text tool is, since it receives only the second Transform):
To prove that Fusion is, indeed, doing what I think it's doing, I'll add a Color Correct in between the Merge and the second Transform to break concatenation:
As you can see, the softening is back.
Now again, I don't have any inside knowledge of how Fusion's evaluation works, but I can offer a hypothesis:
Transform1_1 (the inverted copy) generates a request. The request includes a tag that says, in effect, "I'm a Transform; please concatenate with me!" Each upstream tool that can participate in concatenation, the Transforms and the Merge alike, passes this tag along with its own requests. As the evaluation comes back down the chain, each tool holding a concatenate tag passes its input image through to its output along with its transformation matrix, rather than resampling. When we get to the Merge, instead of performing its composite, it sees the concat tag in the request and passes along two transform matrices and two images, together with a note describing the Merge operation to be performed at the end of the chain. Transform1_1 receives all of this and, after evaluating the matrix for the BG image, spawns a new Transform operator to evaluate the FG matrix, and finally spawns a new Merge operator that performs the composite and returns the resultant image.
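The hypothesis above can be restated as a purely speculative sketch. Everything here (the node dictionaries, `evaluate`, the deferred branch lists) is invented for illustration and has nothing to do with Fusion's actual internals; it just models "answer a concat-tagged request with deferred images and matrices plus a pending merge op, and flatten at the end of the chain":

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def compose(a, b):
    # Matrix product a @ b: apply b first, then a.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

IDENTITY = [[1.0, 0.0], [0.0, 1.0]]

def evaluate(node):
    """Answer a concat-tagged request with deferred (image, matrix)
    branches plus any pending merge mode, instead of finished pixels."""
    if node["type"] == "Transform":
        branches, pending = evaluate(node["input"])
        # Fold our matrix into each branch rather than resampling pixels.
        return [(img, compose(node["matrix"], m))
                for img, m in branches], pending
    if node["type"] == "Merge":
        bg, _ = evaluate(node["bg"])
        fg, _ = evaluate(node["fg"])
        # Defer the merge: pass both branches downstream, noting the op.
        return bg + fg, node["mode"]
    return [(node["name"], IDENTITY)], None  # leaf: raster + identity

# A stand-in for the test graph: noise through Transform1, merged over
# with text, then the inverted Transform1_1 at the end of the chain.
graph = {"type": "Transform", "matrix": rot(-0.2), "input":
         {"type": "Merge", "mode": "Normal",
          "bg": {"type": "Transform", "matrix": rot(0.2),
                 "input": {"type": "Image", "name": "noise"}},
          "fg": {"type": "Image", "name": "text"}}}

branches, mode = evaluate(graph)
(bg_img, bg_m), (fg_img, fg_m) = branches
# The background's two rotations cancel out to a net identity (so no
# filtering), while the text only picks up the final inverted transform.
print(bg_img, all(abs(bg_m[i][j] - IDENTITY[i][j]) < 1e-12
                  for i in range(2) for j in range(2)))
```

In this toy model the background branch ends up with an identity matrix while the foreground branch carries only the inverse transform, which matches what the viewer showed: a sharp background and a transformed Text.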
So that's one possible method by which the concatenation could happen within the limits of the Fusion DAG, as I understand it. But, as I said, this is all purely theoretical, though an interesting and fun thought experiment.