
PSA: "This thing I made has dark edges around it!"


Snadegg

  • Posts: 29
  • Joined: Sat Dec 17, 2022 12:08 am
  • Real Name: Xander Wanlass

PSA: "This thing I made has dark edges around it!"

Posted: Sun Mar 05, 2023 1:52 am

I've seen a lot of posts complaining about this issue, and I've been confused by it myself in the past, so I thought I'd make a post that explains it all in one place and provides a few resources, since I think it'd be helpful.

Have you masked or keyed something, only to discover after merging it that there’s an ugly dark ring around it? Well (as far as I know) there are three major reasons this could be happening:
  1. Alpha premultiplication
  2. Working with nonlinear color
  3. Spill suppression
So I’m going to work through these three and give an overview of what each of them means for you.
  1. Alpha premultiplication. This is a subject I want to take a deeper dive into in a video. Too often, the alpha channel gets simplified into the “transparency channel,” but there's a reason it's called alpha: it's a mathematical coefficient. Credit to Bryan Ray for this explanation: when you merge something in Fusion, for each color channel it takes the background, multiplies it by one minus the foreground's alpha, and then adds the foreground:

    C_out = C_bg × (1 − α_fg) + C_fg

    But wait, all you did was add the foreground colors! If the foreground isn't premultiplied, its colors get added at full strength even where the alpha is less than 1, so the dimming you applied to the background doesn't balance out the foreground values you're adding, and the result comes out too bright.

    Bad_Comp.jpg: Merging an unpremultiplied image results in above-one values.

    This is because Fusion's Merge node expects premultiplied inputs, meaning that the RGB values have already been multiplied by the alpha. That takes care of the over-brightening in semitransparent areas, and it's what masks produce by default.

    The problem comes when you perform any color modifications on the image. Since the RGB values at the edges of a masked image have already been darkened by the alpha, your color modifications won't behave correctly there. This is what the "Pre-Divide/Post-Multiply" option fixes: the image is first divided by its alpha, which recovers the original colors to a certain extent (depending on bit depth and how close to 0 the alpha is at each point), then the node's effect is applied, and then the result is multiplied by the alpha again. The opposite problem comes if you accidentally multiply an image that is already premultiplied: you're multiplying the color values by the alpha a second time, which darkens the semitransparent edges.
    Double_Multiplied.jpg: Merging an image that has been multiplied by alpha twice.

    To prevent this, whenever you're making color changes to a premultiplied image, always pre-divide first and post-multiply afterward. This applies to gamut nodes, blurs, curves, etc. If the node you're using doesn't offer that option, Fusion has standalone Alpha Divide and Alpha Multiply nodes: put an Alpha Divide before your color changes and an Alpha Multiply after them. (There's a small sketch of this math right after this list.)

  2. Working with nonlinear color. This is a really complex topic that I still don't fully comprehend, but to understand the fundamentals I personally recommend this video.

    The important thing to understand is that, regardless of what your working color space in Fusion is, your monitor expects gamma-encoded values and applies a decoding gamma curve to whatever you send it. That means any values between the minimum and maximum get darkened according to that curve. For instance, take a look at this red dot over a green background.
    LinearizedGreenRed.png: On the left, a feathered red dot is merged straight away. On the right, I removed the gamma from the red dot and the green background, then added it back at the end, using sRGB. (Note: in this case it's technically not necessary to de-gamma the backgrounds, since they're 100% red and green respectively; I just did it to show the process.)

    Notice the dark ring around the red dot? When Fusion interpolates between red and green, it does so linearly. But again, your monitor isn't designed to accept linear input, so it darkens those intermediate values. How do you fix it? Work in linear light, then use a Gamut node (or a Color Space Transform in Resolve) to add back the gamma curve you removed. Here's a minutephysics video on this subject.

    This is an area that is severely lacking in Fusion tutorials and it's something I had to figure out myself. If an image you generated in Fusion is intended to be seen in the final result, you need to treat that image just like you would treat your footage.
    Edit: This is incorrect for reasons I'm going to explain later, but basically this works for solid colors, not for gradients or complex backgrounds.
    Watch VFXstudy's excellent explainer on working in linear here.

    (Fun fact! I'm pretty sure Resolve's default transitions have a dark trail when motion blur is enabled as a result of them not taking gamma into account!)

    If I'm not mistaken, the technically correct way to do this would be to just add the gamma curve after you've done all of your work. I say "technically correct" because these generator nodes are inherently linear in nature, so you should really only need to add a curve. However, if you do that, the preview color in the Inspector won't match the output. For that reason, I believe it's more useful to remove the curve from your generator nodes before compositing and then add it back afterwards, just like regular footage. If any part of that statement is wrong, I would really love to be corrected.
    Edit: This technique works fine for solid backgrounds, and I'll keep using it for that purpose, but unfortunately there's no good way to do it with a gradient, since a gamut node affects every color between minimum and maximum, not just the colors the gradient node is interpolating.
    TL;DR: With gradients, you can only really add the gamma at the end to fix the color blending.


    Also, a side note: I am severely struggling to understand color management and have reached out to a few different colorists and even BMD support to try to get answers to my biggest questions, to no avail. If anyone knows a good person to contact and ask, please let me know.

    Edit: I found this reply incredibly helpful.

    Hendrik Proosa wrote:Generators by definition produce data in working space, because they produce numbers. There is no pre-determined interpretation for those numbers, so they are effectively expressing light values in working space. Meaning there is no curve to remove from them, unless it is desirable to somehow bend the data ”for reasons”. For example a ramp generator that produces 0.0-1.0 linear ramp (where values increment linearly, middle of ramp is 0.5), this ramp expresses linear-light ramp in linearized working space and logarithmic (nonlinear in linear-light domain) ramp in let's say ACEScc working space, because the interpretation of generated values depends on working space. Adding a colorspace transform after ramp applies custom explicit interpretation from user: ”take this ramp as if it were X and convert it to Y”. And if Y doesn't align with working space, it adds another layer of ”and take this Y as if it were actually working space values”. Sounds a bit confusing maybe but concept is simple: data does not have colorspace by itself, colorspace is part of metadata for interpreting the data that can be juggled as one wishes.


  3. Spill suppression. If you've keyed out your footage, there's a good chance you've applied some form of spill suppression. The problem with this is that it could be removing the green/blue without adding anything back to replace it, which will darken your image overall, especially at the edges.

    Unfortunately I can't really find a good free resource for this; all I can do is recommend VFXstudy's "Compositing with DaVinci Resolve & Fusion" course, specifically lesson 6.9, "Despill and Spill Color Replacement." Use a Channel Booleans node to subtract the RGB values of the despilled image from the untouched image, then use that output as a matte to add back some "spill" of your choice.
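
Coming back to point 1 for a second: here's a minimal Python/numpy sketch of the merge math and the pre-divide/post-multiply trick described above. This isn't Fusion's actual code; the function name and pixel values are made up purely for illustration.

Code: Select all
import numpy as np

def merge_over(fg_rgb, fg_alpha, bg_rgb):
    # Fusion-style Merge: assumes the foreground is already premultiplied.
    return fg_rgb + (1.0 - fg_alpha) * bg_rgb

# A 50%-transparent red edge pixel over a mid-grey background.
bg = np.array([0.5, 0.5, 0.5])
fg_straight = np.array([1.0, 0.0, 0.0])      # straight (un-premultiplied) color
alpha = 0.5
fg_premult = fg_straight * alpha             # what the Merge node expects

print(merge_over(fg_straight, alpha, bg))         # [1.25 0.25 0.25] -> above-one, too bright
print(merge_over(fg_premult, alpha, bg))          # [0.75 0.25 0.25] -> correct composite
print(merge_over(fg_premult * alpha, alpha, bg))  # [0.5  0.25 0.25] -> double-multiplied, dark edge

# Pre-divide / post-multiply around a color change (here a 2x gain on red):
recovered = fg_premult / max(alpha, 1e-6)    # Alpha Divide
graded = recovered * np.array([2.0, 1.0, 1.0])
graded_premult = graded * alpha              # Alpha Multiply, ready to merge again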
Edit: The following are a couple of responses from Bryan Ray and Hendrik Proosa.

Bryan Ray wrote: This is most likely to be a problem for green spill, as something like 70% of the image's luminance comes from the green channel. If it's critical to maintain the image's luminance (it may not be if you're performing color corrections on the foreground to integrate with a new background), here's the procedure I learned:

Perform a Difference Merge of the original image and the despilled version. Desaturate the result completely. Add that back to the despilled image. That should restore the luminance to its original levels. This should be more accurate than eyeballing it with a Color Corrector.


Discussing desaturation methods:

Hendrik Proosa wrote:Exact coefficients are irrelevant, idea is to just add back some of the ambient environment light and reflected light intensity that gets removed by despill. You can take just the green (or blue) too and shuffle it to rgb, scale it to make stronger etc.
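
To put Bryan's procedure above into numbers, here's a rough numpy sketch of the difference / desaturate / add-back idea. As Hendrik says, the exact coefficients aren't critical; the Rec.709 weights and the sample pixel values here are just an example, not Fusion's internal math.

Code: Select all
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def restore_despill_luma(original, despilled):
    # Difference of the original and the despilled image...
    removed = np.abs(original - despilled)
    # ...desaturated completely (reduced to a single luminance value)...
    luma = (removed * REC709_LUMA).sum(axis=-1, keepdims=True)
    # ...and added back to the despilled image.
    return despilled + luma

# Example: one green-spilled edge pixel, after some despill operation.
original  = np.array([[0.30, 0.55, 0.25]])
despilled = np.array([[0.30, 0.275, 0.25]])
print(restore_despill_luma(original, despilled))  # Rec.709 luminance now matches the original pixel's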



I hope this post is helpful to anyone searching for answers in the future. Fusion's greatest strength and greatest weakness is that it forces you to think about every step of image processing that goes into a final composite. That's why I love it as a learning playground.

If any part of this is incorrect or misleading, please feel free to reply and I will edit the post accordingly.
Last edited by Snadegg on Tue Mar 14, 2023 3:48 am, edited 4 times in total.
Windows 11, RTX 3070, Ryzen 5900x, 96 GB @ 3200MHz

Hendrik Proosa

  • Posts: 3037
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: PSA: "This thing I made has dark edges around it!"

Posted: Sun Mar 05, 2023 8:55 am

Nice post!

Two small things I would add. Comping in nonlinear space does not directly introduce ugly edge artifacts. For example, the ages-old workflow in AE has been non-color-managed, and people comp directly in output space (sRGB etc.). Since they operate without any additional colorspace transforms being applied afterwards, merging sRGB gamma-corrected data applies linear interpolation from FG to BG with a relatively natural-looking result. It is different than when done in the linear domain, but passable (people don't complain that much). Is it preferable? No, if there is a choice.

Second is about generators. Generators by definition produce data in working space, because they produce numbers. There is no pre-determined interpretation for those numbers, so they are effectively expressing light values in working space. Meaning there is no curve to remove from them, unless it is desirable to somehow bend the data ”for reasons”. For example a ramp generator that produces 0.0-1.0 linear ramp (where values increment linearly, middle of ramp is 0.5), this ramp expresses linear-light ramp in linearized working space and logarithmic (nonlinear in linear-light domain) ramp in let's say ACEScc working space, because the interpretation of generated values depends on working space. Adding a colorspace transform after ramp applies custom explicit interpretation from user: ”take this ramp as if it were X and convert it to Y”. And if Y doesn't align with working space, it adds another layer of ”and take this Y as if it were actually working space values”. Sounds a bit confusing maybe but concept is simple: data does not have colorspace by itself, colorspace is part of metadata for interpreting the data that can be juggled as one wishes.
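
For example, read the midpoint of that 0.0-1.0 ramp under two different interpretations (a quick Python sketch; the standard sRGB decode is used here as just one possible interpretation of the same numbers):

Code: Select all
# The midpoint of a generated 0-1 ramp, read under two different interpretations.
ramp_value = 0.5

# Interpreted as linear-light working-space data: it simply is half of full brightness.
as_linear = ramp_value

# Interpreted as sRGB-encoded data and decoded to linear light (standard sRGB transfer function):
as_srgb_decoded = ((ramp_value + 0.055) / 1.055) ** 2.4 if ramp_value > 0.04045 else ramp_value / 12.92

print(as_linear)        # 0.5
print(as_srgb_decoded)  # ~0.214 -- same number, different light value, purely a matter of interpretation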
I do stuff

Bryan Ray

  • Posts: 2488
  • Joined: Mon Nov 28, 2016 5:32 am
  • Location: Los Angeles, CA, USA

Re: PSA: "This thing I made has dark edges around it!"

Posted: Sun Mar 05, 2023 7:40 pm

Snadegg wrote:
Spill suppression. If you've keyed out your footage, there's a good chance you've applied some form of spill suppression. The problem with this is that it could be removing the green/blue without adding anything back to replace it, which will darken your image overall, especially at the edges.


This is most likely to be a problem for green spill, as something like 70% of the image's luminance comes from the green channel. If it's critical to maintain the image's luminance (it may not be if you're performing color corrections on the foreground to integrate with a new background), here's the procedure I learned:

Perform a Difference Merge of the original image and the despilled version. Desaturate the result completely. Add that back to the despilled image. That should restore the luminance to its original levels. This should be more accurate than eyeballing it with a Color Corrector.
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

Hendrik Proosa

  • Posts: 3037
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: PSA: "This thing I made has dark edges around it!"

Posted: Sun Mar 05, 2023 7:54 pm

Yes, adding the desaturated difference of the original and the despilled version back is the simplest good method. Grading this add-back, or multing (multiplying) it with a blurred BG, allows replacing the spill with scene coloration.
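
Roughly, in the same numpy terms as the sketch earlier in the thread (the values are made up; in Fusion the background would come through a Blur node):

Code: Select all
import numpy as np

# Tint the desaturated add-back with a blurred new background instead of neutral gray,
# so the former spill areas pick up the scene's coloration. Gain/grade to taste.
removed_luma = np.array([[0.18]])                # desaturated despill difference
blurred_bg   = np.array([[0.10, 0.15, 0.40]])    # e.g. a bluish replacement scene
despilled    = np.array([[0.30, 0.275, 0.25]])

print(despilled + removed_luma * blurred_bg)     # edge now leans toward the background color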
I do stuff

Snadegg

  • Posts: 29
  • Joined: Sat Dec 17, 2022 12:08 am
  • Real Name: Xander Wanlass

Re: PSA: "This thing I made has dark edges around it!"

Posted: Sat Mar 11, 2023 8:59 pm

Sorry for the delayed response! I started a new job this week.

Hendrik Proosa wrote:Comping in nonlinear space does not directly introduce ugly edge artifacts.


You're right, the fix I suggested was following the advice of the minutephysics video I linked. Here's an image to demonstrate color blending with and without the linearizing step applied.

LinearizedGreenRed.png: On the left, just merged straight away. On the right, I linearized both backgrounds and then added back the sRGB gamma curve. There's a big difference!


The reason I say the technically correct way is probably to just add the curve, without removing it first, is because of what you said about generators inherently being linear. I'm not concerned about the chromaticity, since, as you said, it follows whatever working space you're in because these are just RGB values from 0 to 1; I'm concerned only with the gamma curve. But if you only add the gamma curve without removing it first, the colors in the image won't match the color preview in the generator node. So I think it's more useful to linearize generator nodes and then add the gamma after doing your compositing, even if it's technically wrong.
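
Here are the numbers behind that image, as a small Python sketch. The sRGB formulas are the standard ones; everything else is made up for the example (and as noted above, pure red and green don't change when linearized, so it's really only the blend itself that differs):

Code: Select all
import numpy as np

def srgb_to_linear(v):
    v = np.asarray(v, dtype=float)
    return np.where(v > 0.04045, ((v + 0.055) / 1.055) ** 2.4, v / 12.92)

def linear_to_srgb(v):
    v = np.asarray(v, dtype=float)
    return np.where(v > 0.0031308, 1.055 * v ** (1 / 2.4) - 0.055, 12.92 * v)

red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
alpha = 0.5  # a pixel halfway along the dot's feathered edge

# Blending the gamma-encoded values directly (the dark-ring version):
mixed_encoded = red * alpha + green * (1 - alpha)

# Linearize, blend, then add the sRGB curve back at the end:
mixed_linear = linear_to_srgb(srgb_to_linear(red) * alpha + srgb_to_linear(green) * (1 - alpha))

print(mixed_encoded)  # ~[0.5, 0.5, 0]   -> displays as a dark, muddy mix
print(mixed_linear)   # ~[0.74, 0.74, 0] -> noticeably brighter, more natural blend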

Would love to hear anything you have to add! Thanks for the reply :)

Edit: Also, thank you so much for explaining how color is "interpreted." That reaffirmed something that's taken me months to understand. A major part of the confusion for me is that Resolve calls it a "color space transform," which is an apt description once you understand what's going on, but can be really misleading when you're starting out.
Last edited by Snadegg on Sun Mar 12, 2023 7:02 am, edited 1 time in total.
Windows 11, RTX 3070, Ryzen 5900x, 96 GB @ 3200MHz

Snadegg

  • Posts: 29
  • Joined: Sat Dec 17, 2022 12:08 am
  • Real Name: Xander Wanlass

Re: PSA: "This thing I made has dark edges around it!"

Posted: Sun Mar 12, 2023 6:40 am

Snadegg wrote:So, I think it's more useful to linearize generator nodes and then add the gamma after doing your compositing, even if it's technically wrong.


This trick doesn't really work for gradient backgrounds, unfortunately. I'd really hoped that using the "source gamma space" section would somehow do something different, but it did not. It'd be nice if the preview matched whatever you set the source space to, but it looks like it is what it is here.

So I guess the technically correct way is correct for a reason, but the trick still works great for merging backgrounds that are a solid color. Anyway, that's all I have to say about that unless someone else has something to add.

Bryan Ray wrote:Perform a Difference Merge of the original image and the despilled version. Desaturate the result completely. Add that back to the despilled image. That should restore the luminance to its original levels. This should be more accurate than eyeballing it with a Color Corrector.


The legend himself, thanks for this! Really useful information. I was going to ask whether it'd be better to use a Color Space node to convert to B&W instead, but I did some testing and found out that Fusion's Color Corrector actually respects the original image's luminance, unlike the HSL effect found in something like After Effects. Super neat!
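
For anyone curious, here's roughly what I mean, in plain numbers. This isn't Fusion's or AE's actual code, just the two ideas side by side with a made-up sample color:

Code: Select all
import numpy as np

color = np.array([0.2, 0.8, 0.1])   # a bright green sample pixel

# Luminance-weighted desaturation (keeps the pixel's perceived brightness):
rec709 = np.array([0.2126, 0.7152, 0.0722])
luma_gray = np.full(3, color @ rec709)                  # ~0.62 in every channel

# HSL-style desaturation: lightness = (max + min) / 2, channel weights ignored:
hsl_gray = np.full(3, (color.max() + color.min()) / 2)  # 0.45 in every channel

print(luma_gray, hsl_gray)  # the HSL version comes out visibly darker for this green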
Windows 11, RTX 3070, Ryzen 5900x, 96 GB @ 3200MHz

Bryan Ray

  • Posts: 2488
  • Joined: Mon Nov 28, 2016 5:32 am
  • Location: Los Angeles, CA, USA

Re: PSA: "This thing I made has dark edges around it!"

Posted: Sun Mar 12, 2023 6:55 pm

As long as you're using Rec709/sRGB primaries, the desat in Color Corrector or BrightnessContrast (a lighter weight tool that may be preferable for this purpose) should work. If you're in an ACES pipeline, the luminance coefficients are probably different, and you may need to convert to 709 before desaturating. I've not tested this, though, as I have very rarely worked on shots where it was crucial to preserve the greenscreen plate's luma.
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

Snadegg

  • Posts: 29
  • Joined: Sat Dec 17, 2022 12:08 am
  • Real Name: Xander Wanlass

Re: PSA: "This thing I made has dark edges around it!"

Posted: Sun Mar 12, 2023 7:41 pm

Bryan Ray wrote:As long as you're using Rec709/sRGB primaries, the desat in Color Corrector or BrightnessContrast (a lighter weight tool that may be preferable for this purpose) should work. If you're in an ACES pipeline, the luminance coefficients are probably different, and you may need to convert to 709 before desaturating. I've not tested this, though, as I have very rarely worked on shots where it was crucial to preserve the greenscreen plate's luma.


I believe Fusion is technically using BT.601 matrix coefficients (0.299, 0.587, 0.114) rather than BT.709 (0.2126, 0.7152, 0.0722). So it may or may not be better to use the Color Space node, since it allows you to choose the balance, but I wouldn't think it really matters for the vast majority of comps. Admittedly, though, I'm not really sure where these coefficients play into color spaces aside from conversion into Y'CbCr. Another question for my ever-growing list on the subject of color management.
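
Just to put rough numbers on how much the choice of coefficients matters for the add-back (a quick sketch with a made-up, pure-green difference value):

Code: Select all
import numpy as np

bt601 = np.array([0.299, 0.587, 0.114])
bt709 = np.array([0.2126, 0.7152, 0.0722])

removed_spill = np.array([0.0, 0.25, 0.0])  # difference of original and despilled, pure green here

print(removed_spill @ bt601)  # ~0.147
print(removed_spill @ bt709)  # ~0.179 -- a visible but fairly small difference in the add-back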

Thanks for your time! I’ll add your technique to the post.
Windows 11, RTX 3070, Ryzen 5900x, 96 GB @ 3200MHz

Hendrik Proosa

  • Posts: 3037
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: PSA: "This thing I made has dark edges around it!"

Posted: Mon Mar 13, 2023 7:05 am

Exact coefficients are irrelevant, idea is to just add back some of the ambient environment light and reflected light intensity that gets removed by despill. You can take just the green (or blue) too and shuffle it to rgb, scale it to make stronger etc.
I do stuff

Snadegg

  • Posts: 29
  • Joined: Sat Dec 17, 2022 12:08 am
  • Real Name: Xander Wanlass

Re: PSA: "This thing I made has dark edges around it!"

Posted: Tue Mar 14, 2023 3:39 am

Hendrik Proosa wrote:Exact coefficients are irrelevant, idea is to just add back some of the ambient environment light and reflected light intensity that gets removed by despill.


Thanks for the insight! I will be adding Bryan's and your responses to the main post. I've learned a lot from this thread, y'all.
Windows 11, RTX 3070, Ryzen 5900x, 96 GB @ 3200MHz
