
Camera projections with transparency don't render transparen

Learn about 3D compositing, animation, broadcast design and VFX workflows.

mattmerk

Camera projections with transparency don't render transparen

Posted Sun Jun 21, 2015 5:07 pm

I'm probably doing something wrong here, but I often use this camera projection method in another widely used node-based compositing app.

As you can see in my screen capture, I am merely feeding an image with a poly mask defining its transparency to a camera that projects it onto some geometry.

That all gets shot with another camera, which for some reason can't see or render the transparency beyond what gets projected onto the geometry. I need the final output of the projection to adopt the transparency as well.

Any help would be greatly appreciated. Thanks.
Attachments
Transparency projection.PNG
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

mattmerk

Re: Camera projections with transparency don't render transp

Posted Sun Jun 21, 2015 5:10 pm

I just tried to upload the comp itself so others could load it but got the message, "The extension .comp is not allowed."

Might be a good idea to allow that file type to be uploaded on this Fusion forum.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

Gregory Chalenko

Re: Camera projections with transparency don't render transp

Posted Sun Jun 21, 2015 9:46 pm

You need a UVWMap3D set to the Camera mode to take the projection coordinates from the camera:
Code: Select all
{
   Tools = ordered() {
      Merge3D1 = Merge3D {
         Inputs = {
            ["Transform3DOp.Translate.Z"] = Input { Value = 1.14723612396352, },
            SceneInput1 = Input {
               SourceOp = "Camera3D_Render",
               Source = "Output",
            },
            SceneInput2 = Input {
               SourceOp = "UVMap3D1",
               Source = "Output",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 441, 197.18, }, },
      },
      Renderer3D1 = Renderer3D {
         Inputs = {
            ["RendererOpenGL.TransparencySorting"] = Input { Value = 1, },
            ["RendererOpenGL.Texturing"] = Input { Value = 1, },
            GlobalIn = Input { Value = 1, },
            GlobalOut = Input { Value = 100, },
            Width = Input { Value = 2048, },
            Height = Input { Value = 1556, },
            ["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2", }, },
            SceneInput = Input {
               SourceOp = "Merge3D1",
               Source = "Output",
            },
            CameraSelector = Input { Value = FuID { "Camera3D_Render", }, },
            ["RendererSoftware.Lighting"] = Input { Value = 0, },
         },
         ViewInfo = OperatorInfo { Pos = { 441, 267.155, }, },
      },
      UVMap3D1 = UVMap {
         CtrlWZoom = false,
         Inputs = {
            SceneInput = Input {
               SourceOp = "Cube3D1",
               Source = "Output",
            },
            CameraInput = Input {
               SourceOp = "Camera3D_Projection",
               Source = "Output",
            },
            MapMode = Input { Value = FuID { "Camera", }, },
            CameraSelector = Input { Value = FuID { "Camera3D_Projection", }, },
            RefTime = Input { Value = 1, },
         },
         ViewInfo = OperatorInfo { Pos = { 441, 138.747, }, },
      },
      Camera3D_Render = Camera3D {
         NameSet = true,
         Inputs = {
            ["Transform3DOp.Translate.X"] = Input { Value = 1.76864485260255, },
            ["Transform3DOp.Translate.Y"] = Input { Value = 1.17462058950023, },
            ["Transform3DOp.Translate.Z"] = Input { Value = 0.110861375811932, },
            ["Transform3DOp.UseTarget"] = Input { Value = 1, },
            ["Transform3DOp.Target.Z"] = Input { Value = -1.33070465317551, },
            AoV = Input { Value = 24.3265863475745, },
            ["Stereo.Mode"] = Input { Value = FuID { "Mono", }, },
            ["SurfacePlaneInputs.ObjectID.ObjectID"] = Input { Value = 1, },
            ["MtlStdInputs.MaterialID"] = Input { Value = 1, },
         },
         ViewInfo = OperatorInfo { Pos = { 312.868, 197.18, }, },
      },
      Cube3D1 = Cube3D {
         Inputs = {
            ["Transform3DOp.Translate.Z"] = Input { Value = -1.3122221362164, },
            ["SurfaceCubeInputs.ObjectID.ObjectID"] = Input { Value = 5, },
            FrontMaterialInput = Input {
               SourceOp = "Background1",
               Source = "Output",
            },
            RightMaterialInput = Input {
               SourceOp = "Background1",
               Source = "Output",
            },
            TopMaterialInput = Input {
               SourceOp = "Background1",
               Source = "Output",
            },
            MtlFace = Input { Value = 1, },
            ["Front.MtlStdInputs.MaterialID"] = Input { Value = 4, },
            ["Right.MtlStdInputs.MaterialID"] = Input { Value = 5, },
            ["Left.MtlStdInputs.Diffuse.Color.Red"] = Input { Value = 0.699999988079071, },
            ["Left.MtlStdInputs.Diffuse.Color.Green"] = Input { Value = 0.400000005960464, },
            ["Left.MtlStdInputs.Diffuse.Color.Blue"] = Input { Value = 0.699999988079071, },
            ["Left.MtlStdInputs.Diffuse.Opacity"] = Input { Value = 0, },
            ["Left.MtlStdInputs.MaterialID"] = Input { Value = 6, },
            ["Bottom.MtlStdInputs.Diffuse.Color.Green"] = Input { Value = 0, },
            ["Bottom.MtlStdInputs.Diffuse.Opacity"] = Input { Value = 0, },
            ["Bottom.MtlStdInputs.MaterialID"] = Input { Value = 7, },
            ["Top.MtlStdInputs.Diffuse.Color.Red"] = Input { Value = 0, },
            ["Top.MtlStdInputs.Diffuse.Color.Green"] = Input { Value = 0, },
            ["Top.MtlStdInputs.MaterialID"] = Input { Value = 8, },
            ["Back.MtlStdInputs.Diffuse.Opacity"] = Input { Value = 0, },
            ["Back.MtlStdInputs.MaterialID"] = Input { Value = 9, },
         },
         ViewInfo = OperatorInfo { Pos = { 518.407, 76.9257, }, },
      },
      Camera3D_Projection = Camera3D {
         NameSet = true,
         Inputs = {
            ["Transform3DOp.Translate.X"] = Input { Value = 0.645728318987074, },
            ["Transform3DOp.Translate.Y"] = Input { Value = 1.03938103689834, },
            ["Transform3DOp.Translate.Z"] = Input { Value = 0.939791053152777, },
            ["Transform3DOp.UseTarget"] = Input { Value = 1, },
            ["Transform3DOp.Target.X"] = Input { Value = 0.0922749248461967, },
            ["Transform3DOp.Target.Z"] = Input { Value = -1.40805415141271, },
            AoV = Input { Value = 24.3265863475745, },
            ["Stereo.Mode"] = Input { Value = FuID { "Mono", }, },
            ["SurfacePlaneInputs.ObjectID.ObjectID"] = Input { Value = 4, },
            ["MtlStdInputs.MaterialID"] = Input { Value = 3, },
         },
         ViewInfo = OperatorInfo { Pos = { 339.278, 64.9132, }, },
      },
      Background1 = Background {
         Inputs = {
            GlobalIn = Input { Value = 1, },
            GlobalOut = Input { Value = 100, },
            Width = Input { Value = 2048, },
            Height = Input { Value = 1556, },
            ["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2", }, },
            TopLeftBlue = Input { Value = 0.661870503597122, },
            Gradient = Input {
               Value = Gradient {
                  Colors = {
                     [0] = { 0, 0, 0, 1, },
                     [1] = { 1, 1, 1, 1, },
                  },
               },
            },
            EffectMask = Input {
               SourceOp = "Polygon1",
               Source = "Mask",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 518.407, 24.4818, }, },
      },
      Polygon1 = PolylineMask {
         DrawMode = "ModifyOnly",
         DrawMode2 = "InsertAndModify",
         Inputs = {
            BorderWidth = Input { Value = 0.01, },
            MaskWidth = Input { Value = 2048, },
            MaskHeight = Input { Value = 1556, },
            PixelAspect = Input { Value = { 1, 1, }, },
            ClippingMode = Input { Value = FuID { "None", }, },
            Polyline = Input {
               SourceOp = "Polygon1Polyline",
               Source = "Value",
            },
            Polyline2 = Input {
               Value = Polyline {
               },
               Disabled = true,
            },
         },
         ViewInfo = OperatorInfo { Pos = { 518.407, -23.6544, }, },
      },
      Polygon1Polyline = BezierSpline {
         SplineColor = { Red = 173, Green = 255, Blue = 47, },
         NameSet = true,
         KeyFrames = {
            [1] = { 0, Flags = { Linear = true, LockedY = true, }, Value = Polyline {
                  Points = {
                     { Linear = true, X = -0.25, Y = -0.429586083312272, RX = 0, RY = 0.281445071496825, },
                     { Linear = true, X = -0.25, Y = 0.414749131178203, LX = 0, LY = -0.281445071496825, RX = 0, RY = -0.104916377059401, },
                     { Linear = true, X = -0.25, Y = 0.1, LX = 0, LY = 0.104916377059401, RX = -0.0501527755178996, RY = 0, },
                     { Linear = true, X = -0.400458326553699, Y = 0.1, LX = 0.0501527755178996, LY = 0, RX = 0.249182819024366, RY = 0, },
                     { Linear = true, X = 0.3470901305194, Y = 0.1, LX = -0.249182819024366, LY = 0, RX = -0.0656967101731334, RY = 0, },
                     { Linear = true, X = 0.15, Y = 0.1, LX = 0.0656967101731334, LY = 0, RX = 0, RY = 0.109838616532753, },
                     { Linear = true, X = 0.15, Y = 0.429515849598258, LX = 0, LY = -0.109838616532753, RX = 0, RY = -0.209838616532753, },
                     { Linear = true, X = 0.15, Y = -0.2, LX = 0, LY = 0.209838616532753, RX = 0.0699821102875895, RY = 0, },
                     { Linear = true, X = 0.359946330862769, Y = -0.2, LX = -0.0699821102875895, LY = 0, RX = -0.233903616080793, RY = 0, },
                     { Linear = true, X = -0.341764517379611, Y = -0.2, LX = 0.233903616080793, LY = 0, RX = 0.163921505793204, RY = 0, },
                     { Linear = true, X = 0.15, Y = -0.2, LX = -0.163921505793204, LY = 0, RX = 0, RY = -0.0800203639159893, },
                     { Linear = true, X = 0.15, Y = -0.440061091747968, LX = 0, LY = 0.0800203639159893, },
                  },
               }, },
         },
      },
   },
}

Copy and paste the text above into your Flow.
Attachments
CameraProjection_UVWMap3D.jpg
www.compositing.tv

mattmerk

Re: Camera projections with transparency don't render transp

Posted Sun Jun 21, 2015 11:06 pm

Thanks for chiming in, but you are kidding, right? I mean, if such a pro-user hack is what you need to do simply to project the same alpha channel data onto the geometry that you are already projecting with the RGB data, that is a non-starter for any average user.

What I describe should be the default behavior.

Pasting code? Nah, man. Just no.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

mattmerk

Re: Camera projections with transparency don't render transp

Posted Sun Jun 21, 2015 11:51 pm

By the way, what I describe is the default behavior of Nuke. I typically hate "why doesn't it work like Nuke" comments, but in this case it is really more about user-friendly default behaviors.

Put more concisely, the projection I set up works fine for the RGB data. Then I have to jump through the hoops of cutting and pasting script data into my comp just to pass the alpha data through properly? That's silly.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

Johnny Farmfield

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 12:33 am

Oi,

Nuke guy here too, and I think this layout will make it easier to get how Fusion "thinks" in this case. It's like a backwards Project3D setup: instead of feeding the projected texture into the mesh and then into the scene/renderer, you pipe the texture into the mesh and then pipe the mesh through the texture projection...

ggg.jpg
You gotta love my snipping tool naming conventions. :D

And this is also how you use the Replace Material 3D and Replace Normals 3D nodes in Fusion: mesh into the material, merge with the camera, then into the renderer. So it's not really a pro-user hack, it's just another day in Fusion. :D

Now, we can argue all day which method is more logical but in this forum I have a feeling you and I will be "wrong" all the time. ;)
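
For anyone following along without the screengrab, the wiring described above boils down to something like the minimal sketch below, adapted from Gregory's comp earlier in the thread. Node names are placeholders, camera transforms and most parameters are omitted, and an Ellipse stands in for the poly mask, so treat it as a wiring diagram rather than a polished setup; the -- comments are annotations only and may need stripping before pasting.

Code: Select all
{
   Tools = ordered() {
      -- texture (RGBA, alpha from the mask) feeds the geometry's material input
      Background1 = Background {
         Inputs = {
            Width = Input { Value = 1920, },
            Height = Input { Value = 1080, },
            EffectMask = Input { SourceOp = "Ellipse1", Source = "Mask", },
         },
      },
      Ellipse1 = EllipseMask {
         Inputs = {
            MaskWidth = Input { Value = 1920, },
            MaskHeight = Input { Value = 1080, },
         },
      },
      -- the mesh carries the texture...
      Cube3D1 = Cube3D {
         Inputs = {
            FrontMaterialInput = Input { SourceOp = "Background1", Source = "Output", },
         },
      },
      -- ...and is then piped through the projection: a UVMap in Camera mode
      UVMap3D1 = UVMap {
         Inputs = {
            SceneInput = Input { SourceOp = "Cube3D1", Source = "Output", },
            CameraInput = Input { SourceOp = "Camera3D_Projection", Source = "Output", },
            MapMode = Input { Value = FuID { "Camera", }, },
            CameraSelector = Input { Value = FuID { "Camera3D_Projection", }, },
         },
      },
      -- both cameras left at defaults here; position and animate them as needed
      Camera3D_Projection = Camera3D { NameSet = true, },
      Camera3D_Render = Camera3D { NameSet = true, },
      -- render camera and UV-mapped geometry meet in the Merge3D,
      -- and the Renderer3D shoots the scene through the render camera
      Merge3D1 = Merge3D {
         Inputs = {
            SceneInput1 = Input { SourceOp = "Camera3D_Render", Source = "Output", },
            SceneInput2 = Input { SourceOp = "UVMap3D1", Source = "Output", },
         },
      },
      Renderer3D1 = Renderer3D {
         Inputs = {
            SceneInput = Input { SourceOp = "Merge3D1", Source = "Output", },
            CameraSelector = Input { Value = FuID { "Camera3D_Render", }, },
         },
      },
   },
}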
*** Pushing pixels, vectors and voxels since 25 years - www.farmfield-vfx.com ***

Gregory Chalenko

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 9:02 am

mattmerk wrote:Thanks for chiming in, but you are kidding, right? I mean, if such a pro-user hack is what you need to do simply to project the same alpha channel data onto the geometry that you are already projecting with the RGB data, that is a non-starter for any average user.

What I describe should be the default behavior.

Pasting code? Nah, man. Just no.

Copying and pasting a set of nodes from a forum in text format has nothing to do with the user-friendliness of a comp software.
This is how people share setups, and in the Nuke community people at any level of experience do exactly the same.

You don't have to work with code to set up a camera projection in Fusion.
But if you ask how to set it up on a forum, you do need to demonstrate the copy-pasting skill :)))
www.compositing.tv

Johnny Farmfield

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 1:09 pm

Gregory Chalenko wrote:you need to demonstrate the copy-pasting skill :)))

2 hour BMD workshop on the subject or I say we have plausible deniability! :D
*** Pushing pixels, vectors and voxels since 25 years - www.farmfield-vfx.com ***

mattmerk

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 1:42 pm

Johnny Farmfield wrote:
Gregory Chalenko wrote:you need to demonstrate the copy-pasting skill :)))

2 hour BMD workshop on the subject or I say we have plausible deniability! :D


Ha! Hilarious!

The even funnier thing is that I've been compositing for 20+ years, just like Gregory up there, and many of those years were with Eyeon Digital Fusion. Now that it is in Blackmagic's hands (good job, guys), it will become a priority to start taking market share from Nuke and enticing new users. It seems to me that cutting and pasting node ASCII will always seem daunting to a new user of any software. (I appreciate the irony of asking for free advice and then complaining when it isn't to my "standards," so I do thank Gregory for the effort. Thank you, Gregory!)

I'm an RTFM guy and a "don't whine that it doesn't work like your pet software" guy. However, "just copy and paste this code snippet into your flow" isn't really teaching a man to fish. In fact, it is the biggest sin of which *I* am guilty: assuming everyone is as smart as me. (And often, historically in my case, assuming I was smarter than I actually was.)

So thanks Johnny for walking me through the process above and teaching this clearly stupid man to fish instead of ctrl+c/ctrl+v-ing a bunch of fish into my basket.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

Gregory Chalenko

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 3:06 pm

mattmerk wrote:I'm an RTFM guy and a "don't whine that it doesn't work like your pet software" guy.

Sorry, but are you sure that the kind of setup you made first would work for a camera projection in Nuke?
I think in Nuke you need a Project3D node, which in this case is the equivalent of Fusion's UVWMap3D...

To me personally, the way Fusion makes camera projections seems very intuitive.

What is a camera projection, essentially?
It's a way of texturing a piece of geometry that uses texture coordinates projected from a camera.

So, in Fusion you assign a texture to the 3D object, then apply a node which takes care of the texture coordinates, select the type of UV mapping, and voilà! Was there any illogical step?

This is exactly what you do in 3D packages for any kind of texturing too.

Concerning Fusion not behaving exactly like Nuke... In Nuke itself, different nodes each behave in their own way, so anyone who deliberately tried to copy Nuke's behaviour in another piece of software while keeping it consistent would inevitably fail.

Think of this: in the Transform node and in the ColorWheel node, positive rotation is counterclockwise. In the DirBlur node and the Glint node, positive rotation is clockwise. So how can another piece of software be consistent with Nuke if there is no consistency in Nuke itself?
www.compositing.tv

Chad Capeland

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 3:50 pm

Johnny Farmfield wrote:
Now, we can argue all day which method is more logical but in this forum I have a feeling you and I will be "wrong" all the time. ;)


Not wrong at all, just too late. :)

Best advice is to read backwards in Fusion. What does the Renderer3D get? A scene. What is in the scene? A Camera3D and a UVMap3D. What's being UV mapped and how? A Cube3D by camera projection. What's the material on the Cube3D (the last thing the renderer will evaluate)? A BG. And that BG is masked with a Polygon.

So if you think about it the way Fusion would look at the scene tree, and probably more importantly by the data types of the connections, then you can see what's going on. There's no "UV mapping" datatype. It doesn't make sense. So how would you output it into a mesh? Rather, you have a mesh and that outputs as a 3D scene to a tool that modifies the mapping channels.

There are other design considerations that are consequences of this, like the fact that you can easily apply that UVMap to many upstream meshes.
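
Read that way, the dependency chain in Gregory's comp looks roughly like this (a sketch of the evaluation order, not something to paste):

Renderer3D1                  -- renders the scene through Camera3D_Render
 └─ Merge3D1                 -- scene = render camera + UV-mapped geometry
     ├─ Camera3D_Render
     └─ UVMap3D1             -- MapMode = Camera, coordinates from Camera3D_Projection
         └─ Cube3D1          -- the geometry
             └─ Background1  -- its material (RGBA), masked by Polygon1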
Chad Capeland
Indicated, LLC
www.floweffects.com

mattmerk

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 5:35 pm

I got it now. Johnny cleared it up.

My two main complaints were:

1) If the camera projection of the RGB data was correct (as it was) why did the alpha need special treatment? That's just silly. The RGB data mapped perfectly. I'd show you but the shot is on a show that hasn't aired yet. Take my word for it that the RGB data was mapped correctly. Just pretend that the blue image in my first screen capture was a single frame of a dead body on a table in a morgue and not just a solid blue image. That you have to take additional steps to get an alpha channel to map in exactly the same way as the already correctly mapped RGB data, well, that seems wrong. But fine. It's just how Fusion works. I just needed it spelled out in more stupidly simple terms, which Johnny did with style and grace. Bravo sir. I owe you a drink, at the very least.

2) The first responder to the post didn't explain it as though he was talking to someone not familiar with Fusion, and suggested pasting ASCII snippets into a flow. Y'all are sure to see a lot more noob questions now that Fusion is basically free. If you want noobs to be frustrated, this approach is sure to meet your objective. Heck, I can see good reasons for thwarting potential up-and-comers from taking your compositing jobs! But if you want more user adoption, replies like Johnny's would be top-notch.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

mattmerk

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 5:50 pm

And to be more clear, here's my typical workflow on shots like this:

1) Track the shot in SynthEyes and build some projection geometry from the point cloud. Output 3D track and geometry to Nuke.

2) Find a hero frame and paint up that frame as needed for the shot.

3) Duplicate the tracked camera for that hero frame I painted up in Nuke, removing all animation on the focal length of the lens and any xforms, locking the camera to its position and lens for the hero frame I painted. This is now the projection camera.

4) Project my painted up frame from the projection camera onto my projection geometry. Apply any needed roto via a roto node to blend the projected geometry when I do my downstream merge operation.

5) Shoot the projection object from the tracked camera (i.e., send the output of the tracked camera into the Nuke 3D scene and that into a render node). This is where the roto in Nuke just works, and where in Fusion it needs UVs for a projection that clearly already works for the RGB data? (I still think this is weird.)

6) Merge the output of the render over the original BG.

I have done the same workflow in 3D apps for years and years, the only difference being that I have to be more specific in mapping what image data goes to what shader channel, but still, this is more intuitive than Fusion.

Again, the RGB data required nothing more than a projection set up through a camera.

But I get it now. Since the developers are unlikely to change/modify this behavior, I will adapt.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

Chad Capeland

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 5:59 pm

It might be a straight up bug, or you might be using the wrong projection mapping technique. It's hard to tell from your screengrab. Can you post those tools? There's no footage or anything in that, so it should be safe to share, right?
Chad Capeland
Indicated, LLC
www.floweffects.com

michael vorberg

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 8:10 pm

I think the simplest solution is to use the "Catcher" texture node and set the projection in the camera or the "Projector3D" to Texture.
The Catcher node should also be described in the manual.


And even if you don't like copy and paste from here, I'll leave an example comp:

Code: Select all
{
   Tools = ordered() {
      Note3 = Note {
         Inputs = {
            Comments = Input { Value = "this is the magic node you looked for?", },
         },
         ViewInfo = StickyNoteInfo {
            Pos = { -275, 49.5, },
            Flags = {
               Expanded = true,
            },
            Size = { 196, 179.3, },
         },
      },
      Catcher1 = TexCatcher {
         Inputs = {
            MaterialID = Input { Value = 2, },
         },
         ViewInfo = OperatorInfo { Pos = { -220, 115.5, }, },
      },
      Blinn1 = MtlBlinn {
         Inputs = {
            ["Diffuse.Color.Material"] = Input {
               SourceOp = "Catcher1",
               Source = "MaterialOutput",
            },
            ["Specular.Nest"] = Input { Value = 1, },
            ["Specular.Intensity"] = Input { Value = 0, },
            MaterialID = Input { Value = 4, },
         },
         ViewInfo = OperatorInfo { Pos = { 110, 115.5, }, },
      },
      Note1 = Note {
         Inputs = {
            Comments = Input { Value = "Blinn or other shader only needed if your projection needs light interaction\r\n", },
         },
         ViewInfo = StickyNoteInfo {
            Pos = { 55, 16.5, },
            Flags = {
               Expanded = true,
            },
            Size = { 196, 179.3, },
         },
      },
      Shape3D1 = Shape3D {
         Inputs = {
            ["Transform3DOp.Rotate.X"] = Input { Value = -54.5454545, },
            ["Transform3DOp.Rotate.Y"] = Input { Value = 1.8181818, },
            ["Transform3DOp.Rotate.Z"] = Input { Value = 1.8181818, },
            MaterialInput = Input {
               SourceOp = "Blinn1",
               Source = "MaterialOutput",
            },
            ["MtlStdInputs.MaterialID"] = Input { Value = 1, },
            ["SurfacePlaneInputs.Width"] = Input { Value = 5, },
            ["SurfacePlaneInputs.SubdivisionWidth"] = Input { Value = 100, },
            ["SurfacePlaneInputs.ObjectID.ObjectID"] = Input { Value = 1, },
         },
         ViewInfo = OperatorInfo { Pos = { 330, 115.5, }, },
      },
      Note2 = Note {
         Inputs = {
            Comments = Input { Value = "only to have a more interesting shape", },
         },
         ViewInfo = StickyNoteInfo {
            Pos = { 495, -16.5, },
            Flags = {
               Expanded = true,
            },
            Size = { 196, 179.3, },
         },
      },
      Displace3D1 = Displace3D {
         Inputs = {
            SceneInput = Input {
               SourceOp = "Shape3D1",
               Source = "Output",
            },
            Scale = Input { Value = 1, },
            Input = Input {
               SourceOp = "FastNoise1",
               Source = "Output",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 550, 115.5, }, },
      },
      FastNoise1 = FastNoise {
         Inputs = {
            Width = Input { Value = 1920, },
            Height = Input { Value = 1080, },
            ["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2", }, },
            Detail = Input { Value = 4.3333333, },
            Contrast = Input { Value = 0.8285714, },
            XScale = Input { Value = 4.3809524, },
            Angle = Input { Value = 22.5, },
            Seethe = Input { Value = 0.0961538, },
            Gradient = Input {
               Value = Gradient {
                  Colors = {
                     [0] = { 0, 0, 0, 1, },
                     [1] = { 1, 1, 1, 1, },
                  },
               },
            },
         },
         ViewInfo = OperatorInfo { Pos = { 550, 49.5, }, },
      },
      Merge3D1 = Merge3D {
         Inputs = {
            SceneInput1 = Input {
               SourceOp = "Displace3D1",
               Source = "Output",
            },
            SceneInput2 = Input {
               SourceOp = "Projector3D1",
               Source = "Output",
            },
            SceneInput3 = Input {
               SourceOp = "Camera3D1",
               Source = "Output",
            },
            SceneInput4 = Input {
               SourceOp = "SpotLight1",
               Source = "Output",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 715, 115.5, }, },
      },
      Projector3D1 = LightProjector {
         Inputs = {
            ["Transform3DOp.Translate.Y"] = Input { Value = 0.842029800238799, },
            ["Transform3DOp.Translate.Z"] = Input { Value = 6.98180420482001, },
            ProjectiveImage = Input {
               SourceOp = "Background1",
               Source = "Output",
            },
            ProjectionMode = Input { Value = 2, },
            ["ShadowLightInputs3D.ShadowsEnabled"] = Input { Value = 0, },
         },
         ViewInfo = OperatorInfo { Pos = { 715, 280.5, }, },
      },
      Background1 = Background {
         Inputs = {
            Width = Input { Value = 1920, },
            Height = Input { Value = 1080, },
            ["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2", }, },
            TopLeftRed = Input { Value = 1, },
            TopLeftGreen = Input { Value = 1, },
            Gradient = Input {
               Value = Gradient {
                  Colors = {
                     [0] = { 0, 0, 0, 1, },
                     [1] = { 1, 1, 1, 1, },
                  },
               },
            },
            EffectMask = Input {
               SourceOp = "Ellipse1",
               Source = "Mask",
            },
         },
         ViewInfo = OperatorInfo { Pos = { 715, 478.5, }, },
      },
      Camera3D1 = Camera3D {
         Inputs = {
            ["Transform3DOp.Translate.X"] = Input { Value = 3.1479908957275, },
            ["Transform3DOp.Translate.Y"] = Input { Value = 2.39392842213933, },
            ["Transform3DOp.Translate.Z"] = Input { Value = 3.81976682354125, },
            ["Transform3DOp.UseTarget"] = Input { Value = 1, },
            AoV = Input { Value = 24.3265863475745, },
            ["Stereo.Mode"] = Input { Value = FuID { "Mono", }, },
            ["SurfacePlaneInputs.ObjectID.ObjectID"] = Input { Value = 2, },
            ["MtlStdInputs.MaterialID"] = Input { Value = 3, },
         },
         ViewInfo = OperatorInfo { Pos = { 715, 16.5, }, },
      },
      Note4 = Note {
         Inputs = {
            Comments = Input { Value = "can also be a camera with the same projection settings", },
         },
         ViewInfo = StickyNoteInfo {
            Pos = { 660, 214.5, },
            Flags = {
               Expanded = true,
            },
            Size = { 196, 179.3, },
         },
      },
      Ellipse1 = EllipseMask {
         Inputs = {
            SoftEdge = Input { Value = 0.0638095, },
            MaskWidth = Input { Value = 1920, },
            MaskHeight = Input { Value = 1080, },
            PixelAspect = Input { Value = { 1, 1, }, },
            ClippingMode = Input { Value = FuID { "None", }, },
         },
         ViewInfo = OperatorInfo { Pos = { 715, 544.5, }, },
      },
      Renderer3D1 = Renderer3D {
         Inputs = {
            Width = Input { Value = 1920, },
            Height = Input { Value = 1080, },
            ["Gamut.SLogVersion"] = Input { Value = FuID { "SLog2", }, },
            SceneInput = Input {
               SourceOp = "Merge3D1",
               Source = "Output",
            },
            CameraSelector = Input { Value = FuID { "Camera3D1", }, },
            ["RendererSoftware.LightingEnabled"] = Input { Value = 1, },
            ["RendererSoftware.ShadowsEnabled"] = Input { Value = 1, },
         },
         ViewInfo = OperatorInfo { Pos = { 1045, 115.5, }, },
      },
      SpotLight1 = LightSpot {
         Inputs = {
            ["Transform3DOp.Translate.X"] = Input { Value = -2.45286129779659, },
            ["Transform3DOp.Translate.Y"] = Input { Value = 1.41536062264283, },
            ["Transform3DOp.Translate.Z"] = Input { Value = 6.08244965274151, },
            ["Transform3DOp.UseTarget"] = Input { Value = 1, },
         },
         ViewInfo = OperatorInfo { Pos = { 1045, 247.5, }, },
      },
   },
}

Gregory Chalenko

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 8:14 pm

mattmerk wrote:I got it now. Johnny cleared it up.

My two main complaints were:

1) If the camera projection of the RGB data was correct (as it was) why did the alpha need special treatment? That's just silly. The RGB data mapped perfectly. I'd show you but the shot is on a show that hasn't aired yet. Take my word for it that the RGB data was mapped correctly. Just pretend that the blue image in my first screen capture was a single frame of a dead body on a table in a morgue and not just a solid blue image. That you have to take additional steps to get an alpha channel to map in exactly the same way as the already correctly mapped RGB data, well, that seems wrong. But fine. It's just how Fusion works. I just needed it spelled out in more stupidly simple terms, which Johnny did with style and grace. Bravo sir. I owe you a drink, at the very least.

Looking at the screenshot in the first message, I suspect that what we see is, in fact, the Background1 projected onto Camera01_1's Image Plane, which in 3D space sits in front of the 3D object coming from Trk_Mesh01.

The mesh itself might be just black in 3D space, if you look at it from a different angle.

So, if you unplug Trk_Mesh01 from Merge3D1, you should see your texture with properly working alpha, rendered over a transparent background.
www.compositing.tv

Johnny Farmfield

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 10:02 pm

The discussion got so blown out of proportion I suspect we're having a Castle Bravo scenario here. Now, who forgot to include the neutron generation from the shell of the equation behind the question at hand? :D

And about the above reference, hehe, I'm doing a nuclear explosion setup tomorrow, and since I'm happy having decided to save 12 hours of simulation time in Houdini by faking the **** out of a 6K shot using NTSC cloud chamber assets from the 1980s, the question of whether I use Nuke or Fusion to do it just doesn't feel very relevant. Either way I'm gonna have fun!
*** Pushing pixels, vectors and voxels since 25 years - www.farmfield-vfx.com ***

mattmerk

Re: Camera projections with transparency don't render transp

Posted Mon Jun 22, 2015 10:11 pm

Basically, my comp exactly follows this YouTube tutorial:



That is not to say this was the "right" way to do it, but it was what I had in the middle of actually doing shots on a deadline.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

Gregory Chalenko

Re: Camera projections with transparency don't render transp

Posted Tue Jun 23, 2015 9:49 am

Ah, OK, this method, mentioned by Stefan at the very start, was the first one available in Fusion before more solid options like UVWMap3D in Camera mode and the Catcher emerged. Sorry, I totally forgot about it.

Back in the day, people would project the alpha as a separate pass and attach it to the RGB via a MatteControl or something.

I don't think there is anything bad about it still being available for compatibility or whatever artistic purposes...
www.compositing.tv

Chad Capeland

Re: Camera projections with transparency don't render transp

Posted Tue Jun 23, 2015 1:16 pm

Gregory Chalenko wrote:I don't think there is anything bad about it still being available for compatibility or whatever artistic purposes...


It's designed for light projection. That's how it's labeled, too. Imagine you had a projector shining an image onto a wall: there'd be no alpha contribution at all, and the wall doesn't become transparent. It's not just projecting an image like a film; you could do gobos or non-uniform light distribution based on reflectors or lenses. It's not just a legacy thing at all, it's great for shining more interesting lights than you'd get out of a normal SpotLight.
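
For anyone comparing with michael's comp above: his Projector3D is switched out of that light behaviour via the ProjectionMode input, which is what makes it contribute the texture (and its alpha) rather than light. The relevant lines from his setup are below; the mapping of the integer to the mode names in the comment is my reading of the control (which offers Light, Ambient Light and Texture), so double-check it against your own copy of Fusion:

Code: Select all
Projector3D1 = LightProjector {
   Inputs = {
      ProjectiveImage = Input { SourceOp = "Background1", Source = "Output", },
      -- 2 appears to correspond to Texture mode (assumed: 0 = Light, 1 = Ambient Light)
      ProjectionMode = Input { Value = 2, },
   },
}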
Chad Capeland
Indicated, LLC
www.floweffects.com

Sander de Regt

Re: Camera projections with transparency don't render transp

Posted Tue Jun 23, 2015 7:57 pm

I am pretty sure I did it the way Matt was trying/hoping to do back in Fusion 5.x already.
The shot is in my showreel:



It's the shot with the rope bridge. Maybe I can even find the *.comp if I dig deep.
There was some geometry to project the rocks onto. I used a mask on a MatteControl tool to cut out the unwanted parts and projected the rocks with the alpha included. It worked flawlessly even back in 2007 or so. I haven't used Fusion for this lately, but I'm pretty sure it still works this way.

I even demoed this behavior at IBC for Eyeon with this footage.
I can't compare it to Nuke, since I don't know the program, but it shouldn't be as cumbersome as you describe.
Sander de Regt

ShadowMaker SdR
The Netherlands

mattmerk

Re: Camera projections with transparency don't render transp

Posted Wed Jun 24, 2015 1:43 am

Thanks guys. I am slowly getting it. For some reason 3D in Nuke just came way more easily to me. I still can't figure out the Catcher. I'd love to see some tutorials that went over this a little more comprehensively, tool by tool.

I'm already seeing other areas that are perplexing. No bicubics for image planes, for example? But I'll keep banging my head against the 3D projection wall until that final neural pathway forms and I make the cognitive leap.

I really appreciate the help.

Oh, and I get the flow copying and pasting now. Works just like Nuke/Shake. I was just looking for more hand-holding as to why things worked the way they did, and also more step-by-step instructions on how the nodes should be wired.

I know. I know. This is all free advice I am getting here. It is much appreciated.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich

Stefan Ihringer

Re: Camera projections with transparency don't render transp

Posted Wed Jun 24, 2015 4:53 pm

I've described the catcher in my video, haven't I? ;-)
Also in the other one with the dog on the lawn.
blog and Fusion stuff: http://comp-fu.com/2012/06/fusion-script-macro-collection/

mattmerk

Re: Camera projections with transparency don't render transp

Posted Thu Jun 25, 2015 2:52 am

I get it now. Thanks again everyone. The catcher was the key to doing this for me and I think I have it. Thank you all again. Much appreciation.
Matthew Merkovich
The Matchmoving Company

http://mattmerk.com
https://www.youtube.com/c/MatthewMerkovich
