
3D Compositing Confusion

Learn about 3D compositing, animation, broadcast design and VFX workflows.

Nicolo Regalbuto

  • Posts: 1
  • Joined: Wed Jul 19, 2017 8:54 pm

3D Compositing Confusion

Posted Wed Jul 19, 2017 9:07 pm

Hi! I am a beginner at compositing, my background is in realtime graphics.

I have been using Fusion for one day, and it is fun so far but I have a fundamental confusion about workflows.

What is the point of importing geometry into the compositor and re-rendering? I don't understand it at all. Why would somebody use the Fusion renderer instead of Renderman or whatever? What do you gain? Shouldn't the compositor just be assembling EXRs?

I have spent several hours googling and haven't found anything that explains it. I see that V-Ray recently introduced Nuke integration that supports OpenVDB volumes. If anything, that points to more rendering happening in the compositor. Why is this? Isn't it the layout artist's job to place things early in the pipeline? And the lighting artist's job to extract scene lighting from the mirrorball and apply it to the renders? Why would the compositing artist have to render?

If anybody has links comparing and contrasting these two workflows, I would be highly appreciative! So confused!


Nick

Andrew Hazelden

  • Posts: 536
  • Joined: Sat Dec 06, 2014 12:10 pm
  • Location: West Dover, Nova Scotia, Canada

Re: 3D Compositing Confusion

Posted Thu Jul 20, 2017 4:17 pm

Nicolo Regalbuto wrote: What is the point of importing geometry into the compositor and re-rendering? I don't understand it at all. Why would somebody use the Fusion renderer instead of Renderman or whatever? What do you gain?


Fusion's 3D workspace is great for quickly adding things like motion graphics elements to a shot, basic particles and effects, or other elements that don't require the advanced features of a production renderer like V-Ray, RenderMan or Arnold.

Not all 3D elements in a composite need the look of a bi-directional pathtracer with global illumination. For example, if you used a surface shader or an environment-mapped reflective material on an object in a production renderer, you could get the same quality of output in near real time from Fusion's renderer.
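The reason environment-mapped reflections are so cheap is that no rays are traced: you just mirror the view direction about the surface normal and use the result as a lookup direction into a panorama. A minimal sketch of that mirror step in plain Python (illustrative only, not Fusion's actual shader code):

```python
def reflect(d, n):
    # Mirror the view direction d about the unit surface normal n:
    # r = d - 2 (d . n) n. An environment-mapped material just looks up
    # this direction in a prefiltered panorama instead of tracing rays,
    # which is why it renders in near real time.
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray heading straight down onto an upward-facing surface bounces back up.
assert reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)) == (0.0, 1.0, 0.0)
```

A production pathtracer does far more than this, of course, but for shiny motion-graphics elements this lookup is often all the shot needs.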

If you wanted to add rain, snow, fog, or haze on top of your live-action footage, you could load low-resolution stand-in meshes into Fusion's 3D workspace and have them interact with Fusion's particle system. Fusion's renderer allows a far higher degree of interactivity for the compositor than having to ask a different artist to re-render those same CG elements multiple times as a shot changes.

If you tracked a video clip in SynthEyes, you can then use Fusion's 3D environment to precisely place greenscreen-keyed elements at their correct locations in the shot, with the footage mapped onto image planes that stay locked in place during a tracked camera move. Being able to load photogrammetry-generated meshes or point clouds into the Fusion scene can be quite handy too.

An Alembic/OBJ/FBX polygon mesh loaded into Fusion's 3D workspace can also provide an automatic masking effect, which cuts down on the manual rotoscoping required.

Having 3D in a compositing package isn't meant to replace a Maya/Max/Houdini artist's giant one-billion-polygon scene with thousands of instanced objects. It is there to help a compositor quickly solve simple needs in a shot that might not require a dedicated 3D artist's help.
Mac Studio M2 Ultra / Threadripper 3990X | Fusion Studio 18.6.4 | Kartaverse 6

Hendrik Proosa

  • Posts: 3034
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: 3D Compositing Confusion

Posted Thu Jul 20, 2017 6:52 pm

3D space is also very useful for all kinds of paint fixes, object removals, masking and so on. Projecting an image onto simple geometry, doing the paint work in texture space, and reprojecting back through the same (or another) camera makes a lot of things much easier, because you can remove the motion and lock the image to texture space.

If you are not familiar with camera projections, you can think of them as reversed rendering. In rendering, you have a textured object that the render engine projects into screen space. Camera projection reverses the process: you project the screen-space image onto the geometry and get a texture that is locked to the object's surface. That might seem useless for pure 3D work, but for live action it is very useful; there, the geometry and camera usually come from a matchmove solve.
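The forward/reverse relationship above can be written down in a few lines. A toy sketch in plain Python, assuming an idealized pinhole camera at the origin looking down -Z with focal length f, and a known flat surface standing in for the matchmoved geometry (not Fusion's actual projection code):

```python
def render(point, f=1.0):
    # Forward rendering: pinhole projection of a 3D point (camera at the
    # origin, looking down -Z, so visible points have z < 0) into 2D
    # screen coordinates via the perspective divide.
    x, y, z = point
    return (f * x / -z, f * y / -z)

def unproject(uv, plane_z, f=1.0):
    # Camera projection ("reversed rendering"): cast a ray from the
    # camera through the screen pixel and intersect it with known
    # geometry -- here a plane at z = plane_z, standing in for the
    # surface you would get from a matchmove solve.
    u, v = uv
    t = -plane_z / f              # ray scale factor that reaches the plane
    return (u * t, v * t, plane_z)

# Round trip: a surface point renders to screen space and projects back,
# which is exactly what locks painted pixels to the object's surface.
p = (2.0, 1.0, -4.0)
uv = render(p)
assert unproject(uv, plane_z=-4.0) == p
```

Once the pixels are locked to the surface like this, the camera can move all it wants: paint done in texture space stays glued to the geometry when you re-render through the solved camera.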
I do stuff.

Bryan Ray

  • Posts: 2488
  • Joined: Mon Nov 28, 2016 5:32 am
  • Location: Los Angeles, CA, USA

Re: 3D Compositing Confusion

Posted Thu Jul 20, 2017 9:17 pm

And sometimes doing the 3D in the compositor is just faster and cheaper than round-tripping to 3D software. For instance, we did a number of blade-extension shots with 3D geometry animated and rendered entirely in Fusion. Textures came from Substance Painter, and we had a panoramic HDR for reflections. It looked terrific, and it let us keep our 3D artists on more complex shots.
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

Thomas Milde

  • Posts: 319
  • Joined: Tue Aug 28, 2012 11:27 am
  • Location: Augsburg, Germany

Re: 3D Compositing Confusion

Posted Fri Jul 21, 2017 9:42 am

I'm only guessing here, but I think the original poster didn't realize that Fusion also works in 3D space, not only as a 2D compositor.
This question seems to confirm it: "Shouldn't the compositor just be assembling EXRs?"
This video gives a basic introduction. It still says eyeon, but it is Blackmagic's Fusion now.

Tech:
Win 10 pro, Ryzen 9 5900X, 128 GB, M.2 for media
RTX 3080 ti, Intensity Pro, DVR Studio + Fusion 18.x, Desktop Video 12.x

Vladimir LaFortune

  • Posts: 120
  • Joined: Mon Nov 17, 2014 3:37 am

Re: 3D Compositing Confusion

Posted Sun Jul 23, 2017 3:35 pm

I'm a proponent of Fusion getting a more robust rendering implementation, at least on the level of KeyShot, alongside the current OpenGL renderer. It would be very convenient for solo freelancers as well as small studios that don't have the resources or time for quick multiple re-renders of a 3D scene. For simple jobs such as short promos, I find it more intuitive to do all the camera work for the 3D objects and scenes inside Fusion than to go to Houdini (or whatever), build the scene over there, render it, import it, watch it unfold, and then go back to Houdini once I find out it doesn't play well in a transition or with the 2D elements. That's a waste of time you can't justify to customers outside the film industry.

Also, creative sparks fly a lot more often when you work in a single environment; accidents and coincidences that happen in the workflow can be real breakthroughs.

Anyhow, here is how we tackled the animation for a jewelry infomercial. I built a faux gem shader entirely inside Fusion's 3D space. For this type of project one shader was enough, just as one gem model was enough, but if we were to get serious (and be compensated accordingly) I would have built a few more faux shaders so that each gem could look distinct from the others.

[Image]
All rendered inside Fusion (DoF removed for clarity). Pay attention to the red striped string on the diamond, which gives away that it's a faux shader. Of course, the untrained eye of the viewer would never catch that.

[Image]
Another giveaway of non-physical rendering is the unrealistic shadow and the lack of caustics on the floor, even though that could easily be achieved with some fast noise.

Jeff Ha

  • Posts: 160
  • Joined: Tue Apr 14, 2015 10:38 am

Re: 3D Compositing Confusion

Posted Tue Jul 25, 2017 10:40 am

Fusion needs to open up so third-party renderers like V-Ray and Arnold can be used (as in Nuke). They just have to entice more third-party developers by proving Fusion is a viable platform to build for.

michael vorberg

  • Posts: 943
  • Joined: Wed Nov 12, 2014 8:47 pm
  • Location: stuttgart, germany

Re: 3D Compositing Confusion

Posted Tue Jul 25, 2017 2:21 pm

You can already get the SDK and build a plugin for external render engines.

Chad Capeland

  • Posts: 3025
  • Joined: Mon Nov 10, 2014 9:40 pm

Re: 3D Compositing Confusion

Posted Wed Jul 26, 2017 3:03 pm

Yeah, you've been able to use third-party renderers since at least 6.0.

The issue seems to be that few companies are willing to just release their renderer libs to the public.
Chad Capeland
Indicated, LLC
www.floweffects.com
