Tue Jan 10, 2017 3:41 pm
Thanks for the answer, Jonathan.
I don't think that's the "physically" correct render.
It's true that DoF is in some way prior to MB. You get a blurred image on your sensor due to the lens behaviour (DoF), and then it's exposed over a time interval, and if anything is moving, you get MB.
In real-life cameras there's no problem with that. You get a twice-blurred image, perfectly smooth, because in real life there's no time segmentation. Time is analog ( :
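To put that in formula terms (my notation, nothing official): the sensor integrates the already-defocused image over the whole exposure,

$$ I(x) = \int_{t_0}^{t_0 + \Delta t} \bigl(k_t * S_t\bigr)(x)\, dt $$

where $S_t$ is the sharp scene image at time $t$, $k_t$ is the defocus kernel at that instant, and $*$ is spatial convolution. CGI has to replace that continuous integral with a finite sum of samples, and that's where the trouble starts.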
The CGI simulation of those effects is a different matter. Both are based on adding and interpolating samples. There's something in the Fusion 8 algorithms, some kind of interpolation or optimization, that ruins the trick when you have to combine samples along time (MB) with samples in space (DoF).
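For reference, here's a minimal Python sketch of what a correct combined sampler has to do. render_sample(t, u, v) is a hypothetical stand-in for one sharp render at a given time and lens offset (returning, say, a NumPy image); nothing here is Fusion's actual code. The key is that each sample gets jittered in time and on the lens at once, and the results are plainly averaged, with no interpolation between them:

```python
import random

def render_frame(render_sample, frame_time, shutter_time, n_samples=64):
    """Average n_samples renders, each jittered both in time (-> MB)
    and on the lens aperture (-> DoF). render_sample(t, u, v) is a
    hypothetical callable returning one sharp image (e.g. a NumPy array)."""
    acc = None
    for _ in range(n_samples):
        t = frame_time + random.uniform(0.0, shutter_time)  # time jitter
        u = random.uniform(-1.0, 1.0)   # lens offset; a real sampler would
        v = random.uniform(-1.0, 1.0)   # use a disk, a square is fine here
        img = render_sample(t, u, v)
        acc = img if acc is None else acc + img
    return acc / n_samples   # plain average: no interpolation between samples
```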
I've tested rendering DoF and then using the Motion Blur filter in DaVinci Resolve. But that filter produces artifacts where you have strong parallax, and the project I'm involved in has massive parallax. (It's a kind of 3D HUD with graphics, text, wireframe 3D objects and video layers. A nightmare for any post-render MB filter, even optical flow.)
The other way could be rendering with MB and trying to apply a decent Z-channel-based DoF in post. But again, a lot of layers of motion-blurred wireframes is a nightmare for any DoF filter, even with a z-buffer.
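Just to illustrate why the z-buffer route is tempting on paper: the blur a z-based DoF filter needs per pixel is simply the thin-lens circle of confusion, as in the sketch below (parameter names are mine, not any filter's real controls). The real problem is that one depth value per pixel can't represent several stacked semi-transparent wireframe layers, so edges and overlaps blur wrongly no matter how good the filter is.

```python
import numpy as np

def coc_radius_px(z, focus_m, f_mm=50.0, f_stop=2.8,
                  sensor_mm=36.0, width_px=1920):
    """Thin-lens circle-of-confusion radius, in pixels, from a depth map z
    (metres). This is the per-pixel blur size a z-based DoF filter applies."""
    f = f_mm / 1000.0                            # focal length, metres
    aperture = f / f_stop                        # aperture diameter, metres
    coc_m = aperture * f * np.abs(z - focus_m) / (z * (focus_m - f))
    px_per_m = width_px / (sensor_mm / 1000.0)   # sensor metres -> pixels
    return coc_m * px_per_m / 2.0
```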
The only HQ workarounds I can think of are:
- Render with DoF by layers, then apply Resolve's MB to individual elements (much less parallax, or none at all, for most elements; it should work fine in most cases).
- Render with DoF at 240 fps and then convert that to a motion-blurred 24 fps clip using frame blending. That would simulate a 360º-shutter, 10-sample motion blur. With a script it might be possible to render only the needed frames of a 1200 fps timeline, concentrating the samples around the current frame; that would simulate shorter exposures and let you control quality (see the sketch after this list). And you could process the samples with a good frame interpolation filter before adding them all...
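Something like this little Python sketch is what I mean by "render only the needed frames" (all names and numbers are illustrative, not any real Fusion API): pick the subframes of a 1200 fps sub-timeline that fall inside the shutter window around each 24 fps output frame, render just those, and average them.

```python
OUT_FPS = 24
SUB_FPS = 1200   # hypothetical high-rate sub-timeline: 1200/24 = 50 per frame

def subframes_for(out_frame, shutter_deg=180.0):
    """Indices of the SUB_FPS subframes to render and average so the blend
    simulates the given shutter angle, centred on the output frame time."""
    per_frame = SUB_FPS // OUT_FPS                       # 50 subframes/frame
    n = max(1, round(per_frame * shutter_deg / 360.0))   # samples in window
    centre = out_frame * per_frame
    start = centre - n // 2
    return list(range(start, start + n))

# A 180-degree shutter at output frame 10 -> 25 subframes around 10*50 = 500
print(subframes_for(10))   # [488, 489, ..., 512]
```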
I'm thinking while I write... modern frame interpolation effects based on pixel tracking would make great motion-blur simulators, just by implementing frame blending with a bias adjustment.
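As a sketch of that idea (again just illustrative Python, assuming the interpolated frames are already available as same-shape arrays): blend the time-ordered samples with a weight ramp controlled by a bias parameter, where 0.5 is a plain average and pushing toward 0 or 1 drags the trail toward the start or end of the exposure.

```python
import numpy as np

def blend_with_bias(samples, bias=0.5):
    """Blend time-ordered frames (same-shape arrays). bias=0.5 is a plain
    average; bias -> 1 weights the end of the exposure, bias -> 0 the start."""
    n = len(samples)
    t = (np.arange(n) + 0.5) / n                      # sample position in [0,1]
    w = 1.0 + (2.0 * bias - 1.0) * (2.0 * t - 1.0)    # linear ramp, always >= 0
    w /= w.sum()
    return sum(wi * s for wi, s in zip(w, samples))
```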
Obviously I'm taking the first route in this low-budget project: render by layers and apply Resolve's MB before compositing. That will probably be enough.
I hope new versions of Fusion can just handle MB and DoF together properly, even if it means a multi-pass render and a big render-time increase.