Fri Apr 25, 2025 12:25 pm
Yes. It's all an illusion, really: an interpolation process, inventing what isn't there. And some illusions can be more effective depending on the type of shot. Optical flow, specifically, can look fantastic and run smoothly if the motion vectors move linearly in one direction without interruption. However, when something crosses their path and new motion vectors are created, they conflict with the original vectors, causing what we call artifacts.
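Just to make the idea concrete, here is a rough sketch of the basic mechanism in Python with OpenCV. This is not what Resolve does internally; the function name, the Farneback parameters, and the "motion is linear and locally smooth" assumption are all mine, and that assumption is exactly what breaks at occlusions:

```python
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b, t=0.5):
    """Invent a frame between frame_a and frame_b by sliding frame_a
    partway along its estimated motion vectors (backward warping)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense motion vectors from A toward B (Farneback optical flow).
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Assume each pixel moves in a straight line: sample frame_a a fraction t
    # of the way back along its vector. Crossing objects violate this
    # assumption and show up as smearing or tearing artifacts.
    map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```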
The "warp speed" algorithm may attempt to compensate for this, but it can over-compensate or encounter scenarios it wasn't trained on, depending on its training model. The "speed faster" model is newer, so it's possible it was trained more efficiently to be faster and produce similar quality, especially in certain shots. Either way, at least we have options.
For very difficult shots, you can use depth maps, Magic Mask, or manual rotoscoping to isolate parts of the image and process them in multiple passes with the appropriate algorithm for each. However, this is more like VFX work than something an editor or colorist would typically do.
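For instance, once you have a per-frame foreground matte (from Magic Mask, a roto, or a depth key), recombining two separately retimed passes is just an alpha blend. This is a hypothetical sketch with made-up names, only to show the idea:

```python
import numpy as np

def composite_passes(fg_pass, bg_pass, matte):
    """Blend two separately retimed frames with a soft matte (0-255),
    e.g. foreground retimed with nearest-frame, background with optical flow."""
    alpha = matte.astype(np.float32)[..., None] / 255.0
    return (alpha * fg_pass + (1.0 - alpha) * bg_pass).astype(fg_pass.dtype)
```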
Fusion tools also use forward and backward motion vectors from optical flow, and they can be powerful in stereoscopic workflows, which they were originally designed for. In a stereoscopic workflow there can be two images and two virtual cameras, each with its own set of motion vectors generated by optical flow, letting you choose how to layer them and how to handle the conflicts between them. All of this can involve a lot of manual work, so it's great that we have AI tools to speed up many scenarios. But when they fail, when a specific visual effect is needed, or when VFX has to be integrated into existing slow-motion or variable-speed shots, it's good to have other tools available.
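As a rough illustration of why the forward/backward pair is useful, here's a sketch of a forward-backward consistency check: where the two vector fields don't cancel out after a round trip, the vectors are in conflict (occlusions, crossing motion), which is exactly where artifacts tend to appear and where you might want a mask for a second pass. The OpenCV calls and the one-pixel threshold are my assumptions, not how Fusion actually does it:

```python
import cv2
import numpy as np

def conflict_mask(frame_a, frame_b, thresh=1.0):
    """Flag pixels where forward and backward motion vectors disagree."""
    g_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    g_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    fwd = cv2.calcOpticalFlowFarneback(g_a, g_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(g_b, g_a, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = g_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Follow each forward vector, then read the backward vector at the landing
    # point; for consistent motion the pair should roughly cancel out.
    map_x = (grid_x + fwd[..., 0]).astype(np.float32)
    map_y = (grid_y + fwd[..., 1]).astype(np.float32)
    bwd_at_dst = cv2.remap(bwd, map_x, map_y, cv2.INTER_LINEAR)

    residual = np.hypot(fwd[..., 0] + bwd_at_dst[..., 0],
                        fwd[..., 1] + bwd_at_dst[..., 1])
    return residual > thresh  # True where the vectors conflict
```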