Piojan wrote: I would like to be able to fully control the composition in Fusion so that it is flexible, and where I can decide how many frames each shot lasts, so that I can control the speed of the animation. How can I do this almost entirely in Fusion? I would also like to be able to extend the composition if needed.
If you are working in Resolve and using the Fusion page, the overall Fusion composition is controlled from the Edit page, where you also have the option to do retiming with more advanced algorithms. The overall composition length or clip length is set by the Edit page, but you can also use a Loader node, or drag and drop clips from the Media Pool into the Fusion page, and then you have control not over the overall Fusion composition, but over how many frames each asset lasts.
The same controls are also available in Fusion Studio, but there you can control all of it from Fusion. Also, in Fusion Studio, once you bring clips of a certain length into your composition and would like the composition to match the length of a clip, you can Shift-drag the clip (Loader) onto the timeline and the composition range will match.
Either way, Loaders and MediaIn nodes brought in via the Media Pool have similar options for controlling frames.
Global In and Out: The Global In and Out handles are used to specify the position of this node within the project. Use Global In to specify the frame on which the clip starts and Global Out to specify the frame on which it ends within the project’s global range. The node does not produce an image on frames outside of this range.
If the Global In and Out values are decreased to the point where the range between the In and Out values is smaller than the number of available frames in the clip, Fusion automatically trims the clip by adjusting the Trim range control. If the Global In/Out values are increased to the point where the range is larger than the number of available frames, Fusion automatically lengthens the clip by adjusting the Hold First/Last Frame controls. Extended frames are represented in the range control by coloring the held frames green.
To slide the clip in time or move it through the project without changing its length, place the mouse pointer in the middle of the range control and drag it to the new location, or enter the value manually in the Global In value box.
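The auto-trim/auto-extend bookkeeping above can be sketched in a few lines of Python. This is not Fusion's scripting API; `fit_clip` is a hypothetical helper that only shows how the global range is reconciled with the clip length (simplified to holding the last frame):

```python
def fit_clip(clip_frames, global_in, global_out):
    """Reconcile a clip's length with its Global In/Out range (sketch).

    Returns (frames_used, frames_held): if the global range is shorter
    than the clip, Fusion trims; if it is longer, frames are held
    (shown in green in the range control).
    """
    span = global_out - global_in + 1        # inclusive frame range
    if span < clip_frames:
        return span, 0                       # clip trimmed to fit
    return clip_frames, span - clip_frames   # extra frames held

print(fit_clip(100, 0, 49))    # (50, 0): 50 of 100 frames used
print(fit_clip(100, 0, 119))   # (100, 20): last frame held for 20 frames
```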
Trim: The Trim range control is used to trim frames from the start or end of a clip. Adjust the Trim In to remove frames from the start and adjust Trim Out to specify the last frame of the clip. The values used here are offsets. A value of 5 in Trim In would use the fifth frame in the sequence as the start, ignoring the first four frames. A value of 95 would stop loading frames after the 95th.
Hold First Frame/Hold Last Frame: The Hold First Frame and Hold Last Frame controls hold the first or last frame of the clip for the specified number of frames. Held frames are included in a loop if the footage is looped.
Reverse: Select this checkbox to reverse the footage so that the last frame is played first, and the first frame is played last.
Loop: Select this checkbox to loop the footage until the end of the project. Any lengthening of the clip using Hold First/Last Frame or shortening using Trim In/Out is included in the looped clip.
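Taken together, these controls define a mapping from project frames to source frames. Here is a minimal sketch, assuming 1-based source frame numbering; `source_frame` is a made-up helper for illustration, not Fusion's API:

```python
def source_frame(project_frame, clip_len, trim_in=1, trim_out=None,
                 hold_first=0, hold_last=0, reverse=False, loop=False):
    """Map a 0-based project frame to a 1-based source frame, mimicking
    the Loader's Trim, Hold, Reverse, and Loop controls (sketch only)."""
    if trim_out is None:
        trim_out = clip_len
    # Trim In/Out are offsets: trim_in=5 starts at the fifth frame.
    frames = list(range(trim_in, trim_out + 1))
    # Held frames repeat the first/last frame of the trimmed clip.
    frames = [frames[0]] * hold_first + frames + [frames[-1]] * hold_last
    if reverse:
        frames = frames[::-1]            # last frame plays first
    if loop:
        # Trim and Hold adjustments are included in the looped clip.
        return frames[project_frame % len(frames)]
    return frames[min(project_frame, len(frames) - 1)]

print(source_frame(0, 100, trim_in=5))   # 5: first four frames skipped
print(source_frame(12, 10, loop=True))   # 3: loops back through the clip
```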
................................
As a general rule, Fusion works with frames, not frames per second, so it's good to think in those terms. If you want to control the speed of clips and the method of frame interpolation, you will probably want or need to use Optical Flow to generate motion vectors and then the various time tools for time adjustments.
Optical Flow Node: This node analyzes a clip connected to its input using an Optical Flow algorithm. Think of optical flow as a per-pixel motion vector that matches up features over several frames.
The computed optical flow is stored within the Vector and Back Vector aux channels of the output. These channels can be used in other nodes like the Vector Motion Blur or Vector Distort. However, Optical Flow must render twice when connecting it to a Time Stretcher or Time Speed node. These nodes require the channels A. FwdVec and B. BackVec in that order, but Optical Flow generates A. BackVec and A. FwdVec when it processes.
If you find that optical flow is too slow, consider rendering it out into OpenEXR files using a Saver node.
If the footage input flickers on a frame-by-frame basis, it is a good idea to deflicker the footage beforehand.
The Time Speed node allows image sequences to be sped up, slowed down, reversed, or delayed. Image Interpolation offers smooth, high-quality results. Time Speed should be used for static speed changes or to introduce delays in the footage. To apply animated changes in time, such as accelerating or decelerating time, use a Time Stretcher instead.
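In frame terms, a constant speed change is just a linear remapping of project time to source time, with interpolation blending the two nearest source frames. A sketch of the idea (a hypothetical helper, not the node's actual internals):

```python
import math

def timespeed_sample(project_frame, speed=1.0, delay=0.0):
    """Constant-speed retime: returns the two source frames to blend
    and the blend weight, which is the idea behind frame interpolation."""
    t = (project_frame - delay) * speed   # source time for this project frame
    lo = math.floor(t)
    return lo, lo + 1, t - lo             # blend frames lo and lo+1

print(timespeed_sample(5, speed=0.5))     # (2, 3, 0.5): 50% speed
```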
When operating in Flow mode, Optical Flow data is required. This node does not generate optical flow directly. You have to create it upstream using an Optical Flow node or by loading the forward/reverse vector channels from the image.
TimeSpeed does not interpolate the aux channels but instead destroys them. In particular, the Vector/BackVector channels are consumed and destroyed after computation.
Add an Optical Flow after the Time Speed node if you want to generate flow vectors for the retimed footage.
The Time Stretcher node is similar to the Time Speed node but permits the speed of the clip to be animated. Full spline control of the effect is provided, including smoothing. As a result, the Time Stretcher can be used to animate a single clip to 200% speed, back to normal speed, pause for a second, and then play backward.
Image interpolation offers smooth, high-quality results, all using a spline curve to adjust time nonlinearly. To apply constant time changes such as frame rate changes, use a Time Speed instead.
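The difference from Time Speed is that the project-to-source mapping is itself animated. Here is a sketch using linear interpolation between (project frame, source frame) keys; Fusion uses splines, but the idea is the same, and `stretcher_source_time` is a made-up name:

```python
def stretcher_source_time(project_frame, keys):
    """Animated retime: map project time to source time through a curve
    given as (project_frame, source_frame) keys, linearly interpolated."""
    keys = sorted(keys)
    for (p0, s0), (p1, s1) in zip(keys, keys[1:]):
        if p0 <= project_frame <= p1:
            u = (project_frame - p0) / (p1 - p0)
            return s0 + u * (s1 - s0)
    return keys[-1][1]                     # hold the last key past the end

# Double speed, then a pause, then backward: the example from the text.
curve = [(0, 0), (10, 20), (20, 20), (30, 10)]
print(stretcher_source_time(5, curve))     # 10.0 (double speed)
print(stretcher_source_time(15, curve))    # 20.0 (paused)
print(stretcher_source_time(25, curve))    # 15.0 (playing backward)
```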
When operating in Flow mode, Optical Flow data is required. This node does not generate optical flow directly; you must create it manually upstream using an Optical Flow node or by loading the forward/reverse vector channels from disk.
The Time Stretcher does not interpolate the aux channels but instead destroys them. In particular, the Vector/BackVector channels are consumed/destroyed. Add an Optical Flow after the Time Stretcher if you want to generate flow vectors for the retimed footage.
Reactor (a repository of Fusion macros, fuses, scripts, etc.) has a slightly modified Time Stretcher node (TimeStrecherPlus) and a few others you can try out: TimeHold, Retimer, TimeMapper, etc. Not sure if all of them are on Reactor, but you can search for them online.
Also try what Sander de Regt has referenced:
Time Machine Fuse - A tool designed to greatly simplify very complex retiming of just about anything in Fusion.
Piojan wrote: Also, how can I control the motion blur between the two images?
If you have already managed to generate motion vectors with the Optical Flow node, then you can use the Vector Motion Blur node to turn them into motion blur.
This node is used to create directional blurs based on a Motion Vector map or AOV (Arbitrary Output Variable) channels exported from 3D-rendering software like Arnold, Renderman, or VRay. You can also generate motion vectors using the Optical Flow node in Fusion.
The vector map is typically two floating-point images: one channel specifies how far the pixel
is moving in X, and the other specifies how far the pixel is moving in Y. These channels may be embedded in OpenEXR or RLA/RPF images, or may be provided as separate images using the node’s Vectors input.
The vector channels should use a float16 or float32 color depth, to provide + and – values.
A value of 1 in the X channel would indicate that the pixel has moved one pixel to the right, while a value of –10 indicates ten pixels of movement to the left.
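Given those per-pixel X/Y displacement channels, a vector motion blur averages samples taken along each pixel's motion vector. A toy sketch of where those samples land (illustration only, not Fusion's implementation):

```python
def blur_samples(x, y, vx, vy, n=8):
    """Positions a vector motion blur might sample for pixel (x, y):
    n+1 points along the motion vector (vx, vy), where vx=1 means one
    pixel of movement to the right and vx=-10 means ten to the left."""
    return [(x + vx * i / n, y + vy * i / n) for i in range(n + 1)]

print(blur_samples(0, 0, 4, 0, n=4))
# [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
```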
It produces decent results with easy enough footage, but it's not great. Similar to that is the MotionBlur OpenFX plug-in that ships with Resolve Studio, and by far my favorite third-party plug-in is RE:Vision Effects RSMB (ReelSmart Motion Blur), which produces the best results I've seen so far. Boris FX Continuum also has a new motion blur based on optical flow ML (machine learning). Those are some of your options.
You can also try something like the Directional Blur or Frame Average nodes in Fusion, but they need some tweaking and setup, and are a more stylized way to make motion blur.
Additionally, for a stop-motion-type effect there is an OpenFX Stop Motion filter that ships with Resolve Studio, and on the We Suck Less forum you can find a macro called StopMotionIt that offers a bit more control and is available for all versions of Fusion.