
Limit of clip length Fusion can handle?


footofwrath

  • Posts: 221
  • Joined: Sun Mar 07, 2021 2:36 pm
  • Real Name: Andrew Longhurst

Limit of clip length Fusion can handle?

Posted: Mon Oct 04, 2021 9:08 pm

Hi friends,

I'm trying to run a Spherical Stabilizer pass (in standalone Fusion) over a series of clips that make up a 2hr continuous video. I am trying 30min clips but they run into trouble: four out of five times they crash/freeze at 80-90% completion, a few times at 99%. :roll:
The source is 8K as well, so I am using 1920x960 proxies to generate the Stabilizer node in Fusion and then copy that into Resolve on top of the clips.

So what should my maximum clip length be to ensure that Fusion can successfully parse and complete the Spherical Stabilizer pass? Does it depend on computer resources? I have managed to complete two so far, the largest at 54000 frames, but the third, at 51299 frames, is being a lot stickier.
What is the recommended limit that Fusion can handle safely?
Resolve Studio 18.6.3
Ryzen 9 5950x || RTX3090 || 64Gb 3600Mhz || Intel X520 10Gbe
MacBookPro M1 Max || 32C GPU || 64Gb || QNAP T310G1S SFP+ 10Gbe
QNAP TVS-871 @ 74tb formatted R6, Mellanox MCX312b 10Gbe
Mikrotik switches & routers

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Limit of clip length Fusion can handle?

Posted: Tue Oct 05, 2021 2:25 pm

Firstly, to answer the question in the title: the hard limit on Fusion compositions is 1,000,000 frames. This appears to be a hardcoded limitation in the Global Out parameter. You can have compositions longer than that length, but all nodes will stop processing at the 1 million frame mark. As this is based on an arbitrary frame count, not timecode, the length in time would depend on the frame rate. But even at 60 FPS you won't hit this limit unless attempting to process videos of over 4.5 hours.
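
If you want to check where a particular comp sits relative to that limit, you can read the global range from the Console. This is just a sketch from memory - I believe these are the attribute names the scripting API uses, but double-check them on your version:

Code:
-- Run from Fusion's Console with the comp open; 'comp' is predefined there.
-- COMPN_GlobalStart / COMPN_GlobalEnd are the attribute names I believe hold the global range.
local attrs = comp:GetAttrs()
local first = attrs.COMPN_GlobalStart
local last  = attrs.COMPN_GlobalEnd
print("Global range: " .. first .. " to " .. last .. " (" .. (last - first + 1) .. " frames)")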

OK, so as to the specific question: there appears to be a significant inefficiency in Fusion related to saving large numbers of keyframes.

The Spherical Stabilizer - like the Tracker and Planar Tracker - appears to operate in two steps:

1. Do the analysis/tracking
2. Save all the keyframes

Step 1 appears to perform fairly consistently, with performance steady across varying render lengths.

Step 2, however, appears to have a major inefficiency. Firstly, it's not multithreaded: it runs in the GUI thread, which both locks up Fusion while it's running and imposes a significant bottleneck.

Worse, the time taken for keyframe processing is not linear - the more keyframes, the longer it takes per keyframe. It scales very poorly, in other words.

Here are some tests I ran on the Spherical Stabilizer in Fusion Studio 17.3.1:

Source media & Node Structure: Input image is 320x180 int8 Background with Rectangle mask
[Screenshots: source image and node graph]

Test method: simply clicking Track Forward on the SStab node, then clicking back to the Background node while the tracking ran. This last step was to avoid adding any extra UI delay from drawing the keyframes on the timeline bar (which I don't think amounts to much, but just in case).

I used a modified version of Andrew Hazelden's Action Listener script to semi-automate the logging to get more accurate stats without needing to manually time the keyframe portion with a stopwatch.

Tests run: 10K, 25K and 50K frames.

Results with some stats:
[Screenshot: timing results and stats]

Conclusion: The time taken to process and add keyframes increases non-linearly as the render range increases

The time to process 1000 keyframes increases from 3.51s in a 10k test to 22.56s in a 50k test.
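
To put a rough number on how poorly it scales: if the per-keyframe cost really does grow more or less linearly with the total keyframe count, then the total keyframe time grows roughly quadratically. Here's a back-of-envelope extrapolation from just those two measured points (my own fit, so treat it as an estimate rather than a measurement):

Code:
-- Rough extrapolation from the two data points above (3.51s/1000 at 10k, 22.56s/1000 at 50k).
-- Linear fit of the per-1000-keyframe cost against total frame count.
local function per1000(frames)
  return 3.51 + (22.56 - 3.51) * (frames - 10000) / (50000 - 10000)
end

local function keyframeSeconds(frames)
  return per1000(frames) * frames / 1000
end

print(keyframeSeconds(50000))      -- ~1128s (~19 min) as a single 50k batch
print(2 * keyframeSeconds(25000))  -- ~533s  (~9 min) as two 25k batches

Which is why splitting the work into smaller batches pays off.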

My advice would be to never process more than 50k frames at once, and ideally fewer. The more frames you do in one batch, the less efficient it gets.

In my tests I used an unrealistic source image - very small - so that the tracking portion was as quick as it could reasonably be. In a real-life test at 1080p the percentage of time taken by the actual tracking will increase, but the keyframe time will likely stay the same. So e.g. at 50k frames, instead of seeing 86% keyframe time as I did, it might only be 30 or 40%. But it'll still take more time overall than 2 x 25k batches would, for example.

Sadly I'm not sure if there's any way to automate the clicking of the Track Forward button. So you do also have to factor in your time in clicking that button for 100 individual comps, or whatever it might be.
Resolve Studio 17.4.3 and Fusion Studio 17.4.3 on macOS 11.6.1

Hackintosh:: X299, Intel i9-10980XE, 128GB DDR4, AMD 6900XT 16GB
Monitors: 1 x 3840x2160 & 3 x 1920x1200
Disk: 2TB NVMe + 4TB RAID0 NVMe; NAS: 36TB RAID6
BMD Speed Editor

footofwrath

  • Posts: 221
  • Joined: Sun Mar 07, 2021 2:36 pm
  • Real Name: Andrew Longhurst

Re: Limit of clip length Fusion can handle?

Posted: Tue Oct 05, 2021 6:44 pm

Thanks for that insight man, very informative.

The clips I'm attempting are pretty much bang-on 50K frames each. Sometimes they crash as early as frame 5k.

I have managed to complete 3 of the 4, eventually. Guess I don't really have much choice other than to try to split up my remaining clip into two pieces - not ideal, but as you say, better than watching the clip fail a dozen more times heh.

I can cope with letting the guy run for 2 or 3 hours --if-- it would stay stable and complete; managing multiple smaller clips adds significant management overhead for a sequence that is going to be virtually unchanged from the raw stream.

Small aside: any idea if there's a way to set the default Fusion frame scope to 'entire clip' instead of just the first 1000 frames?
Resolve Studio 18.6.3
Ryzen 9 5950x || RTX3090 || 64Gb 3600Mhz || Intel X520 10Gbe
MacBookPro M1 Max || 32C GPU || 64Gb || QNAP T310G1S SFP+ 10Gbe
QNAP TVS-871 @ 74tb formatted R6, Mellanox MCX312b 10Gbe
Mikrotik switches & routers

footofwrath

  • Posts: 221
  • Joined: Sun Mar 07, 2021 2:36 pm
  • Real Name: Andrew Longhurst

Re: Limit of clip length Fusion can handle?

Posted: Tue Oct 05, 2021 7:27 pm

OK, well I decided to be a tiny bit more patient and wait it out. On the 51299-frame clip, when the pass does in fact manage to reach the end (or, rather, frame 51297..) it took 10m51s according to the status output. The "actual" time though - meaning the time to then save the keyframes, I suppose - took another *31 mins* on top of that. So that definitely correlates with your findings earlier heh.

Meanwhile, the export of these clips from their original format (h264) to ProRes takes around 4 hours per 30 minutes and can only be done serially, so unless I can make significant gains in the analysis (I'll try 10min lengths next time => 25000 frames per clip; the overhead gets excessive going any shorter...) I might just have to accept the delay.. although the risk of crashing altogether isn't helping the calculation either..
Resolve Studio 18.6.3
Ryzen 9 5950x || RTX3090 || 64Gb 3600Mhz || Intel X520 10Gbe
MacBookPro M1 Max || 32C GPU || 64Gb || QNAP T310G1S SFP+ 10Gbe
QNAP TVS-871 @ 74tb formatted R6, Mellanox MCX312b 10Gbe
Mikrotik switches & routers

Sander de Regt

  • Posts: 3591
  • Joined: Thu Nov 13, 2014 10:09 pm

Re: Limit of clip length Fusion can handle?

Posted: Tue Oct 05, 2021 7:39 pm

Just out of curiosity: are you saving to ProRes directly, or are you rendering separate frames?
If it's the former, could you try the latter to see if it makes a difference? Just as an experiment, render to JPGs and see if it makes it through to the end. This goes for the input as well: if you convert your video file to separate frames first, does the issue still happen?
Sander de Regt

ShadowMaker SdR
The Netherlands

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Limit of clip length Fusion can handle?

Posted: Tue Oct 05, 2021 7:43 pm

Regarding the freezing/crashing: yeah, I did see one instance, even in my very easy tiny-image test, where it seemed to lock up during the keyframe stage. I think that was a 25k test, and I eventually force-closed it after waiting about 20 minutes for the keyframes to write, whereas other tests completed that stage in 3-4 minutes for that number of keyframes.

Not sure what might cause that as the other tests completed OK.

footofwrath wrote:Small aside: any idea if there's a way to set the default Fusion frame scope to 'entire clip' instead of just the first 1000 frames?
Go to Preferences -> Global and Default Settings -> Defaults and set "Global range" to start/end on the desired numbers. That'll apply for any new comps created from that point onwards.

By the way, if you're using ProRes that may be coming in by default as int16 (you can check in the top right of the viewer). I'd recommend setting that down to int8 via the Loader's depth field. It shouldn't make any difference to the stabilizer accuracy, but will reduce RAM and VRAM requirements and so might be a bit quicker (in the render portion only, sadly) and possibly therefore more stable.

In terms of process, if it were me I'd set up one comp in Fusion with the desired nodes and desired settings, then duplicate it X times, changing the filename or Trim In/Out values, or whatever method you're using to do different portions of the file(s), in each successive comp. In case you haven't already noticed, .comp files are plain text Lua code and can be easily edited in a text editor.

If there were dozens or hundreds of such comps required, a script to generate them would be good. But if it's not much more than 10 or so, probably just quicker to edit them by hand.

(Maybe you're already doing that. I mention it as you were asking about defaulting the comp range, which wouldn't be an issue if each comp was copied and then edited as text).

When you say four hours for exporting h264 -> ProRes, do you mean with the Spherical Stabilizer on, after the tracking is done - the 8K render using the tracking data generated in Fusion on the 960p proxies?
Resolve Studio 17.4.3 and Fusion Studio 17.4.3 on macOS 11.6.1

Hackintosh:: X299, Intel i9-10980XE, 128GB DDR4, AMD 6900XT 16GB
Monitors: 1 x 3840x2160 & 3 x 1920x1200
Disk: 2TB NVMe + 4TB RAID0 NVMe; NAS: 36TB RAID6
BMD Speed Editor

footofwrath

  • Posts: 221
  • Joined: Sun Mar 07, 2021 2:36 pm
  • Real Name: Andrew Longhurst

Re: Limit of clip length Fusion can handle?

Posted: Tue Oct 05, 2021 11:37 pm

TheBloke wrote:Regarding the freezing/crashing: yeah, I did see one instance, even in my very easy tiny-image test, where it seemed to lock up during the keyframe stage. I think that was a 25k test, and I eventually force-closed it after waiting about 20 minutes for the keyframes to write, whereas other tests completed that stage in 3-4 minutes for that number of keyframes.

Not sure what might cause that as the other tests completed OK.


I'm beginnnnning to suspect I might have a memory stability issue. I see some random crashes in both Fusion & Resolve, and occasionally outside of them too, so I might try lowering my memory speed for a bit just to see if that settles things a little. I'm running in-spec (3600MHz) but maybe something is not liking it. :/

TheBloke wrote: Go to Preferences -> Global and Default Settings -> Defaults and set "Global range" to start/end on the desired numbers. That'll apply for any new comps created from that point onwards.


Hmm ok. But that's still absolute count, right? I can't set it to always take 100% of the Input clip? As (at this point) I'm not sure why I would ever want to process less than the full clip..


TheBloke wrote: By the way, if you're using ProRes that may be coming in by default as int16 (you can check in the top right of the viewer). I'd recommend setting that down to int8 via the Loader's depth field. It shouldn't make any difference to the stabilizer accuracy, but will reduce RAM and VRAM requirements and so might be a bit quicker (in the render portion only, sadly) and possibly therefore more stable.



I should state, I guess, that I'm very novice at this point. Alllll I am doing with Fusion is running the Sph.Stab. because here it runs fast (80-90fps on a 1920x960 clip) whereas in Resolve it barely pushes 5fps (or requires a lot of messing around with Fusion clips, compound clips, correcting parser code, etc.. ), so I just run the S.S. in Fusion and then copy the node to Resolve; I don't render anything at all in Fusion at the moment.




TheBloke wrote: In terms of process, if it were me I'd set up one comp in Fusion with the desired nodes and desired settings, then duplicate it X times, changing the filename or Trim In/Out values, or whatever method you're using to do different portions of the file(s), in each successive comp. In case you haven't already noticed, .comp files are plain text Lua code and can be easily edited in a text editor.


When/if I get more competent at such things, that probably makes sense, esp. if I start getting into more than just the base S.S.; I'm investigating if there's a practical way to re-orient across the clip using tracker output, but yeah for now I'm not doing anything remotely fancy, not even Trim; I just run the whole clip.

Actually most of my runs will be like this: I film by sticking a 360 camera (or two) on a long stick and then walk/bike/drive around for a couple of hours or so. My 8K camera doesn't have good stabilisation so it allllways costs me long runs of stabilising afterwards. The others have newer tech and generally don't need the same; but in either case I'm always working with 2-3hrs of 6K-8K clips which I then speed up to get the hyperlapse-type effect in 360. I just use the complete string of footage, wiping out any really bad mess-ups or camera drops or dead periods, and [am trying to start learning] object replacement to delete humans etc.. though I have issues with processing there too :D

I initially just did this with the timelapse mode, but then the in-camera stabilisation is almost non-existent, so it seems easier just to let processing handle the unneeded frames rather than spend a lot of time re-orienting gyro outliers (I still have a couple of nasty videos to fix because of this :/ )


TheBloke wrote: If there were dozens or hundreds of such comps required, a script to generate them would be good. But if it's not much more than 10 or so, probably just quicker to edit them by hand.

(Maybe you're already doing that. I mention it as you were asking about defaulting the comp range, which wouldn't be an issue if each comp was copied and then edited as text).


You're probably giving me too much credit :D When I load in a clip, it's usually 20-50k frames, but by default only the first 1000 frames are set in Trim. So I have to edit that range to the full clip length. I'm not sure any S.S. analysis would be re-usable here since each output is particular to the behaviour of the camera during that run... no?

TheBloke wrote:When you say four hours for exporting h264 -> ProRes, do you mean with the Spherical Stabilizer on, after the tracking is done - the 8K render using the tracking data generated in Fusion on the 960p proxies?


Ahh no.. I just mean from the manufacturer's stitching app. I spit ProRes out of there so that all my editing is done in ProRes in Resolve/Fusion until finally rendering the completed piece in h264/h265 depending on what I plan to do with it (usually YouTube).
Though technically those apps do have their own 'supposed' stabilisers; as I mentioned, the later models do work well but the 8K camera doesn't behave as well.

It still astounds me that there isn't a simple tool that can just flat-out horizon-orient a clip from start to finish, with assistance at points in the middle if there isn't a sufficient horizon line to draw from. That's also why I have another thread open regarding single-object fixed tracking/reorienting: for static timelapses nothing needs to move, but if the camera wobbles it's not good for VR viewing. It should be a --trivial-- task to simply fix e.g. the church as centre and then stabilise every frame statically against that object.
Resolve Studio 18.6.3
Ryzen 9 5950x || RTX3090 || 64Gb 3600Mhz || Intel X520 10Gbe
MacBookPro M1 Max || 32C GPU || 64Gb || QNAP T310G1S SFP+ 10Gbe
QNAP TVS-871 @ 74tb formatted R6, Mellanox MCX312b 10Gbe
Mikrotik switches & routers

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Limit of clip length Fusion can handle?

Posted: Wed Oct 06, 2021 12:06 pm

footofwrath wrote:I'm beginnnnning to suspect I might have a memory stability issue. I see some random crashes in both Fusion & Resolve, and occasionally outside of them too, so I might try lowering my memory speed for a bit just to see if that settles things a little. I'm running in-spec (3600MHz) but maybe something is not liking it. :/
Could be. But I did also see at least one freeze during this process. And crashes and freezes are sadly not all that uncommon in Fusion Studio and Resolve's Fusion page.

But yes, if you see the problem in other applications, definitely look into HW issues.

footofwrath wrote:Hmm ok. But that's still absolute count, right? I can't set it to always take 100% of the Input clip? As (at this point) I'm not sure why I would ever want to process less than the full clip..
No, you can't tell the comp to auto adjust to the Loader.

However, adding a Loader to a new comp and then viewing it will automatically set the render range (the inner two of the four in/out boxes on the left of Fusion Studio's interface) to match that clip - but only if the clip is shorter than the current comp. It won't grow the comp, but it will shorten the render range if the first thing you do is add a Loader and view it. (Or at least, it does so for the first Loader added.)

So setting your Default to, say, 60k frames (some value larger than any clip) then making a new comp, dragging in a Loader, and viewing it, should get the render range set automatically.
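
If you'd rather not rely on that first-Loader behaviour, the same thing can be done explicitly from the Console. A sketch, assuming a Loader named Loader1 and that I'm remembering the attribute names (TOOLIT_Clip_Length, COMPN_GlobalEnd, etc.) correctly - verify them on your install before trusting this:

Code:
-- Run from Fusion's Console with the comp open ('comp' is predefined there).
-- Reads the first clip's length from the Loader and sets both ranges to match.
local length = comp.Loader1:GetAttrs().TOOLIT_Clip_Length[1]  -- frames in clip 1 (assumed attribute name)

comp:SetAttrs({
  COMPN_GlobalStart = 0,
  COMPN_GlobalEnd   = length - 1,
  COMPN_RenderStart = 0,
  COMPN_RenderEnd   = length - 1,
})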

EDIT: More on this in next post.

footofwrath wrote:I should state, I guess, that I'm very novice at this point. Alllll I am doing with Fusion is running the Sph.Stab. because here it runs fast (80-90fps on a 1920x960 clip) whereas in Resolve it barely pushes 5fps (or requires a lot of messing around with Fusion clips, compound clips, correcting parser code, etc.. ), so I just run the S.S. in Fusion and then copy the node to Resolve; I don't render anything at all in Fusion at the moment.
Yeah I know. I'm talking about speeding up the analysis process. From my earlier testing, the Track Forward ran faster on int8 footage than it did on float16. I didn't specifically test the difference between int16 and int8, but I would try setting the Loader's Depth to int8 and that should lower RAM requirements which might improve the tracking speed by a bit, and may improve stability.

footofwrath wrote:When/if I get more competent at such things, that probably makes sense, esp. if I start getting into more than just the base S.S.; I'm investigating if there's a practical way to re-orient across the clip using tracker output, but yeah for now I'm not doing anything remotely fancy, not even Trim; I just run the whole clip.
Ah OK, I had thought you had one 8 hour clip or something and then were dividing it up into 30 minute chunks in Fusion. But I guess you just have multiple 30 minute source clips.

Nonetheless, if you set up one comp then save it (with the required nodes but before you click Track in the SStab), then open that comp in a text editor, it should be pretty immediately obvious what bit of text needs to be changed to make the comp for the second clip, and the third clip, and so on.

footofwrath wrote:When I load in a clip, it's usually 20-50k frames, but by default only the first 1000 frames are set in Trim. So I have to edit that range to the full clip length. I'm not sure any S.S. analysis would be re-usable here since each output is particular to the behaviour of the camera during that run... no?
Yeah I didn't mean re-using the SStab for a different clip. Again I was talking about a quicker way to apply the tracking process for each of your many clips.

Here's a screenshot of my opening an example comp in a text editor, with highlights on the parts that would change for each input clip:
[Screenshot: .comp file open in a text editor, with the per-clip values highlighted]

So you could:
1. create and save one comp with everything set up correctly, but without running the SStab tracking yet.
2. open that in a text editor as above (ideally use a decent text editor like Visual Studio Code or Notepad++, though Notepad will work if that's all you have; don't use Wordpad or Word or any rich-text editor)
3. change the filename of the input clip, and the render range according to the length of this clip
4. Save As: comp2
5. Repeat steps 3 and 4 for every input clip.

If there are a lot of input clips this could work out faster and less tedious than re-creating the nodes in Fusion over and over - even though it's only two nodes.

If you know coding you could write a little script in which you copy and paste the filename and it generates a comp immediately. But doing it manually in a text editor should still be faster than making the comps by hand in Fusion.
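
For what it's worth, here's roughly what such a script could look like - plain Lua run outside Fusion, treating the saved template comp as text. The template filename, the clip list and the literal strings being replaced are all made up for illustration; they'd be whatever you used when you saved your own template:

Code:
-- Clone a hand-made template.comp for each source clip, swapping in the new
-- filename and end frame by plain text substitution (gsub replaces every
-- occurrence, which is what you want for the trim/range values).
local template = assert(io.open("template.comp", "r")):read("*a")

local clips = {
  { file = "V:/proxies/clip_002.mov", lastFrame = 51298 },
  { file = "V:/proxies/clip_003.mov", lastFrame = 49999 },
}

for i, c in ipairs(clips) do
  local text = template
  text = text:gsub("V:/proxies/clip_001%.mov", c.file)  -- filename used in the template
  text = text:gsub("53999", tostring(c.lastFrame))      -- end frame used in the template
  local out = assert(io.open(string.format("clip_%03d.comp", i + 1), "w"))
  out:write(text)
  out:close()
end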

Again I'm assuming there's many of these comps - 10 or 20 or whatever. If it's just three or four, then whatever you're doing now is fine. Not worth the mental energy switching process if it's just a handful.

If all this was going to be an ongoing task, something you'll still be doing in a month or a year, there's also the option of making a Macro that combines the various nodes into one, exposing on that one node all the controls required for the whole process.

Once done, you run those comps in Fusion one by one more or less in the way you have been:
1. Open comp 1
2. Click Track Forward
3. Come back later when it's done, save and close the comp.
4. Repeat steps 1 - 3 for each of your comps
5. When you're ready to do the next step: open a finished comp, copy the SStab node, paste it into Resolve to actually do the SStab render.

Or, assuming you're not wanting to do any timeline editing/manipulation in Resolve (you're just rendering out the whole clip stabilised), you could also do the final render in Fusion Studio, by substituting the real clip for the proxy clip you tracked on and then adding a Saver node and clicking Render.

That could all be pre-set up in the template comp I described earlier, i.e. have two Loaders - one for the original clip, one for the proxy - with the proxy Loader connected, and also have a Saver node set up to write the final file. You'd then do the tracking & saving as I describe above. Later, when it comes time to do the render, you'd open the saved comp, simply swap the Loader connection over to the real file, then click Render. No other work required, as the Saver was already set up and sitting there waiting. (Saver nodes only operate when Render is clicked.)
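
If you wanted to take the manual step out of that swap-and-render stage too, it should in principle be scriptable from the Console. Very much a sketch: the node names and the main input name below are placeholders, and while I believe comp:Render() accepts a Wait flag, test this on a throwaway comp first:

Code:
-- Run from Fusion's Console with the finished comp open ('comp' is predefined).
-- 'Loader8K', 'SphericalStabilizer1' and 'Input' are placeholder names for your own nodes.
comp.SphericalStabilizer1.Input = comp.Loader8K.Output  -- swap from the proxy Loader to the full-res one

comp:Render({ Wait = true })  -- kick off the render (the Saver writes the output) and block until done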

In fact Fusion Studio has built-in support for proxies, so it might be possible to use one Loader, set to load the original 8K clip, with the 960p clip set up as its proxy:
[Screenshot: Loader proxy filename setting]

It'd need testing to confirm that the proxy is used for the SStab track, but if it is then that'd be a nice workflow. Then when you click Render it would automatically use the original 8K clip instead of the proxy.

footofwrath wrote:Ahh no.. I just mean from the manufacturer's stitching app. I spit ProRes out of there so that all my editing is done in ProRes in Resolve/Fusion until finally rendering the completed piece in h264/h265 depending on what I plan to do with it (usually YouTube).
Oh OK, it's some kind of 3D or VR or panoramic thing? OK, I wondered why on earth it'd take 4 hours to render 30 minutes of ProRes video! Even at 8K that's super slow.
Last edited by TheBloke on Wed Oct 06, 2021 12:20 pm, edited 1 time in total.
Resolve Studio 17.4.3 and Fusion Studio 17.4.3 on macOS 11.6.1

Hackintosh:: X299, Intel i9-10980XE, 128GB DDR4, AMD 6900XT 16GB
Monitors: 1 x 3840x2160 & 3 x 1920x1200
Disk: 2TB NVMe + 4TB RAID0 NVMe; NAS: 36TB RAID6
BMD Speed Editor

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Limit of clip length Fusion can handle?

Posted: Wed Oct 06, 2021 12:16 pm

Oh, and re setting the render range for a Loader: I forgot that there's also right-click on any node and "Set Render Range". You can also drag the node to the timeline bar under the viewers to trigger the same thing.

Again this will only set the render range within the bounds of the comp range - it won't increase a too-short comp range. But if you start with a comp length longer than any footage, this is an effective way to quickly set the range according to the actual length of the clip.

(I guess this is the same method that is triggered automatically when a Loader is first added to a new comp - it automatically does Set Render Range that first time.)
Resolve Studio 17.4.3 and Fusion Studio 17.4.3 on macOS 11.6.1

Hackintosh:: X299, Intel i9-10980XE, 128GB DDR4, AMD 6900XT 16GB
Monitors: 1 x 3840x2160 & 3 x 1920x1200
Disk: 2TB NVMe + 4TB RAID0 NVMe; NAS: 36TB RAID6
BMD Speed Editor

Bryan Ray

  • Posts: 2491
  • Joined: Mon Nov 28, 2016 5:32 am
  • Location: Los Angeles, CA, USA

Re: Limit of clip length Fusion can handle?

Posted: Wed Oct 06, 2021 4:41 pm

Shift+drag a Loader to the timeline to set the global range.
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

TheBloke

  • Posts: 1905
  • Joined: Sat Nov 02, 2019 11:49 pm
  • Location: UK
  • Real Name: Tom Jobbins

Re: Limit of clip length Fusion can handle?

Posted: Wed Oct 06, 2021 4:45 pm

Bryan Ray wrote:Shift+drag a Loader to the timeline to set the global range.
Oh, nice!

So in cases where the Loader is longer than the previous comp/render range, it looks like you need to drag it twice to set both? Shift-drag to set the comp range, but that leaves the render range alone; then drag again without Shift (or right-click -> Set Render Range) to set the render range.
Resolve Studio 17.4.3 and Fusion Studio 17.4.3 on macOS 11.6.1

Hackintosh:: X299, Intel i9-10980XE, 128GB DDR4, AMD 6900XT 16GB
Monitors: 1 x 3840x2160 & 3 x 1920x1200
Disk: 2TB NVMe + 4TB RAID0 NVMe; NAS: 36TB RAID6
BMD Speed Editor

Bryan Ray

  • Posts: 2491
  • Joined: Mon Nov 28, 2016 5:32 am
  • Location: Los Angeles, CA, USA

Re: Limit of clip length Fusion can handle?

Posted: Wed Oct 06, 2021 5:10 pm

That's right. A bit annoying—I can't think of any reason you'd want to update the global range without updating the render range immediately after, but it's at least a little less annoying than having to determine your end frame and type it in yourself.
Bryan Ray
http://www.bryanray.name
http://www.sidefx.com

Chad Capeland

  • Posts: 3025
  • Joined: Mon Nov 10, 2014 9:40 pm

Re: Limit of clip length Fusion can handle?

Posted: Wed Oct 06, 2021 6:42 pm

Bryan Ray wrote:I can't think of any reason


It's common when you have head/tail frames that you need for tracking or optical flow, or for off-speed element footage like flames or flares or whatever - stuff where you might need to have the global start set to -1000 for preroll without changing the render range at all.

Ergonomically, I'd like it if dragging a tool to the middle half of the time ruler set the start/end, dragging it to the left quarter set only the start, and dragging it to the right quarter set only the end.

An additional modifier, like Meta or Alt, to set both the render range and the global range would also be nice.
Chad Capeland
Indicated, LLC
www.floweffects.com

footofwrath

  • Posts: 221
  • Joined: Sun Mar 07, 2021 2:36 pm
  • Real Name: Andrew Longhurst

Re: Limit of clip length Fusion can handle?

Posted: Mon Oct 11, 2021 8:05 pm

TheBloke wrote:
footofwrath wrote:I'm beginnnnning to suspect I might have a memory stability issue. I see some random crashes in both Fusion & Resolve, and occasionally outside of them too, so I might try lowering my memory speed for a bit just to see if that settles things a little. I'm running in-spec (3600MHz) but maybe something is not liking it. :/
Could be. But I did also see at least one freeze during this process. And crashes and freezes are sadly not all that uncommon in Fusion Studio and Resolve's Fusion page.

But yes, if you see the problem in other applications, definitely look into HW issues.


It's hard to say since I pretty much exclusively use this machine for Resolve, Fusion & exporting the original clips from their proprietary format/stitching tool.
I don't -regularly- get crashes outside of Resolve but it -has- happened so there -might- be something to look into.


TheBloke wrote:So setting your Default to, say, 60k frames (some value larger than any clip) then making a new comp, dragging in a Loader, and viewing it, should get the render range set automatically.

EDIT: More on this in next post.


Yeah this can work. Any downside you can see to setting this to 200,000 frames? ;)


TheBloke wrote: Ah OK, I had thought you had one 8 hour clip or something and then were dividing it up into 30 minute chunks in Fusion. But I guess you just have multiple 30 minute source clips.


Well, sometimes I have multiple 30min clips, yes. Like right now I have started a project with 10 clips of 30 mins each. They don't need stabilising yet though. I will speed them up, and then they will need a stabilising pass, but the video will likely be under 30 mins in total by then, so it won't be a huge task to get it/them through Fusion.

The other camera just takes many individual 2GB files and stitches them into a single entity. That program also doesn't finalise the exports properly (@8K) if I make them too long, so right now I'm breaking these into 25-30min pieces for export as well, simply for assurance.

I think I can handle just setting up the Loader nodes each time.. with the global range default set, that removes my biggest hurdle. If I get more serious and time starts meaning money then I'll revisit the pre-prepped script heh. I see a barrier in that I'd still have to track down the number of frames in each clip, so I'm not sure it would result in any significant time saving as long as I have to run that manual step on each track. Unless I can also just over-estimate here?

TheBloke wrote:If all this was going to be an ongoing task, something you'll still be doing in a month or a year, there's also the option of making a Macro that combines the various nodes into one, exposing on that one node all the controls required for the whole process.


Well, it's going to be a constant, I think, as long as DR 17 keeps refusing to make use of the full resources of my machine in the Fusion tab. Hell, even proxy media generation is currently taking 20 mins to make a 1/4-res proxy of a 30min 6K clip. I don't remember it taking that long before.. :/

TheBloke wrote:Oh OK, it's some kind of 3D or VR or panoramic thing? OK, I wondered why on earth it'd take 4 hours to render 30 minutes of ProRes video! Even at 8K that's super slow.


It's VR footage from consumer 6K/8K 360° cameras, yes. But Resolve doesn't know that.. nor does Fusion for that matter. But I'm assuming (perhaps naively) that using the Spherical Stabiliser forces Fusion to wrap the video onto the sphere at least for this operation.. and it does seem to work, so it must have the correct awareness. It's not magic either though; in a couple of places I've felt the need to run the stabiliser a second time after plugging in some PanoMap keyframes.

Gotta say I feel like I'm missing a huge performance hurdle somewhere. I didn't skimp on any components and have also set all the proxies, performance optimisations etc I could find, and still I'm waiting for weeks (it feels like) for anything to happen.. Feel like I'm missing a great big "Click me for maximum sadistic pain for no good reason" switch in the program somewhere. :geek:
Resolve Studio 18.6.3
Ryzen 9 5950x || RTX3090 || 64Gb 3600Mhz || Intel X520 10Gbe
MacBookPro M1 Max || 32C GPU || 64Gb || QNAP T310G1S SFP+ 10Gbe
QNAP TVS-871 @ 74tb formatted R6, Mellanox MCX312b 10Gbe
Mikrotik switches & routers
