panos_mts wrote:If you track a clip with the Object Mask and then duplicate it, the Object Mask no longer works on the duplicated clip and you have to track it again.
The same happens if you make a new version.
MBP2021 M1 Max 64GB, macOS 14.4, Resolve Studio 18.6.6 build 7 Output: UltraStudio 4K Mini, Desktop Video 12.7
It seems that the tracking data is stored in the cache, which is probably why it does not persist when you duplicate a clip or change a parameter such as scaling.
Resolve 18.6.6 Mac OSX 14.4.1 Sonoma Mac Studio Max 32GB
As much as I love the Object Mask, its tracking data gets lost far too easily for my taste. The Object Mask also loses its tracking data when a simple change is made to the pre-group. Hoping for a fix, or an option so that tracking data is only reset manually when necessary.
Download my 55M Advanced Luts for the Pocket 4K and 6K and UMP12K here: https://55media.net/55mluts/
It's almost necessary to generate a matte from the Object Mask, save it to its own file, and then use that in a subsequent comp or grade if you want to guarantee that it survives, but what a pain in the neck.
I saw Casey Faris demonstrate the basic idea when taking the Object Mask to the Fusion page. He routed the alpha channel to the color output (on the color page) so that the clip became the matte, then rendered in place to generate the matte media. You can then drag this matte media back into the color page as an external matte for the original clip.
Time Traveller Resolve Studio 19.0b1 | Fusion Studio 19.0b1 | Win 11 Pro (22H2) | i9-7940x, P4000 (536.96, 8GB VRAM), 64GB RAM, M.2 boot, SSD scratch, RAID10 data | (laptop) 16" MacBook Pro M1 MAX, 32 GPU cores, 64 GB RAM, 2 TB SSD, Sonoma 14.4.1
I wish there were an option to rebuild all tracks on a timeline at once. Resolve "forgets" the tracking data far too often for my liking; super annoying.
Tom Early wrote:It's also losing all track data when I undo an operation on ANY node in the grade. Not a tool I can rely on using right now.
That's why I don't really understand the problem: if we don't touch it, the track is there and it's saved. When the project is reopened, the tracking data is still there. Maybe I'm missing something? When a clip is put in a compound clip and the Magic Mask is applied to the clip INSIDE the compound clip, it works (relatively) well on the main timeline.
Stumbled upon this thread because I also struggle with the Object Mask. I'm not duplicating but simply reopening the project. Even then the mask result is completely gone on the same clip.
For my current project it's not the worst, because I only used it on one clip. But imagine having done it on dozens; have fun recalculating. They really need to cache that data permanently to disk and only overwrite it when its parameters are changed. It's near useless otherwise.
The same goes for the Depth Map, which is ridiculously heavy to calculate even on an RTX 3090. The speed is fine as long as the result can be stored internally so grading becomes real-time again.
shebbe wrote:Stumbled upon this thread because I also struggle with the Object Mask. I'm not duplicating but simply reopening the project. Even then the mask result is completely gone on the same clip.
For my current project it's not the worst, because I only used it on one clip. But imagine having done it on dozens; have fun recalculating. They really need to cache that data permanently to disk and only overwrite it when its parameters are changed. It's near useless otherwise.
The same goes for the Depth Map, which is ridiculously heavy to calculate even on an RTX 3090. The speed is fine as long as the result can be stored internally so grading becomes real-time again.
I would even be fine with an option to redo all tracking data at once in a project/timeline, but right now you have to go through every single clip.
deezid wrote: I would even be fine with an option to redo all tracking data at once in a project/timeline, but right now you have to go through every single clip.
I don't think that's a good idea.
My reasoning is that we don't really know how the Magic Mask works behind the scenes, or what the "AI" does when we track something again with the same parameters. Does it exactly match the previous track if we don't touch anything? Or does the AI interpret things differently each time a track is performed (even if the difference is, 99.9% sure, minimal)?
This is an interesting question. In general, given the same inputs, an ML model's output should match exactly, because applying inference is just a bunch of fairly simple math. The result is deterministic; the AI doesn't randomly decide to change things. It is possible, though, that at some stage a randomized input is introduced, which would obviously produce different output. My guess is that in Resolve there is no random component, and the exact same input plus strokes should give an exactly matching output mask.
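As a toy sketch of that determinism point (purely illustrative Python; this is obviously not Resolve's actual mask model): a fixed-weight layer run twice on the same input gives bit-identical results, because it is just arithmetic with no random component.

```python
def tiny_inference(x, weights, bias):
    # A single dense "layer" with a ReLU: the kind of plain arithmetic
    # that ML inference is built from. No randomness anywhere.
    y = sum(w * xi for w, xi in zip(weights, x)) + bias
    return max(0.0, y)

x = [0.25, -1.5, 3.0]
w = [0.1, 0.2, -0.3]

# Same input + same weights -> exactly the same output, every run.
assert tiny_inference(x, w, 0.5) == tiny_inference(x, w, 0.5)
```

Any run-to-run difference would therefore have to come from an injected random input or from the numerical effects discussed further down the thread, not from the model "deciding" anything.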
Hendrik Proosa wrote:This is an interesting question. In general, given the same inputs, an ML model's output should match exactly, because applying inference is just a bunch of fairly simple math. The result is deterministic
That's what I was thinking; there is no reason to see changes. But we don't really know how they develop their tools, how it really works behind the scenes, etc. And that's without counting bugs.
They don't really communicate about the limitations of their tools in general.
I'm careful with solutions that look "good enough" on the surface. Sometimes (often, even) they become permanent.
Resolve 18.1 Studio, Fusion 9 Studio CPU: i7 8700, OS: Windows 10 32GB RAM, GPU: RTX3060 I'm refugee from Sony Vegas slicing video for my YouTube channels.
Dug into this topic a bit: it is possible that results in practice are still nondeterministic, although ML libraries and GPU designers are moving toward reducing this. The causes, as I understand it, are parallel algorithms where the order of operations can change the result, and some data-type precision issues that are not always well defined. So ideally the result of applying inference should always be the same, but in practice it can vary depending on the hardware, drivers, framework, library versions, and whatnot.
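The order-of-operations point is easy to demonstrate: floating-point addition is not associative, so a parallel reduction that sums the same values in a different order can legitimately produce a different result. A minimal Python sketch:

```python
# Floating-point addition is not associative: summing the same values
# in a different order gives a different result. This is one reason
# parallel GPU reductions can differ across hardware/drivers/runs.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = sum(vals)                     # the first 1.0 is absorbed by 1e16
reordered = sum([1e16, -1e16, 1.0, 1.0])      # the big terms cancel first

assert left_to_right == 1.0
assert reordered == 2.0
assert left_to_right != reordered
```

Scale that effect up to millions of accumulated multiply-adds in a neural network and tiny per-pixel differences between runs become plausible, even with identical inputs.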
To retain object mask data, ensure there are no nodes before it, only the source input.
So below your node tree, create a second source input, connect it to your Object Mask node, and then connect that node into your node tree, layer mixer, etc.
This avoids tracking-data loss because, in this approach, there are no adjustable nodes before the mask node.
SergioFermin wrote:To retain object mask data, ensure there are no nodes before it, only the source input.
So below your node tree, create a second source input, connect it to your Object Mask node, and then connect that node into your node tree, layer mixer, etc.
This avoids tracking-data loss because, in this approach, there are no adjustable nodes before the mask node.
Thanks for this tip! Works like a charm. You don't need a second source input, though; just pipe it from the main one.
SergioFermin wrote:To retain object mask data, ensure there are no nodes before it, only the source input.
So below your node tree, create a second source input, connect it to your Object Mask node, and then connect that node into your node tree, layer mixer, etc.
This avoids tracking-data loss because, in this approach, there are no adjustable nodes before the mask node.
Thank you Sergio Fermin. I tried to add the second source to a parallel mixer but that didn't work, so I did as follows:
I added a new source > added a blank corrector node as well as a parallel mixer > moved the node with the tracked Magic Mask to the beginning > connected the Magic Mask node to my new source > connected the new blank node to the old source, and finally connected both the blank and Magic Mask nodes to the parallel mixer. It doesn't seem detrimental as far as I can see, but if someone can explain whether it could be, I would appreciate it.
SergioFermin wrote:Thank you Sergio Fermin. I tried to add the second source to a parallel mixer but that didn't work, so I did as follows:
I added a new source > added a blank corrector node as well as a parallel mixer > moved the node with the tracked Magic Mask to the beginning > connected the Magic Mask node to my new source > connected the new blank node to the old source, and finally connected both the blank and Magic Mask nodes to the parallel mixer. It doesn't seem detrimental as far as I can see, but if someone can explain whether it could be, I would appreciate it.
Not sure if I fully understand what you did or wanted to achieve, but this is my setup. No need for a parallel mixer; I think it's quite simple.
[Attachment: 2022-06-16 10_51_37-Window.png, node tree screenshot]
The Magic Mask node just takes the source input, generates its data, and its matte output is fed into whatever node you need it for; in my case, a serial node at the end of my tree.
I didn't know you can pipe it from the main one. Much Cleaner! Thank you for the example!
They added an option to re-track all clips, but it works only on the clips in the current timeline; if you have nested timelines or compound clips, you have to open them one by one and re-track.
If the Object Mask is missing and you render the timeline, it will re-track automatically during the render. The same problem applies here: it ignores any Object Mask inside nested timelines/compound clips.