jsghost777 wrote: Man, I need to learn to do this. My animations are so lifeless when I look at yours!
I had the exact same problem. Even when I wanted to animate something dynamic, I could not get the camera or lights to stay where I wanted them. But as you can see with this quick little animation, I was able to figure it out. The key is the "target" option and using transform nodes after almost every element in the scene.
Imagine a solar system. The Sun rotates on its own axis. Around the Sun there are planets, which also rotate on their own axes, and each of them has moons that rotate on their own axes while also orbiting those planets.
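If you think of it in code terms, each body's position is computed relative to its parent, just like chaining transform nodes. A little Python sketch of the idea (plain math to illustrate the hierarchy, nothing to do with Fusion's actual API):

```python
import math

def rotate(point, angle):
    """Rotate a 2D point around the origin by angle (radians)."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

def orbit(parent_pos, distance, angle):
    """Position of a body orbiting parent_pos at the given distance and angle."""
    local = rotate((distance, 0.0), angle)
    return (parent_pos[0] + local[0], parent_pos[1] + local[1])

# Sun at the origin; planet orbits the sun; moon orbits the planet.
sun = (0.0, 0.0)
planet = orbit(sun, 10.0, math.pi / 2)  # quarter turn around the sun
moon = orbit(planet, 2.0, math.pi)      # half turn around the planet

print(planet)  # roughly (0.0, 10.0)
print(moon)    # roughly (-2.0, 10.0)
```

Animate the two angles independently and the moon automatically follows the planet, which is exactly why stacking transforms per element keeps everything under control.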
jsghost777 wrote: Just a random question: is there a way to take notes directly in fusion? Like where the nodes are? I HATE their underlay with a passion and don't want to use it, so I was wondering if there's a way to manually write in labels?
Instead of treating everything with one Merge node and then transforming its coordinates, it's best to create an element, "this planet" or "that planet", and then add a Transform 3D node. That way you can control not just the transform parameters, some 9 or more per planet, but also the same set for the whole group of planets, and that group can be animated with another transform node. So pretty soon you have 20-30 or more parameters controlling motion, and they don't conflict with each other. Using the Publish and Connect To options, we can easily connect parameters to each other. Also, using wireless nodes is really cool; it keeps the node tree from becoming a giant spider web.
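The reason those parameters don't conflict is that stacked transforms concatenate. A small Python sketch of that (plain 2D affine math as a stand-in for what the transform nodes do, not Fusion code): the element has its own transform, the group has another, and composing them leaves both sets of controls independent.

```python
import math

def make_transform(tx=0.0, ty=0.0, angle=0.0, scale=1.0):
    """2x3 affine matrix: scale, then rotate, then translate."""
    c, s = math.cos(angle) * scale, math.sin(angle) * scale
    return [[c, -s, tx], [s, c, ty]]

def compose(outer, inner):
    """Apply inner first, then outer (product of the 2x3 affines)."""
    a, b = outer, inner
    return [
        [a[0][0] * b[0][0] + a[0][1] * b[1][0],
         a[0][0] * b[0][1] + a[0][1] * b[1][1],
         a[0][0] * b[0][2] + a[0][1] * b[1][2] + a[0][2]],
        [a[1][0] * b[0][0] + a[1][1] * b[1][0],
         a[1][0] * b[0][1] + a[1][1] * b[1][1],
         a[1][0] * b[0][2] + a[1][1] * b[1][2] + a[1][2]],
    ]

def apply(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Each planet gets its own transform; a group transform moves all of them.
planet = make_transform(tx=5.0)        # this planet sits 5 units out
group = make_transform(angle=math.pi)  # animate the whole group: half turn
out_point = apply(compose(group, planet), (0.0, 0.0))
print(out_point)  # roughly (-5.0, 0.0)
```

Keyframe the group's angle and the planet's offset separately and neither fights the other, which is the whole point of one transform node per element plus one per group.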
That is also my suggestion for staying less bogged down in the node tree. I use wireless nodes all the time, so I can avoid the dreaded spider web. And I find underlays slower to use myself. But I don't collaborate with others on projects, so that might be a factor in whether underlays matter more, or if you come back to a project many months or years later and need to know what you did. Some also use a Sticky Note node, as suggested: what frame they did something, what settings, etc.
Another way to speed things up is to group nodes. Just select them and make a group out of them. I often save my groups out as templates and use them instead of macros, since they're faster to create.
Also, when there is some setting in a node I use all the time, some checkbox or other option, you can right-click on the node and choose Settings > Save Default. As the name suggests, it will stay that way next time. This speeds up the process a lot.
jsghost777 wrote: I struggle with your and Bryan's suggestions (but I'm also very, very new to fusion so I'm a total noob). I think an issue I have is I'm still figuring out node order and which nodes connect where.
I know what you mean. I come from the Photoshop world of layers, and I've only been working in Fusion periodically for just over a year; the first few months were frustrating as hell. Once you understand how Fusion works, it's a whole other game. So I'm sure you will get there soon if you practice.
jsghost777 wrote: In terms of the map extrusion, it seems you need SVG files? I'm not able to convert my screenshot to an SVG or a good enough one that works. I've watched some tutorials on how to convert these, but they all use third party plugins that don't work for me at all. I guess if I use online maps of the world I'd get it right, but not the map I want to do it on. But in the process of failing I'm learning or accidentally discovering other things
Why do you think you need an SVG file?
First of all, SVGs and Fusion don't play well together. They are supported and can work, but it's best to avoid them when you can. There is a misconception out there, in the YouTube community in particular: various YouTubers use the Fusion page as just a title engine for Resolve, doing the motion graphics they were used to in After Effects, and they think the only way to work is like in After Effects, so Fusion must be forced to do the same. This is by and large a myth, because they don't understand how Fusion works.
The first thing to understand is that Fusion is resolution agnostic, or independent. Anything generated in Fusion can be scaled up, like vectors, as much as you need. There are no limits. So instead of thinking everything needs to be vectors, people who come from After Effects don't realize that everything, set up as it should be, already behaves like vectors.
I can't show you now, I deleted the animation, but I tested this: you can scale up anything created in Fusion to any proportion you like at any point, and done correctly it's like working with vectors. Even performance is good, with the exception of a few nodes like Blur and Paint (the Paint tool is literally a vector paint tool by default), but there are ways to work around that so it does not kill performance.
The vector system that does exist in Fusion, the so-called Shape nodes, is there, I think, to fit in more with After Effects users. But I really don't see an exclusive advantage in scaling; shapes mostly bring a performance increase, but also lots of limitations. So I would stick to the regular tools.
The only resolutions you have to think about in Fusion are the reference resolution, the output resolution, and the native resolution of graphic or media elements you import into Fusion. Otherwise, Fusion works independently of resolution; it has coordinates instead. Done properly, effects that you generate in Fusion can be made for 720p and then placed in a 4K clip, and it will all scale properly with no quality loss.
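The coordinate idea can be sketched in a few lines of Python (just an illustration of the concept, not Fusion code): positions live in normalized 0-1 space, and only at render time do they become pixels, so the same comp lands in the same relative spot at any output resolution.

```python
def to_pixels(norm_point, resolution):
    """Map a normalized (0..1, 0..1) coordinate to pixel space."""
    nx, ny = norm_point
    w, h = resolution
    return (nx * w, ny * h)

center = (0.5, 0.5)  # same comp, same coordinates at every resolution...
print(to_pixels(center, (1280, 720)))   # -> (640.0, 360.0)
print(to_pixels(center, (3840, 2160)))  # -> (1920.0, 1080.0)
```

Nothing about the comp itself changes between 720p and 4K; only the final mapping to pixels does.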
The reference resolution is the resolution you have set up in your comp's project settings or in your nodes, like Background or Transform nodes. It's not a fixed number that cannot be changed; as the name suggests, it's just there to give a reference to other elements so you have consistency when you scale things up.
In Fusion, your background element (equivalent to the layer beneath) and foreground element (equivalent to the layer on top) are connected using a Merge node. The background element takes primacy, so the resolution of the merge operation is taken from the background element. That becomes your canvas, and the foreground element is placed on top. But all of this can be changed at any time; it's just there to provide a reference for the size of the canvas, as it were.
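A toy sketch of that rule (hypothetical names and dictionaries, not Fusion's API): the merged canvas inherits the background's resolution, whatever size the foreground happens to be.

```python
def merge(background, foreground):
    """Output canvas takes its resolution from the background input."""
    return {
        "width": background["width"],
        "height": background["height"],
        "layers": [background, foreground],  # foreground composited on top
    }

# Hypothetical node names, just for the example.
bg = {"width": 1280, "height": 720, "name": "Background1"}
fg = {"width": 3840, "height": 2160, "name": "Text1"}
out = merge(bg, fg)
print(out["width"], out["height"])  # -> 1280 720
```

The oversized foreground isn't thrown away; it's simply placed onto a 1280x720 canvas, which is why swapping in a bigger background later changes the canvas without touching the elements.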
If you want to output your composition at, say, 720p, it's good to keep the underlying background node or element at that resolution, so you can easily get the correct aspect ratio and compose your elements to match the desired output. But if at some point you want to export at 4K, all the elements that were generated in Fusion can easily be scaled up to that resolution with no quality loss.
This brings me to the only potential limitation: media (video or images) not generated in Fusion. They have the native resolution at which they were brought into Fusion via the media page or a Loader. However, if you use Fusion properly, it works similarly to Smart Objects in Photoshop, meaning it will retain the original resolution even when you scale it down and then back up again, as long as you don't change the resolution of the original clip on purpose using specific nodes like Resize, Scale, or Crop. Transform nodes, on the other hand, retain the original source resolution.
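The Smart Object comparison can be sketched like this (a conceptual illustration of transform concatenation versus baked resizes, not Fusion internals): transform scales multiply into one effective factor before any resampling happens, while a Resize-style operation commits to new pixels at every step, so detail lost on the way down never comes back.

```python
def concat_scales(scales):
    """Transform-style: factors multiply; resample once from the source."""
    factor = 1.0
    for s in scales:
        factor *= s
    return factor

def baked_resize(width, scales):
    """Resize-style: each step commits pixels; lost detail can't return."""
    detail = width
    for s in scales:
        width = int(width * s)
        detail = min(detail, width)  # resolution discarded at the smallest step
    return width, detail

src = 4000  # e.g. a 4K-wide still
print(concat_scales([0.5, 2.0]) * src)  # -> 4000.0, full detail retained
print(baked_resize(src, [0.5, 2.0]))    # -> (4000, 2000): back to size, but
                                        #    only 2000 px of real detail left
```

That is the practical difference between chaining Transform nodes and dropping a Resize into the middle of the chain.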
The idea is that you don't have to worry about resolution in native Fusion tools, just the imported media resolution, and as long as you bring in, say, a 4K clip or a large graphic, you can scale it down and up as you please and not lose quality, as long as you don't go beyond the native resolution. However, I was working on something last night, a set of nodes and an upscaling algorithm, all free resources, that upscales anything 2x with about 90% of the quality of many AI tools out there and, best of all, plays back in real time. Almost no playback penalty. So even that can be used when you need to go beyond native resolution. I might post examples here at some point later.
Anyway, about small-resolution, poor-quality source images, like a screenshot of some map from a poor-quality video: it's best to use an AI tool like Gigapixel AI from Topaz Labs or something similar. There are also some free upscalers you can use, and some are better for certain kinds of graphics. There is a whole community around these, so it's beyond the scope of this post and thread. But if you seek it, you will find it.
I might post the results of what I'm experimenting with later, but it should illustrate what I was writing here.