Hi,
I am skipping back to the start of the thread, because I understood it got into a "holy war" kind of situation (e.g. the faithful of AMD vs the faithful of NVIDIA).
Long-time IT person here, with some 3D graphics, video and Da Vinci use (yes, I know the software spells it differently, but I write it the Leonardo way, and I don't mean the Ninja Turtle).
Never liked the "holy wars". For IT at home I go with what I like, can get or can afford; for work, it is a matter of analysing the usage scenarios, though the can-get, the can-afford and other elements like support also matter (not that support does not matter in private life, but there, for computer stuff, I often ended up also getting second-hand components, which may not have any support).
Ah, notice I am considering Da Vinci Studio here, because it is the one I use (I got cameras and a device from BMD).
--- AMD vs Intel.
In old times AMD tended to run a bit too hot, but it should not have that problem anymore; on the other hand, there may still occasionally be software that runs better on Intel than on AMD, but that depends on the specific software, and for Da Vinci, as far as I know, that is not a problem in principle.
Some AMD CPUs can support more PCIe lanes than some Intel CPUs of the same class, which can impact the number of PCIe slots and their performance, but that also depends on the motherboard, so you should check the two in combination if you really want to fully optimise. Otherwise, you can just keep to the "CPU and motherboard compatible? Good" approach.
--- RAM.
Put in as much RAM as you can (afford) and your motherboard supports. Faster is better, but in reality more RAM, even if a bit slower, beats less RAM that is faster. The nanoseconds for a few MHz don't count that much; the GB do.
Yes, I know there is the GPU VRAM, but without going into details: either you always work with something that fits in the VRAM and do operations that run exclusively on the VRAM, or RAM is important for video and graphics editing (especially if you do not plan to run only one application at a time).
--- Disks.
Depending on the money and on support from the motherboard (and don't forget power and space in the case): M.2 for speed, SSD for intermediate storage speed, SSHD for a bit slower, then maybe something slower with bigger capacity for backup and the like (e.g. external, though with USB3, especially 3.2, "slower" can be relative).
Of course, you can also go for a 10Gb NAS and stuff like that, or even directly for InfiniBand, but that would be expensive, complex, and honestly, I have known even professionals that did not have that kind of setup (if you don't know what InfiniBand is: special networking, older models 40-56Gb, newer ones 100Gb).
It depends, again, on what you can get, can afford, and like.
M.2 drives with a couple of TB of space are not hugely expensive nowadays, and if you have a bit more money, even 4 TB can be reachable - but I am referring to the disk to use for Da Vinci.
If you want to also boot from M.2, fine, but keep that separate from the Da Vinci disk.
Keeping them separate is an old trick of video editing on computers, even more useful nowadays when everything in theory is multithreaded and multicore (how good the OS and the motherboard are at taking advantage of that remains to be seen, but that is a different matter).
--- GPU.
The bigger contenders: AMD and NVIDIA.
AMD corrected its direction a bit in the latest models, but some years ago it bet big on the blockchain/graphics/gaming side, so in general, if you go in that direction, it can be a good choice, being in theory a bit cheaper. NVIDIA went in another direction many years ago, computing/graphics/gaming, then added machine learning/"deep" learning (ML/DL, what is generally sold as "AI") as a kind of evolution/special focus of the "computing" element.
E.g. in blockchain, the energy cost per income of the AMDs some years ago was better than NVIDIA's (I have never been into blockchain, I just looked out of curiosity some years ago).
As you can see, there is an overlap, the "graphics/gaming" part, because they both tried to focus on that too (well, gaming is a big market); in the case of NVIDIA, even integrating/leveraging its ML/DL hardware to some extent (if the software uses it, and Da Vinci does).
So, which is better depends on what you want to do.
If you plan to use the "AI" features in Da Vinci Studio, then NVIDIA may be the better bet, depending also on the model (see below). NVIDIA's focus on computation, later "converted" into hardware for ML/DL, goes all the way back to the Tesla cards some 14 years ago, cards that did not even have a graphics output because they were only for computation, and to the "Maximus" configuration (one graphics card for computation and output, and one headless card purely for computation).
If it detects an NVIDIA card with CUDA, Da Vinci Studio will optimise its ML/DL ("AI") functionalities for it.
About CUDA: true, most people that talk about CUDA have never programmed with it and may never program with it directly, but from a programming point of view, it is not necessary to program directly in C++ to use CUDA (CUDA is an architecture, but in practice the freely available CUDA libraries are used with C++, though there is also a Fortran product); things like TensorFlow and Python can use CUDA acceleration in a transparent way thanks to libraries.
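Just to give an idea of what "transparent" means here, a minimal sketch, assuming a TensorFlow install with GPU support (the matrix sizes are only illustrative):

```python
# Minimal sketch: TensorFlow finds a CUDA-capable GPU by itself;
# the user never writes a line of CUDA.
import tensorflow as tf

# An empty list means TensorFlow will fall back to the CPU.
gpus = tf.config.list_physical_devices('GPU')
print(f"CUDA-capable GPUs visible to TensorFlow: {len(gpus)}")

# Illustrative computation: placed on the GPU automatically if one exists.
a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
c = tf.matmul(a, b)
print(c.device)  # e.g. ...device:GPU:0 when CUDA acceleration kicked in
```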
More importantly, a lot of graphics and video software, even free open source (e.g. ffmpeg and Blender, although Blender can also use OptiX on newer NVIDIA cards, and it also supports specific AMD acceleration), can use CUDA acceleration without any need for programming, and using CUDA exploits the hardware of the NVIDIA card to the maximum.
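For ffmpeg, for example, it is just a matter of flags; a sketch of calling it from Python, assuming an ffmpeg build compiled with NVDEC/NVENC support (the file names are hypothetical):

```python
# Sketch: GPU-accelerated transcode via ffmpeg's NVIDIA support.
# Assumes an ffmpeg build with CUDA/NVENC enabled; in.mp4 and out.mp4
# are hypothetical file names.
import subprocess

subprocess.run([
    "ffmpeg",
    "-hwaccel", "cuda",    # decode on the GPU (NVDEC)
    "-i", "in.mp4",
    "-c:v", "h264_nvenc",  # encode on the GPU (NVENC)
    "out.mp4",
], check=True)
```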
The most widespread alternative to CUDA is OpenCL.
Advantage: it is an open standard, it works with CPUs and all kinds of GPUs (AMD, NVIDIA, Intel), and in theory it supports out of the box multi-CPU, multi-GPU and even mixing (multi-)CPU and (multi-)GPU.
Disadvantage: aside from university experiments, multi-GPU support requires the application to be changed and specific libraries to be used, so it is difficult to find a commercial application that uses it out of the box.
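The "all kinds of devices" point is easy to see in code; a minimal sketch using the pyopencl bindings (assuming they and at least one OpenCL driver are installed):

```python
# Sketch: the same OpenCL API enumerates CPUs and GPUs from any vendor.
# Assumes pyopencl and at least one OpenCL driver are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)  # CPU, GPU, ...
        print(f"  {kind}: {device.name}")
```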
That is why people talk a lot about CUDA: it squeezes the maximum out of the NVIDIA GPU. Notice though that CUDA is proprietary to NVIDIA; it will be useless (you should not even install it) if you do not have NVIDIA cards.
I do not know whether, with two AMD GPUs, Da Vinci will use both of them when encoding/decoding; for NVIDIA, I know it will, though they should be the same generation, or at least either both post-RTX or both pre-RTX. I saw it in practice when I had two of the same in the computer, and I strongly advise you to get at least two GPUs anyway.
Again, ideally the same generation, but it is not mandatory. Most people will tell you to get the latest three-slot card, an RTX 4090, an RTX 5070 (well, a forecast, not there yet), something like that. My advice: don't. It is better to have two cards, even if older (professional, the old NVIDIA Quadro line), than to have one.
The reason: if you have two cards of the same generation, both can contribute to encoding/decoding etc.; it does not halve the time, but it greatly reduces it anyway.
But even if you have one that is RTX and one older, you can use the older one for the video output, and the newer one will be fully dedicated by Da Vinci to its operations (if it sees an RTX card, it will go for it and ignore the other, or at least that is what I have seen), thus being faster anyway than if you had one card that has to do everything.
You can also try to have two RTX cards and an older one; I had that, but aside from the potential cost (well, second hand is what I do...), at least the model of Quadro RTX I have seems to tend to get hot rapidly. It is in part by design (I checked the specs), but still, with two of those crammed in with a third card (though it may also be because, as I realised afterwards, the ones I got had been used and abused a bit) and running for a long time, I wasn't feeling too sure, so I modified the configuration.
Of course it is not 100% certain you would have the problem, and if the case is big and you feel "adventurous", you could go for water cooling and stuff like that; in my case I use a commercial workstation.
--- Note about power (not really for you, for other readers).
In Europe it is not unusual to have home contracts of 3-3.5 kW, and if you look at the specs of computers like e.g. HP workstations (so, an American company), you will see that their power supplies can be rated (depending on the model) at 1450W, but in Europe that actually goes up to 1700W.
Do not forget that in Europe electricity is 220-240V, hence the increase. Basic physics formula, W = V * A: if A stays the same, increased V means increased W, though it is limited by other factors, so it does not double despite the jump from 110-120V to 220-240V.
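A quick worked example of that formula (the 12A figure is just an illustrative assumption, not from any datasheet):

```python
# W = V * A: the same input current delivers more power at a higher voltage.
# 12 A is an illustrative assumption, not a datasheet figure.
current_a = 12.0
print(f"At 120 V: {120 * current_a:.0f} W")  # ~1440 W, near the 1450 W rating
print(f"At 230 V: {230 * current_a:.0f} W")  # ~2760 W in theory
# In practice the PSU's own components cap the output, hence a 1700 W
# European rating rather than a doubling.
```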
Also, though it matters less for the wattage of the PSU, the mains frequency in Europe is 50Hz (that is of interest for you in video: it turns out it is the reason PAL is 25fps, not 30fps, and does not need to adapt to something like 24.97fps, while in the USA you need 29.97fps).
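For the curious, the odd American number comes from the NTSC colour correction factor of 1000/1001, introduced to avoid interference between the colour and audio subcarriers; the arithmetic in two lines:

```python
# PAL: the frame rate is simply half the 50 Hz mains frequency.
print(50 / 2)            # 25.0 fps, a clean number

# NTSC colour: nominal 30 fps (60 Hz / 2) scaled by 1000/1001.
print(30 * 1000 / 1001)  # ~29.97 fps
```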
Hope this is useful.