PeterDrage wrote:That is a lot of questions so here goes:-
Option 1 - Go with the Sparkle A310 ECO Single Slot
…
Option 2 - Replace 2080 Super with an Intel A770 16GB
I’m always very hesitant to put all my eggs into a single basket, especially when that basket still seems to be going through growing pains, and even more so when correct day-to-day functioning and stability are important. So far Nvidia has shown that I can rely on it. Also, taking it out wouldn’t be a quick affair, as I would have to deal with its water cooling; the same goes if Arc started misbehaving and I had to put the 2080 back in. Having Arc as an add-on would be simpler and quicker, so that is the path I will be taking. With that in mind, the Sparkle A310 sounds like a good compromise: I wouldn’t have to worry about PCIe lane behavior (like I would with other models), it is single-slot, and it is inexpensive enough that cost isn’t a worry.
However, I’m concerned about reports of Sparkle’s maddening fan behavior. If you don’t mind, I’ll DM you with some questions about that.
When it comes to Arc’s AV1 features, I’m curious how they work in a dual-card setup where the AV1-capable card isn’t the primary one and has no monitor connected. I’m guessing encoding will work fine, but what about decoding? When I play an AV1 video on YouTube, would the web browser use the Arc to decode it and then pass the frames to the Nvidia card for display?
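In the meantime, I could at least verify that the Arc can decode AV1 while headless. A minimal sketch of how I’d test it (assuming an ffmpeg build with d3d11va and AV1 support; the adapter index 1 and the file name are placeholders, the real index can be checked in Task Manager’s GPU section):

[code]
# Sketch: ask ffmpeg to AV1-decode on a specific Direct3D 11 adapter and
# discard the output; errors in the log mean no hardware decode on that card.
# ADAPTER_INDEX and INPUT_FILE are placeholders for my setup.
import subprocess

ADAPTER_INDEX = "1"           # assumption: the Arc's D3D11 adapter index
INPUT_FILE = "clip_av1.webm"  # placeholder: any AV1-encoded sample

result = subprocess.run(
    ["ffmpeg", "-hide_banner",
     "-init_hw_device", f"d3d11va=arc:{ADAPTER_INDEX}",  # device on that adapter
     "-hwaccel", "d3d11va", "-hwaccel_device", "arc",
     "-i", INPUT_FILE,
     "-f", "null", "-"],       # decode only, write nothing
    capture_output=True, text=True,
)
print(result.stderr[-2000:])   # ffmpeg logs to stderr
[/code]

That still wouldn’t tell me what a browser will do, since Chrome/Edge pick the decoding adapter on their own, but it would confirm the hardware path works without a monitor attached.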
PeterDrage wrote:ReBAR can be a benefit / hindrance in games depending on the Game but I have not seen or heard anything about DaVinci having an issue with it.
I couldn’t find much about ReBAR’s impact on Arc’s performance outside of games.
https://www.reddit.com/r/IntelArc/comme ... resizable/ indicates it makes a significant difference even for other things, like encoding, and
viewtopic.php?f=21&t=190594&p=1004782 indicates ReBAR _might_ improve Resolve’s performance too, so naturally I would like to have it on if possible. However, a later post also indicates that the Above 4G Decoding required for ReBAR _might_ cause instability.
I don’t see how Resolve itself could have an issue with ReBAR, because that interaction happens at the system level. What concerns me is whether my system will handle it correctly, because a) the manufacturer seemed to add support for it in a rush, and b) to get ReBAR one needs to enable Above 4G Decoding, which in turn makes some other devices also map above 4G, and one has no control over which ones will; if one of them is incompatible, that could introduce stability/performance issues elsewhere. I guess there is only one way to find out whether that will happen to me.
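If I do try it, I’d at least like to confirm ReBAR actually took effect. On Windows, GPU-Z reportedly shows a Resizable BAR field; from a Linux live USB the BAR sizes can be read straight out of sysfs. A minimal sketch of the latter (Linux-only; with ReBAR active a GPU should expose a prefetchable BAR close to its full VRAM size instead of the classic 256 MB window):

[code]
# Sketch: list BAR sizes of every display controller from Linux sysfs.
# With ReBAR active, one BAR should be roughly the card's VRAM size.
import glob

for dev in glob.glob("/sys/bus/pci/devices/*"):
    with open(f"{dev}/class") as f:
        if not f.read().startswith("0x03"):  # 0x03xxxx = display controller
            continue
    print(dev.split("/")[-1])
    with open(f"{dev}/resource") as f:
        for i, line in enumerate(f):
            start, end, _flags = (int(x, 16) for x in line.split())
            if start:                        # unused BARs are all zeros
                print(f"  BAR{i}: {(end - start + 1) / 2**20:,.0f} MiB")
[/code]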
PeterDrage wrote:My 10980XE has 18C/36T all running at 4.8GHz
Do you find the 10980XE’s extra cores make a significant difference in Resolve? When I was making my purchase decision, Puget’s reviews indicated at best ~10% better performance in Resolve over the 10920X. When I watch Task Manager, I see two CPU pain points:
1. There still seem to be good chunks of code that are single-threaded, so the only way to significantly improve their performance is a CPU with significantly better single-core performance, which an upgrade to the 10980XE wouldn’t bring me.
2. Node caching is where I wait the most, and during it all cores are maxed out. Significantly more cores might improve that IF the code scales up well (see the back-of-the-envelope sketch after this list). How well does the 10980XE handle such tasks, for example when full-resolution DNxHR HQX is used for node caching of a 4K timeline?
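To put my scaling doubt into numbers, here is a back-of-the-envelope Amdahl’s-law sketch (the parallel fractions are pure assumptions on my part, not measured Resolve figures; both CPUs are assumed at equal clocks):

[code]
# Sketch: Amdahl's-law estimate of 12C (10920X) -> 18C (10980XE) scaling.
# The parallel fraction values are assumptions, NOT measured from Resolve.
def speedup(cores: int, parallel: float) -> float:
    """Serial part never speeds up; parallel part divides by core count."""
    return 1.0 / ((1.0 - parallel) + parallel / cores)

for p in (0.80, 0.90, 0.95):
    gain = speedup(18, p) / speedup(12, p)
    print(f"parallel fraction {p:.0%}: 18C over 12C -> {gain:.2f}x")
# -> roughly 1.09x, 1.17x, 1.26x
[/code]

Even with a generous parallel fraction the gain stays in the 10–25% range, which lines up with the ~10% Puget measured.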
In other words, I’m not sure whether I would benefit from upgrading the CPU to a 10980XE. If you know the answer, I would appreciate it.
PeterDrage wrote:… whilst new platforms are running at 6GHz they have less cores and way less PCIE Lanes. I have used all 48 CPU PCIE lanes …
X299 is an undervalued and misunderstood platform. The new HEDT platforms from Intel and AMD are way over my budget.
I feel the same way. IMO Intel’s current desktop platform covers corporate, casual, and gamer users, but it is severely limiting for power users who need a workstation with plenty of expandability. I feel the latter group quickly outgrows even what X299 offers, so I kept having high hopes for what the successor platform would bring. But when, after a long wait, W790 and its CPUs finally arrived and I saw the prices, my jaw hit the floor and stayed there; at those prices they might as well not exist for me. Thanks to that, I am for the first time seriously considering switching to AMD once I am forced to upgrade (assuming I could afford it). In the meantime, I am trying to extend the life of my X299 setup as much as I can.
Unfortunately, unlike yours, my X299 motherboard has only 44 CPU PCIe lanes. Which motherboard are you using, please?
PeterDrage wrote:One of the best tweaks I made was to the File System Cluster Size of the volumes, for example optimising it for Video Storage. From my Internal 6 x 20TB HDD Archive Array, I am now getting 1GB/s Read and Write sustained.
When I was trying to pick the optimal cluster size, my effort resulted in one step forward, one step back: going from 4K to 64K made sustained speeds go up in benchmarks, but I couldn’t find it making any practical difference in Resolve. I’m curious which value you selected as the optimal cluster size, how you arrived at it, and whether it made a visible difference in Resolve, please?
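For reference, this is roughly how I benchmarked each cluster size (a quick-and-dirty sequential test; the path, block size, and file size are placeholders, and the file needs to be much larger than RAM so the read isn’t served from the OS cache):

[code]
# Sketch: crude sequential write/read throughput test; run once per
# cluster size on the freshly formatted volume under test.
# TEST_FILE, BLOCK and FILE_SIZE are placeholders for my setup.
import os, time

TEST_FILE = r"E:\bench.tmp"   # placeholder: path on the volume under test
BLOCK = 1 << 20               # 1 MiB blocks, video-like large transfers
FILE_SIZE = 8 * (1 << 30)     # 8 GiB; raise this well above installed RAM
buf = os.urandom(BLOCK)

t0 = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range(FILE_SIZE // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())      # make sure data actually reached the disk
print(f"write: {FILE_SIZE / (time.perf_counter() - t0) / 2**20:,.0f} MiB/s")

t0 = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
print(f"read:  {FILE_SIZE / (time.perf_counter() - t0) / 2**20:,.0f} MiB/s")
os.remove(TEST_FILE)
[/code]

Of course, a synthetic sequential test flatters large clusters, and Resolve’s real access pattern may differ, which could be exactly why the 64K gain never showed up for me in practice.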
Thank you again for the very thorough and helpful input, it is very much appreciated!