Anyone running Resolve using RTX 2080ti already?

Get answers to your questions about color grading, editing and finishing with DaVinci Resolve.
  • Author
  • Message
Offline
User avatar

Jed Mitchell

  • Posts: 165
  • Joined: Tue Nov 03, 2015 11:04 pm
  • Location: New York, NY

Re: Anyone running Resolve using RTX 2080ti already?

PostWed Jan 02, 2019 5:53 pm

Piotr Wozniacki wrote:As long as RT cores and Tensor cores of Turing cards don't do anything special in Resolve, I guess that with 3x Titan Xp I'd get similar performance to 2x 2080 ti cards - or am I too optimistic?


After doing some more performance testing on the 2080 Ti and comparing it with some other numbers out there (and against my 2x 1080 Ti system) I'd actually expect equal or better performance with even 2x Titan Xp than 2x 2080 Ti, and 3x Titan Xp should be much faster.

I was starting a build from scratch, so getting the 2080 Ti seemed like an acceptable gamble, but if I were upgrading from 2x 1080 Ti or (especially) 2x Titan Xp, I'd wait until there was some use for the RT cores to make a qualitative difference. The VRAM size on the 2080 Ti is a major limiting factor too; I'd go with a single RTX Titan if I were looking at the price range of 2x 2080 Ti.
"It's amazing what you can do when you don't know you can't do it."


Systems:
R16.2.3 | Win10 | i9 7940X | 128GB RAM | 1x RTX Titan | 960Pro cache disk
R16.2.3 | Win10 | i9 7940X | 128GB RAM | 1x 2080 Ti | 660p cache disk
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 2:17 am

Piotr Wozniacki wrote:Hmm...for thermal reasons, I have also been considering buying 3x 2080 ti of the blower type, but only after having succeeded selling my current 2x Titan Xp. I start wondering whether - instead of selling my Titans cheap, and buying the 2080 tis expensive - I should buy a used Titan Xp as my 3rd GPU... As long as RT cores and Tensor cores of Turing cards don't do anything special in Resolve, I guess that with 3x Titan Xp I'd get similar performance to 2x 2080 ti cards - or am I too optimistic?

Piotr


A third GPU which is equal or faster than your current Titan Xp's will make your system as fast as with 3 Titan Xp's.

To speed things up a little more (in Windows), look here:

https://level1techs.com/article/unlocking-2990wx-less-numa-aware-apps

and here:

https://bitsum.com/portfolio/coreprio/
Offline
User avatar

Cary Knoop

  • Posts: 1437
  • Joined: Sun Mar 12, 2017 6:35 pm
  • Location: Newark, CA USA

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 2:45 am

MishaEngel wrote:To speed things up a little more(in windows) look here:



here https://level1techs.com/article/unlocking-2990wx-less-numa-aware-apps

and here https://bitsum.com/portfolio/coreprio/

The cat is slowly coming out of the bag on this.
The eyes should now be on Microsoft!
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 6:26 am

MishaEngel wrote:
A third GPU which is equal or faster than your current Titan Xp's will make your system as fast as with 3 Titan Xp's.


That's what I said, Misha. I'm just considering adding a used 3rd Titan Xp to my current system, because selling my Titans only to get 3x 2080ti instead would be too big a waste of money at the prices people are offering me for them.

Oh - and what you might have misunderstood in my previous post was the question (rather than a statement) whether my system with 3x Titan Xp would hopefully not be much slower than with 2x 2080ti. Do you agree with that?

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 7:50 am

Cary Knoop wrote:
MishaEngel wrote:To speed things up a little more(in windows) look here:



here https://level1techs.com/article/unlocking-2990wx-less-numa-aware-apps

and here https://bitsum.com/portfolio/coreprio/

The cat is slowly coming out of the bag on this.
The eyes should now be on Microsoft!


Running Coreprio, I somehow do not see much of a difference in my 2990WX performance; not in Resolve at least...

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 10:19 am

Here are some results of the Prime95 benchmark (all benchmark parameters at their defaults; vanilla hardware settings as well as Ryzen Master settings; the 2304K FFT length portion selected arbitrarily):

1. W/o CorePrio running:
Code: Select all
Timings for 2304K FFT length (32 cores, 1 worker):  2.15 ms.  Throughput: 464.06 iter/sec.
Timings for 2304K FFT length (32 cores, 8 workers):  8.51,  7.95,  8.52,  7.12, 10.42, 10.09, 10.71, 10.49 ms.  Throughput: 884.85 iter/sec.
Timings for 2304K FFT length (32 cores, 32 workers): 32.13, 35.52, 32.25, 31.60, 31.73, 31.71, 35.28, 34.44, 37.09, 36.85, 31.59, 29.70, 30.16, 30.64, 35.07, 31.57, 50.69, 50.93, 40.55, 46.59, 60.59, 49.36, 61.24, 60.89, 54.16, 56.31, 53.50, 55.36, 52.07, 52.04, 59.91, 50.88 ms.  Throughput: 790.51 iter/sec.
Timings for 2304K FFT length (32 cores hyperthreaded, 1 worker):  2.95 ms.  Throughput: 338.88 iter/sec.
Timings for 2304K FFT length (32 cores hyperthreaded, 8 workers):  9.72, 10.17,  9.83,  9.79, 11.08, 10.45, 10.60, 11.91 ms.  Throughput: 769.36 iter/sec.
Timings for 2304K FFT length (32 cores hyperthreaded, 32 workers): 41.13, 40.28, 38.02, 38.43, 40.35, 37.71, 36.50, 36.31, 39.26, 39.44, 37.71, 38.38, 38.64, 37.50, 37.43, 36.64, 46.14, 46.46, 46.10, 46.60, 46.98, 46.77, 44.73, 45.17, 46.72, 46.11, 47.54, 45.27, 48.15, 47.94, 48.96, 48.39 ms.  Throughput: 760.10 iter/sec.


2. Results with CorePrio running with default settings:
Code: Select all
Timings for 2304K FFT length (32 cores, 1 worker):  1.94 ms.  Throughput: 516.16 iter/sec.
Timings for 2304K FFT length (32 cores, 8 workers): 10.05,  9.98, 10.18,  9.93, 10.35, 10.64, 10.73, 10.34 ms.  Throughput: 779.15 iter/sec.
Timings for 2304K FFT length (32 cores, 32 workers): 37.51, 38.43, 37.78, 38.66, 38.22, 39.83, 37.26, 37.36, 42.17, 43.10, 42.55, 43.67, 41.21, 42.69, 40.33, 42.28, 44.09, 33.77, 45.05, 44.81, 43.30, 36.39, 38.55, 38.30, 46.28, 39.95, 45.99, 45.07, 36.47, 39.15, 40.15, 44.16 ms.  Throughput: 789.88 iter/sec.
Timings for 2304K FFT length (32 cores hyperthreaded, 1 worker):  2.99 ms.  Throughput: 333.93 iter/sec.
Timings for 2304K FFT length (32 cores hyperthreaded, 8 workers): 11.27, 11.45, 11.17, 11.13,  9.46,  9.12,  9.19,  9.45 ms.  Throughput: 785.50 iter/sec.
Timings for 2304K FFT length (32 cores hyperthreaded, 32 workers): 45.90, 43.68, 46.03, 43.71, 46.34, 42.61, 45.92, 45.70, 49.89, 49.09, 50.66, 39.03, 47.35, 38.33, 43.59, 39.14, 40.14, 37.70, 36.41, 44.99, 41.03, 42.38, 40.64, 42.90, 48.51, 40.21, 40.36, 40.24, 45.27, 49.75, 32.87, 41.65 ms.  Throughput: 748.21 iter/sec.


So, very slight throughput increases are indeed visible, but mainly for a single worker. I need to look at it more closely - in particular, repeat the test with Ryzen Master set to Creator Mode (at least), and perhaps with some of CorePrio's parameters changed.

Also - although HWiNFO, which is the only system monitor I use, can't tell whether any given core throttled (the way it can with Intel CPUs) - this has been a conservative test in that respect, as I never let my CPU cool down completely after stopping the first benchmark (the one without CorePrio enabled), so there is a slight possibility my 2990WX was at the edge of throttling in the second benchmark (the one with CorePrio enabled)...

And last but not least: to draw any conclusions, some Excel work (or the sketch at the end of this post) on larger outputs from both scenarios is required; this has just been a quick and dirty sneak peek :)

Piotr

PS. I forgot to mention one thing this little application changes for sure: instead of Task Manager showing all 32 cores at 100% load (dark blue) most of the time, with CorePrio this only happens in the "32 cores, 32 workers" test!

And here is the same benchmark with AMD's Dynamic Local Mode disabled, as it should be when running CorePrio:
Code: Select all
Timings for 2048K FFT length (32 cores, 1 worker):  2.74 ms.  Throughput: 365.15 iter/sec.
Timings for 2048K FFT length (32 cores, 8 workers):  6.16,  6.17,  5.96,  5.84, 10.02, 10.07,  9.71,  9.75 ms.  Throughput: 1068.13 iter/sec.
Timings for 2048K FFT length (32 cores, 32 workers): 27.94, 26.90, 28.50, 28.14, 28.80, 27.51, 28.03, 28.15, 26.45, 26.64, 26.88, 27.51, 26.56, 26.12, 26.49, 27.48, 50.98, 50.56, 52.79, 48.59, 50.50, 47.73, 51.07, 53.98, 49.38, 50.70, 49.49, 49.87, 50.18, 49.48, 50.05, 50.50 ms.  Throughput: 902.78 iter/sec.
Timings for 2048K FFT length (32 cores hyperthreaded, 1 worker):  2.89 ms.  Throughput: 346.00 iter/sec.
Timings for 2048K FFT length (32 cores hyperthreaded, 8 workers):  6.57,  6.47,  6.37,  6.30, 10.62, 11.16, 10.68, 10.70 ms.  Throughput: 993.29 iter/sec.
Timings for 2048K FFT length (32 cores hyperthreaded, 32 workers): 27.79, 24.90, 33.37, 25.01, 29.72, 25.72, 28.25, 25.54, 27.64, 25.94, 26.46, 25.68, 26.73, 25.60, 26.79, 26.10, 50.09, 48.69, 57.51, 50.57, 50.16, 47.68, 60.40, 49.63, 50.09, 49.49, 50.69, 50.50, 49.45, 49.23, 51.98, 50.22 ms.  Throughput: 911.44 iter/sec.


So now, for 32 cores hyperthreaded, 32 workers we have throughputs like this:
760.10 iter/sec
748.21 iter/sec
911.44 iter/sec - meaning also 32 workers get faster with CorePrio!
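
For reference, here is one way the "Throughput" lines from several such runs could be aggregated without resorting to spreadsheet work - a minimal Python sketch only (the file names are placeholders for wherever each scenario's Prime95 output is saved):

Code: Select all
# parse_prime95.py - minimal sketch: average the "Throughput: X iter/sec."
# lines that Prime95 prints, one plain-text results file per test scenario.
import re
import sys
from statistics import mean

THROUGHPUT = re.compile(r"Throughput:\s*([\d.]+)\s*iter/sec")

def throughputs(path):
    """Return all throughput values (iter/sec) found in a Prime95 results file."""
    with open(path) as f:
        return [float(m.group(1)) for m in THROUGHPUT.finditer(f.read())]

if __name__ == "__main__":
    # e.g.  python parse_prime95.py no_coreprio.txt coreprio_dlm_off.txt
    for path in sys.argv[1:]:
        values = throughputs(path)
        if values:
            print(f"{path}: {len(values)} runs, mean {mean(values):.2f} iter/sec, "
                  f"min {min(values):.2f}, max {max(values):.2f}")
        else:
            print(f"{path}: no throughput lines found")
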
Last edited by Piotr Wozniacki on Thu Jan 03, 2019 1:25 pm, edited 5 times in total.
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 10:45 am

Piotr Wozniacki wrote:
MishaEngel wrote:
A third GPU which is equal or faster than your current Titan Xp's will make your system as fast as with 3 Titan Xp's.


That's what I said, Mischa. I'm just considering adding a used 3rd Titan Xp to my current system because selling my Titans only to get 3x 2080ti instead would be too high a waste of money with the prices people offer me for them.

Oh - and what you might have misunderstood in my previous post could be the question rather than statement, that hopefully my system with 3x Titan Xp would not be much slower than with 2x RTX 2080ti. Do you agree with that?

Piotr

Since the theory behind the CorePrio utility might actually have something to it, my hopes of having a really fast system one day have increased again. In this context, please share your opinion on the rhetorical question in my own quote above (I have bolded it now). I wouldn't like my GPUs to become a real bottleneck if some beautiful patch from M$ (or, more probably, a utility from some enthusiast gurus) unleashes the true potential of my TR :) TIA

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 12:09 pm

And speaking of Microsoft in the context of the 2990WX's NUMA layout, here's an interesting bit from Tom's Hardware:

"AMD continues working with Microsoft to route threads to the die with direct-attached memory first, and then spill remaining threads over to the compute dies. Unfortunately, the scheduler currently treats all dies as equal, operating in Round Robin mode. As a result, even moderately-threaded applications can suffer at the hands of high memory latency and low throughput. This is further complicated by thread migration. According to AMD, Microsoft has not committed to a timeline for updating its scheduler."

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

Carsten Sellberg

  • Posts: 1463
  • Joined: Fri Jun 16, 2017 9:13 am

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 12:56 pm

Cary Knoop wrote: The cat is slowly coming out of the bag on this.
The eyes should now be on Microsoft!


Hi.

3 months ago, I wrote this about the AMD 2990wx:

QUOTE 'My personal feeling was, that there is a problem in Windows, as some few Linux programs scaled much better. But only time can tell.'

From this link:

viewtopic.php?f=21&t=80200

Intel promised to deliver the 28-core, 56-thread Xeon W-3175X in 2018, but Intel is delayed again.
When we do see the 28-core Xeon W-3175X, it will be nice to know how it handles programs with the maximum number of threads.
Will it have the same problems as the AMD 2990WX?

Regards Carsten.
URSA Mini 4.6K
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Jan 03, 2019 1:44 pm

Piotr Wozniacki wrote:
MishaEngel wrote:
A third GPU which is equal or faster than your current Titan Xp's will make your system as fast as with 3 Titan Xp's.


That's what I said, Mischa. I'm just considering adding a used 3rd Titan Xp to my current system because selling my Titans only to get 3x 2080ti instead would be too high a waste of money with the prices people offer me for them.

Oh - and what you might have misunderstood in my previous post could be the question rather than statement, that hopefully my system with 3x Titan Xp would not be much slower than with 2x 280ti. Do you agree with that?

Piotr


3 Titan Xp's are faster than 2 RTX 2080 Ti's for sure. On the other hand, you already have a super fast system. I would rather spend my money on lenses, lights, etc.

What kind of playback rates do you get with RED 8k.R3D files?
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Jan 04, 2019 7:54 am

MishaEngel wrote:3 Titan Xp's are faster than 2 RTX2080ti's for sure. On the otherhand you already have a super fast system. I would rather spend my money on lenses, light's, etc..

Misha - this is important to me - could you elaborate on the above statement, please? Even assuming nothing very important that depends heavily on Turing's new types of cores is going to be implemented in Resolve, I thought there was a consensus that the RTX 2080ti is indeed some 25% faster than my Titan Xp. So if I sell my 2x Titan Xp now, before they lose even more value, I'd be able to buy 2x RTX 2080ti for starters - and once BMD does introduce new RT core-based functionality, I'd buy a 3rd one. Whereas if I keep my current cards, and even add a 3rd one (a used Titan Xp, of course), I will have this path closed forever, as I expect my income to drop drastically starting next year. So as you can see, this is a real dilemma!

As to my shooting equipment - well, as a one man band I think I'm set for a long time to come (FS7, Inferno, A7Rii and GH5 with Metabones adapter and many Canon EF lenses as well as several native ones for both E and MFT mounts). I have only some basic lighting set - but shooting alone I'm not able to handle more anyway (for my classical music multi-camera projects, I team up with a larger production group anyway - with sound guys, camera assistants and lighting crew).

So could you please share the source your claims are based on? TIA,
Piotr
Last edited by Piotr Wozniacki on Fri Jan 04, 2019 4:29 pm, edited 1 time in total.
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Jan 04, 2019 3:48 pm

Don't buy on promises from manufacturers.
Never pre-order.

At the moment RTX brings nothing for Resolve, be happy with your system.
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Jan 04, 2019 4:33 pm

MishaEngel wrote:Don't buy on promises from manufatures.
Never pre-order.
At the moment RTX brings nothing for Resolve, be happy with your system.


While I agree with those rules in general, I thought Puget concluded from their Resolve-specific tests that, potential Turing advantages aside, the RTX 2080ti's raw speed is some 25% higher than that of the Titan Xp...

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Jan 04, 2019 5:06 pm

Piotr Wozniacki wrote:
MishaEngel wrote:Don't buy on promises from manufatures.
Never pre-order.
At the moment RTX brings nothing for Resolve, be happy with your system.


While I agree with those rules in general, I thought Puget concluded their Resolve-specific tests stating that the Turing potential advantages aside, the RTX 2080ti's raw speed is some 25% higher than that of the Titan Xp...

Piotr


16.49 = 16 and 19.51 = 20 in Puget language.

20/16 = 1.25, hence 25% faster in Puget language.
19.51/16.49 = 1.18, hence 18% faster in my language.

Also, Puget tests with short files, 1 minute or shorter (so only at turbo speeds).
RTX 2080 Ti turbo: 1635 MHz x 4352 cores x 0.002 = 14.231 TFLOPS fp32
Titan Xp turbo: 1582 MHz x 3840 cores x 0.002 = 12.150 TFLOPS fp32

Real turbo difference is ~17.13%.

Sustained speed (at base clock and with long renders, noise reduction, effects, etc.):
RTX 2080 Ti base: 1350 MHz x 4352 cores x 0.002 = 11.750 TFLOPS fp32
Titan Xp base: 1480 MHz x 3840 cores x 0.002 = 11.366 TFLOPS fp32

Real base (sustained speed with real footage) difference is 3.38%.

And the Titan Xp has 12 GB of VRAM, while the RTX has only 11 GB.
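
The arithmetic above, condensed into a short sketch for anyone who wants to plug in other clocks or core counts (the figures are simply the ones quoted in this post; 2 FLOPs per core per clock assumes FMA throughput):

Code: Select all
# fp32_tflops.py - sketch of the boost vs. base FP32 comparison above.
# Theoretical FP32 throughput = clock * shader cores * 2 FLOPs/clock (FMA).

def tflops(clock_mhz, cores):
    """Theoretical FP32 TFLOPS for a given clock (MHz) and core count."""
    return clock_mhz * 1e6 * cores * 2 / 1e12

# (boost MHz, base MHz, cores) - figures as quoted in this post
cards = {
    "RTX 2080 Ti": (1635, 1350, 4352),
    "Titan Xp":    (1582, 1480, 3840),
}

for label, idx in (("Boost", 0), ("Base ", 1)):
    ti = tflops(cards["RTX 2080 Ti"][idx], cards["RTX 2080 Ti"][2])
    xp = tflops(cards["Titan Xp"][idx], cards["Titan Xp"][2])
    print(f"{label}: 2080 Ti {ti:.2f} vs Titan Xp {xp:.2f} TFLOPS "
          f"-> 2080 Ti ahead by {(ti / xp - 1) * 100:.1f}%")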

Puget wants to sell (they are a commercial company) and is not an independent review site (hence colored information, without strictly lying).
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Jan 04, 2019 6:34 pm

Thanks for this enlightening analysis and your time, Misha :)

Piotr

PS. The RTX 2080ti models I'm thinking of are either the Aorus from Gigabyte or the Blower from Zotac. Both are factory-overclocked but have the "blower" cooling design (just like the Titans, and unlike the nVidia FE or the many other models from different manufacturers, which all have 2-3 fans and suffer from the "higher-shelf" card throttling problem); both the Gigabyte Aorus and the Zotac Blower expel the hot air through the back of the PC case. Also - even though I don't shoot 6K or 8K with my own gear (my highest resolutions being 4K or UHD at 50p in Sony's XAVC-I flavor of H.264) - my classical music projects usually involve up to 10 cameras, and when I do them with ad-hoc assembled teams, one or more cameras may happen to be capable of up to 8K, and of RAW (my own RAW possibilities are limited to 4K@25p CinemaDNG on my Shogun Inferno). And with so many cameras, often recording different formats (and I will always use one or two with the highest available resolution, as this gives me great possibilities for zooming in, panning etc. for creative purposes), editing material with so many different angles and close-ups of, say, a classical guitar player's fingers puts extreme requirements on the entire editing system's speed and "snappiness" to maintain A/V sync with millisecond precision (just one of many examples)... I've been doing this for many years, so believe me - I know what I'm talking about; the usual "1 frame precision" of sync is not enough... And this is my main reason for wanting the fastest possible system.
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostSun Jan 06, 2019 2:32 pm

Piotr Wozniacki wrote:And here is the same benchmark with AMD's Dynamic Local Mode disabled, as it should be when running CorePrio:
Code: Select all
Timings for 2048K FFT length (32 cores, 1 worker):  2.74 ms.  Throughput: 365.15 iter/sec.
Timings for 2048K FFT length (32 cores, 8 workers):  6.16,  6.17,  5.96,  5.84, 10.02, 10.07,  9.71,  9.75 ms.  Throughput: 1068.13 iter/sec.
Timings for 2048K FFT length (32 cores, 32 workers): 27.94, 26.90, 28.50, 28.14, 28.80, 27.51, 28.03, 28.15, 26.45, 26.64, 26.88, 27.51, 26.56, 26.12, 26.49, 27.48, 50.98, 50.56, 52.79, 48.59, 50.50, 47.73, 51.07, 53.98, 49.38, 50.70, 49.49, 49.87, 50.18, 49.48, 50.05, 50.50 ms.  Throughput: 902.78 iter/sec.
Timings for 2048K FFT length (32 cores hyperthreaded, 1 worker):  2.89 ms.  Throughput: 346.00 iter/sec.
Timings for 2048K FFT length (32 cores hyperthreaded, 8 workers):  6.57,  6.47,  6.37,  6.30, 10.62, 11.16, 10.68, 10.70 ms.  Throughput: 993.29 iter/sec.
Timings for 2048K FFT length (32 cores hyperthreaded, 32 workers): 27.79, 24.90, 33.37, 25.01, 29.72, 25.72, 28.25, 25.54, 27.64, 25.94, 26.46, 25.68, 26.73, 25.60, 26.79, 26.10, 50.09, 48.69, 57.51, 50.57, 50.16, 47.68, 60.40, 49.63, 50.09, 49.49, 50.69, 50.50, 49.45, 49.23, 51.98, 50.22 ms.  Throughput: 911.44 iter/sec.


So now, for 32 cores hyperthreaded, 32 workers we have throughputs like this:
760.10 iter/sec (no CorePrio; DLM running as implemented by AMD in Ryzen Master)
748.21 iter/sec (CorePrio running, but without disabling DLM first - which is not recommended by the author; that may explain the small regression)
911.44 iter/sec (CorePrio running with DLM off; a clearly higher throughput, with all 32 workers getting faster than with AMD's implementation of DLM by almost 20%)


Of course, nothing conclusive in the Resolve context - but at least a small change in the right direction. I continued my quick and dirty tests a bit longer, but unfortunately there is no more good news for now:

1. The MSI version of the CPU-Z bench (whatever it actually measures):

[Attachment: CorePrio on Ryzen Master in Creator Mode - AMD DLM ON.PNG]

[Attachment: CorePrio after reboot.PNG]


Interestingly, the 178%-187% result over the i9-7980XE has absolutely no pattern to it; I've seen what looked like random numbers (in this range) regardless of whether CorePrio was running or not, or of Ryzen Master's current mode. What's more, while yesterday I tended to get 182% or 183% most of the time, today I didn't seem able to get past 178% - until I... rebooted my machine! Bang - 187% over the reference i9 again. So really, there is nothing to even speculate about.

2. But the most frustrating part has been trying to find the sweet spot for my scenario of background Moldflow analyses running alongside foreground (interactive) Resolve sessions, as a function of the combined Ryzen Master / CorePrio settings. As a general idea, this way of getting the most out of the huge number of threads on the 2990WX has more or less worked for me since day one, in that I can run even a heavy analysis in the background and edit or grade in Resolve at the same time without any noticeable impact on my fps - so yes, the 2990WX makes my workstation work like two separate ones (and fast ones, too). However, no matter how I try, I cannot find any pattern in how the CorePrio (or DLM in general) setting influences the actual Moldflow analysis speed. And considering that a single analysis can take anything from several minutes to several days to complete, it is of paramount importance for me to accelerate it as much as possible... Added to this is the fact that, before actually launching any kind of Moldflow analysis, a user can choose from among the following scaling (thread affinity) scenarios:

- automatic (where Moldflow decides how many cores to use and when)
- maximum thread number (32 in the case of 2990WX CPU)
- single thread (never used)
- a certain predefined number of threads (anything between 1 and 64 in the case of my CPU)

It then becomes obvious that - together with even a little help from tools like CorePrio - it would be great to find the optimum number of threads and stick to it, so that the analysis runs quickly while not stealing too many cycles from Resolve. It goes without saying that while I can wait a little longer for an analysis to complete, I absolutely require a fluent editing/grading experience in Resolve...
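
For what it's worth, the thread budget can also be enforced from outside the application; a minimal sketch using the third-party psutil package (the process name and core list below are placeholders to adapt to how the Moldflow solver shows up in Task Manager and to which logical processors you want to reserve for it):

Code: Select all
# pin_background_solver.py - sketch: confine a background process to a chosen
# set of logical processors so an interactive Resolve session keeps the rest.
# Requires the third-party psutil package (pip install psutil); written for Windows.
import psutil

SOLVER_NAME = "solver.exe"          # placeholder: actual Moldflow solver process name
SOLVER_CPUS = list(range(16, 32))   # placeholder: logical CPUs to hand to the solver

def pin(name, cpus):
    """Set CPU affinity (and a lower priority) for every process matching name."""
    hits = 0
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == name.lower():
            try:
                proc.cpu_affinity(cpus)                        # restrict to the chosen cores
                proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)  # Windows priority class
                hits += 1
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass
    return hits

if __name__ == "__main__":
    print(f"pinned {pin(SOLVER_NAME, SOLVER_CPUS)} process(es) to CPUs {SOLVER_CPUS}")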

And this is yet another reason why even slight differences in GPU subsystem speed are of such great importance to me!


Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

Carsten Sellberg

  • Posts: 1463
  • Joined: Fri Jun 16, 2017 9:13 am

Re: Anyone running Resolve using RTX 2080ti already?

PostSun Jan 06, 2019 4:48 pm

Hi Piotr.

Thanks for confirming my understanding of the 2990WX.

You write
QUOTE: 'can run even a heavy analysis in the background, and edit or grade in Resolve at the same time without any noticeable impact on my fps'

I remember an old post, from Thu Aug 23, 2018 12:32 pm, where you came up with a suggestion: 'I wonder if BMD could make possible several export render jobs to be run simultaneously in future version?'

viewtopic.php?f=21&t=77989

So I wonder: is it possible to run two instances of Resolve simultaneously?
Maybe as two separate users?
And then try out what kinds of Resolve jobs you can run simultaneously. An export render could be one of them.

I have never tried Resolve with two plug-in graphics cards, so I don't know the possibilities.
But does anyone expect it will be possible to run two different users and set them up to use two different graphics cards?
Is it a good idea, and what is needed?

And do you expect any gain in total performance compared to only one user with a dual graphics card setup?

Regards Carsten.
URSA Mini 4.6K
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostSun Jan 06, 2019 5:25 pm

Hi Carsten,

I have never tried it (perhaps I will) - but even if there is a way to run more than one instance of Resolve, the heavy dependence of each instance's efficiency on the GPU subsystem makes simultaneous export of more than a single file a rather sub-optimal way to take advantage of the 2990WX's plethora of cores/threads, I'm afraid...

On the other hand, the beauty of my usage scenario is that one of the number-crunching applications (the Moldflow CAE system in this case) doesn't use the graphics subsystem at all, so the GPUs can be owned and managed by Resolve in the usual way - without any extra overhead or dividing the graphics subsystem into two parts, one for Resolve and the other for Moldflow... I've been working like this ever since I finished building my Threadripper workstation, and have had zero problems with it! Yes - it's true that in earlier years Moldflow's solvers not only scaled across the CPU's threads, but could also use the CUDA cores of an nVidia card, or OpenCL in the case of an AMD card. But this was never properly implemented, and the nVidia Fermi architecture was the last one Moldflow used for this type of CPU/GPU parallelization. BTW, it never worked really well, so I'm glad the Moldflow developers dropped GPU acceleration altogether.

In fact, what I do is set up my Moldflow projects on an average laptop (the same one I take with me to my Moldflow clients); I access the laptop and the Moldflow GUI (the pre- and post-processor for the FEM model and analysis results) from my main 2990WX workstation using Remote Desktop. Once my FEM model is ready, I simply launch the solver on the workstation rather than on the slowish laptop; for as long as the analysis lasts, I can minimize Remote Desktop and, with the Moldflow analysis running in the background, start my usual interactive Resolve session... Works like a charm - but of course, I'd like to squeeze every last bit of computing power out of such a setup :)

That said, of course I do not always work like this; facing a complicated Moldflow problem with part and mold models of several million 3D elements each, I will never share my workstation's resources (or the cramped CPU in my own head :)) with anything else, not even Resolve. And vice versa: when working on sub-frame A/V sync of a multicamera music video, I need each and every CPU and GPU core to do it properly in Resolve (not to mention RAM - with only 64GB I couldn't possibly keep two big projects in memory without disk thrashing). Another example of a Resolve pursuit I'd never even try with a Moldflow analysis running in the background is any project involving Fusion... But for all lightweight projects of both the CAE and NLE kind, this is how I'm tackling the 2990WX's architectural quirks and sometimes unforeseeable behavior... Sometimes I'll even run more than one Moldflow analysis in the background, if for instance all I need to do in Resolve is some media management, timeline housekeeping and introductory cutting!

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Christopher Dobey

  • Posts: 234
  • Joined: Fri Jul 18, 2014 5:58 pm
  • Location: San Francisco Bay Area, California

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Jan 21, 2019 5:39 am

my findings:

RTX 2080 Ti renders ~33% faster than the WX9100.

Timeline: 4.6K RAW 4:1 60p
Only RAW adjustments and light Temporal Noise Reduction.

I only benchmarked the above settings as it's my personal workflow most of the time.
See sig for computer specs. The NVIDIA was in Boot Camp and the AMD in macOS, as I could not get the AMD card working correctly in Windows. Both were in the Sonnet Tech eGPU 650.
Nikon Z6 | Ninja V ProRes RAW
Resolve Studio 18 Beta | macOS 12.3.1
Apple M1 Max MacBook Pro, 10 core CPU, 32-core GPU, 64GB memory
Pro Display XDR | Micro Panel
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Jan 21, 2019 2:49 pm

Christopher Dobey wrote:my findings:

RTX 2080 Ti renders ~33% faster than the WX9100.

Timeline: 4.6K RAW 4:1 60p
Only RAW adjustments and light Temporal Noise Reduction.

I only benchmarked the above settings as its my personal workflow most of the time.
See sig for computer specs. The NVIDIA was in bootcamp and the AMD in macOS as I could not get the AMD working correctly in Windows. Both in the Sonnet Tech eGPU 650.


That is good news for anyone looking for a new and faster GPU. The Radeon VII (VEGA 2) is about 27% faster than the WX9100 (so around 6% behind the RTX 2080 Ti), has 16 GB of VRAM and costs only $699, where the RTX 2080 Ti is $1199 and has only 11 GB (out-of-memory problems with effects and NR). And the VEGA 2 works in Mojave for the Apple users.
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Feb 25, 2019 12:53 pm

Dear Friends,

Please forgive me for being stubborn like this and bringing the original question of this thread back yet again. As some of you may remember, I've had this dilemma for quite a long time; to recap it briefly - in order to minimize the bottleneck I sometimes see, with my 2x Titan Xp GPU subsystem not keeping pace with my TR 2990WX CPU, should I:

- sell both Titans asap (so they still have some resale value) and replace them with 2 or even 3 RTX 2080ti cards (apart from pure speed, this would also give me the RT and Tensor cores)
- buy a 3rd Titan Xp (probably a used one)
- do nothing, and leave my GPU subsystem as is :)

Where is the last, "minimalist" option coming from, you may ask? Well - from a very knowledgeable person who recently raised the sensible argument that - since the only PCIe slot left free on my mobo is an x8@x16 one (forcing the 3rd GPU card to work at only x8 speed, and the remaining two cards to "wait for it") - there will not be any performance advantage from a 3rd card whatsoever. BTW, even if my mobo somehow did have a full-speed x16@x16 slot still available, the 2990WX CPU supports "only" :) so many PCIe lanes, and with my current configuration I have exactly 8 lanes left free anyway...

On the other hand though - and this is where my dilemma starts - in several reviews I have read that the speed penalty of an x8 electrical connection versus a full x16 PCIe slot is actually surprisingly small, while adding a 3rd GPU card *does* bring a substantial performance benefit per se...

Since the only way for me to learn which is closer to the facts would be buying and trying - and that would be a rather expensive "test" - may I ask, before I do it, what others' opinions on this are... Which reasoning prevails under most (or average) circumstances?

Any comment welcome!

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Feb 25, 2019 1:51 pm

Piotr,

In the order that makes the most sense to me (assuming your main workflow is 4K):

1. Leave the GPUs as they are.
2. Buy more RAM.
3. Sell the Titans and buy 2x Radeon VII, because of their super high memory bandwidth of 1 TB/s, which is very beneficial for noise reduction and effects.
4. Switch over to Linux to get even more out of your 2990WX.

When you solve one bottleneck you always create another one.
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Feb 25, 2019 3:02 pm

Thanks Misha - all good points, of course.

And what do you think of the issue as defined - i.e. the two opposing arguments for and against adding a 3rd GPU card (be it a Titan or an RTX), if only 8 lanes of PCIe 3.0 are left available?

Cheers
Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Feb 25, 2019 4:05 pm

Piotr Wozniacki wrote:Thanks Misha - all good points, of course.

And what do you think of the issue as defined - i.e. with the two opposite arguments for and against adding (any - be it Titan or RTX) 3rd GPU card, if only 8 lanes of PCIe 3.0 are left available?

Cheers
Piotr


DaVinci Resolve always works in fp32, meaning 4 bytes (32 bits) per value.
RAW pixels are 4 bytes and RGB pixels are 3 x 4 bytes (R, G, B) = 12 bytes.

PCI Express 3.0's 8 GT/s bit rate effectively delivers 985 MB/s per lane.

UHD = 3840 x 2160 x 12 bytes ~ 100 MB/frame; 985 MB/s x 8 lanes / 100 MB/frame ~ 78 frames/s
4K = 4096 x 2160 x 12 bytes ~ 106 MB/frame; 985 MB/s x 8 lanes / 106 MB/frame ~ 74 frames/s
8K = 8192 x 4320 x 12 bytes ~ 425 MB/frame; 985 MB/s x 8 lanes / 425 MB/frame ~ 18 frames/s

With 8K frames you won't be able to do real-time editing with one card in an 8-lane slot.
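
The same arithmetic as a tiny sketch, for checking other resolutions or lane counts (985 MB/s per PCIe 3.0 lane and 12 bytes per fp32 RGB pixel, as above):

Code: Select all
# pcie_fps.py - sketch of the PCIe-limited frame-rate estimate above.
BYTES_PER_PIXEL = 3 * 4   # RGB at fp32
MB_PER_LANE = 985         # effective PCIe 3.0 throughput per lane, MB/s

def max_fps(width, height, lanes):
    """Frames per second that fit through the given number of PCIe 3.0 lanes."""
    frame_mb = width * height * BYTES_PER_PIXEL / 1e6
    return MB_PER_LANE * lanes / frame_mb

for label, (w, h) in {"UHD": (3840, 2160), "4K": (4096, 2160), "8K": (8192, 4320)}.items():
    print(f"{label}: x8 ~ {max_fps(w, h, 8):5.1f} fps, x16 ~ {max_fps(w, h, 16):5.1f} fps")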

Your GPU's will be limited by memory bandwidth when doing heavy noise reduction, not by fp32 performance.

The only sane reason to switch from 2x Titan Xp to the Radeon VII ($700) or the RTX Titan ($2,500) is if you do a lot of 8K work with heavy effects and noise reduction (workflows where you need a lot of VRAM and a lot of memory bandwidth).
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostWed Feb 27, 2019 12:57 pm

Piotr Wozniacki wrote:- from a very knowledgeable person rising the sensible argument recently that - since the only PCIe slot left free on my mobo is an x8@x16 one (forcing the 3rd GPU card to only work at x8 speed, and the remaining two cards - to "be waiting for it") - there will not be any performance advantage with a 3rd card whatsoever. BTW, even if my mobo somehow did have a full speed, x16@x16 slot still available - the 2990WX CPU supports "only:)" so many PCIe lanes, and with the current configuration I have exactly 8 lanes left free, anyway...

On the other hand though - and this is where my dilemma starts - in several reviews I read the speed penalty of the x8 electrical connection versus that of a full x16 PCIe slot is actually surprisingly small - while adding a 3rd GPU card *does* bring substantial performance benefit per se....

The question I keep asking is - I believe - quite a legitimate one, yet there is still no straight answer to it... Please don't get me wrong: I appreciate Misha's (and everybody else's) comments very much indeed; they are very insightful - but they do not answer the question I quoted above from my own post. From this article by Puget: https://www.pugetsystems.com/labs/artic ... ance-1238/ - I extracted the following table, showing the GPU scalability aspect. Please note that even with just 4K and one of the common acquisition codecs (H.264 in the form of XAVC-L, e.g. from a Sony FS7), a 29.97 fps clip would play at just 15 fps with a single RTX 2080 (whose performance can be considered on par with my Titan Xp), at 20 fps with two such GPUs, and at the full 29.97 fps with three of them!

[Attachment: GPU scaling.jpg]

The very knowledgeable person I mentioned points out that if I put a 3rd card into the only remaining free PCIe slot - which, as on most (if not all) current "state-of-the-art" motherboards, only has x8 throughput (with both 16-lane slots already occupied by GPU cards 1 and 2) - all three GPUs will slow down from x16 to x8. This seems reasonable, and I don't dispute the argument - but hasn't it been taken into account in Puget's experiment? They were using their 2990WX CPU on the MEG Creation mobo, which obviously also offers just two full-speed PCIe slots, so the PCIe lane availability is exactly the same as on my X399 Carbon AC (with the same total number of PCIe 3.0 lanes available, as we're talking about the exact same CPU here). And yet, with 1/2/3 GPU cards, the fps they got was 15/20/29.97, respectively...

So is the benchmark totally wrong, or am I missing something obvious? Please enlighten me :)

Piotr

PS. Just to be clear: I don't use XAVC-L at 30p (it's just the closest thing I found in the above table) but UHD or 4K XAVC-I at 50p or 25p (depending on the kind of project), mainly coming from my Sony FS7. This doesn't change my point, though - which is that adding a 3rd GPU to my current configuration (either just a used Titan Xp, or - if I find a good buyer for my Titans - replacing them with 3x RTX 2080ti) should considerably improve playback of my heavily graded 4K@50p XAVC-I clips... And with my multi-camera classical music projects (4 being the usual number of angles), it could translate into a much better A/V sync experience during editing as well. Or is this all wrong?!

PPS. BTW, Puget's table shows very similar scalability for another format I often use in my shorts - namely 4K CinemaDNG at 24 fps (25 in my case), which I record using my Shogun Inferno; only with 3 GPU cards will such a clip play back at full speed when graded heavily enough that it only plays at 14 fps on a single RTX 2080, or at 17-18 fps on two (same with my Titan Xps)...
Last edited by Piotr Wozniacki on Fri Mar 01, 2019 9:15 am, edited 2 times in total.
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Feb 28, 2019 10:24 am

OK guys - I can't help feeling more and more uncomfortable about not being able to get any of you interested in my dilemma on the pros & cons of adding a 3rd GPU, and about these many threads of mine turning into a monologue with myself rather than a dialogue with those more knowledgeable than I am :(

So I decided to put an end to this by - instead of discussing with myself whether the overall performance drop caused by all three GPU cards working at x8 speed will or will not be more than made up for by the gain from using 3 GPUs rather than 2 - asking you a simple question, and should I still be left without an answer, shutting up on the subject forever :) The "simple question" I mean is this:

- are any of you guys actually running Resolve with as many as three GPUs on the same platform (or one similar enough from the viewpoint of CPU power and motherboard architecture, i.e. only capable of running up to 2 GPU cards at full x16 PCIe speed)?


Getting your answer to the above question (preferably with some comments) would make me happy indeed, thank you in advance and Regards

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Jack Fairley

  • Posts: 1863
  • Joined: Mon Oct 03, 2016 7:58 pm
  • Location: Los Angeles

Re: Anyone running Resolve using RTX 2080ti already?

PostThu Feb 28, 2019 9:29 pm

Without seeing your node trees, nobody can say for sure whether or not you need a third GPU, but if I had to bet, I'd bet that you don't. Enough looking at their benchmarks - when working with your current projects, with your current grading, do you get real time playback? If so, there's no need to upgrade.
Ryzen 5800X3D
32GB DDR4-3600
RTX 3090
DeckLink 4K Extreme 12G
Resolve Studio 17.4.1
Windows 11 Pro 21H2
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Mar 01, 2019 5:55 am

Jack Fairley wrote:- when working with your current projects, with your current grading, do you get real time playback?


Without caching I don't.

Piotr
Last edited by Piotr Wozniacki on Fri Mar 01, 2019 10:20 am, edited 1 time in total.
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

Gabriele Gelfo

  • Posts: 165
  • Joined: Mon May 13, 2013 4:51 pm

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Mar 01, 2019 9:05 am

Resolve needs real-time feedback over PCIe.
Resolve is not V-Ray RT or a GPU-based 3D renderer; it doesn't cache anything during playback and is not optimized to work with multiple GPUs except when rendering in the Deliver page.
With 3D render engines we get linear speed improvements using 2, 3, 4 GPUs.
By contrast, some 3D renderers use GPU+CPU massively; this is a bottleneck, and we don't get 2x, 3x, 4x the speed.
There is a lot of work to do on the Resolve GPU engine.
I suggest staying with just one powerful GPU (2080 Ti) and waiting for the next DaVinci release to see if the Blackmagic engineers have implemented an AI denoiser or real-time raytracing for Fusion.
Fusion will benefit from multiple RTX GPUs.

Regards
iMac Retina 5K 27 Late 2014 - i7 4GHz - 32GB - R9 M295X 4GB
Mac Book Pro 15 Late 2018 - i9 2.9GHZ - 32GB - Vega 20 4GB
Mac Pro 7.1 - Xeon 12c - 96GB - Radeon Pro 580X - n°2 RTX2080Ti (bootcamp) - n°1 egpu GTX1080Ti
Da Vinci Resolve Studio
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostFri Mar 01, 2019 9:31 am

Gabriele Gelfo wrote:Resolve needs real-time feedback over PCIe.
Resolve is not V-Ray RT or a GPU-based 3D renderer; it doesn't cache anything during playback and is not optimized to work with multiple GPUs except when rendering in the Deliver page.
With 3D render engines we get linear speed improvements using 2, 3, 4 GPUs.
By contrast, some 3D renderers use GPU+CPU massively; this is a bottleneck, and we don't get 2x, 3x, 4x the speed.
There is a lot of work to do on the Resolve GPU engine.
I suggest staying with just one powerful GPU (2080 Ti) and waiting for the next DaVinci release to see if the Blackmagic engineers have implemented an AI denoiser or real-time raytracing for Fusion.
Fusion will benefit from multiple RTX GPUs.

Regards

Thanks Gabriele for your sensible input!

Of course I'm not expecting a linear improvement - but with my sort of projects, media formats and average intensity of grading, every little bit counts... If - and this is a big IF - Puget's benchmarks are correct, with a 3rd card I'd be able to cache a little less often and still play back my timelines at full speed. And even when - due to clips requiring heavier TNR, for instance - I still need caching, it would take less time with 3 GPU cards than it does now with just 2...

From the same knowledgeable person I mentioned twice, I know that yes - Resolve already does some AI, and will do even more in the future - so replacing my Pascal GPUs with Turing ones would be the best option. The problem, however, is that there are quite a lot of used Titan Xps on eBay and other auction services - and without selling my 2x Titan Xp first, I just can't afford to replace them with RTX 2080tis (even just 2, not to mention 3) :(

Cheers
Piotr

PS. Gabriele - only now did I notice in your sig that you're using as many as 4 GPUs in some of your machines; may I ask on which CPU/motherboard platform, and how you provide an adequate number of PCIe lanes? Or are you also using an eGPU over Thunderbolt or the like?
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostSat Mar 02, 2019 4:26 pm

This might be the better option if you want to upgrade to other GPUs: https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=86127
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 5:30 am

MishaEngel wrote:This might be the better option when you want to upgrade to other GPU's https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=86127


Somehow I'd prefer to stay with nVidia; I have heard lots of bad things about AMD graphics drivers...

But I have another question in the meantime, as the free PCIe slot (granted, a slower x8 one) is tempting me, especially with a 1600W PSU... So please bear with me - I have one more question regarding the installation of a 3rd GPU card. I have noticed that when I make Resolve use the UI card for computing, it's only slightly faster than when just one of my 2 GPUs is used for computing - so a thought crossed my mind: perhaps the card which drives the UI monitor cannot compute at full efficiency because it's busy driving the UI monitor? If that is indeed so, would I be better off adding a 3rd card but using it only for the UI, thus leaving the full power of the current 2x Titans for processing? I'd like to test it, but please give me a hint:

- if I add a UI-only card in my x8 slot, will it also slow down the two computing GPUs in the x16 slots? If not, this might be the way to go, but:

- with Titan Xps as my 2 computing cards, which model would you recommend as the 3rd, UI-only GPU? I reckon it could be a slower and cheaper card - but does it have to be at least the same architecture as my computing Titans (Pascal)? Do you think a GTX 1050 will do, or should I go with an RTX 2060 so that - if I upgrade my computing Titan Xps to RTX 2080tis down the road - all 3 cards (including the UI one) have the same Turing architecture? Or perhaps the UI-only GPU has nothing in common with the computing GPUs at all, so I could save a lot of money on it?

Please give me a hint; thanks and Regards
Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Micha Clazing

  • Posts: 240
  • Joined: Sat Apr 01, 2017 3:26 pm

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 11:38 am

It is quite telling that even without CUDA, the Radeon VII can match the 2080 Ti, a card which costs over $400 more. Also don't forget that with Vega you will never run out of VRAM.
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 11:48 am

Micha Clazing wrote:It is quite telling that even without CUDA, the Radeon VII can match the 2080 Ti, a card which costs over $400 more. Also don't forget that with Vega you will never run out of VRAM.

Sure - but I remember there used to be some functionality in Resolve that was only accelerated with CUDA and not with OpenCL; has this changed by now?

Thx

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 12:36 pm

Piotr Wozniacki wrote:
MishaEngel wrote:This might be the better option when you want to upgrade to other GPU's https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=86127


Somehow I'd prefer to stay with nVidia; have heard lots of bad things about AMD graphics drivers..

But I have another question in the meantime, as the free PCIe slot (true - a slower, x8 one) is tempting me especially with 1600W PSU...So Please bear it with me - I have one more question regarding installation of a 3rd GPU card. I have noticed that when I make Resolve use the UI card for computing, it's only slightly faster than with just one of my 2 GPUs used for computing - so a thought crossed my mind: perhaps the card which drives UI monitor cannot have full efficiency in computing as it's busy with driving the UI monitor? If it's so indeed, would I be better of if I added a 3rd card - but only use it for UI, thus leaving full power of the current 2x Titans for processing? I'd like to test it, but please give a hint:

- if I add an UI-only card to my x8 slot, will it also slow down the two computing GPUs in x16 slots? If not, this might be the way to go, but:

- with Titan Xp as my 2 computing cards, which model would you recommend as the 3rd GPU for UI only? I reckon it could be a slower and cheaper GPU card - but does it have to be at least the same architecture as my computing Titans (Pascal)? Do you think a GTX 1050 will do, or should I go with RTX 2060 so that - if I upgrade my computing Titan Xps to RTX 2080tis down the road - all 3 cards (including the UI one) have the same Turing architecture? Or perhaps the UI-only GPU has absolutely nothing in common with the computing GPUs, and so I could save lots of money on it?

Please give me a hint; thanks and Regards

Piotr

Since - apart from the RTX 2080ti - upcoming cards from AMD are what people have started recommending, the same dilemma as in my quoted post above applies. So let me reiterate:

Which scenario will suffer less from installing a 3rd GPU in the only slot left on my mobo, an x8 one (or to put it differently: which will give me better Resolve performance than I have now, with 2 x16 processing GPUs of which one also drives the Resolve UI):

1. a 3rd computing GPU (RTX 2080ti or some future Radeon) in this x8 slot (with "use UI GPU for computing" turned ON), or

2. a much cheaper card (like say the GTX 1050 if I go with nVidia, or equivalent AMD) in this x8 slot, only used for the UI (and of course with "use UI GPU for computing" turned OFF, so that the only computing GPUs are the 2 processing GPUs, both in x16 slots)?

I do realize neither of the above scenarios is ideal, but two x16 slots is currently the maximum with mainstream, single-CPU motherboards - so there isn't much I can do about it...

Piotr

PS. This question is equally important with my current 2 Titans as it will be should I upgrade to RTX 2080tis or some future equivalent Radeon, for that matter... So if it's option 2 above that is likely to give me better performance, by relegating the UI burden to a cheap, UI-only GPU card (even if only in the x8 slot), my Titan Xp upgrade path would get significantly cheaper, as I would only need to buy 2, and not 3, new expensive computing GPUs!
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Micha Clazing

  • Posts: 240
  • Joined: Sat Apr 01, 2017 3:26 pm

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 2:00 pm

Piotr Wozniacki wrote:Sure - but I remember there have been some functionalities in Resolve only accelerated with CUDA and not with OpenCL; has this changed by now?

As far as I can tell skimming the manual and configuration guide, the only Resolve feature that is exclusive to nvidia hardware right now is HEVC hardware decoding. With your overpowered CPU though, that processing power is better spent on the CPU than on the GPU anyway.

Piotr Wozniacki wrote:PS. This question is equally important for my current 2 Titans as it will be should I upgrade to RTX 2080ti (or some future, equivalent Radeon for that matter)... So if it's option 2 above that is likely to give me better performance, by relegating the UI burden to a cheap, UI-only GPU card (even if only in the x8 slot), my Titan Xp upgrade path would get significantly cheaper, as I would only need to buy 2 rather than 3 new, expensive computing GPUs!

I wouldn't recommend using different GPUs in the same system. Resolve will be bottlenecked by whichever card is the weakest (either in processing power or in VRAM capacity). Any upgrade path other than purchasing a third (second hand) Titan XP should involve replacing your two current Titans.
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 5:26 pm

Micha Clazing wrote:I wouldn't recommend using different GPUs in the same system. Resolve will be bottlenecked by whichever card is the weakest (either in processing power or in VRAM capacity). Any upgrade path other than purchasing a third (second hand) Titan XP should involve replacing your two current Titans.


Micha - what you just said is the same as what I'm hearing from the "very knowledgeable person" I've mentioned several times in this thread, so even though it's not what I'd like to hear - well, I must live with it, I guess :(

Before I give up though, let me make sure I understand you properly. That all processing ("compute") GPUs should be the same (or Resolve will only use the "lowest common denominator" of their specs - the weakest link in the chain, if you like) has always been quite obvious to me, and it was not what I've been asking about. My real, most important question is whether the UI-dedicated GPU (which is NOT used for computing) also needs to be comparable with the compute GPUs in data throughput (memory bandwidth, among other things). Which would imply the all-important conclusion:

- if I put a cheap and seriously slower GPU card into the x8 slot and ask it to handle the Resolve GUI, it can badly hurt the performance of the computing GPU subsystem and thus the machine as a whole - even though both computing GPUs are fast, have plenty of VRAM, and the throughput of a true x16 PCIe slot...

Is this what you mean? Thanks, bro

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Jack Fairley

  • Posts: 1863
  • Joined: Mon Oct 03, 2016 7:58 pm
  • Location: Los Angeles

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 6:22 pm

You should not use a display-only GPU anymore, since Resolve 14.
Ryzen 5800X3D
32GB DDR4-3600
RTX 3090
DeckLink 4K Extreme 12G
Resolve Studio 17.4.1
Windows 11 Pro 21H2
Offline
User avatar

Rakesh Malik

  • Posts: 3236
  • Joined: Fri Dec 07, 2012 1:01 am
  • Location: Vancouver, BC

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 7:12 pm

Jack Fairley wrote:You should not use a display-only GPU anymore, since Resolve 14.


Unless you have a truly oddball setup like the one I'm working with right now... I have a laptop with an 8GB 1070 Max-Q built in, plus an external Radeon VII via TB3. So I have Resolve running the GUI on the internal GPU and configured to use the external card for everything else, which lets me dedicate its 16GB frame buffer to compute. That's nice -- it can handle Fusion composites with 2 and 3 8K raw clips at full-resolution de-Bayer without a hitch. Not quickly, mind you, so I keep the de-Bayer resolution pretty low while working, but it's stable, which isn't possible on the internal GPU because it runs out of memory... so I'm getting some benefit from it.

Hybridizing the GPUs like that wasn't the plan; it just ended up working that way because Dell's Graphics Amplifier doesn't live up to the Alienware brand's implied reputation... kind of a disappointment.

So in theory you could use a lower end GPU as your display GPU and disable the "use display GPU for compute" option so that it doesn't gate the compute GPUs, but if you're running on a desktop like what I plan on building (hopefully this summer) you'd probably get less value from it than I am.
Rakesh Malik
Cinematographer, photographer, adventurer, martial artist
http://WinterLight.studio
System:
Asus Flow X13, Octacore Zen3/32GB + XG Mobile nVidia RTX 3080/16GB
Apple M1 Mini/16GB
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 7:48 pm

Rakesh Malik wrote:So in theory you could use a lower end GPU as your display GPU and disable the "use display GPU for compute" option so that it doesn't gate the compute GPUs, but if you're running on a desktop like what I plan on building (hopefully this summer) you'd probably get less value from it than I am.

So according to you, Rakesh, a UI-only GPU - while freeing one of the computing GPUs from the UI-driving burden - will not "gate" those computing GPUs by being slower and operating at only x8 PCIe speed? (This is how I understand the verb "gate" in your context.)

If so, this would be extremely important, and installing such a GPU in the free x8 slot would make sense after all. However, the very knowledgeable person I mentioned states otherwise (i.e. that even a UI-only GPU will make the computing ones "wait" for it because of the speed difference between x8 and x16 slots). If that is the case, adding a 3rd Titan Xp and using the UI GPU for computing might be more beneficial - provided that the drop in speed (due to all 3 compute GPUs effectively working at x8) costs less than the gain from increasing their number from 2 to 3, i.e. by 50%. And that looks quite possible, considering Puget's benchmark shows overall speed scaling with 1, 2 and 3 GPUs of roughly 15/20/29.97, respectively... But this can only be proven right or wrong with my own tests, I'm afraid.
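As a quick sanity check of that break-even reasoning, here is a minimal sketch (Python) using only the approximate 1/2/3-GPU Puget scores quoted above - so the exact threshold is an illustration, not a measurement:

[code]
# Break-even check: how much can the x8 slot cost before a 3-GPU setup
# (3rd card in the x8 slot) stops beating the current 2-GPU setup (both in x16)?
# The 1/2/3-GPU scores are the approximate Puget results quoted above.
score_1gpu, score_2gpu, score_3gpu = 15.0, 20.0, 29.97

gain_2_to_3 = score_3gpu / score_2gpu            # ~1.50x from adding a 3rd GPU
max_x8_penalty = 1.0 - score_2gpu / score_3gpu   # largest tolerable overall slowdown

print(f"Adding a 3rd GPU scaled Puget's result by {gain_2_to_3:.2f}x")
print(f"So the x8 slot may cost up to {max_x8_penalty:.0%} overall "
      f"before 3 GPUs become slower than 2")
[/code]

In other words, under those assumed numbers the third card should still come out ahead as long as the x8 slot costs the system less than roughly a third of its overall throughput.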

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Rakesh Malik

  • Posts: 3236
  • Joined: Fri Dec 07, 2012 1:01 am
  • Location: Vancouver, BC

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 7:55 pm

Piotr Wozniacki wrote:
Rakesh Malik wrote:So in theory you could use a lower end GPU as your display GPU and disable the "use display GPU for compute" option so that it doesn't gate the compute GPUs, but if you're running on a desktop like what I plan on building (hopefully this summer) you'd probably get less value from it than I am.

So according to you, Rakesh, a UI-only GPU - while freeing one of the computing GPUs from the UI-driving burden - will not "gate" those computing GPUs by being slower and operating at only x8 PCIe speed? (This is how I understand the verb "gate" in your context.)


As far as I understand it, that's the case; the symmetry requirement doesn't include a UI-only GPU as long as it's not enabled for compute.

I *believe* that's the case, because I don't think my system would be working at all otherwise: I can only choose the Radeon VII if I select OpenCL, and only the 1070 if I select CUDA. I haven't done particularly thorough testing yet, though.
Rakesh Malik
Cinematographer, photographer, adventurer, martial artist
http://WinterLight.studio
System:
Asus Flow X13, Octacore Zen3/32GB + XG Mobile nVidia RTX 3080/16GB
Apple M1 Mini/16GB
Offline
User avatar

Micha Clazing

  • Posts: 240
  • Joined: Sat Apr 01, 2017 3:26 pm

Re: Anyone running Resolve using RTX 2080ti already?

PostMon Mar 04, 2019 9:17 pm

Piotr Wozniacki wrote:- if I put a cheap and seriously slower GPU card into the x8 slot and ask it to handle the Resolve GUI, it can badly hurt the performance of the computing GPU subsystem and thus the machine as a whole - even though both computing GPUs are fast, have plenty of VRAM, and the throughput of a true x16 PCIe slot...

Is this what you mean?

No, that's not what I mean. I was assuming you wanted to use the third GPU for compute. However, I strongly doubt there is any point in using a separate GPU for the UI, for two reasons:
1. Any GPU, even Intel HD Graphics, can draw a UI in its sleep. It's not a great burden to be freeing your compute GPUs from.
2. The UI needs to render the viewer and scopes, and these have to be generated from the computed output. That means that instead of just drawing a framebuffer already sitting in the compute GPU's memory - which consumes a tiny amount of pixel fill rate compared to the rest of what Resolve is asking of the GPU - that framebuffer has to be copied from the compute GPU to the GPU tasked with rendering the UI, taking up valuable bus bandwidth.
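To put a rough number on the copy described in point 2, here is a minimal back-of-envelope sketch (Python); the frame format, playback rate and PCIe figure below are assumptions picked for illustration, not numbers from Resolve or from this thread:

[code]
# Back-of-envelope: bus traffic from copying the viewer framebuffer
# from a compute GPU to a separate UI GPU on every displayed frame.
# All figures are assumptions chosen only for illustration.
width, height = 3840, 2160        # UHD viewer
bytes_per_pixel = 4 * 2           # RGBA, 16-bit half float per channel
fps = 24                          # playback rate

frame_bytes = width * height * bytes_per_pixel
copy_gb_per_s = frame_bytes * fps / 1e9
pcie3_x8_gb_per_s = 7.0           # rough practical PCIe 3.0 x8 throughput

print(f"Frame size: {frame_bytes / 1e6:.0f} MB")
print(f"Copy traffic at {fps} fps: {copy_gb_per_s:.1f} GB/s")
print(f"Share of an x8 PCIe 3.0 link: {copy_gb_per_s / pcie3_x8_gb_per_s:.0%}")
[/code]

Under those assumptions the viewer copy alone takes a noticeable slice of an x8 link, on top of whatever the compute GPUs already need - which is the bandwidth cost described in point 2.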

If you want to know how much (if any) improvement you get from "relieving" the compute GPU from rendering the UI, you can run two benchmarks:
- One with just one of your Titans doing both compute and UI, and the other disabled,
- One with one of your Titans doing UI, and the other compute.

I predict the results will be within a margin of error of each other.
Offline

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Anyone running Resolve using RTX 2080ti already?

PostTue Mar 05, 2019 1:56 pm

In this test with 1 GPU vs. 3 GPUs, one of the GPUs is always placed in an x8 slot, because there is no other way to do it on these platforms.

https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Threadripper-2990WX-2950X-Performance-1219/

If you can get your hands on a well-priced GPU, I would just go for it since you have an open x8 PCIe slot. It will be faster, but not by a lot. With plain playback you won't gain much speed; the extra horsepower is mainly used for heavy effects and NR (heavy load on the GPU and the VRAM, with not much data travelling between the CPU and GPUs over the PCIe lanes).
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostTue Mar 05, 2019 2:16 pm

As you may remember, based on what the very knowledgeable person has been telling me, I assumed that if even one computing GPU works in an x8 PCIe slot, it will make all of them effectively work as if they were installed in such a sub-optimal slot, by making the others (usually 2, installed in the 2 x16 slots most mainstream motherboards offer today) "wait" for the 3rd one in the x8 slot... I was taking that for granted, but today I came across this article:

https://www.pugetsystems.com/labs/artic ... s-x16-851/

- and lo and behold, it doesn't seem so! If this assumption were unconditionally true, we could only have 2 distinct GPU performance levels in Resolve:

- higher (when all compute GPUs are in full x16 slots), or
- lower (when at least a single one of them is working in an x8 slot)

But the following picture from that article clearly shows a third, intermediate GPU performance level obtained with such a "mixed" configuration (one GPU in an x16 slot, the other in an x8 slot):

[Attached image: x16 vs x8.jpg - Puget Systems chart comparing GPU performance in x16 vs x8 slots]

The article is from 2016, but if nothing has changed in Resolve in this area, there is a good chance that if I add a 3rd Titan Xp in the only free PCIe slot I have (an x8 one), with the current 2 Titan Xps staying in the x16 slots, I will get such a mixed performance level. Which means that if the gain from increasing the number of compute GPUs by 50% is high enough, the overall performance will still go up, after all :)

Enough talking - I'm going to buy one of the used Titan Xp cards listed on a local auction portal, and will report my results here. So far - with 2x Titans and the Candle Test modified so that the timeline playback speed is [EDIT]250 fps - I'm getting a stable 147 fps with one TNR node, and the full 250 fps with 9 Blur nodes[/EDIT]... I wonder what these numbers will be with 3x Titan Xps as compute GPUs!
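For what it's worth, here is a minimal sketch of that extrapolation (Python). It assumes the TNR node is purely GPU-bound, ignores any x8-slot penalty, and reuses the Puget 2-to-3-GPU ratio quoted earlier - so treat it as an optimistic upper bound, not a prediction:

[code]
# Optimistic upper-bound extrapolation for the Candle Test TNR result.
# Assumes the node is GPU-bound and ignores any x8-slot penalty.
measured_2gpu_fps = 147.0     # stable fps with one TNR node on 2x Titan Xp
timeline_cap_fps = 250.0      # modified timeline playback speed

ideal_3gpu = measured_2gpu_fps * 3 / 2           # perfect linear scaling
puget_3gpu = measured_2gpu_fps * 29.97 / 20.0    # Puget's measured 2->3 ratio

print(f"Ideal linear scaling:  {min(ideal_3gpu, timeline_cap_fps):.0f} fps")
print(f"Scaling like Puget's:  {min(puget_3gpu, timeline_cap_fps):.0f} fps")
[/code]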

Stay tuned,

Piotr
Last edited by Piotr Wozniacki on Wed Mar 06, 2019 6:22 am, edited 2 times in total.
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)
Offline
User avatar

Piotr Wozniacki

  • Posts: 1225
  • Joined: Tue Aug 02, 2016 12:17 pm
  • Location: Poland

Re: Anyone running Resolve using RTX 2080ti already?

PostTue Mar 05, 2019 2:25 pm

MishaEngel wrote:In this test with 1 GPU vs. 3 GPUs, one of the GPUs is always placed in an x8 slot, because there is no other way to do it on these platforms.

https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Threadripper-2990WX-2950X-Performance-1219/

If you can get your hands on a well-priced GPU, I would just go for it since you have an open x8 PCIe slot. It will be faster, but not by a lot. With plain playback you won't gain much speed; the extra horsepower is mainly used for heavy effects and NR (heavy load on the GPU and the VRAM, with not much data travelling between the CPU and GPUs over the PCIe lanes).

Exactly, Misha - for the price of a used Titan Xp, I'm hoping to get a performance level of 2x RTX 2080ti...

Piotr
AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP3200 | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)