
NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Wed Sep 12, 2018 8:08 pm
by dariobigi
I am aware that the config guide (DR 15.0.1) recommends the NVIDIA TITAN Xp (Pascal architecture). The Titan V is out, and now the RTX 2080 with Turing architecture is going to be offered soon.

My question to BMD / Peter C is:
Can/Does/Will DR use the new Turing-based cards? No/No/Yes? If so, when? Please clarify.

I am currently using two GTX Titan X cards (Maxwell) and I don't see them being used to their full potential in the performance monitor (Win 10). If DR is not using the full potential of my current cards, is there a benefit in upgrading? (My system specs are in my signature.)

Thanks for all that you do. I've been with Resolve since v8 and your crew is Killing It. (As the kids say.)
Dario

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Wed Sep 12, 2018 8:32 pm
by Andrew Kolakowski
dariobigi wrote:I am currently using two GTX Titan X cards (Maxwell) and I don't see them being used to their full potential in the performance monitor (Win 10). If DR is not using the full potential of my current cards, is there a benefit in upgrading? (My system specs are in my signature.)


Can't you really answer this question yourself?

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Wed Sep 12, 2018 9:26 pm
by MishaEngel
dariobigi wrote:I am currently using two GTX Titan X cards (Maxwell) and I don't see them being used to their full potential in the performance monitor (Win 10). If DR is not using the full potential of my current cards, is there a benefit in upgrading? (My system specs are in my signature.)

Thanks for all that you do. I've been with Resolve since v8 and your crew is Killing It. (As the kids say.)
Dario


GTX Titan X (Maxwell) has a compute power of 6.144 TFLOPS at base clock and 6.605 TFLOPS at boost clock (FP32) and a memory bandwidth of 336 GByte/s.

The memory bandwidth of these GPUs is slowing things down.

The RTX 2080 Ti (FE) has a compute power of 11.75 TFLOPS at base clock and 14.231 TFLOPS at boost clock (FP32) and a memory bandwidth of 616 GByte/s.

It also has some special tricks up its sleeve:

- The Turing architecture adds a new execution unit (INT32). This unit enables Turing GPUs to execute floating-point and integer operations in parallel. NVIDIA claims that this should theoretically provide 36% additional throughput for floating-point operations.

- The Turing architecture brings new lossless compression techniques. NVIDIA claims that its further improvements to the 'state of the art' Pascal algorithms provide (in NVIDIA's own words) a '50% increase in effective bandwidth on Turing compared to Pascal'.

In short, these cards are very fast. If you want to know how fast they are in Resolve, wait for the tests from Puget Systems: https://www.pugetsystems.com/all_news.php
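These peak FP32 figures follow from a simple formula: CUDA cores x clock x 2, since one fused multiply-add counts as two floating-point operations per cycle. A quick sketch to check the numbers quoted above (the core counts and clock speeds are NVIDIA's published specs, not something stated in this thread):

```python
def peak_fp32_tflops(cuda_cores: int, clock_mhz: int) -> float:
    """Theoretical peak FP32 throughput in TFLOPS: cores x MHz x 2 ops/cycle."""
    return cuda_cores * clock_mhz * 2 / 1e6

# GTX Titan X (Maxwell): 3072 cores, 1000 MHz base / 1075 MHz boost
print(peak_fp32_tflops(3072, 1000))  # 6.144 TFLOPS at base clock
print(peak_fp32_tflops(3072, 1075))  # ~6.605 TFLOPS at boost clock

# RTX 2080 Ti (FE): 4352 cores, 1350 MHz base / 1635 MHz boost
print(peak_fp32_tflops(4352, 1350))  # ~11.75 TFLOPS at base clock
print(peak_fp32_tflops(4352, 1635))  # ~14.231 TFLOPS at boost clock
```

Keep in mind these are theoretical peaks; real throughput in Resolve also depends on memory bandwidth, which is exactly the bottleneck being discussed here.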

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Wed Sep 12, 2018 10:16 pm
by Andrew Kolakowski
And what will all of this give Dario, if:

dariobigi wrote:I am currently using two GTX Titan X cards (Maxwell) and I don't see them being used to their full potential in the performance monitor (Win 10). If DR is not using the full potential of my current cards, is there a benefit in upgrading? (My system specs are in my signature.)


What is the point of upgrading now? A waste of money? Isn't it better to wait until the cards get cheaper and the initial driver problems etc. are worked through? Those cards won't make Dario's video any better :D If he thinks exports are too slow, then that is the only argument.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Wed Sep 12, 2018 11:05 pm
by MishaEngel
The memory bandwidth of the GTX Titan X (Maxwell) is the bottleneck.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Fri Sep 14, 2018 2:23 am
by dariobigi
I may be able to get two used Titan Pascals cheap. To your knowledge, is there less of a bottleneck, or none, with them?


Sent from my iPhone using Tapatalk

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Fri Sep 14, 2018 2:40 am
by Jack Fairley
dariobigi wrote:I may be able to get two used Titan Pascals cheap. To your knowledge, is there less of a bottleneck, or none, with them?



Memory on the Titan X Pascal is much faster than on the Titan X Maxwell. Why are you thinking about upgrading?

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Fri Sep 14, 2018 2:48 am
by dariobigi
I bought the Titan Maxwells when they first became available. Some of my playback cannot be real time (depending on how much work I'm doing in my node tree). Noticing that, and that the GPUs are not being fully utilized on renders, I was wondering if there's any benefit to an upgrade. I'm keeping everything self-contained in one box so it can be portable (a rare but necessary option), so my only option would be to upgrade the cards. They have already paid for themselves, so I was just exploring the idea now that the 2080s are coming out. I was mostly wondering whether the Turing architecture is supported and utilized by Resolve, but it sounds like that wouldn't be an issue.



Sent from my iPhone using Tapatalk

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Fri Sep 14, 2018 9:57 am
by Andrew Kolakowski
2080 definitely should be way faster than your old Titan.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Fri Sep 14, 2018 10:47 am
by Martin Schitter
Andrew Kolakowski wrote:2080 definitely should be way faster than your old Titan.


One of the really important advantages of this new generation of GTX cards is its much(!) better FP16 performance.
These operations were in fact horribly slow on the older consumer cards, but they make a lot of sense for realizing some operations in a significantly more memory- and bandwidth-saving manner. AFAIK Resolve doesn't utilize this kind of optimization so far, but it is already present in competing products, which have always suggested using Quadro cards because of this limitation of NVIDIA's consumer GPUs. But I'm rather sure that Resolve will also take advantage of these possibilities sooner or later, once 20xx cards become more widely used.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Fri Sep 14, 2018 11:08 pm
by MishaEngel
dariobigi wrote:I may be able to get two used Titan Pascals cheap. To your knowledge, is there less of a bottleneck, or none, with them?




There are two types of Titan X in the Pascal range:

old one:

Release Date: 2nd August, 2016
Launch Price: $1200 USD
Board Model: NVIDIA PG611
GPU Model: 16nm GP102-400
Cores / TMUs / ROPs: 3584 / 224 / 96
Base Clock: 1431 MHz
Boost Clock: 1531 MHz
Memory Clock (Effective): 1251 (10008) MHz
Computing Power (FP32): 10,257 GFLOPS
Memory Size: 12288 MB GDDR5X
Memory Bus Width: 384-bit
Memory Bandwidth: 480.38 GB/s

newer one:

Release Date: 6th April, 2017
Launch Price: $1200 USD
Board Model: NVIDIA PG611
GPU Model: 16nm GP102-450
Cores / TMUs / ROPs: 3840 / 240 / 96
Base Clock: 1480 MHz
Boost Clock: 1582 MHz
Memory Clock (Effective): 1425 (11400) MHz
Computing Power (FP32): 11,366 GFLOPS
Memory Size: 12288 MB GDDR5X
Memory Bus Width: 384-bit
Memory Bandwidth: 547.2 GB/s

Second hand, the old one shouldn't cost you more than $300, and the newer one $400 max.
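The bandwidth figures in these spec lists follow directly from the bus width and the effective memory clock: bandwidth (GB/s) = bus width in bits / 8 x effective clock in MHz / 1000. A small sketch to verify (the formula is standard; the inputs are just the specs listed above):

```python
def mem_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Theoretical peak memory bandwidth in GB/s:
    bytes per transfer (bus width / 8) times effective transfer rate."""
    return bus_width_bits / 8 * effective_clock_mhz / 1000

print(mem_bandwidth_gbs(384, 10008))  # older (2016) model: ~480.38 GB/s
print(mem_bandwidth_gbs(384, 11400))  # newer (2017) model: 547.2 GB/s
```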

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 1:36 am
by Martin Schitter
Here you'll find a few additional values for your comparison:
https://www.microway.com/knowledge-cent ... esla-gpus/

I just want to quote/emphasize one entry:

FP16 16-bit (Half Precision) Floating Point Calculations:
GeForce GTX 1080 Ti ................. less than 0.177 TFLOPS
GeForce RTX 2080 Ti ................. 110 TFLOPS

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 1:49 am
by dariobigi
Andrew Kolakowski wrote:
Can't you really answer this question yourself?

Maybe I didn't phrase the question correctly, but if it's not maxing out the Maxwell cards, why would I assume it's going to max out the new 2080 cards?
Is Resolve using, or going to use, the new architecture?
(Saw this reverse query late.)
Thanks to everyone providing great info that will hopefully be helpful to others as well.


Sent from my iPhone using Tapatalk

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 3:50 am
by Dan Sherman
dariobigi wrote:Maybe I didn't phrase the question correctly, but if it's not maxing out the Maxwell cards, why would I assume it's going to max out the new 2080 cards?


If all you're looking at is core utilization, that's not a useful metric by itself. Depending on what you are doing, you could be limited by power, memory, bus, or a combination of things.

Install this and take a deeper look at your machine.
https://www.hwinfo.com/download.php

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 12:04 pm
by MishaEngel
Martin Schitter wrote:Here you'll find a few additional values for your comparison:
https://www.microway.com/knowledge-cent ... esla-gpus/

I just want to quote/emphasize one entry:

FP16 16-bit (Half Precision) Floating Point Calculations:
GeForce GTX 1080 Ti ................. less than 0.177 TFLOPS
GeForce RTX 2080 Ti ................. 110 TFLOPS



Up to now, DaVinci Resolve is FP32.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 12:17 pm
by Andrew Kolakowski
dariobigi wrote:
Andrew Kolakowski wrote:
Can't you really answer this question yourself?

Maybe I didn't phrase the question correctly, but if it's not maxing out the Maxwell cards, why would I assume it's going to max out the new 2080 cards?
Is Resolve using, or going to use, the new architecture?
(Saw this reverse query late.)
Thanks to everyone providing great info that will hopefully be helpful to others as well.




Resolve is not going to use any of the new tech in the 2080. It will work way faster due to the many CUDA cores and much better memory speed, though. Maybe later BM will start using some of the new features of the 2080 cards, but that is a big change to the core code.
If you can work fine with your current cards, then I don't see the point in buying a new one now. Wait a bit.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 12:18 pm
by Andrew Kolakowski
MishaEngel wrote:
Martin Schitter wrote:Here you'll find a few additional values for your comparison:
https://www.microway.com/knowledge-cent ... esla-gpus/

I just want to quote/emphasize one entry:

FP16 16-bit (Half Precision) Floating Point Calculations:
GeForce GTX 1080 Ti ................. less than 0.177 TFLOPS
GeForce RTX 2080 Ti ................. 110 TFLOPS



Up to now, DaVinci Resolve is FP32.


Assimilate is going to love this as I think they do use FP16.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 1:59 pm
by Carsten Sellberg
Andrew Kolakowski wrote: Assimilate is going to love this as I think they do use FP16.


Hi.

Tom Petersen from NVIDIA confirms in this YouTube video, at 12:57, that the Tensor cores are exposed to CUDA:



So let's wait and see who does it first.

Regards Carsten.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 2:16 pm
by Andrew Kolakowski
Tensor cores can be used to do new, advanced (AI-powered) debayering, if I understand it correctly, but that would be a whole new core code to be written.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 2:38 pm
by MishaEngel
Andrew Kolakowski wrote:Assimilate is going to love this as I think they do use FP16.


Then I wish them a lot of luck with 16-bit colour precision.


FP16:
[comparison image]

FP32:
[comparison image]
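As a minimal illustration of the precision concern (a numpy sketch of float16's limits; this is illustrative only and says nothing about how Resolve or Scratch actually implement their pipelines):

```python
import numpy as np

# float16 stores an 11-bit significand, so near 1.0 the spacing between
# representable values is eps = 2**-10: only ~10 bits of precision at
# the top of a normalized [0, 1] signal.
print(np.finfo(np.float16).eps)  # 0.000977 (= 2**-10)

# A small grade offset near white simply disappears in float16:
x = np.float16(1.0) + np.float16(0.0003)
print(x == np.float16(1.0))  # True: the offset is below float16 resolution
```

By contrast, float32 has a 24-bit significand (eps ~ 1.2e-7), which is why a full FP32 pipeline holds up under heavy grading.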

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 5:27 pm
by David Cherniack
A few things would be helpful to know, if anyone does at this point:

Will the stackable memory of the Quadro Turing series also be a feature of the RTX series?

Will Resolve be able to leverage the stacked memory total for its GPU processing?

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 5:30 pm
by Cary Knoop
That, I think, is one of the key questions: will Resolve support NVLink?

NVLink makes the total memory of multiple cards usable as one pool. The new RTX 2080 (Ti) supports NVLink.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 7:24 pm
by MishaEngel
Cary Knoop wrote:That, I think, is one of the key questions: will Resolve support NVLink?

NVLink makes the total memory of multiple cards usable as one pool. The new RTX 2080 (Ti) supports NVLink.



The memory bandwidth goes down from 616 GByte/s to 100 GByte/s, so that won't help much.
(Just set the memory bandwidth of your current GPU back to 100 GByte/s and you will see the results.)

The same is true for HBCC from AMD; there it just helps to render bigger models without running out of memory. Up to now, 16 GByte seems to be enough for 8K .R3D at 5:1 compression; people with only 11-12 GByte often run out of memory in Resolve. Those new Quadros have 24 and even 48 GByte for a reason.

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 7:56 pm
by David Cherniack
MishaEngel wrote:Those new Quadros have 24 and even 48 GByte for a reason.



Time to look wistfully at the piggybank....

Re: NVIDIA - RTX 2080 vs GTX 1080

PostPosted: Sat Sep 15, 2018 8:01 pm
by MishaEngel
Quadro RTX 8000 with 48GB memory: $10,000 estimated street price
Quadro RTX 6000 with 24GB memory: $6,300 ESP