Hopefully good CUDA improvement for NVIDIA RTX 2080/Ti.

Get answers to your questions about color grading, editing and finishing with DaVinci Resolve.

Carsten Sellberg

  • Posts: 1471
  • Joined: Fri Jun 16, 2017 9:13 am

Hopefully good CUDA improvement for NVIDIA RTX 2080/Ti.

Posted: Sat Sep 22, 2018 7:40 am

Hi.

Looking at the expected CUDA compute performance for the NVIDIA RTX 2080/Ti, the numbers look good when running existing CUDA programs on one of the new RTX 2080/Ti graphics cards.
I expect it will take up to a week before we get real numbers for Windows, but in the meantime we can look at Linux:

'NVIDIA GeForce RTX 2080 Ti Shows Very Strong Compute Performance Potential' in Linux:

https://www.phoronix.com/scan.php?page= ... pute&num=1

A remark from the last page: 'The RTX 2080 Ti was consuming about 20 Watts more on average than the GTX 1080 Ti'


I can't find any CUDA compute benchmarks for the RTX 2080, so I have to use OpenCL benchmarks instead:

GeForce RTX 2080 276465

GeForce GTX 1080 Ti 214130

From https://browser.geekbench.com/opencl-benchmarks

That is a 29% improvement. I hope for the same improvement for CUDA calculations.
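For reference, the 29% figure follows directly from the two Geekbench scores quoted above; a quick sketch of the arithmetic:

```python
# Geekbench OpenCL scores quoted above
rtx_2080_score = 276465
gtx_1080_ti_score = 214130

# Relative improvement of the RTX 2080 over the GTX 1080 Ti
improvement_pct = (rtx_2080_score / gtx_1080_ti_score - 1) * 100
print(f"{improvement_pct:.1f}%")  # → 29.1%
```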

Regards Carsten.
URSA Mini 4.6K

MishaEngel

  • Posts: 1432
  • Joined: Wed Aug 29, 2018 12:18 am
  • Real Name: Misha Engel

Re: Hopefully good CUDA improvement for NVIDIA RTX 2080/Ti.

Posted: Sat Sep 22, 2018 1:20 pm

When software is optimized for both CUDA and OpenCL, it shouldn't make a difference; OpenCL is just often seen as more complex to program.

Nothing has changed: it's still clock speed in GHz × cores × 2 = GFLOPS (fp32). Example: the RTX 2080 Ti at base clock, 1.35 GHz × 4352 cores × 2 = 11,750 GFLOPS, or 11.75 TFLOPS (fp32). The biggest advantage of the RTX 2080 Ti over the GTX 1080 Ti is the memory bandwidth.
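The rule of thumb above (2 fp32 operations per core per clock, i.e. one fused multiply-add) can be written out as a small sketch. The RTX 2080 Ti figures are the ones quoted in the post; the GTX 1080 Ti line uses its commonly listed base clock of 1.48 GHz and 3584 cores, added here only for comparison:

```python
def fp32_tflops(clock_ghz: float, cuda_cores: int) -> float:
    """Peak fp32 throughput: clock (GHz) x cores x 2 ops/clock (FMA)."""
    return clock_ghz * cuda_cores * 2 / 1000  # GFLOPS -> TFLOPS

# RTX 2080 Ti at base clock (figures from the post) ≈ 11.75 TFLOPS
print(round(fp32_tflops(1.35, 4352), 2))

# GTX 1080 Ti at base clock (assumed specs, for comparison) ≈ 10.6 TFLOPS
print(round(fp32_tflops(1.48, 3584), 2))
```

Note that both cards reach higher boost clocks in practice, so these base-clock numbers are the conservative floor, not the typical throughput.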
