Hi.
I think the interesting part in the above link is page 4 called 'Turing Improves Performance in Today’s Games'
https://www.tomshardware.com/reviews/nv ... 801-4.html
Here it is explained that Turing supports simultaneous execution of FP32 arithmetic instructions and INT32 operations. 'Nvidia claims that the effect of its redesigned math pipelines and memory architecture is a 50% performance uplift per CUDA core.' There is also a 27% higher data rate/peak bandwidth from the move from GDDR5X to GDDR6 memory. But I wonder how big an improvement we will see in Resolve.
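As a back-of-the-envelope check on that 27% figure, here is a minimal sketch; the 11 Gbps / 14 Gbps per-pin data rates and the 352-bit bus width are my assumptions based on the published GTX 1080 Ti and RTX 2080 Ti specs, not numbers taken from the article text:

```python
# Rough peak-bandwidth arithmetic behind the ~27% figure.
# Assumed specs (not from the article text): GTX 1080 Ti uses 11 Gbps GDDR5X,
# RTX 2080 Ti uses 14 Gbps GDDR6, both on a 352-bit memory bus.

def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s = per-pin data rate * bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

gddr5x = peak_bandwidth_gbs(11, 352)   # ~484 GB/s (GTX 1080 Ti)
gddr6 = peak_bandwidth_gbs(14, 352)    # ~616 GB/s (RTX 2080 Ti)

print(f"GDDR5X: {gddr5x:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s, "
      f"uplift: {(gddr6 / gddr5x - 1) * 100:.0f}%")   # -> ~27%
```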
And Page 11 'Display Outputs and the Video Controller'
https://www.tomshardware.com/reviews/nv ... 01-11.html
'Video Acceleration: Encode and Decode Improvements.'
Here Nvidia claims some quality improvements for the hardware encoding/decoding in the new Turing chips compared to the previous Pascal chips, and HEVC 4:4:4 10/12-bit HDR is listed as a new feature for Turing.
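Outside of Resolve, a quick way to exercise the hardware HEVC encoder is ffmpeg's hevc_nvenc path. The sketch below is only illustrative: it assumes an ffmpeg build with NVENC support and a hypothetical "input.mov" source, and it tests the plain 10-bit HEVC encode path rather than the new 4:4:4 decode feature mentioned above.

```python
# Hypothetical smoke test: run a 10-bit HEVC encode through NVENC via ffmpeg.
# Requires an ffmpeg build with NVENC enabled; file names are placeholders.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mov",
    "-c:v", "hevc_nvenc",     # hardware HEVC encoder on Pascal/Turing
    "-profile:v", "main10",   # 10-bit HEVC profile
    "-pix_fmt", "p010le",     # 10-bit pixel format accepted by NVENC
    "out_hevc10.mp4",
]
subprocess.run(cmd, check=True)
```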
And then a new link called 'The NVIDIA Turing GPU Architecture Deep Dive' from:
https://www.anandtech.com/show/13282/nv ... -deep-dive
Here I find some interesting things on the page 'Feeding the Beast (2018): GDDR6 & Memory Compression':
https://www.anandtech.com/show/13282/nv ... eep-dive/8
First there is an explanation of the increased memory bandwidth of Turing.
Then the second half, 'Turing: Memory Compression Iterated', explains how the memory compression works by looking at the differences between neighboring pixels – their deltas. But again I wonder how big an improvement we will see in Resolve.
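To make the delta idea concrete, here is a tiny sketch of delta coding along one row of pixel values. It is only meant to show why neighboring pixels with small differences compress well; it is not how Nvidia's actual hardware compressor works.

```python
# Toy delta coding of one row of 8-bit pixel values.
# Small deltas between neighbors need fewer bits than the raw values,
# which is the basic idea behind the delta color compression described above.

def delta_encode(row):
    """Store the first pixel, then only the difference to the previous pixel."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

def delta_decode(deltas):
    """Undo delta_encode with a running sum."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [118, 119, 119, 121, 124, 124, 125, 127]   # smooth gradient, typical of images
deltas = delta_encode(row)                        # [118, 1, 0, 2, 3, 0, 1, 2]
assert delta_decode(deltas) == row

# The deltas fit in ~2 bits each instead of 8, so far fewer bytes
# have to move across the memory bus for this block.
print(deltas)
```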
I really hope we will soon see some Resolve benchmarks for the Nvidia RTX 2080 Ti.
Regards Carsten.