rsf123 wrote: the chatter on social media is that these chips approach the GPU power of an RTX 3070 -- but Apple's graph, shown in the presentation, is for mobile versions of those GPUs, not the desktop versions.
Need to be careful with these generalizations. Is the "chatter" evaluating the GPU on the basis of 3D rendering (as in Blender), 3D gaming, bitcoin mining, or video processing (as would be the case for Resolve/Fusion)?
These workloads have very different requirements, and which GPU performs better may vary completely depending on the task.
For example, mining cryptocurrency generally launches work on the GPU and then requires relatively little communication back to the CPU. In that case the raw compute performance of the GPU is the deciding factor: having lots of GPU memory probably won't matter much, and the speed of transferring data between the GPU and the CPU is largely a moot point. I would expect the RTX chips to have the advantage for this purpose.
For video processing, each frame may be completely different, so processing the video efficiently means repeatedly transferring large amounts of data between the CPU and the GPU, making the data-transfer pipeline between the two critical to performance. I would expect Apple's unified memory, with its relatively large pool shared between CPU and GPU, to give it a massive boost for this task, and the M1 Max in particular will likely outperform the (mobile) RTX by a handy margin for this purpose.
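To put a rough number on why the transfer pipeline matters, here's a back-of-envelope sketch. All the figures in it are illustrative assumptions (16-bit RGBA intermediates, ~12 GB/s of realistic PCIe 3.0 x16 throughput), not measurements of any actual card:

```python
# Back-of-envelope cost of shuttling video frames between CPU and a
# discrete GPU over PCIe. All figures are rough assumptions for
# illustration, not benchmarks of any specific hardware.

FRAME_W, FRAME_H = 3840, 2160      # one 4K UHD frame
BYTES_PER_PIXEL = 8                # assume a 16-bit RGBA intermediate format
PCIE_BYTES_PER_S = 12e9            # assume ~12 GB/s realistic PCIe 3.0 x16
FRAME_BUDGET_MS = 1000 / 60        # per-frame time budget at 60 fps

frame_bytes = FRAME_W * FRAME_H * BYTES_PER_PIXEL
# Each processed frame crosses the bus twice: CPU -> GPU, then GPU -> CPU.
transfer_ms = 2 * frame_bytes / PCIE_BYTES_PER_S * 1000

print(f"frame size:    {frame_bytes / 1e6:.1f} MB")
print(f"transfer time: {transfer_ms:.1f} ms of a {FRAME_BUDGET_MS:.1f} ms budget")
```

Under those assumptions the copies alone eat roughly two thirds of the 60 fps frame budget before any processing happens, which is the cost a unified-memory design largely avoids: CPU and GPU read the same frame buffer in place instead of copying it across a bus.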
3D rendering / gaming is likely to fall somewhere in between, and which one has the advantage will depend largely on the nature of the scene being rendered.
Of course, we don't have all the details yet, so this is largely speculation until we get some real-world data in, but this would be my guess based on what little we know so far.