
Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 3:22 pm
by rsf123
I saw this graph on a MacRumors discussion thread:

GPU.png (attached chart comparing Apple M1 GPU benchmark results with Nvidia GPUs)


https://forums.macrumors.com/threads/apple-m1-cpu-gpu-speed-is-very-disappointing.2293062/post-30019504

Since my current M1 16GB has been utterly superb for general computing but disappointing in terms of GPU performance for Fusion, I had been thinking of holding out to see what Apple's M1X and M2 might offer later this year. But if the above chart is accurate, it could be 3-4 years before Apple comes up with a GPU that can match a 3080.

My predicament is that by the second half of this year I need to start editing a project for which I'll need at least a 3070 or 3080 for Fusion, and it doesn't look like Apple Silicon's next iteration will come close to those.

Having said that, I am seeing a few posts on this BMD forum and on Reddit where people using a 3060, 3070 or 3080 are saying that even those GPUs don't offer a significant advance over less powerful GPUs, for example:

https://www.reddit.com/r/blackmagicdesign/comments/lhw6r1/what_kind_of_performance_should_i_be_expecting/h0swxyh?utm_source=share&utm_medium=web2x&context=3

https://forum.blackmagicdesign.com/viewtopic.php?f=22&t=142794#p767243

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 4:34 pm
by Uli Plank
Are you sure it's the GPU? Might rather be the limited RAM that's making Fusion slow on M1.
How much you gain from a 3070/80 depends very much on the task. Temporal processes like NR or optical flow profit a lot, stabilization much less.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 5:10 pm
by bentheanimator
GPU performance is both really important and not that important in Fusion. You need as much VRAM as you can get because Fusion is poor at GPU RAM management. If it fills up, you might as well restart Resolve and start from an empty GPU RAM cache. The speed of a GPU like an RTX 3080 is great, but it's only used for OpenGL-type work, so it's not maxing out on processing the frame most of the time. On a PC, you can use MSI Afterburner and Rainmeter to monitor what your GPU is doing in terms of VRAM and CUDA/OpenCL processing load. I run three 1080 Tis and run up against the 11GB of VRAM far more often than against processing slowdowns. It does happen, but if you do the Saver/Loader trick for caching then you tend to make something, cache it, and move on.
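For anyone who wants those same numbers without the GUI overlays, here is a minimal monitoring sketch of my own (not from the post above): it polls VRAM usage and GPU load on an Nvidia card through NVML, and assumes the nvidia-ml-py Python package and an Nvidia driver are installed.

# Minimal VRAM/load monitor for an Nvidia GPU.
# Assumes: pip install nvidia-ml-py (imported as pynvml) and an Nvidia driver.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):                     # older bindings return bytes
    name = name.decode()

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # values in bytes
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # percentages
        print(f"{name}: VRAM {mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB, "
              f"GPU load {util.gpu}%")
        time.sleep(2)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()

Watching the VRAM figure while a heavy comp plays back makes it fairly obvious whether you are memory-bound or compute-bound.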

Working in Resolve on a laptop is a very constraining situation. I constantly run up against RAM limits with 64GB of RAM, so I can only imagine trying to work on 16GB. Maybe look at building a machine with a 3090 and 128GB of RAM to do your heavy lifting?

Go to https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-Studio---NVIDIA-GeForce-RTX-3080-Ti-Performance-2158/ and check out the performance benchmarks to get a good look under the hood.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 5:10 pm
by rsf123
Uli Plank wrote:Are you sure it's the GPU? Might rather be the limited RAM that's making Fusion slow on M1.
How much you gain from a 3070/80 depends very much on the task. Temporal processes like NR or optical flow profit a lot, stabilization much less.


My main task in Fusion is animating 2D and 3D shapes to create explainer videos. Some of the animations involve hundreds of moving shapes, which is taxing on the GPU.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 8:57 pm
by RCModelReviews
Maybe Fusion isn't the best tool for this job.

Have you checked out the available packages specifically designed for animation? Or even Blender?

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 9:16 pm
by nukunukoo
First of all, the Leela LC0 is not M1-native. Next is the fact that a native M1 ML app fragment will be Neural Engine-bound, not GPU-bound, in the case with Tensor, would make the routine literally hundreds of times faster when optimised. Fortunately, more libraries, like Tensor, Blender, Python, etc., are being developed specifically for the M1.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 9:39 pm
by Hendrik Proosa
nukunukoo wrote:First of all, the Leela LC0 is not M1-native. Next is the fact that a native M1 ML app fragment will be Neural Engine-bound, not GPU-bound, in the case with Tensor, would make the routine literally hundreds of times faster when optimised. Fortunately, more libraries, like Tensor, Blender, Python, etc., are being developed specifically for the M1.

Say what now? It made no sense to me. Tensor cores are not much more than circuitry for doing matrix multiplications. They don't scale general-purpose compute much, if at all, because you just can't optimize everything into a matrix multiplication.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Fri Jun 18, 2021 10:47 pm
by Jason Conrad
I've not heard of Leela before, but it appears to be a chess engine. Is there any reason to believe it's a good metric for running Resolve?

BMD and Apple seem to have a comfortable developer relationship, with Resolve often showcased in new Apple hardware releases. But both companies are very tight-lipped about future releases, and their respective roadmaps are anyone's guess.

I'd say if you know you need beefier hardware than Apple currently offers, don't gamble on what they might release later this year, especially because it will take the software some time to catch up to any significant hardware changes.

I've been a lifelong Apple user, but the anticompetitive way they lock in both users and developers these days, and their abandonment of "Pro" users, show the same hubris that knocked Microsoft off its pedestal.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sat Jun 19, 2021 2:46 am
by Nick2021
The 3080's MSRP isn't much less than an entire M1 machine. It's unrealistic to think Apple is going to put the equivalent into a machine.

It's probably pushing it to expect even a 3060-class GPU at that price point. Go look at what you get in a $700 laptop.

The other issue is the classic hardware vs. software one. Nvidia has been tuning its drivers and so on for years. Apple can throw money at the problem, but the much smaller user base makes it harder.

Windows is a much larger market. Nvidia has most of the discrete GPU market.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sat Jun 19, 2021 4:32 am
by rsf123
bentheanimator wrote: The speed of a GPU like an RTX 3080 is great, but it's only used for OpenGL-type work...


What aspects of Fusion rely heavily on OpenGL?

Most of my tasks involve 2D shapes animated with keyframes, so I'd like to clarify whether high-end RTX GPUs would make a big difference in that area.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sat Jun 19, 2021 7:15 am
by Hendrik Proosa
rsf123 wrote:What aspects of Fusion rely heavily on OpenGL?

Don't take my word on it, but I'd say the whole 3D architecture of Fusion is built on OpenGL, not just the viewport but the renderer too. 2D transforms and related filtering might be as well, because I don't see how it would otherwise work the way it does with transform concatenation through the Merge node. My impression is that transformations actually operate on OpenGL textures that get passed around until rasterization is needed at some point.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sun Jun 20, 2021 8:17 am
by rsf123
I came across this recent video from Max Tech, dated 18 June 2021, which pointed to a mobile Nvidia RTX 3080 being slower than the M1.



The video refers to this Apple video from WWDC21

https://developers.apple.com/videos/play/wwdc2021/10153/

Regarding the argument making the rounds that 16GB on Apple Silicon is equivalent to 32GB of PC RAM: so far I had not seen anyone provide numerical data to support it. But in this video the Apple data says that the bandwidth for processing a graphic that requires 2.16GB on a PC can be achieved with 810MB on the M1. That's a ratio of about 2.66 which, applied to 16GB of RAM, would give roughly 43GB of PC RAM. I'm not saying this is a precise way of arguing, only that it's the first time I've seen actual numbers used to compare RAM usage on an M1 versus a PC.
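Spelling the arithmetic out, as a back-of-the-envelope check of my own using only the two figures above:

# Ratio implied by Apple's example figures (2.16GB on the PC vs 810MB on the M1).
pc_mb = 2160            # 2.16GB needed on the PC
m1_mb = 810             # 810MB needed on the M1
ratio = pc_mb / m1_mb
print(round(ratio, 2))        # ~2.67
print(round(16 * ratio, 1))   # ~42.7, i.e. roughly "43GB of PC RAM" for 16GB on an M1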


Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sun Jun 20, 2021 8:23 am
by Uli Plank
In laptops many high-performance GPUs get throttled due to heat and/or for battery life.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sun Jun 20, 2021 9:53 am
by Hendrik Proosa
rsf123 wrote:the Apple data says that the bandwidth for processing a graphic that requires 2.16GB on a PC can be achieved with 810MB on the M1. That's a ratio of about 2.66 which, applied to 16GB of RAM, would give roughly 43GB of PC RAM. I'm not saying this is a precise way of arguing, only that it's the first time I've seen actual numbers used to compare RAM usage on an M1 versus a PC.

Does Apple use just 3 bits for every 8 bits on a PC (810MB is 3/8 of 2.16GB)? Too bad Resolve only processes at 12-bit precision then (3/8 of 32 bits) instead of 32-bit full floats. These kinds of misleading numbers serve pretty much only a sales purpose.

Bandwidth does not really expand your 16 gigs to 43, because it is, well, bandwidth. It's like your fridge: reducing the number of trips you make to it won't magically make it fit more food; you can't put an elephant in there even if you only open the door once a week.

What Apple does is very interesting and hopefully gives a kick to other hardware manufacturers too, but their PR is always so over the top that it is hard to get a real-world grasp. The M1 was supposed to leave 5K-dollar workstations in the dust according to internet fans; that didn't happen. I suppose the M1X will leave 10K ones in the dust now, if it gets released…

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sun Jun 20, 2021 11:10 am
by Uli Plank
Well, in some aspects the M1 machines leave their own Intel-based Mac Pro in the dust, which can cost well over 10K…

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sun Jun 20, 2021 11:20 am
by Nick2021
Uli Plank wrote:In laptops many high-performance GPUs get throttled due to heat and/or for battery life.


I think a desktop 3080 calls for 300 watts of power, and that's just the GPU. Laptops can't supply that much power even when plugged in.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sun Jun 20, 2021 11:30 am
by Hendrik Proosa
Uli Plank wrote:Well, in some aspects the M1 machines leave their own Intel-based Mac Pro in the dust, which can cost well over 10K…

I don't see people jumping from the 10K Mac Pros they just bought last year to the M1 because it is faster. Those aspects tend to be niche, so if you want an actual all-around workhorse you hit the wall pretty soon. It always reminds me of a quote from "Master Zap" of Mental Ray, which went something like "of course Renderman is fast, until you actually trace a ray". Given that RenderMan is very much alive, raytracing left and right, while MentalDelay is dead, that gives an interesting perspective though… So I guess we'll see what happens when it happens.

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Sun Jun 20, 2021 2:20 pm
by Trensharo
Uli Plank wrote:Are you sure it's the GPU? Might rather be the limited RAM that's making Fusion slow on M1.
How much you gain from a 3070/80 depends very much on the task. Temporal processes like NR or optical flow profit a lot, stabilization much less.

It's definitely the GPU, at least in Resolve Studio.

Returned my M1 MBP after like... 20 hours. Really just not something I consider usable - at least for my uses.
Nick2021 wrote:
Uli Plank wrote:In laptops many high-performance GPUs get throttled due to heat and/or for battery life.


I think a desktop 3080 calls for 300 watts of power, and that's just the GPU. Laptops can't supply that much power even when plugged in.

A laptop 2060 Max-Q uses a lot less power and is still dramatically better than the M1's integrated GPU. Frankly, I think it's laughable to compare that to a discrete desktop card.

The ASUS TUF in that graph is a budget gaming laptop ($800-900 or so).
Uli Plank wrote:In laptops many high-performance GPUs get throttled due to heat and/or for battery life.

I use a G14 for editing and the thing doesn't even come close to throttling, like... ever. I can edit on that thing all day and it will never be thermally limited. It doesn't even make much noise.

Battery life is a problem on any portable form factor, unless you use mobile components... but those will not give you the same performance (specifically in the realm of GPUs).
rsf123 wrote: Having said that, I am seeing a few posts on this BMD forum and on Reddit where people using a 3060, 3070 or 3080 are saying that even those GPUs don't offer a significant advance over less powerful GPUs.

Resolve is probably the least-economical NLE someone can use. You get less value out of the same hardware using Resolve compared to something like Premiere Pro or Final Cut Pro. It utilizes hardware well, but the base requirements are too high. So, you often run into a situation where upgrades simply aren't delivering the performance gain you'd expect [for the cost invested in obtaining them]... which effectively creates a yearly upgrade cycle.

Instead of paying for Resolve upgrades, you instead pay for new GPUs to squeeze more performance out of your machine - even while other NLEs fly with the GPU you had 2 upgrades ago - effectively causing them to cost less, despite having higher initial prices and associated upgrade costs.

A lot of upstarts (or people looking to save money) are attracted to Resolve [Studio] because it's "Free" or "Cheap," but that's a very surface-level [read: naive] way of evaluating cost.

----

Unlike Intel in the CPU space, neither AMD nor Nvidia seem to be stagnating much in the GPU space. The only sucky thing is the availability due to the shortages, which has caused prices to double (or more).

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Wed Jun 23, 2021 2:19 am
by rsf123
bentheanimator wrote: if you do the Saver/Loader trick for caching then you tend to make something, cache it, and move on.


Thanks. I've just done more reading up on Loader and Saver nodes. To my delight, I've found that using them allows complex compositions that were previously grinding to a standstill during auto-render to now play and scrub very nicely.
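For anyone curious, here is roughly what that pre-render step can look like when scripted from Fusion's console in Python. This is my own sketch, not something from the posts above or the manual: the tool IDs and input names (AddTool, "Saver", "Loader", Clip) are from memory of the Fusion scripting API and the cache path is invented, so treat it as a starting point only.

# Rough sketch: pre-render a heavy branch to disk with a Saver, then read the
# frames back with a Loader so playback uses cached images instead of
# re-rendering the branch. Run from Fusion's script console; exact names may
# differ between Fusion/Resolve versions.
comp = fu.GetCurrentComp()
heavy = comp.ActiveTool                            # select the slow node first

saver = comp.AddTool("Saver", -32768, -32768)      # -32768 lets Fusion auto-place it
saver.Input = heavy.Output                         # feed the heavy branch into the Saver
saver.Clip = "C:/fusion_cache/precomp_.exr"        # invented path; writes an EXR sequence

comp.Render()                                      # render the comp's range to disk

loader = comp.AddTool("Loader", -32768, -32768)
loader.Clip = "C:/fusion_cache/precomp_0000.exr"   # point at the rendered sequence
# Finally, connect the Loader's output downstream in place of the heavy branch.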

Learning about loader/saver nodes means there is no more pressure for me to switch to a PC immediately, while 3080 cards still cost around 2.5x MSRP. I can wait another year to see what Apple comes up with in the M1X, and perhaps the Mac Pro next year.

When the 3090 gets back to MSRP, I might go over to a PC if Apple, by that stage, hasn't beefed up its GPUs to around 3070 level. (The YouTube channel Dave2D predicts that Apple's forthcoming GPU will reach roughly that level.)

Re: Apple M1 GPU could take years to catch up with 3070/3080

PostPosted: Wed Jun 23, 2021 2:59 am
by Uli Plank
Not only do loader/saver nodes help; in my experience Fusion is also more stable as a standalone app.
You can always use VFX Connect.