
Why not CUDA instead of OpenCL?

PostPosted: Mon Aug 21, 2017 3:01 pm
by Caryl Deyn
It looks like OpenCL is slower than simply turning the option off, and when I check, it seems that when it is on it only uses one of my two 1080 Tis, not both. Why not use CUDA like Resolve does and allow the use of two cards?

Re: Why not CUDA instead of OpenCL?

PostPosted: Mon Aug 21, 2017 7:31 pm
by michael vorberg
OpenCL is hardware-independent and works on any CPU and GPU, while CUDA is NVIDIA GPU only.

This would blacklist a lot of users from benefiting from the acceleration (looking at any recent Mac here).
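
To make the hardware-independence point concrete, here is a minimal sketch in plain C (assuming any OpenCL 1.1+ SDK; on macOS the header is <OpenCL/opencl.h> rather than <CL/cl.h>) that simply lists every OpenCL device on the machine. CPUs and GPUs from any vendor show up in the same list, which is exactly why it's the portable choice:

/* list every OpenCL platform and device on the machine
   (error handling omitted for brevity) */
#include <stdio.h>
#include <CL/cl.h>   /* <OpenCL/opencl.h> on macOS */

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}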

Re: Why not CUDA instead of OpenCL?

PostPosted: Tue Aug 22, 2017 1:48 pm
by Aurore de Blois
michael vorberg wrote:OpenCL is hardware-independent and works on any CPU and GPU, while CUDA is NVIDIA GPU only.

This would blacklist a lot of users from benefiting from the acceleration (looking at any recent Mac here).


Very true; however, for those who do have NVIDIA cards, it would be awesome to have this as a settings option so we could take advantage of the hardware, because as it is now, that blacklist is working in reverse.

My laptop came with dual 980s in SLI mode, and the only reason my desktop doesn't have two Titans isn't the cost but the fact that Fusion won't use two cards.

It's been a big wishlist item of mine for a long time now :D
Au

Re: Why not CUDA instead of OpenCL?

PostPosted: Tue Aug 22, 2017 3:01 pm
by Chad Capeland
That's not an OpenCL issue, though. OpenCL has supported multiple devices since version 1.1, I think. CUDA won't fix that.
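
Just to illustrate what "multiple devices" means at the API level, here is a rough sketch (not Fusion's code, just plain C against the OpenCL 1.x API, with error handling omitted): one context spanning every GPU on the first platform, with an independent command queue per card, so work can be enqueued to all of them concurrently:

#include <CL/cl.h>   /* <OpenCL/opencl.h> on macOS */

int main(void)
{
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_device_id gpus[8];
    cl_uint num_gpus = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, gpus, &num_gpus);

    /* one context shared by every GPU... */
    cl_context ctx = clCreateContext(NULL, num_gpus, gpus, NULL, NULL, NULL);

    /* ...and one queue per GPU, so each card can be fed independently */
    cl_command_queue queues[8];
    for (cl_uint i = 0; i < num_gpus; ++i)
        queues[i] = clCreateCommandQueue(ctx, gpus[i], 0, NULL);

    /* buffers and kernels would be created and enqueued here */

    for (cl_uint i = 0; i < num_gpus; ++i)
        clReleaseCommandQueue(queues[i]);
    clReleaseContext(ctx);
    return 0;
}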

Re: Why not CUDA instead of OpenCL?

PostPosted: Wed Aug 23, 2017 9:02 am
by Sam Steti
Chad, I think you know perfectly well what she's talking about, don't you? ;)
It's not about Fusion being unable to do parallel computing; it's about the performance of OpenCL vs CUDA on the same cards (a topic we've already discussed).

Re: Why not CUDA instead of OpenCL?

PostPosted: Wed Aug 23, 2017 1:30 pm
by Chad Capeland
Yeah, I know what Aurore's talking about because she and I have had this conversation before. The major limiting factor isn't OpenCL vs CUDA, which is a ~10% difference; it's the fact that GPUs 1-5 don't get used at all, which is a ~500% difference.

Re: Why not CUDA instead of OpenCL?

PostPosted: Thu Aug 24, 2017 3:00 pm
by Sam Steti
Chad Capeland wrote:Yeah, I know what Aurore's talking about because she and I have had this conversation before.
So did we; and I agreed that rewriting the whole software from scratch for CUDA support is not a single button to tick, but I didn't agree about the rest ;)
Chad Capeland wrote:The major limiting factor isn't OpenCL vs CUDA, which is a ~10% difference; it's the fact that GPUs 1-5 don't get used at all, which is a ~500% difference.

1/ The 10% difference is your point of view; I've often seen more than 20% in well-done tests;
2/ I'm no CUDA fanboy, and if I were, I'd probably choose what's more open and widely adopted, not something proprietary and closed. But as a pro, the time for religious wars like ATI/NVIDIA, PC/Mac and old stuff like that is over, and performance is the first criterion. So, sorry, but I notice that CUDA is often better under the same conditions;
3/ Rony actually said a second GPU could be dedicated to specific tools, but so far I don't know how to do it.

Re: Why not CUDA instead of OpenCL? feature request!

PostPosted: Tue Dec 05, 2017 1:49 pm
by Aurore de Blois
With graphics card power doubling every year, that means power galore. Fusion has had plenty of GPU access for many years now, but limited to a single card. Since v5 added the 3D environment, everything I have done in Fusion has been about testing its limits with 3D scenes.

Who wants to see dual graphics card capability in Fusion?
Even if it only applied within the 3D environment, that would be a major advancement in power.

In my spare time I have picked this experimentation back up with a massive 3D scene, and I would give anything to upgrade my GPUs to two of the latest and greatest to improve performance. I cannot be alone in this :P

Re: Why not CUDA instead of OpenCL?

PostPosted: Tue Dec 05, 2017 8:20 pm
by Chad Capeland
Accelerating one viewport with two cards under OpenGL isn't trivial. Two viewports on two cards isn't super useful, but it's more plausible.

But the real bottleneck for many users is the 3Rn (Renderer3D), and that's a lot more straightforward. First, loads of comps will have multiple 3Rns anyway. And those 3Rns may have multiple shader replacement passes. And some of those passes will have motion blur or depth of field. So right there you have multiple opportunities for parallelism.

Something a lot of users don't know, though, is that the renders are tile-based. I don't know what the default is, but you can make the tiles really tiny, like 32x32. There are two benefits. One, for high-resolution renders you don't need a buffer larger than your GPU supports. Two, you allow a render task to finish before the watchdog thinks your software has frozen. If those buckets were distributed over multiple GPUs, you could get amazing speedups, equivalent to rendering at 1/n resolution, even with all of the worst-case optimizations listed above missing.
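
Purely to illustrate why per-tile rendering makes multi-GPU scheduling easy (a toy sketch in plain C, not anything Fusion actually does): split the frame into 32x32 buckets and hand them out round-robin, and each of n cards ends up with roughly 1/n of the work:

/* toy bucket scheduler: assign 32x32 tiles of a frame to GPUs round-robin */
#include <stdio.h>

#define TILE 32

int main(void)
{
    const int width = 1920, height = 1080;
    const int num_gpus = 2;                    /* hypothetical card count */

    const int tiles_x = (width  + TILE - 1) / TILE;
    const int tiles_y = (height + TILE - 1) / TILE;
    int tile_index = 0;

    for (int ty = 0; ty < tiles_y; ++ty) {
        for (int tx = 0; tx < tiles_x; ++tx) {
            int gpu = tile_index % num_gpus;   /* round-robin assignment */
            /* a real renderer would enqueue this tile's kernel on that GPU's queue */
            printf("tile (%d,%d) -> GPU %d\n", tx, ty, gpu);
            ++tile_index;
        }
    }
    return 0;
}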

Re: Why not CUDA instead of OpenCL?

PostPosted: Wed Dec 06, 2017 2:34 pm
by Rico Hofmann
I use Fusion because it supports OpenCL and every GPU. No CUDA, please!