MishaEngel wrote:ECC-memory, ECC-GPU, certified drivers for SolidWorks, Creo, SiemensNX, AutoDesk, etc... For the user base of Quadro and Radeon Pro cards, the cards themselves are a small fraction of the cost (software costs are a lot higher, and so are the costs when something goes wrong because of a bit flip, hence ECC).
When you're using them in compute applications, ECC makes a lot of sense. It's pretty much a requirement for a data center, render farm, HPC cluster... but for individual workstations, it's overkill.
Apple's PR stunt with VA Tech led to a lot of discussions about this. Memory these days is quite reliable (and it was even back in the late 90's), so a single workstation with four DIMMs would average roughly one memory error a month under continuous use. For us that would mean a bad pixel or two here and there. But with 1100 workstations, each with four DIMMs, the average rises to about one memory error per HOUR, which made the cluster unusable for its intended purpose (large-scale clustered simulations).
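The scaling there is just multiplication. A quick back-of-the-envelope sketch, taking the one-error-per-month-per-workstation figure above as an assumption rather than a measured value:

```python
# Back-of-the-envelope check of how per-workstation memory error rates
# scale across a cluster. The one-error-per-month figure is the
# assumption from the discussion above, not a measured number.

HOURS_PER_MONTH = 30 * 24  # ~720 hours of continuous use

def cluster_errors_per_hour(workstations, errors_per_month_each=1.0):
    """Expected memory errors per hour across the whole cluster."""
    return workstations * errors_per_month_each / HOURS_PER_MONTH

single = cluster_errors_per_hour(1)      # one workstation
cluster = cluster_errors_per_hour(1100)  # a VA Tech-sized cluster

print(f"single workstation: {single:.4f} errors/hour")
print(f"1100-node cluster:  {cluster:.2f} errors/hour")
```

The cluster figure works out to roughly 1.5 errors per hour, so an error rate that's invisible on one box becomes a constant drip at cluster scale.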
For our sorts of use though, at the workstation level, ECC is overkill. That's probably why it carries such a significant premium: the market for it is pretty small. But if you need ECC, you can't do without it.
To be honest, I'm not even sure ECC would matter for most render farms; you'd get a bad pixel now and then on a months-long render, but we don't run simulations where that one bad pixel propagates into the next frame.
But if you can afford to build a render farm big enough for memory errors to matter, you might as well take the peace of mind ECC gives you. In any case, the systems that come with service contracts are designed around the assumption that their buyers require ECC, so they all have it.
Most of us don't need it and never will, though, because even when we're successful, we aren't running applications where it's likely to matter enough to justify the extra money.