Andrew Kolakowski wrote: Solid color? You need a ramp, not a solid color, to test banding.
Yes, it is relative. I mentioned a solid color because the output is relative to the input. Quality correlates with bit depth, but it depends on the context. It's like when people claim you need the biggest color space with the widest gamut available or you'll destroy the image. Not if you are working in black and white.
Banding that is already in the source footage won't magically go away just because the caching codec is 10-bit, but it might help avoid making it visibly worse. On the other hand, if the footage is not heavily compressed to start with and there are no scenes with demanding gradations, 8-bit for a temporary cache should be no problem at all.
Banding is not an ever-present or always-visible phenomenon; it appears when the available number of shades is not enough to represent an image that requires smooth gradients. Noise, film grain, or dither will of course hide the problem in many cases as well. And there needs to be a correct processing pipeline in place, or the problem will show up at the weakest link in the chain, including when the original footage already exhibits banding.
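To make that concrete, here's a minimal numpy sketch (my own toy illustration, nothing to do with what Resolve does internally) that quantizes a smooth ramp to 8-bit and 10-bit, counts how many distinct code values are left, and shows how a bit of dither trades the steps for noise:

Code:

import numpy as np

width = 1920
ramp = np.linspace(0.0, 1.0, width)   # smooth 0..1 gradient across the frame

def quantize(signal, bits, dither=False):
    # Quantize a 0..1 float signal to an integer bit depth.
    levels = (1 << bits) - 1              # 255 for 8-bit, 1023 for 10-bit
    if dither:
        # Add sub-LSB noise before rounding: trades visible steps for fine noise.
        signal = signal + np.random.uniform(-0.5, 0.5, signal.shape) / levels
    return np.clip(np.round(signal * levels), 0, levels).astype(np.int32)

for bits in (8, 10):
    q = quantize(ramp, bits)
    print(bits, "bit ramp uses", np.unique(q).size, "distinct code values")
# Fewer distinct values across the same gradient = wider, more visible bands.

Fewer code values over the same gradient means wider steps, which is all banding is; that's also why a ramp shows it and a solid color never will.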
"Internally to DaVinci Resolve, all image data is processed as full range, uncompressed, 32-bit floating point data. What this means is that each clip in the Media Pool, whatever its original bit-depth or data range, is scaled into full-range 32-bit data. How each clip is scaled depends on its Levels setting in the Clip Attributes window, available from the Media Pool contextual menu.
By converting all clips to uncompressed, full-range, 32-bit floating point data, Resolve guarantees the highest quality image processing that’s possible. As always, the quality of your output is dependent on the quality of the source media you’re using, but you can be sure that Resolve is preserving all the data that was present in the original media.
All content in the DaVinci Resolve Fusion page is processed using 32-bit floating-point bit depth, regardless of the content’s actual bit depth. "
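For anyone wondering what "scaled into full-range 32-bit data" means in practice, here's a rough sketch under my own assumptions (standard 10-bit video levels of 64-940; this is just the idea, not how Resolve actually implements it):

Code:

import numpy as np

def to_full_range_float(codes, bits=10, levels="video"):
    # Map integer code values into full-range float: 0.0 = black, 1.0 = white.
    codes = np.asarray(codes, dtype=np.float32)
    if levels == "video":            # legal/video range, e.g. 64-940 in 10-bit
        black = 16 << (bits - 8)     # 64 for 10-bit
        white = 235 << (bits - 8)    # 940 for 10-bit
    else:                            # full/data range, e.g. 0-1023 in 10-bit
        black = 0
        white = (1 << bits) - 1
    # No clipping here, so values above 'white' survive as floats > 1.0.
    return (codes - black) / (white - black)

print(to_full_range_float([64, 940, 1019], bits=10, levels="video"))
# -> [0.0, 1.0, ~1.09]; the super-white code 1019 is kept as an out-of-range float

The point is that the float working space itself doesn't throw anything away; what the source had, or lacked, is what you carry forward.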
If the data being processed is not already showing signs of banding and you are not introducing it through the processing itself, it should be fine, especially if the footage is not compromised to begin with.
Caching is what some programs call rendering, or rather temporary rendering, to speed up playback. As I asked the OP: what is the need for caching? Just playback performance, or something else? These things should be considered before choosing a particular workflow, and there are quite a few variables.
Playback performance can be improved in a variety of ways and for a variety of use cases. Usually, if you need quality you don't need real-time playback, and if you need real-time playback you don't need full quality; if you need both, there are options that can be used at the expense of processing time and file size.
While smaller formats take up less room on the scratch disk, there are two good reasons to use higher-quality formats for cached media, and neither of them is banding:
Preventing clipping: the format you choose determines whether out-of-bounds image data is preserved when the signal is optimized. If you find that image data (typically super-white levels) is clipped after optimization, you should switch to 16-bit float, ProRes 4444, or ProRes 4444 XQ; any of these three codecs is an appropriate optimized format for HDR grading (see the sketch after these two points).
Preserving alpha channels: the format you choose determines whether alpha channels are preserved, if they're present in the clips being optimized. Currently, the Uncompressed 10-bit, Uncompressed 16-bit Float, ProRes 4444, ProRes 4444 XQ, and DNxHR 444 formats preserve alpha channels.
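Here's a quick toy example of the clipping point (made-up numbers, full-range mapping, no alpha; just to show why a float cache keeps the super-whites that an integer cache throws away):

Code:

import numpy as np

pixels = np.array([0.25, 1.0, 1.6], dtype=np.float32)   # 1.6 = a super-white

# Cache to 10-bit integer, full range: anything above 1.0 gets clamped.
ten_bit = np.clip(np.round(pixels * 1023), 0, 1023).astype(np.uint16)
restored_int = ten_bit.astype(np.float32) / 1023

# Cache to 16-bit float: the overshoot survives (within half-float precision).
restored_float = pixels.astype(np.float16).astype(np.float32)

print(restored_int)    # [~0.25, 1.0, 1.0] -> super-white clipped
print(restored_float)  # [0.25, 1.0, ~1.6] -> super-white preserved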
If there is no need to worry about clipping and no need to preserve alpha channels, why use formats that require a lot of storage space for what is only temporary rendering, aka caching? If there is no need for maximum image quality at full speed, why render before the final render at delivery? Caching can be turned on and off at any point, and there are different caching mechanisms; whichever one you choose, the nature of it is that the result can only be displayed once it has been cached and no new changes have been made. Most of it is temporary anyway, so no real harm is done to the files themselves. Even if you work with banding in the cache, it doesn't mean the final file will also have it.
Most of the color grading process is not motion-heavy; you work on shorter, fairly static clips, so playback performance is not as important as it is for editing, while for editors image quality matters less and real-time playback matters more. It's fairly rare that both are needed at the same time before the final render. But there are times when you might be in an HDR workflow or have alpha channels with transparency, where some of the higher-end formats make more sense. If it's just for playback, I would rather use a lower-resolution timeline or change the timeline playback settings, so I don't have to render duplicates or wait on rendering, while most transitions and other effects can be cached separately.
The only other X factor is retiming, especially with optical flow, where caching can be useful. But even there, what really matters is not banding but interpolation artifacts and ghosting, which will show up in both 8-bit and 10-bit.
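As a simplified stand-in for retime interpolation (real optical flow is far more involved than this), a naive two-frame blend already shows where the ghosting comes from, and note it has nothing to do with bit depth:

Code:

import numpy as np

h, w = 4, 8
frame_a = np.zeros((h, w), dtype=np.float32)
frame_b = np.zeros((h, w), dtype=np.float32)
frame_a[:, 1] = 1.0   # bright object at column 1 in the first frame
frame_b[:, 5] = 1.0   # the object has moved to column 5 in the next frame

t = 0.5                                    # halfway retime position
blended = (1 - t) * frame_a + t * frame_b  # naive interpolation
print(blended[0])
# -> two half-strength copies at columns 1 and 5: the classic ghost,
#    and it would look the same stored as 8-bit or 10-bit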
Delivery of the final footage from the Deliver page is a different matter, of course. That too depends on the target platform and what other processing might be done.