thanks bryan for your very useful extended list of comparisons.
it's really useful, because objective tests and comparisons of intermediate codecs and their actual implementations are hard to find. and i think i'm not the only one who was surprised by some of your findings and by the common prejudices this empirical audit falsified.
but i still have to express some reservations about this general approach.
you already mentioned the differing interpretation of video levels as one obvious source of trouble with this kind of intermediate transcoding. but that's just the tip of the iceberg. there are in fact lots of similar, well-known issues to face: lost timecode and metadata, off-by-one frame differences, audio track arrangement, etc. -- the list is quite extensive.
that's why i think we should see it as just a temporary workaround, not as a desirable or exemplary improvement to our workflows.
sure -- i also use this kind of external conversion a lot in practice, because resolve is very unreliable and hard to customize when it comes to optimized media and caching. but from a technical point of view, it seems utterly clear to me that this isn't how it should be done. it's much more desirable for applications to be able to read and optimize all kinds of ingest footage themselves, to minimize conversion-related troubles. that's also the most promising way to really optimize the data into the most efficient internal representation for later use in a given piece of software. all these other crazy complex workarounds simply can't do it any better.
but as we can't change the world and somehow have to work around resolve's current limitations, your measurements and suggestions indeed make a lot of sense in practice right now.
Bryan Worsley wrote:
John Paines wrote:
At this point, lack of support for the files doesn't appear to be unique to Resolve.
That's consoling
And the 400Mbps firmware update is still to come.
And that probably is the reason why:
https://forums.adobe.com/message/9444058#9444058
"...this gives the software companies time to program changes so the media works"
no -- i can't agree with this wishful thinking!
for most h.264 implementations it doesn't make a big difference whether the all-intra or the long-GOP variant is used. it's only the 8bit vs. 10bit difference that leads to trouble. and this is a really challenging issue, because most hardware-accelerated implementations (e.g. the quicksync features of intel CPUs) can't simply be updated for 10bit use!
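as an illustration (a sketch using ffprobe; `input.mp4` is just a placeholder filename), you can quickly check whether a clip is the hardware-friendly 8bit variant or the troublesome 10bit one by looking at the profile and pixel format:

```shell
# probe the first video stream: a "High 10" profile or a *10le pixel
# format (e.g. yuv420p10le) indicates 10bit h.264, which many
# fixed-function decoders (older intel quicksync generations included)
# cannot handle
ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,profile,pix_fmt \
  -of default=noprint_wrappers=1 input.mp4
```

an 8bit clip will typically report `pix_fmt=yuv420p`, while GH5-style 10bit footage reports something like `pix_fmt=yuv420p10le`.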
that's why i think it's more consistent to use h.265 if 10bit resolution is needed. in contrast to most common h.264 acceleration solutions, 10bit is widely supported in this h.264 successor. but the same limitations and incompatibilities will happen if 12bit h.265 suddenly becomes more popular.
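to illustrate (an ffmpeg sketch; the filenames are placeholders), producing 10bit h.265 is a one-liner these days:

```shell
# encode 10bit hevc (main10 profile) with libx265; forcing the
# yuv420p10le pixel format selects the 10bit pipeline
ffmpeg -i input.mov -c:v libx265 -pix_fmt yuv420p10le -crf 18 output.mp4
```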
10bit h.264 was always a very troublesome exception. sure, it has been used in XAVC and AVC-ultra for quite a while, but beyond these rare high-end applications it wasn't used very often and is badly supported in common software. you still have to compile x264 with different preprocessor directives and switch between the two resulting libraries if you want access to both bit depths.
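for the record, the dual-build dance looks roughly like this (a sketch; the install prefixes are placeholders -- and note that, if i remember correctly, recent x264 versions can build both depths into one binary and select the depth at encode time instead):

```shell
# classic approach: two separate builds of libx264, one per bit depth,
# switched at link/load time
./configure --prefix=/opt/x264-8bit  --bit-depth=8  && make && make install
make distclean
./configure --prefix=/opt/x264-10bit --bit-depth=10 && make && make install

# newer x264 builds reportedly allow choosing at runtime instead:
#   x264 --output-depth 10 -o out.264 in.y4m
```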
when the first GH5 footage appeared, i was quite surprised that most of my daily software was able to handle this kind of stuff -- because these kinds of .mp4 files were for a long time only used in the manga subculture, not by any serious consumer product. i really didn't expect it to work in any ordinary software, but i was wrong! in fact, it's much better supported than expected, but the mentioned limitations of most existing hardware acceleration options draw a clear frontier for efficient use.