ilyamarcus wrote: That is more attributable to coarse quantization error?
What do I mean by this?
We have a sampled system in play as a practical technical solution for media transmission. "Digital" does not mean "perfect" or "bullet-proof", or even "accurate."
What it is, is more immune to corruption. But once a numeric value has been determined to represent the characteristics of a pixel, it also acquires a kind of inertia, especially with respect to its neighbours. Quantizing is a bit of a black art, and there are several strategies for determining a pixel's defining values. If a scene is organically random, the value of the "next" pixel should be treated as potentially random too. Fully accurate processing would take prohibitively long at the high-bandwidth display speeds we currently support. So there are chroma-subsampling algorithms, among others, that use "best-guess" estimation and error-checking to get a "better" result, faster. Not necessarily "best." It's an engineering assessment.
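To make the "best-guess" trade-off concrete, here is a minimal sketch of 4:2:0-style chroma subsampling in NumPy. The function names and the toy chroma plane are mine, and this is only the idea (average each 2x2 block, then repeat samples on reconstruction), not any particular codec's actual filter chain:

```python
import numpy as np

def subsample_chroma_420(chroma: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of a chroma plane (H and W must be even)."""
    h, w = chroma.shape
    blocks = chroma.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

def upsample_chroma_420(chroma_sub: np.ndarray) -> np.ndarray:
    """Nearest-neighbour reconstruction: repeat each sample over a 2x2 block."""
    return np.repeat(np.repeat(chroma_sub, 2, axis=0), 2, axis=1)

# A made-up 4x4 chroma plane: flat blocks on top, fine detail on the bottom.
cb = np.array([[100., 100., 200., 200.],
               [100., 100., 200., 200.],
               [ 50., 150.,  60., 140.],
               [ 50., 150.,  60., 140.]])

sub = subsample_chroma_420(cb)    # 2x2 plane of block averages
recon = upsample_chroma_420(sub)  # back to 4x4, detail averaged away
err = np.abs(cb - recon).max()
print(sub)
print(err)  # flat regions survive intact; the detailed rows blur toward their mean
```

The flat blocks come back exactly, while the alternating bottom rows collapse to their average: that is the "better, faster, not best" bargain in one picture.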
Consider the rungs on a ladder: a nice, predictable distance between them. Let's say there are 8 steps at a standard riser height of about 11-12 inches, an agreed-upon step distance that humans feel comfortable with. That yields roughly an 8-foot ladder. It might not get you up on the roof, which might be 16 feet... I don't know, maybe you have a deck? In video, this is like the jump from HD to UHD. We don't just want to stretch the ladder with its existing 8 rungs... the riser height would now be 2 feet... we obviously need more steps (extension ladders, etc., but this is now a 16-step unit) to keep a smooth progression between levels.
As far as bit rate goes, Long-GOP efficiency is only as good as the image complexity allows. The gross effects of "rolling shutter", with its warped objects, are a related side-effect of not taking the whole image into account at any one instant. None of these strategies can work without dealing with spatial and temporal factors, and their relationship, simultaneously. Getting good compression is like working a speed/distance trade-off with a fixed energy supply, where fuel consumption (which determines range) depends on velocity... in that case the math is not linear; it follows a power law... twice as fast needs 4x as much fuel, but only halves the time... that sort of thing.
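The fuel analogy, worked through under one simplifying assumption of mine: aerodynamic drag dominates, so drag force scales with v squared and fuel burned per unit distance scales the same way. Real fuel curves have other terms, but the power-law shape is the point:

```python
def fuel_for_trip(distance: float, speed: float, k: float = 1.0) -> float:
    """Fuel ~ drag force * distance, with drag modelled as k * v^2."""
    return k * speed ** 2 * distance

def trip_time(distance: float, speed: float) -> float:
    """Time to cover the distance at constant speed."""
    return distance / speed

d = 100.0
f1, t1 = fuel_for_trip(d, 50.0), trip_time(d, 50.0)
f2, t2 = fuel_for_trip(d, 100.0), trip_time(d, 100.0)
print(f2 / f1)  # 4.0: twice the speed costs four times the fuel
print(t2 / t1)  # 0.5: but only halves the trip time
```

Same shape as compression: pushing harder on one axis (speed, or bit-rate savings) costs super-linearly on the other (fuel, or processing effort and artifacts).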
jPo, CSI