Walter, I agree that it is up to the user to make informed decisions. That is why I also suggest lossy compression where applicable.
Cary Knoop wrote: Sure, please go ahead and describe in detail the detrimental effect of using log in floats.
Let's take the scene-linear value range 0.18 to 1.0, from medium gray to maximum diffuse reflection. Encoded to Alexa LogC (using it as the example), this scene-linear range 0.18-1.0 maps to the normalized value range 0.39101-0.57063.
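For reference, those two endpoints can be reproduced with the lin-to-log leg of ARRI's published LogC v3 formula; the EI 800 constants below are my assumption (other EIs use different values):

```python
import math

# ARRI LogC v3 (EI 800) lin-to-log constants, assumed from ARRI's published formula
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537

def logc_encode(x):
    # valid above the formula's log/linear cut (~0.0106);
    # both inputs used here are well above it
    return C * math.log10(A * x + B) + D

print(round(logc_encode(0.18), 5))  # -> 0.39101
print(round(logc_encode(1.0), 5))   # -> 0.57063
```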
In half-float storage 0.39101 is encoded in bits as sign 0, exponent 01101, mantissa 1001000001
In half-float storage 0.57063 is encoded in bits as sign 0, exponent 01110, mantissa 0010010000
As we can see, the exponent changes by one step here. The mantissa for 0.39101 expresses the 10-bit value 577 (out of the range 0-1023 that 10 bits can encode) and the mantissa for 0.57063 expresses the value 144. Spanning from one to the other covers 592 code values (inclusive of 577 and 144: 577...1023 plus 0...144), which is a bit more than 9 bits' worth.
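That code-value count can be sanity-checked without decoding bit fields by hand: for positive half-floats, consecutive bit patterns are consecutive representable values, so the inclusive count is simply the difference of the bit patterns plus one. A minimal sketch using Python's binary16 `struct` support (which rounds to nearest rather than truncating, so individual mantissas can land one step off the hand-derived ones, while the count still agrees):

```python
import struct

def half_bits(x):
    # bit pattern of x rounded to the nearest half-float (IEEE 754 binary16)
    return struct.unpack('<H', struct.pack('<e', x))[0]

lo, hi = half_bits(0.39101), half_bits(0.57063)
# positive half-floats are monotonic in their bit patterns, so this counts
# every distinct half-float code in the LogC range, endpoints included
print(hi - lo + 1)  # -> 592
```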
For comparison: in 12-bit integer storage, the same LogC value range 0.39101-0.57063 becomes the code value range 1601-2336, which is 736 code values. So when storing 12-bit log data as 16-bit half float, we lose 144 code values; they are fused/rounded into adjacent values. Essentially the original 12-bit data loses about a fifth of its code values, and the precision issues worsen the higher up the value range we go, because absolute precision is halved at every exponent change (a doubled range is covered by the same 1024 discrete code values). Fortunately, log-encoded values don't cross the 1.0 threshold in floats, so the maximum loss is contained to the 0.5-1.0 range (but is unevenly distributed within it).
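The 12-bit side of the comparison can be checked the same way, assuming a full-range 0..4095 mapping and truncation to match the figures above:

```python
# 12-bit full-range quantization of the same LogC endpoint values
# (0..4095 mapping assumed, truncating to match the worked numbers)
lo12, hi12 = int(0.39101 * 4095), int(0.57063 * 4095)
print(lo12, hi12)       # -> 1601 2336
print(hi12 - lo12 + 1)  # -> 736 integer codes, vs 592 half-float codes: 144 lost
```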
But... let's look at the value range 0.8-1.0, a subsection of 0.18-1.0. This scene-linear range encodes to the Alexa LogC value range 0.54693-0.57063.
In half-float storage 0.54693 is encoded in bits as sign 0, exponent 01110, mantissa 0001100000
In half-float storage 0.57063 is encoded in bits as sign 0, exponent 01110, mantissa 0010010000
Going through the same methodology (0001100000 == 96, 0010010000 == 144), we see that this range is encoded using 49 code values. Of the 592 code values we got for storing the linear range 0.18-1.0, 49 are used for the 0.8-1.0 range. So the value range 0.18-0.8 has about 543 code values to use when we stuff log data into half float, while in 12-bit integer storage the log encoding of this range is 1601-2239, 639 code values; we lose about 15% of the codes there, roughly a quarter of a bit's worth. But what about the 0.8-1.0 range itself? In integer storage this is 2239...2336 = 98 code values (inclusive). Compared to the 49 left in float storage, we lose half of them, exactly one bit's worth. Float data's absolute precision follows a specific sawtooth pattern, and this jumpiness can cause banding and all kinds of funky effects for log data.
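The same comparison for the 0.8-1.0 subrange, as a sketch (note that `struct` rounds LogC(1.0) up to the nearest half-float, so the half-float count comes out one higher than the truncated hand count; the roughly-half conclusion is unchanged):

```python
import struct

def half_bits(x):
    # bit pattern of x rounded to the nearest half-float (IEEE 754 binary16)
    return struct.unpack('<H', struct.pack('<e', x))[0]

# distinct half-float codes across LogC(0.8)..LogC(1.0)
half_codes = half_bits(0.57063) - half_bits(0.54693) + 1
# 12-bit integer codes across the same span (full range 0..4095, truncating)
int_codes = int(0.57063 * 4095) - int(0.54693 * 4095) + 1
print(half_codes, int_codes)  # -> 50 98: only about half the codes survive
```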
Whether this is relevant is up to the user, but that is the possible detrimental effect.
EDIT: had to revise the post due to a stupid error, and threw out the more sensational wording.
