Rakesh Malik wrote:One step at a time... first get rid of ETTR, then learn ratios (that's how I'd teach people to determine exposure when they're starting out).
ETTR and lighting ratios are two completely separate tools for two completely separate purposes: camera/absolute exposure (i.e. what your camera captures at) and relative exposure/lighting ratios (i.e. how your scene ends up looking). They can quite happily coexist, but neither replaces the need for the other, nor for knowing how you want your final scene to look. The problem, as I stated, is that folks inexperienced with one or both often conflate the two, or think ETTR replaces the need for the others, which is quite false. I'd imagine, however, that it is very difficult to decouple these different exposures after years of shooting in a film-optimal style that conflates them.
On the other hand, camera exposure, for a digital sensor, is all about capturing optimal data,
THAT is the misconception. The goal is to make a movie.
...that looks fantastic, and while most of that is down to lighting, acting, set design, etc., consider the logical extension of the argument that setting technical/camera exposure to a non-optimal value for capture quality should be not merely tolerated but actively encouraged: it amounts to telling folks not only that it doesn't matter that they're shooting in an 8-bit compressed codec, but that it is preferable that they should. (To be fair, there's plenty one can do with such a codec given sufficient lighting, exposure precisely optimized for that format and as close as possible to the final product, and, of course, skill and creative vision/direction. But will it produce the best results over a wide variety of scenes? No.) Is the IQ difference usually as substantial? Generally not on a good sensor, and, as you say, film-legacy exposure techniques usually deliver good-enough results on the best digital sensors. What I don't understand is why you are actively telling people not only that they are fine with good enough (which they may well be), but that they should avoid anything potentially better.
ETTR usually ensures that exposures don't match from shot to shot. If the skin tones don't match from shot to shot, it's jarring for example.
As stated, ETTR only determines camera exposure, not desired final exposure or lighting ratios, which are effectively independent of the ETTR-determined exposure if you ETTRed correctly so as to not clip highlights and to have the highest possible SNR for shadows. I still don't see how ETTR has any effect whatsoever on the relative exposure (lighting ratios) of your scene, other than ensuring that they are captured properly and don't clip--and thus why, in post, matching two ETTRed scenes to whatever you would obtain via a film-optimal exposure technique is any more complex than setting your raw exposure to whatever renders skin tones, say, or a grey card in the appropriate range.
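To make the "no extra work" claim concrete, here's a minimal sketch of matching two ETTRed shots in post off a grey card. The grey-card readings and the 0.18 target are hypothetical linear-light values, not from any particular camera:

```python
import math

# Hypothetical linear grey-card readings (0.0-1.0) from two ETTRed shots.
shot_a_grey = 0.36   # shot A was ETTRed one stop "hotter" than target grey
shot_b_grey = 0.22

target_grey = 0.18   # where we want middle grey to land in the grade

def exposure_offset_stops(measured, target):
    """Raw-exposure move (in stops) that lands the measured grey on target."""
    return math.log2(target / measured)

print(round(exposure_offset_stops(shot_a_grey, target_grey), 2))  # -1.0
print(round(exposure_offset_stops(shot_b_grey, target_grey), 2))
```

One raw-exposure slider move per shot, and both land on the same grey point, whatever their capture exposures were.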
I don't see how determining exposure correctly requires any more effort than ETTR.
Fair enough; if that's true for you, I'm not questioning it. But if it is really so quick and simple, then why is it any more complex to apply that same consistent technique you determine to be "correct" for the final image in post, given the above?
The only difference is that you have to have some idea about how you actually want the image to look in the end. ETTR is great if you really don't have any idea about what you're seeking to execute with the final image, which is why I never use it.
Like I keep emphasizing, ETTR does NOT obviate the need to determine your lighting design, levels and ratios/relative exposures, nor anything else a cinematographer does on set to determine the look of a scene. It just captures all that hard work in an optimal way. ETTR is a technical tool for optimal camera exposure; just as I can shoot crap on an Alexa if my shots lack purpose, direction, impact, good lighting, etc., ETTR doesn't make a poorly lit, boring scene look great (and though it does maximize your ability to make something out of what you have, it can't ever fully replace any of the above).
I learned without it when I was limited to 3-5 stops of latitude when shooting color, and I had to get the image right when I shot it since I didn't have a preview to check.
Right, you learned a camera exposure technique honed over many decades and designed to produce optimal results on a medium with quite different characteristics from the one we are discussing. Like an artist trained on canvas who eventually switches to murals, you can certainly still produce acceptable results using the old brushes you are comfortable with, and all the same principles of art, design, painting, etc. apply. However, the medium is different (canvas vs. a wall), and thus a different brush, or a different paint-application tool entirely, might be more appropriate. I certainly don't mean to criticize those setting camera exposure by some other suitable method, particularly given that they can still get excellent results thanks to their skill and familiarity with it. What I seek to understand, however, is why they choose to criticize those who do use a paint applicator designed around the strengths and limitations of the medium.
Several explanations for why one should use ETTR were based on low vs high order bits, based on the belief that high order bits are used to represent big numbers and low order bits are used to represent small ones, which isn't true. A sensor with 16-bit ADC uses 16 bits to quantize every value. The byte ordering isn't relevant... unless you're writing the code that handles the quantization or the code that handles image processing, either in camera or in post.
I'm not referring to its impact on read noise here (i.e. higher signal levels at the ADC), but rather to optimal data storage in a linear output format--it is the reason we don't record anything without a gamma curve, whether log raw or compressed 8-bit, unless higher bit depths are employed to partially mitigate the problem (i.e. 16+ bit linear raw). This has nothing to do with byte order, but rather with how channel values are distributed in linear space. If we set the clipping point in a channel at 0x10000, then a value one stop below that is 0x8000, then 0x4000, 0x2000, 0x1000, 0x800, etc. The implication is that there are only 8 possible gradations between 0x8 and 0x10, twelve stops under clipping, as opposed to 16,384 between 0x4000 and 0x8000, one stop under clipping--far coarser gradations of color and brightness in the shadows than in the highlights. Keeping the exposure toward the top of the scale therefore minimizes quantization noise and banding. This comes into play for all the still-camera raw formats, as well as Sony raw, Magic Lantern raw and linear ARRIRAW.
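The arithmetic above is easy to verify; this little sketch counts the integer code values available in each one-stop band of a 16-bit linear encoding with the clip point at 0x10000:

```python
# Count the distinct integer code values in each successive one-stop band
# below clipping for a 16-bit linear encoding (clip point at 0x10000).
clip = 0x10000
for n in range(13):             # n = stops under clipping at the band's top
    hi = clip >> n              # top of this one-stop band
    lo = clip >> (n + 1)        # bottom of this one-stop band
    print(f"{n:2d} stops under clip: {hi - lo:5d} codes ({hex(lo)}..{hex(hi)})")
```

One stop under clipping gets 16,384 codes; twelve stops under gets 8. Half of all code values sit in the topmost stop, which is exactly why a gamma or log curve (or ETTR, in linear raw) matters.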
However, since log raw distributes these bits much more evenly, this point is far less relevant there, though it likely has some small impact further up the processing chain, after ADC and before log conversion--for example, the Ursa Mini's sensor outputs two 11-bit reads per photosite, which are combined into a 22-bit input, processed and converted to the equivalent of 16-bit linear raw (where ETTR might matter a little), run through the appropriate log-like curve, and finally output as a 12-bit log DNG. And although this particular effect is unlikely to be very meaningful for log, it does not negate the benefits of ETTR in maximizing image quality/DR off the chip (photon noise, FPN, dark current noise) and at the ADC (read noise).
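For contrast, here's a toy pure-log curve (emphatically not any real camera's transfer function, just an illustration) showing how a log encoding hands each stop roughly the same share of a 12-bit output range:

```python
import math

# Toy log curve: map linear values in [2^-12, 1.0] onto 12-bit codes so
# that each stop gets an equal share of the output range. Illustrative
# only; real log curves (Log C, S-Log, etc.) differ in the details.
STOPS = 12
MAX_CODE = 4095

def to_log_code(linear):
    linear = max(linear, 2.0 ** -STOPS)   # floor at the bottom of the range
    return round((math.log2(linear) + STOPS) / STOPS * MAX_CODE)

for n in range(STOPS):                    # n = stops under clipping at band top
    hi, lo = 2.0 ** -n, 2.0 ** -(n + 1)
    print(f"{n:2d} stops under clip: codes {to_log_code(lo)}..{to_log_code(hi)}")
```

Every stop gets about 341 of the 4096 codes, shadows included, which is why 12-bit log can stand in for 16-bit linear without visible banding.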
The way to make it easy to match skin tones in post is to get them right in camera in the first place.
That doesn't exactly answer my question. Again I ask--what additional steps must be done in post, aside from setting raw exposure off skin tones, a grey card, etc., to make an ETTRed image match another one, or match one where that same technique was used to set camera exposure?
That doesn't necessarily match shots, though it will most of the time get you in the ballpark.
Indeed it doesn't guarantee a match in every respect, but I fail to see how it doesn't achieve the same effect as setting your camera exposure off the same point of reference (grey card, skin tones, etc.). There might be other issues in matching the shot, but your overall camera exposure doesn't play into them. For real-time monitoring, moreover, setting the appropriate display ISO should put you within half a stop of your final product, without affecting the optimal image data you captured by ETTRing.
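As a concrete (and purely hypothetical) illustration of the display-ISO point: on a raw-recording camera the viewing ISO is essentially metadata gain, so previewing your intended look after ETTRing is simple arithmetic:

```python
# Illustration only: after ETTRing N stops above the normal meter reading,
# preview the intended look by dialing the display/viewing ISO down N
# stops. The ISO and push figures here are invented, not from any camera.
native_iso = 800
ettr_push_stops = 1.5            # how far above the meter reading we exposed

display_iso = native_iso / 2 ** ettr_push_stops
print(round(display_iso))        # 283
```

In practice you'd pick the nearest available ISO setting, hence "within half a stop."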
If you're striving to overexpose your image, you're pushing highlights closer to clipping. That should be obvious.
Your highlights may, in most scenarios, be closer to clipping, but note I did say "ETTR properly." A correct ETTR, or even one with some headroom left as a safety margin, will never clip desired highlights; technically you aren't "striving to overexpose your image," per se, but rather exposing as high as possible such that your highlights don't clip, with whatever margin of error you're comfortable with. If, on the other hand, you just set camera exposure for skin tones, highlight clipping might or might not occur, unless you perform a full zone-system analysis (and, in reality, ETTR incorporates the most relevant aspects of the zone system: we set exposure so that the brightest desired tones in the scene fall into the brightest zone of our final image, while also keeping the darkest desired tones in mind).
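The highlight-anchored logic described above boils down to a few lines of arithmetic; the spot readings, headroom figure and safety margin here are all invented for illustration:

```python
# Sketch of the ETTR decision: given spot-meter readings (in stops relative
# to metered middle grey) and the sensor's clipping point above middle grey,
# push exposure until the brightest *desired* tone sits just under clip.
# All numbers are hypothetical.
clip_above_grey = 6.0          # stops of highlight headroom at base exposure
safety_margin = 0.3            # stops left as insurance against clipping

scene_readings = {"skin": 1.0, "sky": 3.5, "white wall": 2.5}

brightest = max(scene_readings.values())
push = clip_above_grey - safety_margin - brightest
print(f"ETTR push: +{push:.1f} stops")   # +2.2 stops
```

The lighting ratios between skin, sky and wall are untouched; only the absolute capture level moves.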
There are situations where traditional exposure isn't possible; if you simply don't have enough light to bring up the exposure to where you have it in every other shot for example, then obviously you have to do something else.
Of course, no general guideline for exposure, whether film- or sensor-optimized, will work perfectly in every edge case. That's why blindly ETTRing is no better than blindly using any other method, which I don't think either of us is suggesting.
It's the opposite of getting the exposure + ratios correct
ETTR, strictly speaking, *is* the technically correct camera exposure for a typical (non-extreme-DR) scene. Shooting raw, what you call your grey point can be rather arbitrary; your white point, however, is not. And, as explained multiple times, ETTR has nothing to do with getting your ratios correct and is not a replacement for doing so; it only allows you to capture them optimally.
[It] doesn't save any work in production, and adds work in post.
Only the amount of time/effort it takes to ETTR, which is very little indeed, especially with an aid or two (or with a press of the iris button with an EF lens on BM cams, though I'd prefer to do it manually), and assuming you are as fast at setting exposure in camera with a meter as you are in post. Like I say, it may not always be "worth" the small amount of extra time, but if you are putting so much into the rest of your production, it certainly can't hurt to spend a second capturing it optimally.
provided that you have a vision for the project at hand, but most "cinematographers" these days don't, so they rely on ETTR because they don't know what look they're going for.
This is, of course, a major problem, but it has nothing to do with ETTR. It's a product of people assuming that gear or accurate exposure settings will make up for a lack of vision, purpose, or the ability to tell a compelling story in visual imagery--something I run into all the time with the photographers at the magazine where I'm head visual media editor. In particular, we had problems with two of our new shooters last issue: one delivered shots with a too-green WB indoors (in the delivered JPEGs; he shot raw originally) that nonetheless exuded an effective, purposeful visual story; the other delivered technically competent material that lacked a clear purpose in most of her shots. Guess who I kept on?