Sean van Berlo wrote: I mean, you're critiquing the literal prologue of a video; the second part is like an hour longer and goes into much of what you're complaining about.
One thing to note is that the second part is actually mastered in HD so that he can compare a crop of the videos at one-to-one pixels without resampling. Furthermore, he actually goes into your point about upscaling, correctly noting that nowadays video will be resampled a dozen times through different phases of professional remastering and delivery (your blu-ray is in HD, DCI delivery 2k, acquisition 4k/3k/6k/4.6k/5k/etc.), and finally it is watched on a screen that might not be the right resolution for the content. Every time this happens, every pixel is completely remapped, meaning that the idea of preserving the 'original' pixels for image quality comparison is already moot, except on a theoretical level (and this guy is a working professional).
Speaking to that last part, this is also why the images are matched visually (what you're criticizing when you say 'adjusting the light through the lens'): in an actual real-life professional situation you're shooting for that t-stop that gives you a particular Circle of Confusion. It's an aesthetic choice which informs the t-stop you're shooting at, which means that you'll have to adjust the t-stop you're shooting at for the particular format you're using (I think you're probably familiar with this concept but nonetheless, here's an article explaining this concept from, you guessed it, Yedlin:
http://yedlin.net/lens_blur.html). This means that, to a professional cinematographer (at the level where it can be assumed he has the means to light for any t-stop he wants), it's not interesting to see how the cameras compare at the exact same f-stop, but how they compare when they're aesthetically matched. I understand that this is an affront to an engineer like you (at least I think you have an engineering background, correct me if wrong), but this is because you are coming from different backgrounds.
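Sean's resampling-chain point can be made concrete with a toy sketch. Here a 1-D scanline is pushed through several nearest-neighbour resizes standing in for an acquisition-to-display chain, then compared against a single direct resize; the widths and the nearest-neighbour method are illustrative assumptions, since real pipelines use filtered resampling that remaps pixels even more thoroughly.

```python
# Toy illustration of the resampling-chain argument: run a 1-D scanline
# through several nearest-neighbour resizes (standing in for the
# acquisition -> master -> delivery -> display chain) and count how many
# display pixels differ from a single direct resize. The chain widths and
# the nearest-neighbour method are illustrative assumptions only.

def nn_resize(row, new_w):
    """Nearest-neighbour resample of a 1-D row of pixel values."""
    old_w = len(row)
    return [row[int(i * old_w / new_w)] for i in range(new_w)]

scanline = list(range(6144))        # stand-in for a 6K-wide scanline
chain = scanline
for width in (4096, 2048, 3840):    # 6K -> 4K -> 2K -> UHD display
    chain = nn_resize(chain, width)

# Compare against a direct one-step resize to the display width.
direct = nn_resize(scanline, 3840)
changed = sum(1 for a, b in zip(chain, direct) if a != b)
print(f"{changed} of 3840 display pixels differ from a single resize")
```

Even this crude model shows the 'original' pixels do not survive the chain, which is the sense in which pixel-exact comparisons become moot.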
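The format-matching idea behind Yedlin's lens_blur article can be sketched numerically under a simplified thin-lens model: to hold the same field of view and comparable blur across formats, focal length and f-number both scale with the crop factor. The helper name and the numbers are hypothetical; the article itself treats the optics rigorously.

```python
# Simplified model (assumption: thin-lens approximation) of matching a
# look across sensor formats: scale both focal length and f-number by the
# crop factor to keep field of view and blur comparable.

def equivalent_setup(focal_mm, f_number, crop_factor):
    """Return the (focal length, f-number) on the smaller format that
    roughly matches the larger format's field of view and blur."""
    return focal_mm / crop_factor, f_number / crop_factor

# Example: a 50mm at f/2.8 on full frame is matched on a Super35-ish
# sensor (crop factor ~1.5) by roughly a 33mm at about f/1.9.
focal, stop = equivalent_setup(50.0, 2.8, 1.5)
print(round(focal, 1), round(stop, 2))  # 33.3 1.87
```

This is why matching the stop across cameras, rather than pinning them to the same f-number, is the aesthetically meaningful comparison Sean describes.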
Yes, yes, Sean. But it was put forward on its own, and if people think I'm long-winded, this 10 minute video is a classic example. Everything might have been done in 10 minutes instead of waffling on. When I go on, I might be explaining everything at once simply, rather than explaining nothing unsimply. I can use the link as a test anytime somebody complains. Is what he has done bad? No, not really. Testing, yes, in a metaphysical (is that the term?) way, a chosen style. Is it the greatest style? I would not have thought so. If the technical schools, studios, and periodicals that define the visual look people try to emulate for their audiences (film often looked poor before grading in the old days, but had more performance at the time) do not write like this, that tells you something.
Now, a true comparison is native resolution, native throughout (in terms of sensor noise). That would be a grade A comparison (showing what's really happening). Grade B would be to adjust scene lighting, or in post, or maybe in camera through gain control (though that might adjust other things too). Grade C would be what was done in the 10 minute video, except with the exact same scene and movement filmed simultaneously. See, that is a minimalistic, succinct way of putting it. You would show all these grades together. That is a practically minimalistic, rather than simplistic, way of metaphysically putting it. Grade A is more for charts and test shots, to see what is actually happening. Grade B is a real-world extension stress test in matching an image (if you were talking about resolution without regard to noise etc., ND filters wouldn't matter), as long as things like bit range don't choke the image. Grade C is showing how you can work the image to seem the same. Now, I'm starting to lose the threads here. But you get the direction.
Now, resizing everything to 4k or 2k diminishes your resolution comparison even further and means fuzzy pixels. Why not either keep the full resolution until you get to delivery, or swap to delivery resolution once somewhere beforehand (the lesser option)? But this is the most crucial thing about a resolution comparison: to truly show what resolution is doing by itself, you should shoot on the same camera. This means one of two things. As practical sub-resolutions are an issue, get an 8k camera and 1) shoot 8k, and downres evenly by fractions to 1/2, 1/3, 1/4 (2k) etc. with the best routine. But this will still show improvements in the fractional resolutions from starting with 8k versus a native sensor of those fractions. Or 2) resize in the camera to reproduce those fractions; but to keep the same lens and ND status you might never reach grade A, because the reframing changes the setup and available light. You should be shooting a static scene/flat picture anyway, so you can easily adjust its lighting and remove lens-setup biases as much as possible. And that is all the stuff I can come up with off the top of my head, on the spot. To say something about resolution, you compare resolution keeping other things the same; otherwise you are comparing different camera setups at the same time, diminishing the measured effect of resolution.
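The even-fraction downres in option 1 can be sketched minimally: when the scale is an exact integer fraction (8k to 2k is a clean 1/4), each output pixel is just the average of a k-by-k block, with no fractional pixel positions involved. The helper below is an illustrative assumption, not a production routine; real pipelines use filtered resampling.

```python
# Minimal sketch of an even-fraction downres: at an exact integer factor k
# (8K -> 2K is a clean 1/4), every output pixel is the plain average of a
# k x k block, so no fractional pixel remapping occurs. Illustrative only;
# a "best routine" in practice would use a proper filter.

def box_downres(img, k):
    """Downscale a 2-D list of luma values by an integer factor k
    using plain block averaging."""
    h, w = len(img), len(img[0])
    assert h % k == 0 and w % k == 0, "even fractions only"
    out = []
    for y in range(0, h, k):
        row = []
        for x in range(0, w, k):
            block = [img[y + dy][x + dx] for dy in range(k) for dx in range(k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

# A 4x4 frame downres'd by 1/2: each 2x2 block collapses to its mean.
frame = [[0, 0, 8, 8],
         [0, 0, 8, 8],
         [4, 4, 2, 2],
         [4, 4, 2, 2]]
print(box_downres(frame, 2))  # [[0.0, 8.0], [4.0, 2.0]]
```

This is also why the even fractions (1/2, 1/3, 1/4) are the clean test cases: awkward ratios force interpolation between pixel sites, which is exactly the contamination the comparison is trying to avoid.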
Part B. But it is also important to measure how shooting on a native camera compares with doing uneven fractional down- and up-resing. These things are also done in isolation. So you can say at what range percentage increases and decreases of re-resolution make the image more pleasing (aliasing, pleasing softness, more sharpness, etc.). You can say: here is the difference between a resolution-x camera and a fractional-resolution-y camera, before and after adjustments, after matching to the delivery image, and after blending through several adjustments in a post chain, as mentioned here.
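One crude way to put numbers on "at what percentage does re-resolution hurt" is a round-trip test: resize down to a target width and back, then take the mean error against the original. The toy below uses nearest-neighbour on a 1-D gradient, which is an illustrative assumption; a real evaluation would use proper filters and perceptual metrics.

```python
# Toy metric for how much a down-then-up res trip disturbs an image:
# nearest-neighbour resize to a target width and back, then mean absolute
# error per pixel against the original. Illustrative assumption only;
# real tests would use filtered resampling and perceptual measures.

def nn_resize(row, new_w):
    """Nearest-neighbour resample of a 1-D row of pixel values."""
    old_w = len(row)
    return [row[int(i * old_w / new_w)] for i in range(new_w)]

def roundtrip_error(row, new_w):
    """Mean absolute error after resizing down to new_w and back."""
    back = nn_resize(nn_resize(row, new_w), len(row))
    return sum(abs(a - b) for a, b in zip(row, back)) / len(row)

ramp = list(range(16))                 # a simple 16-pixel gradient
print(roundtrip_error(ramp, 8))        # clean 1/2 fraction -> 0.5
print(roundtrip_error(ramp, 11))       # awkward ~69% resize -> 1.1875
```

Even in this toy, the uneven fraction loses more than the clean one, which is the kind of isolated, per-percentage measurement being argued for.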
Such comparison tests should be difficult, hair-tearing experiences. Like writing a periodical article that should take a week or so, on the spot, live.