Ellory Yu wrote:With all due respect, if this was your point, I agree there are advantages, especially then. There's a point of diminishing returns between what the technology is capable of and what the human eye can see. I'd rather err on the side of the human eye first.
Even though there are diminishing returns on the spatial resolution the human eye can see, that only applies to the end product. There are a whole bunch of advantages to shooting at a higher resolution than the end product, and they go beyond spatial resolution.
For example, noise is something that would be recognized in the end product. A 12K source will have much finer noise, and that noise is much more likely to sit at a higher frequency than the highest-frequency parts of the image. So applying noise reduction to the 12K source before downsampling to 4K will result in a cleaner image than grabbing a noise profile from the 4K source.
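To make that concrete, here's a toy NumPy sketch (a flat synthetic frame with a made-up noise level, not real camera data) of why averaging pixels down 3:1 — the same ratio as 12K to 4K — shrinks random noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a flat gray frame with random per-pixel sensor noise.
high_res = 0.5 + rng.normal(0.0, 0.05, size=(1200, 1200))

# Downsample 3:1 by averaging 3x3 blocks (12K -> 4K is the same ratio).
low_res = high_res.reshape(400, 3, 400, 3).mean(axis=(1, 3))

# Averaging 9 independent noise samples cuts the noise std by about 3x,
# which is why denoising or downscaling from the big source ends up cleaner.
print(round(high_res.std() / low_res.std(), 1))  # ~3.0
```

Same logic for the noise profile: grain measured on the 12K source is finer (higher frequency) than anything you can measure after the downscale has already averaged it into the image.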
Moire has far less to do with the spatial resolution of the final image and far more to do with the spatial resolution of the recorded image. Down-sampling, in camera or in post, does a lot to eliminate moire, and avoiding de-mosaicing will at least get rid of color moire.
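A quick 1-D sketch of the moire mechanism (made-up frequencies, NumPy only): a fine pattern decimated without filtering folds down into a big low-frequency moire band, while averaging before decimating — which is what a proper downscale does — mostly suppresses it:

```python
import numpy as np

# A fine repeating pattern (think fabric weave) at 0.35 cycles/pixel.
x = np.arange(3000)
pattern = np.sin(2 * np.pi * 0.35 * x)

# Naive decimation: keep every 3rd pixel. 0.35 * 3 = 1.05 cycles/sample,
# which aliases down to a slow 0.05-cycle wave -- visible moire.
naive = pattern[::3]

# Average each group of 3 pixels before decimating, like a downscale would.
filtered = pattern.reshape(-1, 3).mean(axis=1)

# Compare energy in the moire band (bin 50 = 0.05 cycles/sample, N = 1000).
ratio = np.abs(np.fft.rfft(naive))[50] / np.abs(np.fft.rfft(filtered))[50]
print(ratio > 5)  # True: pre-filtering knocks the moire band way down
```

The same folding happens in 2-D on a sensor, which is why recording well above the delivery resolution (or having an optical low-pass filter) matters more than the delivery resolution itself.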
Then there's the VFX world, where higher resolutions mean much higher render times and memory usage, but they also mean better tracking, better keys, and more isolation. A 12K image down-sampled to 4K is an excellent way to get more of that useful data into a shot without increasing render time or memory usage. It also lets the source footage share the same sub-pixel accuracy that the computer-generated assets have, which helps them integrate with each other.
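On the sub-pixel point, here's a hypothetical 1-D NumPy sketch: a soft tracking marker centered at a non-integer position survives a 3:1 downscale, and its centroid in the small frame still points back to the original sub-pixel location:

```python
import numpy as np

# A soft tracking marker centered at x = 1001.7 on a 3000-px line (12K-ish).
x = np.arange(3000)
blob = np.exp(-0.5 * ((x - 1001.7) / 4.0) ** 2)

# Downsample 3:1 by averaging groups of 3 pixels (the 12K -> 4K ratio).
small = blob.reshape(-1, 3).mean(axis=1)

# Centroid in the small frame; bin i covers source pixels 3i..3i+2,
# so its center sits at source coordinate 3i + 1.
centroid = (np.arange(small.size) * small).sum() / small.sum()
print(round(centroid * 3 + 1, 1))  # ~1001.7 -- sub-pixel position preserved
```

That's the kind of accuracy a tracker or keyer can exploit: the averaging bakes the fine positional information into the 4K pixel values instead of throwing it away.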
Ellory Yu wrote:As I said, it's toy vs. practicality. If I have a good story and shot it today on an OG BMPCC and Netflix picked it up, does it really matter that it was shot on an OG BMPCC? Do I really have to shoot it in 12K and deliver it in 4K, and will that make the story visually better? Maybe it matters to a small percentage of viewers, which doesn't even matter when you think in terms of volume. That's my perspective and I subscribe to it. I think we can agree to disagree here, and that's okay.
This is a tech discussion, not a film-making one. You can tell a good story with pretty much any camera. Hell, 28 Days Later was shot on an XL1, which had horrible resolution and three 1/3" sensors. That doesn't make this camera useless in comparison.
This isn't about the viewer, any more than the lens, makeup, dolly, or NLE is about the viewer. There are a bunch of things that go into the production of films and TV shows that viewers don't know about, and if you told them, they probably wouldn't care. This is about the process and what makes that process easier for us.
Btw, I have the OG BMPCC. It can shoot beautiful images, but those images can also be noisy, and it gets noisier as it heats up. It's also very, very prone to moire and aliasing, to the point that people bought low-pass filters (that cost nearly as much as they paid for the camera) to try to fix it. It's also so light that when I had to lay it on the floor and tilt it up with my hands, my heartbeat was shaking the camera. All these problems that seem little in the context of a forum conversation realistically result in a lot of wasted time, money, or both to compensate for.
So yeah, the viewer may never know that you ran all your shots through NeatVideo, that you had to take breaks while filming to let the camera cool down, that you needed to swap between three batteries for a three-hour shoot, or that you had to put the camera on a wonky rig to support a larger external battery, but it all matters to you and the people you're working with.
So think about it the other way. The viewer won't know what went into creating each shot, so why not use the tools that let you get that shot in 5 minutes instead of 20?