Sat May 12, 2018 7:45 am
Anyway, you are all wrong.
Let me explain. If you had been following the discussion, you would realise that 8k raw produces better 4k and 2k, so there's a big advantage there. The 8k Nokia camera phone produced much better FHD by doing exactly that, plus better digital zoom.
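To see why downscaling from 8k gives cleaner 4k, here's a minimal sketch with numpy. It assumes a simple 2x2 box filter (real downscalers use fancier kernels, but the noise-averaging effect is the same): averaging four sensor pixels into one roughly halves the per-pixel noise. The frame sizes and noise level are illustrative, not real camera figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_downscale_2x(img):
    """Average each 2x2 block of pixels into one (simple box filter)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A flat grey frame with additive noise, standing in for an "8k" capture
# (small array here so it runs instantly).
frame_8k = 0.5 + rng.normal(0.0, 0.05, size=(432, 768))
frame_4k = box_downscale_2x(frame_8k)

# Averaging four samples cuts noise standard deviation roughly in half.
print(round(frame_8k.std(), 3))
print(round(frame_4k.std(), 3))
```

That halving of noise is the "better 4k from 8k" effect in a nutshell, and the same averaging is why the Nokia's oversampled FHD looked so good.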
If it is a native 15.5 stops, rather than something achieved through HDR tricks, that is better than the older Arri cameras anyway. Colour science has a lot to do with how much you can pick up and how it's processed. So, 8k is already reaching far beyond what people used to work with. This thing might be a monster for all we know yet.
Native ISO of 800 is tolerable/workable.
For a carry-around camera you can use it as a 50MP stills camera (you provide the skill instead of a lot of auto/other functions, but as it's programmable, those other functions are probably coming). (Sorry, I'm getting the 8k and the 8k example camera mixed up. But if the claims for it are true, it already shows that 8k doesn't need to be poor.)
As a carry-around camera you can use it to film and also snap off stills: behind-the-scenes footage and promotional snaps. But as John rightly pointed out last year, there are issues with pulling stills from footage rather than shooting the separate snaps I'm talking about above. There are two technical techniques that can handle both, though. One: shoot at the highest suitable shutter speed (in auto shutter) to give the best still extraction, then use software to restore motion blur in the footage, producing the desired shutter angle. Alternatively, you could remove motion blur from a still extraction the same way (using surrounding frames to calculate the motion blur out). These would be good features to add to Resolve. Then, when you want to do both run-around footage and stills, you can rely on post software. Actually, calculating the blur out of the still is probably better, but not completely reliable.
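The "restore motion blur in post" idea can be sketched very simply. This is a hedged stand-in, not how Resolve actually does it: averaging several consecutive short-shutter frames fakes a wider shutter angle, whereas a real tool would use optical flow to trace motion between frames. The frame sizes and weights here are made up for illustration.

```python
import numpy as np

def synthesize_shutter_blur(frames, weights=None):
    """Approximate a longer shutter (wider shutter angle) by averaging
    consecutive short-shutter frames. Crude linear stand-in: real
    retiming tools track motion with optical flow instead."""
    stack = np.stack(frames).astype(np.float64)
    if weights is None:
        weights = np.full(len(frames), 1.0 / len(frames))
    return np.tensordot(weights, stack, axes=1)

# A bright dot moving one pixel per frame becomes a four-pixel streak.
frames = [np.zeros((4, 8)) for _ in range(4)]
for i, f in enumerate(frames):
    f[2, i] = 1.0
streak = synthesize_shutter_blur(frames)
print(streak[2, :4])  # four equal samples of 0.25 each
```

The reverse direction (deblurring a single extracted still using its neighbours) is the harder, less reliable problem, which is why I say it's probably better but not bulletproof.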
Another thing I have become aware of is having the camera record an 8k still during 4k shooting, then using something like Resolve to extract the frame, downscale it, and calculate interpolated 4k frames for the ones that went missing. If a camera maxes out at 8k at 30 frames a second, then during 4kp60 shooting you might lose a frame when you press an 8k still button. So, in this way, you can restore the image in a way the audience might not notice. They also do post focus from single cameras too. So, another little side use for 8k: delivering poster-sized pro material (in photography, 8k is definitely not the highest you'd ever want).
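Filling in the dropped frame is conceptually simple. A minimal sketch, assuming a plain linear blend of the neighbouring frames; a real retimer (like Resolve's optical-flow modes) would motion-compensate rather than crossfade, but the blend shows the idea:

```python
import numpy as np

def interpolate_dropped_frame(prev_frame, next_frame, t=0.5):
    """Fill a dropped frame with a linear blend of its neighbours.
    Placeholder for motion-compensated (optical flow) interpolation."""
    return (1.0 - t) * np.asarray(prev_frame) + t * np.asarray(next_frame)

# A scene brightening steadily between frames: the blended midpoint
# lands exactly between its two neighbours.
prev_f = np.full((2, 2), 0.4)
next_f = np.full((2, 2), 0.6)
print(interpolate_dropped_frame(prev_f, next_f))  # all values 0.5
```

For one frame in sixty, with motion compensation instead of this crossfade, the audience really shouldn't notice.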
So, in reality there are benefits to 8k, but without proper codecs that camera is expensive to operate. You definitely want 3:1+ codecs. This is where BM can jump in and save the day. But using an FPGA is an issue. It needs a lower-powered processing system, like the NVIDIA one the Z CAM E2 uses, which runs at over 1700MB/s and does 4kp120, the equivalent of 8kp30. I haven't asked which version of the solution they used, but if it is the latest one, it could compress raw 8kp30 and likely p60. NVIDIA had been designing artificial-intelligence vision boards the size of a credit card to run six 4k cameras on drones a few years back. They are among the lowest-power general-purpose processing chips out there. It could be a few watts or more (if they can use the GPU, everything else can clock and power down, as well as the unused GPU processing units, dynamically depending on the workload at the core at the time. Pretty advanced stuff). This sort of stuff is far beyond what Red is using. Red spent big on their ASIC chips, but NVIDIA spent much more again on their general-purpose computing chips, and it shows. I would not be surprised if the NVIDIA beats the Weapon ASIC in processing per watt under real loads. Being a mass-market chip, the NVIDIA might well be cheaper too.
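The 4kp120 = 8kp30 equivalence is just pixel-rate arithmetic, and it's easy to check, along with what a 3:1 raw codec implies for data rate. The 12-bit depth below is my assumption for illustration; the exact rate depends on the camera's actual raw format.

```python
# Back-of-envelope check: 4kp120 and 8kp30 move the same number of
# pixels per second, so a chip rated for one can in principle do the
# other. UHD frame sizes assumed; 12-bit raw is an assumption.
def pixel_rate(width, height, fps):
    return width * height * fps

rate_8k30 = pixel_rate(7680, 4320, 30)
rate_4k120 = pixel_rate(3840, 2160, 120)
print(rate_8k30 == rate_4k120)  # True: identical pixel throughput

bits_per_px = 12
raw_mb_s = rate_8k30 * bits_per_px / 8 / 1e6
print(round(raw_mb_s))       # ~1493 MB/s uncompressed
print(round(raw_mb_s / 3))   # ~498 MB/s with a 3:1 codec
```

So uncompressed 8kp30 raw sits right around that 1700MB/s ceiling, and a 3:1 codec pulls it comfortably under; that's exactly why the codec matters more than the sensor here.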
So, that is the simple explanation.
a) If you are not truthfully progressive, maybe you shouldn't say anything.
b) Truthful side topics in line with, or related to, the discussion are accepted.
c) Often people deceive themselves so much that they do not understand, even when the truth is explained to them.