Hendrik Proosa wrote:Removing noise won't make the image more accurate because you can't really know if the values you produce from noise reduction were really there. It is just guesswork that has to constrain itself to some kind of temporal and spatial coherence.
No. If you track all the points, or if the scene is static, or for the static parts of a scene, those points carry the same true value from frame to frame until something actually changes. So, by averaging over frames, you get a much better estimate of the true value of those points. And if you track the true value, you can detect and track true changes as well. Say a beam of light on an object flutters because leaves are blowing in the wind: you can not only work out the colour of the point, but also accommodate the lighting changes. I'll stop there, because I've got ideas on this stuff that I came up with long ago.

Anything that is not the original value, and not a real change, is likely some event or noise. By profiling the noise, and the events you can be aware of, you can greatly increase the accuracy of the denoised pixel. I think I came up with the idea of noise profiles within the last 14 years. Notice I never say spatial noise removal, because, you see, that's a trick which may only help a little. There is further spatial analysis I've come up with which may help more, but combining it with a noise profile helps most.

Vertical noise patterns in the image sensor are another thing I identified where you may restore information: if the point value isn't completely overwhelmed, there may be a ghost value left there worth restoring and using averaging on. The average of the adjoining pixels is maybe more correct, and the ghost value can help. But simply averaging out a downscale (shoot 4.6K, render Full HD) will likely still leave noise, just at a lower magnitude, as you guys will know. Been thinking about these things for 20 years.
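To make the temporal idea concrete, here is a minimal sketch of averaging static points across frames while letting real changes through. All names and the 3-sigma threshold are my own illustrative assumptions, not anyone's actual algorithm: a per-pixel running average absorbs deviations within the profiled noise level, while a jump beyond it is treated as a genuine change (an "event") and restarts the estimate instead of being smoothed away.

```python
# Hypothetical sketch: temporal denoising guided by a noise profile.
# Deviations within the profiled noise level are averaged in; larger
# jumps are treated as real changes and reset the running estimate.

NOISE_SIGMA = 2.0                  # assumed per-pixel noise std-dev from a profile
EVENT_THRESHOLD = 3 * NOISE_SIGMA  # illustrative cutoff between noise and events

def temporal_denoise(frames):
    """frames: list of equal-length lists of pixel values (one channel)."""
    mean = list(frames[0])     # running estimate of each point's true value
    count = [1] * len(mean)    # samples folded into each estimate so far
    for frame in frames[1:]:
        for i, v in enumerate(frame):
            if abs(v - mean[i]) > EVENT_THRESHOLD:
                mean[i], count[i] = v, 1          # real change: restart estimate
            else:
                count[i] += 1                     # noise: fold into the average
                mean[i] += (v - mean[i]) / count[i]
    return mean
```

A static point jittering around 100 converges toward 100 (noise averaged down), while a point that jumps from 50 to 150 keeps the new value rather than smearing the transition.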
Notice I said accuracy rather than accurate. While you might well get a high degree of accuracy, there will still be inaccuracies; it's just better than what we started with.
I'm starting to remember things I wrote a long time back. Thanks.
A few ideas, just to get the ball rolling on the future mega-advanced codec front:
- I'd envision a codec that does pretty heavy-handed noise reduction but hides the fact behind restored noise generated in the decoder from the original noise profile. Voilà: magically small files with super-detail. Just don't reveal any details about the regenerated noise.
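The "denoise in the encoder, regrow the noise in the decoder" idea above can be sketched in a few lines. This is a toy illustration under my own assumptions (the container format, field names, and Gaussian grain model are all made up): the encoder ships only the cleaned signal plus a tiny noise profile, and the decoder synthesizes statistically similar grain on top.

```python
# Toy sketch of a denoise-then-regenerate-noise codec. Only the denoised
# signal, a noise standard deviation, and a seed are "transmitted"; the
# decoder regrows grain that matches the profile. All names are invented.
import random

def encode(pixels, denoised, seed=42):
    # Measure the noise the denoiser removed and keep just its magnitude.
    residual = [p - d for p, d in zip(pixels, denoised)]
    sigma = (sum(r * r for r in residual) / len(residual)) ** 0.5
    return {"signal": denoised, "sigma": sigma, "seed": seed}

def decode(stream):
    # Regenerate statistically similar (not identical) noise on decode.
    rng = random.Random(stream["seed"])
    return [v + rng.gauss(0.0, stream["sigma"]) for v in stream["signal"]]
```

Because the decoder is seeded from the stream, playback is deterministic, yet the file only pays for the clean signal plus a couple of profile numbers.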
- Downscaled raw. In the Reduser forum some people discussed the technical possibility of downscaled (not windowed) raw, and most of them stated it is not possible. Have a bit more imagination! Nothing prevents skipping every other RGGB photosite quadruple to instantly get 2x downscaling. Or one could average photosite data, which is technically much the same as having a photosite with larger dimensions, plus it reduces noise. The upside of downscaled vs. windowed is that the field of view doesn't change.
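The averaging variant above can be sketched directly on the mosaic. This is a minimal illustration under my own assumptions (a plain RGGB mosaic as a 2D list, dimensions a multiple of 4): each output photosite averages the four nearest same-colour input photosites, so the result is still a valid RGGB mosaic at half the resolution, with the noise standard deviation roughly halved by the 4-sample average.

```python
# Hypothetical sketch: 2x Bayer downscale by averaging same-colour
# photosites, keeping the RGGB layout intact. Assumes height and width
# are multiples of 4.

def downscale_bayer(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    out = [[0.0] * (w // 2) for _ in range(h // 2)]
    for y in range(h // 2):
        for x in range(w // 2):
            # Base of the 2x2 grid of same-colour input sites for (y, x):
            # input rows/cols of matching parity straddling the output site.
            r0 = 2 * y - (y % 2)
            c0 = 2 * x - (x % 2)
            out[y][x] = (mosaic[r0][c0] + mosaic[r0][c0 + 2]
                         + mosaic[r0 + 2][c0] + mosaic[r0 + 2][c0 + 2]) / 4.0
    return out
```

Feeding it a flat mosaic (all R sites 10, G sites 20, B sites 30) returns a half-size mosaic with the same RGGB values in the same positions, confirming the colour layout survives the downscale.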
- Allow a pure raw mode in the decoder (not sure any current codec has this; maybe Cineform has, I don't remember) where the decoder outputs Bayer data directly, for the user to poke around and build their own debayer or whatnot.
The Bayer-pattern group skipping will give you aliasing issues. You can debayer the local group used, but then you have to run a process to guess at the missing groups around it, and good debayers are known to use neighbouring groups to debayer the central group. In Bayer this is very bad, and the OLPF can spread light far and wide trying to eliminate Bayer-pattern deficiencies and pixel-pad fill-factor issues.

On another site, I have posted about doing a Bayer version of a downscaled image, and debayering that to restore the downscaled image. So you go from 4K to 2K etc., resulting in 4:4:4 2K etc.; throw away primaries to get a Bayer mosaic of the downscaled image, compress that, then debayer it to produce a 4:4:4 of the downscale. Of course, it's not likely to be as good as the original downscale, but there is one step I am leaving out which can make it better.

Now, you might be tempted to just pick the red, green and blue values out of neighbouring groups as the values of the downscaled Bayer pattern, but a good debayer works out the proportion of a primary colour in pixels of the other primary colours (like Arri does), resulting in a better calculation of the average of each primary in the group, which you can then use for the downsampled Bayer pattern. Using a primary from each 4K pixel instead gives you samples at shifted positions within the downsampled 2K pixels, likely producing weird strobing aliasing effects. But, putting all factors into the equation, you could produce a pretty nifty way of doing it.
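The middle of that pipeline (downscaled 4:4:4 → throw away primaries to get a Bayer mosaic → debayer back to 4:4:4) can be sketched as two small functions. This is a naive illustration under my own assumptions, not the proportion-based debayer described above: the remosaic keeps one primary per pixel according to RGGB position, and the debayer simply averages same-colour neighbours in a 3x3 window.

```python
# Hypothetical sketch of the remosaic/debayer round trip. The debayer
# here is a naive neighbourhood average, far simpler than the
# proportion-based approach discussed in the text.

def remosaic(rgb):
    """rgb: 2D list of (r, g, b) tuples -> single-channel RGGB mosaic."""
    h, w = len(rgb), len(rgb[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y][x]
            if y % 2 == 0 and x % 2 == 0:
                out[y][x] = r          # R site
            elif y % 2 == 1 and x % 2 == 1:
                out[y][x] = b          # B site
            else:
                out[y][x] = g          # G site
    return out

def naive_debayer(mosaic):
    """Rebuild full RGB by averaging same-colour sites in a 3x3 window."""
    h, w = len(mosaic), len(mosaic[0])

    def colour_at(y, x):
        if y % 2 == 0 and x % 2 == 0:
            return "R"
        if y % 2 == 1 and x % 2 == 1:
            return "B"
        return "G"

    def gather(y, x, want):
        vals = [mosaic[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))
                if colour_at(j, i) == want]
        return sum(vals) / len(vals)

    return [[(gather(y, x, "R"), gather(y, x, "G"), gather(y, x, "B"))
             for x in range(w)] for y in range(h)]
```

On a flat-colour test image the round trip is exact, which at least shows the plumbing is sound; real images are where the quality of the debayer (and the step left out above) starts to matter.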
I don't know why people haven't just looked at the data in BRaw files to determine what was going on; it wasn't encrypted.
Whoever said that Red was JPEG2000: from discussions I have had, it used to be. From those same discussions, they got a Cineform technology license, and that's when it greatly improved. I was once offered a Cineform license myself, on great terms, but notice that since Red upgraded its codec, GoPro (which owns Cineform) hasn't done a pro camera, and no other camera company has gotten a license as far as I know.
However, Cineform has gone free, except for the Bayer version, as they have a newer standardised codec, which BM could have licensed cheaply. But still, you could record Bayer in Cineform, as I've described BRaw as doing elsewhere. Think about it. However, I'd rather not use Bayer in a phone; I'd use Cineform to compress 4:4:4, or a Linux 4:4:4 ProRes alternative. If only I could get hold of those Foveon-based mobile chips.