https://www.newsshooter.com/2018/12/14/ ... -playback/

I think what they mean is that it requires that much power to process their 8K. Apply the same principles to 8K BRAW, and it might only require a single CPU and a cheap card. BRAW works on JPEG-type compression, which is usually better supported in hardware, leading to higher speeds, and there's no encryption on your footage.
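To make the JPEG-type idea concrete, here is a minimal Python sketch of transform coding: an 8x8 block DCT followed by quantization, the same family of trick JPEG uses and the reason it maps so well to parallel hardware. The flat quantizer and the test block are my own illustrative choices, not anything from BRAW's actual internals:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT: transform rows, then columns
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

Q = 20.0  # illustrative flat quantizer; real codecs use per-frequency tables

# A smooth 8x8 gradient stands in for one block of debayered sensor data.
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 4.0 + 100.0

coeffs = np.round(dct2(block) / Q)   # quantize: most entries become zero
restored = idct2(coeffs * Q)         # decode: dequantize + inverse DCT

print("nonzero coefficients kept:", np.count_nonzero(coeffs), "of 64")
print("max reconstruction error:", np.abs(block - restored).max())
```

Because every 8x8 block is independent, hardware (or a GPU) can churn through thousands of blocks in parallel, which is where the decode speed comes from.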
It is good Red is working with Nvidia (like I wanted them to many years back). They apparently have visually lossless codec technology as advanced as an h268/h269 generation would be, but still only half to a quarter of the compressibility my old, simpler codec proposal was aimed at (the more advanced one is a truckload of effort and patents). The industry is trying to move to h266, so it is generations behind. If they get Nvidia to implement that on their GPUs, it will be good for the industry. Red is trying to move to own this in the industry, but they don't seem to know how to do it. But people like bluster, and mistake people with talent for bluster, and people with bluster for talent. Investors should demand talent, and tune out all the noise, to discover who has the talent to go forwards. It is useless having as much bluster as a vertical take-off and landing rocket exploding, rather than somebody with the talent to figure out how to do it safely for the smallest fraction of the price. One has the direction to send the other broke; the other is foolishness.
This reminds me. Last week I realised my decades-old proposal to design a super-cheap, millimetre-thick handheld super games console is applicable to manufacturing low-volume pocket cinema cameras a few millimetres thick: 8K, 16K, whatever, 3D multipoint. The trick is you don't need a high-speed circuit to do most of it, so it's ultra low powered. I have long had an interest in how to replace FPGAs with in-house fabbing. You could do the whole board in one hit, really cheaply. They may be able to do such a thing cheaply now on an old low-energy line, as a sort of on-chip multi-module design, where it is one thin wafer that you stick on a carrier board. One of the issues nowadays is that rather than making those interfaces flat on-board connectors and putting the female parts in the cables, they do the opposite, limiting the life of the main product due to interface fragility. If it wasn't for that, it would be simple and cheap to run interface lines to the board's edge, making for a simpler manufacturing process. If it wasn't for silicon's propensity to oxidise, you could even just have the wafer slice come out with the lines exposed. There is another solution, but no good interfaces to use it on.
So, yes, $100 pro 8K+ cinema cameras are possible, with terabytes of memory onboard, pretty much now, if the equipment had been set up and paid down by now (a lot of this could have been done previously, using older tech; as a matter of fact, decades ago). With multipoint. On second thoughts, the sensor quality and lens system are the issue though. So, the main camera for $100, plus sensor and lens, unless a good sensor can be made on board.
FPGA is a curious thing. I could cheaply make a gate-array product that holds a gate state with no power, and turns power off to all unused gate choices, for a field-programmable circuit approaching the power consumption and speed of a custom circuit (maybe half the speed at similar power). Re-edit: my brain working again, I think there is an additional way to get nearly full ASIC speed.

Speed of the chip process is another thing: are we talking about 5 GHz end performance, or 500 MHz? I would like to think about 15 GHz, which might be possible given a simple enough design (individual parts work way faster than the resulting processor, but clocking and different performance bottlenecks slow it down, as some parts are just slow, and the accumulated delay in the circuits is overwhelming). There have long been silicon transistors out there that can switch at 250 GHz, but you can't get complex circuits using them to go that fast, and I don't know if those lab transistors are out in the wild yet, or if they are commercially viable. The ones in high-energy processors are likely to be fast, but not that fast. I remember, in the early or late 1990s, one of the guys had a processor using a simple transistor circuit running at 12 GHz (or was that 1.2 GHz?) to sample analogue to digital, on a processor which ran at a 12th or a 24th of that speed (depending on which processor it was I'm thinking about, and which figure). Very exciting, except that raw speed doesn't transfer through (see the sketch at the end of this post).

It's like magnetic quantum cellular automata. QCA technology in general is supposed to be capable of 1-2 terahertz, but the chips are looking at 500 MHz+ (enough for a 5 cm x 5 cm x 5 mm 8K cinema camera with storage built in, even 32K, as it runs at one millionth the power of a conventional chip, so you can pack a wafer full of circuitry in there in 3D). The magnetic QCA FPGA and chip technology has morphed into another technology to do with magnetic computing in general. It might be slow, but for many uses that's fine, and I don't know why somebody hasn't done one yet outside the lab, or at least announced they have; it would make the cameras a lot cooler. You should be able to fashion a sensor out of this using nanostructures.

However, the technology I'm looking at doing is, I hope, a minimum of 2-5 GHz, and up to 2 THz. But I'm looking at a further technology beyond that to greatly improve performance and efficiency, striving for close to 100% of what physics allows. Grain-of-dust-sized 8K camera territory (the real truth you might not believe; it's way more than any of this other stuff).
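To put rough numbers on why raw transistor speed doesn't transfer through, here is a back-of-envelope Python sketch of the accumulated-delay effect I mentioned above. The logic depth and overhead figures are assumptions picked for illustration, not measurements of any real process:

```python
# Back-of-envelope: why a 250 GHz transistor doesn't yield a 250 GHz chip.
# The clock period must cover the whole critical path, not a single switch.
# logic_depth and the overhead figure are illustrative assumptions.

t_switch = 1.0 / 250e9        # one device toggling at 250 GHz -> 4 ps
logic_depth = 20              # gates in series per pipeline stage (assumed)
overhead = 40e-12             # wire RC + clock skew/jitter, assumed 40 ps

t_stage = logic_depth * t_switch + overhead
f_max = 1.0 / t_stage

print(f"per-switch delay:    {t_switch * 1e12:.1f} ps")
print(f"stage critical path: {t_stage * 1e12:.1f} ps")
print(f"achievable clock:    {f_max / 1e9:.1f} GHz")  # ~8 GHz, not 250
```

That division by logic depth plus overheads is also why that 1990s sampler could run at 12 GHz off a simple transistor circuit while the processor under it clocked at a twelfth or a twenty-fourth of that.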