Probably a deal with Red. David did Bayer video compression and storage before them. You can reassign a patent, but from the wording of Red's it doesn't look like that is what happened; it seems to go over the top of what David did. Bayer video recording in itself should not be patentable. David made it public the next day, describing it, literally: look what I did.
They crafted a codec that was fast to compress on a CPU, and made software that processed fast on the CPU. The two are different: if you don't craft the data/format efficiently, it doesn't matter how efficiently you design the processing software, it will be slower and you are chasing your tail. In my own designs I make the CPU, the instruction set and the data handling efficient first, then work up to the format and the processing software. The present desktop CPU designs are at least 10-100 times as inefficient as they could be, then you add bad programming and you could be going another 1000x slower (more like 10-100x again in our part of the industry).

This isn't as good as it sounds, because a lot of the gain is massive parallelisation at the same energy envelope or transistor density, and there are often hard limits on how many parallel units you can actually use efficiently. So you are running a heap of cores at native silicon speeds of, say, 1 GHz+ (I haven't looked at what the newer node processes run at, but basically everything that isn't true low power is overdriven) instead of, say, 5 GHz. So immediately you may need more than five low-powered cores to keep up, and because they are lightweight it may be significantly more to compensate. The routines have to be able to spread across that many cores, but video processing is very parallelisable. Alternatively, I would just design it to accelerate the few cores actually used: 5 GHz again, but using 100+ times less energy. I did say lightweight.

At these levels it gets very complex, as you can fit millions of lightweight cores in place of one Intel chip, or an NVIDIA one for streaming, though less so once memory is counted. For the same transistor count you go up in energy consumption because more transistors are active, and likely in concentrated hot spots. Exciting stuff. Fine balancing acts.
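To put rough numbers on the wide-and-slow versus narrow-and-fast trade-off, here is a back-of-envelope sketch. Every figure in it (voltages, clocks, the 0.6 IPC for a simple core) is an illustrative assumption of mine, not a measurement, and it only captures the frequency/voltage part; the instruction-set, data-handling and software inefficiencies mentioned above would sit on top of it.

```python
import math

def dynamic_power(freq_ghz, voltage, capacitance=1.0):
    # Classic CMOS dynamic-power model: P ~ C * V^2 * f (arbitrary units).
    return capacitance * voltage ** 2 * freq_ghz

# One big overdriven core: 5 GHz at a high voltage, IPC normalised to 1.0.
big_freq, big_volt, big_ipc = 5.0, 1.3, 1.0
big_power = dynamic_power(big_freq, big_volt)
big_throughput = big_freq * big_ipc

# Many lightweight cores: 1 GHz at a low voltage, assumed 0.6 IPC because
# the cores are simpler (all of these numbers are illustrative guesses).
small_freq, small_volt, small_ipc = 1.0, 0.7, 0.6
small_power = dynamic_power(small_freq, small_volt)
small_throughput = small_freq * small_ipc

# Cores needed to match the big core, assuming the workload (e.g. wavelet
# video compression) spreads across them cleanly.
cores_needed = math.ceil(big_throughput / small_throughput)
cluster_power = cores_needed * small_power

print(f"lightweight cores needed : {cores_needed}")
print(f"big core power           : {big_power:.2f}")
print(f"lightweight cluster power: {cluster_power:.2f}")
print(f"power ratio (big/cluster): {big_power / cluster_power:.1f}x")
```

With those assumed numbers you need about nine lightweight cores to keep up and still cut power by roughly 2x; the 100x-class figures only appear once the format, instruction set and data handling are fixed as well.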
So, I had suggested to David that he use GPU processing, but he didn't like the idea. At the time GPU instruction sets and memory handling were very primitive, though I thought they could work collaboratively with the CPU in a sort of batch processing, swapping data in and out over a clunky pipeline (there's a rough sketch of that idea at the end of this post). But now they could run CineForm on the GPU (they run Red's codec on GPUs). I suspect it wouldn't be any slower than BRAW, though it might use a lot more parallelisation than BRAW currently does (if BRAW happens to depend on dedicated JPEG hardware in the GPU), which means it would run hotter, maybe a lot hotter. CineForm RAW Bayer licences cost $10 or $20 per unit, I think, and I think that is meant to cover you, but I don't know. If BM would allow us an FPGA sandbox like I've been asking for, or at least the option to have CineForm RAW in the driver and activate it by paying the licence to CineForm....

Or CinemaDNG (CDNG, in the MXF container format) for processing, and pay the fee (to Red); some people will be happy. They could even keep the firmware for it separate and keyed, to be unlocked when you pay. I miss CineForm, and David.
CineForm wasn't the first such format, but the others had problems retaining image accuracy through repeated reuse in post. They did a very good job, the way it's meant to be done.
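Postscript on that "clunky pipeline" idea from earlier: it's basically double buffering, where the GPU works on one batch of frames while the CPU is already shipping the next one across, so the transfer time mostly hides behind the compute. Here is a toy Python model of it; the threads and sleeps stand in for a real GPU API and real kernels, and nothing in it comes from CineForm or any actual driver.

```python
import queue
import threading
import time

BATCHES = 8
UPLOAD_TIME = 0.02    # pretend CPU -> GPU copy per batch (seconds)
COMPUTE_TIME = 0.03   # pretend GPU compute per batch (seconds)

def uploader(out_q):
    # CPU side: prepares and "copies" each batch across ahead of time.
    for n in range(BATCHES):
        time.sleep(UPLOAD_TIME)      # stand-in for the bus transfer
        out_q.put(n)                 # hand the batch to the compute stage
    out_q.put(None)                  # sentinel: no more batches

def worker(in_q):
    # GPU side: processes whatever batch is already resident.
    while True:
        n = in_q.get()
        if n is None:
            break
        time.sleep(COMPUTE_TIME)     # stand-in for the demosaic/wavelet work
        print(f"processed batch {n}")

start = time.time()
q = queue.Queue(maxsize=2)           # two slots = classic double buffering
threads = [threading.Thread(target=uploader, args=(q,)),
           threading.Thread(target=worker, args=(q,))]
for t in threads: t.start()
for t in threads: t.join()
print(f"pipelined total : {time.time() - start:.2f}s")
print(f"serial estimate : {BATCHES * (UPLOAD_TIME + COMPUTE_TIME):.2f}s")
```

With those made-up timings the pipelined run finishes in roughly the compute time alone (about 0.26 s) instead of compute plus transfer done one after the other (about 0.40 s), which was the whole appeal of the idea even on the primitive GPUs of the time.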