8k Mobile reflections.

Got something to discuss that's not about Blackmagic products? Then check out the Off-Topic forum!

Wayne Steven

  • Posts: 3362
  • Joined: Thu Aug 01, 2013 3:58 am
  • Location: Earth


Posted: Mon Aug 03, 2020 10:16 pm

Well, most phones are through now, and it's all disappointing: they are all somewhat compromised for 8k use.

Codec datarate:

Not one has a proper video mode that will produce top-end consumer quality, equivalent to the highest-datarate FullHD Blu-ray but in 8k. They are largely hobbled to 80 Mb/s or less, broadly equivalent to a 5-10 Mb/s FullHD h264 file, depending on whether h264 or h265 is available. In reality you need double that rate, and quadruple would be great.
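As a rough sanity check on that claim, the bits available per pixel can be compared directly. This is a sketch only; the ~40 Mb/s top-end Blu-ray rate and 24 fps are my assumptions here, not figures from the post:

```python
# Back-of-envelope check: bits available per pixel per frame. The
# ~40 Mb/s top-end Blu-ray rate and 24 fps are assumptions.

def bits_per_pixel(bitrate_bps, width, height, fps=24):
    return bitrate_bps / (width * height * fps)

bluray = bits_per_pixel(40e6, 1920, 1080)    # FullHD Blu-ray: ~0.80 bpp
phone8k = bits_per_pixel(80e6, 7680, 4320)   # 80 Mb/s 8k phone: ~0.10 bpp

# Matching Blu-ray's bits-per-pixel at 8k (16x the pixels) with the
# same codec would need 16x the rate:
needed = 40e6 * (7680 * 4320) / (1920 * 1080)   # 640 Mb/s
```

So an 80 Mb/s 8k mode gives roughly an eighth of Blu-ray's bits per pixel before codec efficiency is considered, which is broadly consistent with the 5-10 Mb/s FullHD comparison above.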

Various software tries to boost video recording datarates. I have a phone that records FullHD at 17 Mb/s; even though it has a well-over-4k-wide sensor, it is being limited (some software supports trying to enable higher resolutions). To my surprise, it will do up to around 36 Mb/s FullHD (I was actually trying to set up 36 Mb/s 4k, but only the datarate increased). The sensor video modes and hardware codec performance are restricted, along with all the other restrictions below. So 8-bit/10-bit, low bit rate, reduced frame rate, lesser component video modes, and a heavily, negatively processed video image tends to be the only thing presented.


Video bit depth and type:

Well, it's hard enough to get 10-bit HLG, let alone anything 12-bit. I was content with 4:2:0, but since finding out they deliberately botched it up, I have given up on that as a credible option (allegedly to make it easier to transcode to from 4:2:2 video, and it ruins the image. Get a life!). However, 4:2:2 is not great either. That leaves 4:4:4 and single-chip sensor mosaic patterns like Bayer, and unfortunately, due to what are in my view not credible patent claims, everybody seems too scared to implement the latter.

However, in 4:4:4 you only have to preserve the known colour per photosite, and just stick to an approximated curve function for the other colours, which people think BRAW may be doing. Those extra components then come at much less storage cost than a real 4:4:4 colour-filled image. Now, if you set up an elaborate scheme of overlapping windows of resolution, recording the differences between them, you get a wavelet-like lower-resolution extraction. Somebody should try that.

The other thing to implement is an overlapping curve function, so the ends of each block match up with the start of the next block, as implemented in the Microsoft photo format that was adopted into an old JPEG standard, designed to give similar performance to JPEG 2000 at less processing cost. In my own 3D graphics design work we get the same issues with continuous surface meshes compared to discrete ones. In a famous cube system, everything is designed in a cube layout, but every cube has a customisable plane in it, much as JPEG does it; my system is more wavelet-like in ways. I actually had to come up with a new wavelet-like scheme for this, though it is otherwise quite different. Live-action data normally becomes unmanageable this way, which is why the block system reduces load. Accuracy of the curve, and of the curve ends, is another issue.

Moving to an integer basis with sufficient precision to accurately depict things is another aspect I suspect is beyond the normal function of consumer video codec circuits, though that level of accuracy is probably beyond what we normally need for phone video. But the overlapping ends we definitely need, as they allow a higher compression ratio without macroblocking.
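The "overlapping block ends" idea above is essentially what lapped transforms do. A minimal sketch, assuming 50%-overlapped blocks and a sine window (my choice of window, for illustration):

```python
import math

# 50%-overlapped blocks weighted by a sine window. Because
# sin^2(x) + cos^2(x) = 1 across the overlap, windowing on analysis
# and again on synthesis, then overlap-adding, reconstructs the signal
# exactly - the property that lets lapped transforms avoid hard
# macroblocking at block boundaries.

def sine_window(n):
    return [math.sin(math.pi * (i + 0.5) / n) for i in range(n)]

def overlap_add_roundtrip(signal, block=8):
    # signal length should be a multiple of block//2 for exact recovery
    hop = block // 2
    w = sine_window(block)
    # pad so edge samples are also covered by two overlapping windows
    padded = [0.0] * hop + list(signal) + [0.0] * hop
    out = [0.0] * len(signal)
    for start in range(0, len(padded) - block + 1, hop):
        chunk = [padded[start + i] * w[i] for i in range(block)]
        # (a real codec would transform and quantise `chunk` here)
        for i in range(block):
            j = start + i - hop
            if 0 <= j < len(out):
                out[j] += chunk[i] * w[i]
    return out
```

Running the round trip on any input reproduces it to floating-point precision, with no hard block edges for compression artefacts to snap to.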


Video processing accuracy:

Unfortunately, a lot of high-end phones come with harmful video processing that jazzes up the image in a bad way, and it can be hard to get down to a level that isn't bad, though it depends on the phone and the implemented features.

There is a version 2 of the Android camera API, but it appears that manufacturers are not required to implement it, or to implement all of it. It is hard to know exactly which phones come with fuller support of the Camera2 API, and the API is not as good as custom drivers, which app software would have to be built to use specifically, which not many manufacturers offer, and which are sometimes worked out by third parties; so, a mess of support. That Apple cinema camera app (my memory is playing up on the name) has a list, which is mainly Google Pixel phones, but I would imagine there are more out there. From memory, that app attempts to correct image processing, and implements its own functions for increased dynamic range, flat and other picture profiles, on Apple devices; I don't know what its support is like on different Android devices, or on Android at all.

Android really fails in not having a minimum worked-out video function for the various chipsets and sensors, sorted out for manufacturers to implement and expand on in higher-end phones. That would set a minimum functionality lots of phones could use, and state support for with a tick. They could roll such code into the Android package, to be compiled in when the development system finds matching hardware, making it nearly zero effort on the part of the developer. Even better, every sensor manufacturer could be approached to implement and customise the code, with Android development receiving it and expanding on it as needed, without reducing its legitimate functionality (manufacturers could still publish their full versions, to be loaded in by the user or manufacturer, and compared to). The same could be done for the Camera2 API.

What I'm talking about is functionality the next camera API could have. A 100x better situation. But what would I know?


Real time internal memory and storage paths:

This is another big weakness of Android. The handling of data transfer and files is abysmal: I can't even copy heaps of files off the device without extreme slowness and file integrity issues. It used to be better when they had a proper file protocol over USB, but since they changed that, and everything has become encrypted, it has become a bigger issue. The underlying realtime operating system structures and file handling still seem to be a problem, with very few Android devices promising to do better. I've tried to raise this with Android before, seemingly with no real success; some things they will do within months, other things they never seem to implement. I imagine the encryption is a major part of current file performance issues as well.

Can developers reserve sequential storage space to be written to uncompressed? There are obviously security issues with this, but they could make it so that any video file being written at over 100 MB/s can be written unencrypted when set by the user. Slower files should be more manageable by a streamlined storage system, with sequential storage pre-assigned for writing. Obviously, at that datarate, spyware would find the footage less desirable to download to its own systems: to pull that from, say, 10,000 users would require quite a setup, whereas 10,000 feeds at 1 MB/s is much more manageable. So it reduces the pool of users likely to be targeted through the feature; and with the user-gated approval process, and the user's realisation that a lot of data traffic is happening and the phone is slowing down and heating up to prompt them to action, it becomes even less desirable for hackers. At that size, a single hacker is less likely to aggregate your feed in with others, and less likely to pay individual attention, rather than looking across lots of people to find something interesting happening.

A few built-in alerts for high traffic, processing and storage load, and camera and microphone alerts, would go a long way toward alerting users to hacking, as would physically connected sensor and microphone lights. But most of these requests have not been implemented.
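The user-gated unencrypted-write idea could be sketched as a simple policy check. All names here are illustrative; this is not a real Android API, just the decision logic:

```python
# Sketch of the user-gated "write unencrypted above 100 MB/s" policy
# suggested above. Illustrative names only - not a real Android API.

HIGH_RATE_BPS = 100 * 1024 * 1024  # the 100 MB/s threshold from the post

def may_write_unencrypted(stream_rate_bps, user_opted_in, is_video):
    """High-rate video may skip encryption only with explicit user consent."""
    return is_video and user_opted_in and stream_rate_bps > HIGH_RATE_BPS
```

The point of the conjunction is that the fast path never opens silently: the rate threshold alone is not enough without the user's gate.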

Now, the internal storage of these systems is pretty much not up to handling the higher datarates. The new Universal Flash Storage chips are a massive improvement, but still limited. The latest, UFS 3.1, is still not good enough for holding and processing 8k, and I would imagine it depends on the physical chip model used; UFS 3.2 is probably due next year. This flash storage is also used for the phone's extensive hidden and unhidden internal processing, and by apps. Video work using internal storage on a normally used phone is probably a limited prospect before the storage starts wearing out.

Now, here is the rub. More and more phones are not coming with a card slot, another great failing, and many cards are probably too limited, or very expensive. This forces you onto internal storage, which often does not have room for reasonable 8k bit rates (to get a similar ratio of bits to pixels as a high-end Blu-ray does for FullHD), and offloading the footage through USB is probably cumbersome even when you are not out and about.


A warning on flash: it is rubbish. Cheap multi-level-cell NAND flash is the worst, and even a year left alone can be bad, as it corrupts. The alternative, NOR-based flash, is more durable, but not ultimately good either. The vertical (3D) forms of flash are a bit better as well. The Intel/Micron 3D XPoint storage alternative to flash is really needed. The cheapest flash is down to around 100 writes to a cell before corruption, so using cards for processing or apps as well is an issue: both of those could nuke storage locations in even a good card after some time.


USB:

There must be some sort of software out there to back up at full speed to an external flash drive, if you can't use a high-speed microSD card and the device's system supports high speed. The situation is so bad that PC component manufacturers could sell streamlined Android devices, in the same way they streamline PC hardware.

Unfortunately there are still a lot of USB 2.0 devices out there, when USB 3.0 would be much better, and writing directly to USB 3.2 (and the USB generation after that) would be better again at dealing with the parallel loads. The dump of footage from the drive would be quick. This is where 3D XPoint would be used; it is presently the most reliable and affordable consumer storage technology for sustained high speed. There is also write-once technology, but I don't know if any of it has the sustained datarates. Low-cost write-once media would be preferred, especially with archival retention times.


Wifi:

So, your only option might be to write over the fastest wifi connection available to an external system or drive. However, even the highest spec is prone to noise and interference, and may be slow on the phone even if streamlined. The highest spec available on phones also tends to not be much.


Streamlining:

Not only does the app-to-file path need to be deterministically realtime, like in a normal consumer electronic device, using a guaranteed maximum data rate and pre-allotted storage; the same goes for normal phone functions, which seize up when the system runs out of memory, so it can't even operate as well as an under-$20 phone. When filming to internal memory this is going to happen a lot, enough that you won't even be able to offload your footage, or make an emergency (911, 000 etc.) phone call, without a lot of issues that take a while to resolve.

I have been through this recently with a photos quirk which consumed my storage down to well below 100 MB. The trick is to maybe restart to flush some usage, and drag-and-drop unused applications on screen to uninstall them (the low-memory pop-up app removal function in Settings keeps collapsing, and you can't even long-press an app and press App Info to uninstall there); then you may be able to start Settings and do further operations, but at least turn off mobile and wifi data etc. if that is causing the storage consumption. You also can't keep offloading hundreds of gigabytes of data to the cloud every time this happens, multiple times a week; that would cost. So Android needs to implement credible solutions before that long shoot of yours costs you more than the phone to upload. In emergency situations, it is very dangerous.

A simple solution is to have hardware-based phone functionality, which might only require hundreds of kilobytes to work at full speed, with an automatic processing-allocation override. Even 0% free storage could work, as RAM can be used to run and hold temporary data, with cache flushing of all data that is not unsaved user data (like message drafts). A couple of hundred kilobytes is pretty much less than one good photo, so you are unlikely to miss it, unlike trying to get the last bit of a realtime event and having the app stop even though hundreds and hundreds of megabytes are left, causing you to lose a non-repeatable, spontaneous moment. The removal of the user's ability to dump all caches, or dump them selectively, from the Settings storage function has made things difficult, and the storage function itself is now hidden away, making it hard to find and get to.

The sensor-to-memory, CPU, GPU and video codec path has to be streamlined and deterministic, with a high resource consumption ability. The paths from CPU, GPU and video codec to memory and storage also need to be streamlined, as do the paths between CPU and GPU, from CPU and GPU to the video codec, and from all other processing resources (DSP etc.) to external storage devices on card, USB and wifi. It would make a great difference in usability.

GPU: a general-purpose programmable GPU is needed, or programmable video codec processing, or robust video codec hardware.


--------------


Possibilities:

I've written most of the above over maybe nine hours, mostly a few days ago, so hopefully I have covered everything.


People have used camera apps like FreeDCam to edit video profiles and enable very basic 8k video recording on some phones, as recently as last year.

Vizzywig used to burst-shoot pictures at video rates, in high quality, on the older iPhones. A number of chipsets will pull frames off some sensors at higher resolutions at video rates, so software to record that is possible.

Making a new high-quality codec, and streamlining software and drivers to move data as efficiently as possible through the system's processing path to compression and storage, would let you use every available processing resource on the system in parallel to compress higher-quality video at higher resolution. It is possible to split codec chores between different units working in parallel on the same frame, and to send different frames to different units; a combination of these techniques is likely the most efficient. A frame, or group of frames, could be processed through the video codec at, say, up to 5x the datarate, another two sent through the CPUs, another two through the GPU, and some side processing through any DSP, then combined for storage. A high-end system might enable even higher quality. In the same way, a BRAW- or Cineform-like codec could be attempted.
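The "different frames to different units" part of the scheme can be sketched with an ordinary worker pool standing in for the hardware units. Here `encode()` is a placeholder for whatever per-frame compressor each unit would run; nothing below is a real codec API:

```python
from concurrent.futures import ThreadPoolExecutor

# A thread pool stands in for the hardware codec, CPU cores, GPU and
# DSP; each worker compresses whole frames independently.

def encode(frame):
    return bytes(frame)  # stand-in for real per-frame compression

def parallel_encode(frames, units=4):
    with ThreadPoolExecutor(max_workers=units) as pool:
        # map() keeps the frames in order even though work runs in parallel
        return list(pool.map(encode, frames))

clips = parallel_encode([[1, 2], [3, 4], [5, 6]])
```

The key property is that `map()` returns results in submission order, so the recombination-for-storage step stays trivial regardless of which unit finishes first.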


Preferred modes and codecs:

4:2:2 and 4:4:4 12-16 bit recording modes. FullHD to 8k. Flat and log video profiles. Rec. 2100 support. 160-320 Mb/s h266 modes; double that for h265, quadruple for h264.

Cineform RAW, plus challenging the invalid Bayer patent claims (on grounds of prior use and obviousness) which stop Bayer video recording modes from being used on phones; otherwise a user has to pay a $20 licensing fee to upgrade. Cineform otherwise is free, open-source and a leading codec, and is less processing-intensive on CPUs. ProRes RAW is another option.

Blackmagic BRAW codec.

With a temporal, optical-flow-tracking noise removal circuit. A Cineform, JPEG, CDNG (in container only, in order to speed processing) or ProRes configuration, for a post-processing mode that produces a close-to-exact match for the underlying filtered photosite colour, with the other colour channels using no data, since the missing colours are already approximated from the surrounding colours; this produces a smaller file. In post, the underlying colour-filtered single-sensor data pattern is extracted and demosaiced into a gradeable image, and the image data can be enhanced for a final look.

Metadata per frame should be able to set image levels at any frame, carrying forwards from that frame for as many frames as needed: brightness, contrast and colour adjustment curves, with data on the sensor, camera and lens setup, type, brand, model, year and batch, any noise and image adjustment (if it needs to be accommodated or somewhat reverse-calculated), and custom LUTs. Once the data is set it carries forwards and does not have to be recopied into further frames, minimising data usage. However, the look mode should be included in each frame: it tells whether there is no mode, whether there are metadata image adjustments, whether there are look adjustments to the data, which frame the last adjustment was on, and this frame's own number, all of which consumes very little data. A file ID stamp mechanism per frame helps in designing a recovery mechanism; this is a subset of my file system recovery technique from around 30 years ago, a trace-path mechanism enabling a higher probability of metadata tracking recovery and sequence recovery.
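The carry-forward metadata scheme described above can be sketched as follows. The field names are my own, purely illustrative:

```python
# Each frame stores only its own number and a back-pointer to the
# frame where image adjustments were last set; the adjustment data
# itself is stored once and never recopied.

def build_stream(n_frames, adjustments):
    """adjustments maps frame number -> metadata dict set at that frame."""
    stream, last_set = [], None
    for f in range(n_frames):
        if f in adjustments:
            last_set = f
        stream.append({"frame": f, "last_adjust": last_set,
                       "meta": adjustments.get(f)})
    return stream

def effective_meta(stream, frame_no):
    """Resolve the adjustments in force at a frame via its back-pointer."""
    ptr = stream[frame_no]["last_adjust"]
    return None if ptr is None else stream[ptr]["meta"]

s = build_stream(5, {1: {"contrast": 0.2}})
```

Frame 4 resolves to the curves set at frame 1 without carrying a copy of them, which is exactly the data saving described; the per-frame back-pointer is also what gives a recovery mechanism something to trace.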

This footage can be processed to a better degree in post.



Streamlining:

As per the streamlining section above.


Wired offloading and cards:

Streamlined as above, with the latest USB/Thunderbolt, and with a card based on USB/Thunderbolt that can use the USB-C interface, to get rid of the need for a separate card interface; even one card above and one card below in the socket. External GPU use, in order to sell a thin, fashionable phone that can slot into a gaming rig with battery, more processing and cooling.

The card slot would have horizontal grooves in the sides of the USB-C port, top and/or bottom centre, to slot in a card bigger than the port; cards that are too small are too easy to lose. There is a need for a name tag extending the card format in length, which can be used to quickly remove the card, and to see it if dropped. The flexible tag can be wrapped and slotted into the case, or clipped off; or, better, ports on the back of the case where the tag lies flat against the body, or even slotted to each side, with a finger able to touch the card and slide it out quickly. These are two slot standards for the same card standard. It is a shame the sides of phones are so thin; you could otherwise fit the cards flush on the side, or make the side slots on the port vertical, structured differently. To get over this you could rotate the USB-C slot, but phones are too thin for that; a work phone with more battery and cooling could accommodate it. One thing is certain: a high-end work phone needs to be rugged, with a big battery and some sort of alternative card slotting. My own designs go past this to solve these problems; big investors welcome. Part of this is to move to an optical interface (which can be incorporated into the USB-C port) or a magnetic one, either of which can be made waterproof, to replace USB-C; a wifi near-field version could also be used. The best days of SD are nearing their end. I go past what is said here in my designs.

The slotting on the back, or into the side using tags, can leave part of the tag sticking out, to be grabbed and then used to remove the card. My original tagged design had the tag able to bend down under gravity; double or triple flexible zones allow it to bend over the back of the phone from the USB-C port easily.

My USB port proposals also include an extended USB slot specification to make a high-bandwidth IO card interface, where the standard slot is joined to the slot directly next to it by a channel through the port, with the rest of the port holding standard devices. This easily allows a 16x to 32x+ PCIe passthrough card interface to be made from a series of ports. The free spacing between neighbouring socket insertion spaces could also carry a divided extension of the socket's central board. A lot of this was PC/laptop-orientated, but it is still somewhat useful for phones. Part of the proposal was to use all available pins for data, and to dynamically allocate internal PCIe lanes to each port based on the bandwidth needed by each device plugged in, even dynamically over time in a more advanced version. So a mouse or keyboard gets one lane, leaving more ports that can use all lanes, or in this case, more cards. A minor upgrade is to use dynamic streaming to share one or two internal lanes between all low-speed devices, leaving even more dedicated lanes for high-speed devices.

In the end, such interfaces could be used on PC mainboards to replace PCIe slots (with reinforcement grabbers/slotters or posts, and flexible, replaceable mainboard attachments, to prevent the interface on the mainboard from being wrecked) and allow lie-down laptop GPU cards in compact PCs and laptops. A Lightning-like plug interface would have been better; however, I've been working on an idea to replace the socket and plug with something waterproof and rugged. Used on the mainboard, the extended conjoined USB-C socket scheme would mean an onboard slot that is hard to wreck, and would make CPU and memory attachment easy and durable. Frankly, CPU cards on a backplane, along with the chipset (another thing I was working on), are probably better again for high-powered computing.
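The dynamic lane-allocation idea can be sketched as a toy allocator. This is purely illustrative; no real USB or PCIe controller exposes an interface like this:

```python
# Toy version of the dynamic lane-allocation idea: all low-speed
# devices share one lane, and the remaining lanes are split evenly
# between high-speed devices.

def allocate_lanes(devices, total_lanes=16, low_speed_bps=12e6):
    """devices maps a device name -> its requested bandwidth in bits/s."""
    low = [d for d, rate in devices.items() if rate <= low_speed_bps]
    high = [d for d, rate in devices.items() if rate > low_speed_bps]
    alloc = {d: "shared" for d in low}       # all low-speed share one lane
    pool = total_lanes - (1 if low else 0)   # lanes left for fast devices
    if high:
        per = pool // len(high)
        alloc.update({d: per for d in high})
    return alloc

plan = allocate_lanes({"mouse": 1.5e6, "ssd": 10e9, "egpu": 64e9})
```

A mouse costs the fast devices only one shared lane between all slow peripherals, which is the point of the "minor upgrade" variant described above.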
Frankly, the new design of the Mac Pro is something I rejected before, recommending an open modular system which they were probably initially following and then scrapped: a system where you could have a server farm on your desk. The present system is a compromise version of the proposal, but it can't scale like the proposal. My original design ideas (not what I give out to Apple) could fit 200 PCs on a desktop, and there is actually little to prevent it from continuing to be expanded to some physical limit (you could probably expand to tens of thousands of PCs on a wall, and it would work much better with the thermal energy recycling proposal I was working on; people have started doing that sort of thing in the decades since). Thinking about it, the Mac Pro is like my old backplane proposal, which I might have published where they were, years ago.

(I am in places that have people either in the Apple process or close to Apple, and I contribute things to discussions I know are being read; those long-term discussions spiked the Mac Pro redevelopment interest in Apple. I have been doing this for decades with various companies, and have been writing in design suggestions directly for even longer. In the early days you could even communicate with company seniors directly, until the suits moved in, in the late 1980s and early 1990s. But it's pretty simple: young businesses in new sectors tend to have informal structures, and it's easier to get to talk to people high up when the non-development business workload is smaller. We developers are curious people. If somebody out there is developing some futuristic tech in his garage or small business, and you're not wasting their time, and are preferably really skilled and sensible, with something to contribute, you can talk to them. Development also tends to have eccentric nutters who are very skilled, with a high level of potential contribution, but not as sensible and harder to listen to. But normal nutters, including those with dismissive personality disorders, convinced they are like an encyclopaedia yet shallow, dry wells of understanding as far as conversation goes, even though seemingly sensible and solid, should listen rather than bother communicating.)

But again, the other proposal could have been industry-dominating, allowing a user to start with a MacMini replacement. Essentially it was an independently stacked, cooled, module form of the system subsequently used in the Mac Pro, with no continuous backplane, central processing or rigid frame constricting expansion, and flexible, with variable-sized modules capable of taking Threadripper or i9 mainboard modules. Essentially like a server farm, but in close proximity and at high speed, with quick and easy setup. But Apple likes nice, shiny, tightly structured things. A lot of my stuff turned out mainframe- and server-farm-like without my knowing.

I'm also disappointed that the modular phone is no longer supported. My modular phone interest is and was based on a minimal practical useful unit size, and had advanced new formats planned. So, processing, camera, optional GPU and wireless modules may be all that is required, with an external interface module as another possibility for a more generic phone, while more stylish phones have the interfaces built into the mainboard or the case (I probably shouldn't have said that, but it's only a bit more than what I have said before). It would also allow quick construction of other new electronic devices with minimal engineering and official approval processes. We are talking about phone repairs and upgrades in seconds, and complete exchange repair of a nuked processing/storage module in minutes while preserving user data: significant savings in warranty repair, and the ability to subcontract out to local repair firms again, with warranty repair at part cost plus $10 or so on the spot. Better than mailing phones in to a central repair site. However, this is very much a PC or professional user's phone, or for normal tablets, which PC companies might like to push in their product lines: hundreds of millions of potential sales a year. I can do a dynamic port-structure encasing too.

Now, there need to be high-end phones with two or more ports: one for charging in use, and one for other things, even if both use the same internal chipset lines, or are set up through an external hub in the port's functionality, so that only one set of lines is connected and whichever port is being used by itself gets all the bandwidth. So, plugging a charger into one port means the other port has all the bandwidth. Frankly, the ports could be extended to hundreds of gigabits per port, which makes even a single port pretty good.

I'm not looking at 32k video on a phone, but realistically 12k-24k at a 180-degree field of view is possible on phones, and 32k is not that much more; besides, 32k makes great 8k-16k delivery. Holographic filming requires 96k for 8k delivery. But I'm in favour of 16k for 8k computational photography, which would normally want 24k for 8k instead. The extra resolution compresses away rather well, meaning the files are not too much bigger if good noise removal is used. But it means 180-degree 12k is 24k, and 24k is 48k, so high-speed offloading on work phones is still important. To hit those resolution targets we are talking about bigger sensors, with noise removal and single-exposure high-latitude tricks; bigger-than-1-inch chips and 0.7 micron technology have now been announced. Frankly, I am not interested in such resolutions yet, because they are hard to process on a phone. When magnetic computing comes out, full computational photography on a phone at these resolutions should be possible. I know some people who are leaders in low-powered processing designs which should be able to get into that realm; it would be so cutting-edge that it is hard to say how much of that resolution and workflow performance it could reach. My own silicon processing designs probably wouldn't do much better, as it depends on the energy envelope, and they would just tailor their design down to reduce power, in a parallel design with more silicon.

To overmatch that, I would have to develop my new calculation technology, and until I get down to it and testing (which could mean custom silicon chip runs costing tens of millions; I would be better off developing it with an image sensor company that has its own equipment. That's an idea: I should contact somebody I know who is having problems with the cost of foundry access in this area), it's hard to tell if it would be as fast, or 10x faster. I had forgotten about that; a real potential breakthrough.

A side note: this write-up is about what can be done, now and in the future, so please don't be upset if the discussion here is beyond what we presently use, which constantly changes anyway. I am looking at the hard technologies being developed as likely candidates for mainstream use, at the extension of the design paths and directions products have available to move towards, and at technologies and designs based on these. As with nearly any device, it is often a complex and difficult design process of many elements and considerations before you get a simple product in your hands. Look up the design of something simple like a top-end performance hollow ping-pong ball, and it's likely to have some complexity to it. Look up a high-performance golf ball: it will have a complex multi-element design, down to the performance of each material in it, though the product looks like a simple thing.

Twin hot-swappable cards, so you can keep recording while you replace a card that is filling up. So, apart from the above, you could have two SD or microSD cards slotted into the side of a work phone.

A note: a USB/Thunderbolt port based on the latest PCIe standard would cover 160 Gb/s of throughput, and 320 Gb/s if based on next year's PCIe 6.0. The latest USB/Thunderbolt remains stuck on PCIe 3.0 at 40 Gb/s. I imagine there might be an issue running the higher speeds over cables, or over the longer lengths a cable requires. PCIe now has an external interface that converts PCIe to optical fibre, which I think is already specced over 120 Gb/s, and Thunderbolt has optical cable available too. So, I wonder if a future 320 Gb/s optical Thunderbolt will come out? This might meet the needs of a lot more of the remaining workloads needing a multiple-segmented port on a phone, but it would not even cover a 24k feed; increasing the lines in the USB-C port, though, would reach even 32k.

But 96k holographic would require 9 times more; 16 bits doubles that; and high speed is up to 4 times more again, which is over something like 43 terabits per second. That, incidentally, is just above the capacity of a recent working optical wide-area network (phone network) technology an Australian university has been showing, of which a silicon laser version could be designed. However, that is an amount suitable for an 8k VR-like experience; 24k+ is 180 degrees plus (humans can see beyond a 180-degree arc), which is 9 times more again; a 360-degree VR dome view is another two; for professional filming we generally like double resolution, which is times 4 when incorporating both axes; and 32-bit precision should be enough to cover things. So, 144 x approximately 43 terabits = roughly 6,192 terabits per second (about 6.2 Pb/s). But it doesn't stop there: if you were free-moving in such a dome you could focus 4 to say 8 times closer to the dome wall (I had to do these sorts of calculations the decade before last, to find out how much data needed to be handled in VR), which is 16-64x = 99.072 to 396.288 petabits a second. My calculations might be off. Even with magnetic computing that is not possible.

A quantum computing mesh, which I'm trying to figure out? Sure, if you can get the mesh to work properly at small enough scales. But it looks like my original 3D proposals, which I was doing the calculations for, are the only current option.

Redoing the calculations: x144 = 11.52 Tb/s; ÷ 1000-4000x compression = 11.52-2.88 Gb/s.

x2 VR = 23.04 Tb/s; ÷ 1000-4000x compression = 23.04-5.76 Gb/s.

4-8x magnification (16-64x the data) = 368.64 Tb/s to 1.47456 Pb/s; ÷ 1000-4000x compression:
4x magnification = 368.64-92.16 Gb/s
8x magnification = 1.47456 Tb/s-368.64 Gb/s

x4 image computational views = 46.08 Tb/s; ÷ 1000-4000x compression = 46.08-11.52 Gb/s.

4-8x magnification = 737.28 Tb/s to 2.94912 Pb/s; ÷ 1000-4000x compression:
4x magnification = 737.28-184.32 Gb/s
8x magnification = 2.94912 Tb/s-737.28 Gb/s

x9 image computational views = 103.68 Tb/s; ÷ 1000-4000x compression = 103.68-25.92 Gb/s.

4-8x magnification = 1.65888 to 6.63552 Pb/s; ÷ 1000-4000x compression:
4x magnification = 1.65888 Tb/s-414.72 Gb/s
8x magnification = 6.63552 Tb/s-1.65888 Tb/s

However, those figures are erroneously in 4:2:0 and need to be doubled to get 4:4:4, maybe doubled again for a high quality 6-colour format, and higher again for something completely realistic, to reach a view-like level equivalent to the so-called holodeck (for a Matrix-style view you drop the magnification and most of the angle of the 360 view, as the view is calculated relative to the eye).
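To sanity-check the arithmetic, the ladder above can be reproduced in a few lines. This is just a sketch of my own numbers; the 80 Gb/s base figure is an assumption implied by 11,520 ÷ 144:

```python
# Sanity check of the bandwidth ladder above.
# Base: 11,520 Gb/s = 144 x an (implied) 80 Gb/s raw 8k stream.
BASE_GBPS = 80 * 144            # 11,520 Gb/s
COMPRESSION = (1000, 4000)      # deep compression range used throughout

def compressed(gbps):
    """Return the (low, high) compressed rates in Gb/s for the 1000-4000x range."""
    return gbps / COMPRESSION[0], gbps / COMPRESSION[1]

# x2 for VR, then 4-8x optical magnification = 16-64x the data.
vr = BASE_GBPS * 2                   # 23,040 Gb/s = 23.04 Tb/s
mag4, mag8 = vr * 16, vr * 64        # 368.64 Tb/s and 1.47456 Pb/s

views4 = BASE_GBPS * 4               # 46,080 Gb/s = 46.08 Tb/s
views9 = BASE_GBPS * 9               # 103,680 Gb/s = 103.68 Tb/s

print(compressed(BASE_GBPS))   # → (11.52, 2.88) Gb/s
print(compressed(vr))          # → (23.04, 5.76) Gb/s
print(compressed(mag4))        # → (368.64, 92.16) Gb/s
print(compressed(views9))      # → (103.68, 25.92) Gb/s
```

Even the most heavily compressed end of the ladder stays in the tens of Gb/s, which is the point: today's 40Gb/s ports are already marginal.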

So, you can see there is still a need for a multiple segmented bus for a while to come, with even a deeply compressed filmed version saturating a multi terabit/s connection.


Wireless offloading:

The highest spec wifi on phones tends not to be that fast. What is needed is a high speed offload that is more like wifi direct, using wifi, 5G/6G beam forming etc. Laser links tend to be hard to keep connected in a dynamic environment, but buffering is possible. A better, easier auto-setup version of wifi direct would be much better than Bluetooth. A one-to-many model using USB and Bluetooth device data interfaces, with tracking beam forming, would be preferable. That can be done with the related 5G/6G with hardware-only beam forming and tracking control, so it can't be hacked and pointed at the user. If only there were a phone related company which could champion and develop these over the next year (most of this is simple changes to existing developed specs, and reuse of existing specs) to be implemented in next year's chips, with the tracking beam forming as an initial proposed spec to be worked out over coming years.


Well, around 15 hours writing and I've forgotten something.


Battery technology affecting phone performance:

Move to nanowire, or other next-generation, batteries on work phones. These were promised early last decade. The promise is many times the battery capacity, fast charging and very long life, allowing higher performance devices, smaller batteries and smaller phones.



Custom loadable plugins for the system:

- Custom codec.

- Custom alternative file handler and all other parts involved in the processing and streamlining above, but only for the data of the app using it, and user data it has been granted.



Customisable camera system - With custom, loadable and editable in configuration editors:

- Systems as above.
- User graphical control interface.
- On phone controls.
- Configuration settings.
- Dynamic configurable control and lens adjustments over time.
- Dynamic picture processing and adjustment.
- Plugin codecs.
- Plugin drivers.
- Plugin calculable processing functions and structure.
- Picture processing and configuration file.
- Support for loadable tree-structured code/plugin images.

- Audio versions of all the above too.

- External physical and wireless customisable control system using device configurations and device control interfaces for USB, Bluetooth, Wifi, Ethernet, NFC and magnetic, using each interface as a bridge for the other interface's device configuration and control interfaces over the link. So, a custom device can be made using the chips for any device and relayed through another interface. This should be reliable enough for live control applications. Most interfaces lack a complete range of device support; as there is no wireless USB, there is a gap in wireless support. However, here manufacturers can develop custom control devices for the camera.

- Interfaces for all these items above, and configuration development software app allowing users to develop them.

- Direct video memory, GPU, sensor, and main memory access and allocation for apps and developers, pertaining only to areas the app has permission for and currently has access control context for, to speed performance.

- Base is a minimum Linux machine, machine coded for performance, usable on both Android and any Linux machine. Proposal: higher, non-performance-critical code could be written in JavaScript as a minimum environment alternative to Linux, for the application side.

- A separate video editing, compositing, effects and grading package, configurable and supporting plugins and macros. It can start small and be expanded by plugins dynamically, to save storage space, with the app able to download pre-approved registered plugins as needed, which the user has selected to use but marked as store-online. Online plugins can cache on the phone, to allow large plugins to use less storage space.
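The expand-by-plugins idea above could work something like this minimal sketch. Everything here is hypothetical (the `PluginRegistry` class, the "sharpen" plugin, and the in-memory catalog standing in for an online store are all my own illustration, not any real phone API): plugins are registered without loading, fetched into a local cache on first use, and reused after that.

```python
import importlib.util
import os
import tempfile

class PluginRegistry:
    """Minimal sketch of a lazily loaded, locally cached plugin system."""

    def __init__(self, cache_dir=None):
        self.cache_dir = cache_dir or tempfile.mkdtemp()
        self.catalog = {}   # name -> source code (stands in for the online store)
        self.loaded = {}    # name -> already-imported module

    def register(self, name, source):
        """Register a plugin's source without loading it (marked store-online)."""
        self.catalog[name] = source

    def load(self, name):
        """Fetch, cache and import a plugin on first use; reuse after that."""
        if name in self.loaded:
            return self.loaded[name]
        path = os.path.join(self.cache_dir, name + ".py")
        if not os.path.exists(path):            # "download" into the local cache
            with open(path, "w") as f:
                f.write(self.catalog[name])
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        self.loaded[name] = module
        return module

# A tiny hypothetical "sharpen" plugin: brightens pixels by 10%, clamped to 255.
SHARPEN_SOURCE = """def apply(frame):
    return [min(255, int(p * 1.1)) for p in frame]
"""

registry = PluginRegistry()
registry.register("sharpen", SHARPEN_SOURCE)
plugin = registry.load("sharpen")
print(plugin.apply([100, 200, 250]))  # → [110, 220, 255]
```

The point of the design is that registration costs almost nothing; only plugins the user actually invokes ever consume cache storage, which is what lets a small base editor grow on demand.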

I was thinking of funding an adjustment of the camera2 API and an app like this, through third world micro contracting, making it extremely affordable, but I'm micro-trembling like crazy tonight, meaning the brain condition is playing up, which is a bad sign. Plus, I've been going through a severe time with some other new health related stuff this year, worthy of its own documentary. So, I don't think I can afford to program or even manage the project. If I could just lay out the APIs, the functionality (enough for somebody to construct a functional description) and the data set structure, it would be ok, but this now requires somebody else with a team to do. I'm best at that stuff, but the deterioration requires a high level of health to get there. I basically don't have time to research brain repair, which is one of those impossible things that may yet be possible, as usual, one day. The related research has probably already started coming through, though it will take up to decades to arrive locally.


---------------------


So, what did I find:

There are a number of gaming phones with high performance cooling systems, likely good GPUs, and close-to-the-metal GPU control systems, if only some of them would take 8k video camera performance seriously. I contacted one of them last year. However, most are a hodgepodge of compromises anyway.

With all these phones, I am not saying any of them are suitable or good; they are just testers.



Honorary mention:

The Lenovo Legion phone is pretty good on performance features, and could be a worthy phone for 8k camera use, except that it doesn't offer 8k recording, and has other issues.



An interesting phone:

The Nubia (ZTE) Redmagic 5G and 5s. Both these phones only have a useless 8k 15fps mode. The previous 3-series model was to have 8k 30fps but didn't. But who knows, maybe it is possible to get it to do 8k at 30fps with a 3rd party camera app. FreeDcam's video profile editor has been used that way before.

The phone has nifty cooling and a boost mode. No card slot, but two USB ports, although one is an accessory port and the accessory carries the USB. It has some sort of streamlined file system handling etc, and an editable game mode, which I would like to see if I could run a camera app in. There was to be "professional" level video recording, but that hasn't appeared as far as I know. So, closer than any I have seen, but only USB 3.0.



Others:

The LG v60 ThinQ dual-screen phone. Unfortunately, this is the end of this series of phones. I have seen a less than flattering comment on picture quality, similar to comments about other phones using the same 64mp Sony sensor, but you can't tell until somebody gets it and really tries to get a good result out of it, with third party camera apps that try to bypass the issues and present a better, more authentic picture, boosting the normally hopeless data rate and playing with the camera configuration editor, as with virtually all these phones when using the video camera. It uses UFS 2.1 rated internal flash storage, but most other things should be good. It has audio recording through the headphone jack, and from memory that was nice on the previous model.

Samsung has a range of s20's, but they are so expensive I have not tried to work out if any are suitable. I don't think they have even retained the previous data rates. At that price, you are getting into Pocket 4k/6k territory, and might be better off waiting for a Pocket 8k even.



Coming up:

The Nokia Pureview 9.3 is coming. No fancy camera setup, but who knows what the video camera functionality is going to be like. We are running out of options for something decent.


Google Pixel 5, 5a etc. Not much is known.



If you're just after a phone with nice camera function:

Have a look at the best Xiaomi flagship they have, or Huawei flagships.


Well, that was over 34 hours to write.
If you are not truthfully progressive, maybe you shouldn't say anything
Truthful side topics in-line with, or related to, the discussion accepted
Often people deceive themselves so much they do not understand, even when the truth is explained to them
Re: 8k Mobile reflections.

PostTue Aug 11, 2020 11:32 am

I contacted Nubia last year about doing better video modes. They later announced a better "Pro" video mode was coming, but that seemed to disappear. But I get about, promoting consumer-acceptable video modes, so that PR people can pick the ideas up and relay them to management (this works reasonably well; I've used it many times over the last two decades to advocate a number of large changes in the industry).

So, Samsung has announced their version of "Pro" video, for people shooting music videos etc with their phones, using AI and controlled zoom to get more professional video. So, a sort of victory, I suppose? However, we will have to wait to see how much more control and quality it offers. They haven't mentioned 8k at 25/30 or 50/60 fps, or 12 bit+ 4:2:2+. You'd want to turn all AI off, except for noise reduction and the image and level reconstruction that can be done with authenticity, rather than the misaligned enhancements you see on phones. It is a phone, but you need to finely balance the outcome for your family footage. Unfortunately, with the wrecking of 4:2:0, it's not really suitable for family and sports events either. How can an industry be so petty as to produce such a problem?

Now, the reality, as I've discussed in various places, is this. With wall sized TVs leading to viewing at 60-120 degree fields of view, 12-16 bit 4:2:2/4:4:4 8k high quality phone video camera modes are needed in a market of super premium TVs. All the normal phone video footage we have already shot will look not so good, to horrible, on such screens. We could expect a large market for 120 inch plus wall units in 5 years, even on the lower end. Your family wedding video shot on current phones will start to look iffy, even at 4k or 8k. At 120 degrees, 8k will look like 4k, or like fullhd at the 30 degrees people think of as the old normal. But 8k at the moment is strangled to death. The marketable property is: what will your footage look like tomorrow? Young people relate to the immersiveness of it all. You could literally sell them 12k+ 180 degree curved screen wedding videos, events and sports setups. I've literally been looking at schemes which act like a 180 degree plus dome: how do you display and film for that (mentioned elsewhere)? If you don't offer something better, your sales stall and drop off, so the continued growth of the company is in something authentically better: looking at what the future can be like, and at different authentic stratification of the market by affordability of the technologies. For instance, implementing a 180 degree dome-like 3D experience in a large space can cost a lot of money for the space, so that is for the rich or for commercial purposes. To do a 180 degree horizontal view for a family requires a dedicated theatre-like room, maybe even with a door built into the side of the display. That's good-job money. But a gaming room can be done for one or two people, even a pod; more normal people can afford that. I realised yesterday how to do a glasses-free holographic display. It's doable, NOW!
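The "at 120 degrees, 8k looks like fullhd at 30 degrees" point can be checked with simple angular-resolution arithmetic. A sketch, assuming standard horizontal pixel counts for each format:

```python
# Angular resolution: horizontal pixels spread across the field of view.
# At constant pixels-per-degree, a wider view needs proportionally more pixels.
def pixels_per_degree(h_pixels, fov_degrees):
    return h_pixels / fov_degrees

print(pixels_per_degree(1920, 30))   # FullHD over 30 degrees  -> 64.0 px/deg
print(pixels_per_degree(3840, 60))   # 4k over 60 degrees      -> 64.0 px/deg
print(pixels_per_degree(7680, 120))  # 8k over 120 degrees     -> 64.0 px/deg
```

So 8k spread across a 120 degree wall delivers only the same detail per degree of view as FullHD did at the traditional 30 degree viewing angle, which is why footage that looks fine today will look thin on immersive screens.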

I could give a presentation on how to use 16k or more, and how to market it. If BM produces a 16k fullframe camera, Samsung could immediately contact BM to film scenes with the prerelease prototype for a 16k version of their microled wall TV. At the same time, Grant could ask them about using their dual gain sensor technology. Samsung is currently trying to challenge Sony's dominance of the sensor market. They have advanced technology (and buy more), plus production facilities. So, they should be open to new business opportunities to supply sensors superior to those currently used. As a business person, I would then personally suggest exploring a partnership to offer services to develop pro camera software, codec and setup, and look at joint sensor development. They could then label and market it as an extra quality feature developed by us, with our brand on it, in much the same way as the lens companies did last decade. The advantage for BM is a reliable supply of cheap sensors. Once you push past the Alexa, you get to the point where you only need to upgrade sensors on the upper end models. The base sensor becomes good enough for the next 10 years.

Now, how affordable is a 16k 32:9 screen? I have a wall I could mount it on here, in a small American sized house, even if you have to put an openable door in the display. (That's not the only innovation I'm looking at: I also came up with a design proposal for a large earthquake resistant mounting system, for places like California, that can reset the panels after the quake, but I don't know where the plans for that are.) So, middle class people could afford this, or keen individuals.

So, there is a long way to go.

