It would be a very good selling argument if the second series had better gain and a lower noise floor, but based on the technical data this is not the case: they both have the same equivalent input noise. This is the 10 II

[attachment: 10 II.png]
and the 10T.

[attachment: 10T.png]
A bit surprising that they both have a 32-bit ADC. Maybe they are utilising just 24 bits of that on the 10T?
PS. It would be nice if someone would actually explain this data in understandable terms, without any sales speech and "dual 32-bit ADC floating ±700 dB" Hambo Mambo.
The way I imagine it works is:
- In the second series they have 2 ADCs per channel, each similar to the single one in the first series.
- One of them works at lower gain and the other at higher gain; it would be nice to know these gains.
- If the higher-gain one clips, they automatically use the data from the lower-gain ADC.
- In the second series the data from the two ADCs is converted to 32 bits, and for some reason to float, maybe to avoid data rounding errors?
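The selection logic in the list above could be sketched roughly like this. Note that the gain values are pure assumptions (the manufacturer does not publish them), and the real device presumably does something more refined than a hard per-sample switch:

```python
# Rough sketch of a dual-ADC sample selector (hypothetical gains).

LOW_GAIN = 1.0      # low-gain path: handles loud signals without clipping
HIGH_GAIN = 32.0    # high-gain path: ~30 dB more sensitive (assumed)
FULL_SCALE = 1.0    # ADC full scale in normalized units

def combine(sample_low: float, sample_high: float) -> float:
    """Return one 32-bit-float output sample from the two ADC readings.

    If the high-gain ADC clipped, fall back to the low-gain reading;
    both paths are scaled back to the same input-referred level.
    """
    if abs(sample_high) >= FULL_SCALE:   # high-gain path clipped
        return sample_low / LOW_GAIN     # use the low-gain data
    return sample_high / HIGH_GAIN       # otherwise prefer high gain

# Quiet signal: the high-gain path is used, giving better resolution.
print(combine(0.001, 0.032))   # -> 0.001
# Loud signal: the high-gain path clips, so the low-gain path is used.
print(combine(0.8, 1.0))       # -> 0.8
```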
Basically, as I understand it, the noise floor remains just the same, but low-level signals have much better resolution. The only real advantage is this: better resolution on low-level signals, which makes it possible to bring back all the details even when the gain is set so low that the second ADC will never clip.
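To illustrate the resolution point with a toy comparison (not device-specific): a signal roughly 120 dB below full scale lands on only a handful of 24-bit fixed-point steps, while a 32-bit float keeps about seven significant digits at any level:

```python
import struct

def quantize_fixed24(x: float) -> float:
    """Round x to the nearest 24-bit fixed-point step (full scale = 1.0)."""
    step = 1.0 / (1 << 23)               # quantization step is 2^-23
    return round(x / step) * step

def roundtrip_float32(x: float) -> float:
    """Pass x through a 32-bit float representation."""
    return struct.unpack('f', struct.pack('f', x))[0]

quiet = 1e-6                             # roughly 120 dB below full scale
fixed = quantize_fixed24(quiet)
f32 = roundtrip_float32(quiet)

print(abs(fixed - quiet) / quiet)        # relative error around 5%
print(abs(f32 - quiet) / quiet)          # relative error well below 1e-6
```

Of course, on a real recorder the analog noise floor sits above these quantization errors anyway, which is exactly why the noise floor, not the bit depth, stays the limiting factor.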
The disadvantage of the 2-ADC approach would be that, while ideally it works great, in real life there are surely problems in matching and combining the signal perfectly from two ADCs into one 32-bit value. Temperature changes, ageing of the device etc. could cause problems. This is probably the area where the innovations are. The lack of explanations in this area is the reason I decided to go with a first-generation device for now.
With the first-generation device it should be possible to get just the same level of detail at low signal levels. It is just that there is then a risk that high input levels would cause clipping. The gain needs to be adjusted correctly, as there is no second ADC automatically backing it up.
The 120 dB vs. 142 dB dynamic range comes from the fact that on the second-generation device it is possible to have high signal levels and low-level details at the same time*1. On the first-generation device it is possible to have both just as good, but not at the same time. With the first-generation device one needs to choose between high levels and low-level details by adjusting the gain.
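The numbers themselves are easy to check. Reading the 22 dB gap as the gain offset between the two ADC paths is my own interpretation, not something from the datasheet:

```python
import math

def db(ratio: float) -> float:
    """Amplitude ratio expressed in decibels."""
    return 20 * math.log10(ratio)

# Theoretical quantization-limited range of an ideal 24-bit converter:
print(round(db(2 ** 24), 1))   # 144.5 dB (analog noise keeps real
                               # devices well below this)

# Spec'd figures at 10 dB gain, from the datasheets:
gen1, gen2 = 120.0, 142.0
# If the extra range comes from the second, lower-gain ADC path,
# the implied gain offset between the two paths would be:
print(gen2 - gen1)             # 22.0 dB (assumed interpretation)
```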
It would probably be a good idea to adjust the gain on the second-generation device too, so that there is no constant switching between ADC1 and ADC2, i.e. so that the recording would happen with ADC1 except when the sound unexpectedly gets very loud. I do not know, maybe the switching between ADC1 and ADC2 is so good that it does not affect the sound in any way under any conditions; that would be awesome.
*1 This is not exactly true if it works as I assume, i.e. that a single 32-bit sample can hold data from only one ADC: if the high-gain ADC is clipping, the data comes only from the low-gain ADC, and if it is not clipping, the data comes only from the high-gain one. The same goes for the 142 dB dynamic range: a single second-generation sample has only 120 dB of dynamic range. However, as subsequent samples can have different gains, in practice I agree with their marketing. Note, though, that the dynamic ranges are given at 10 dB gain. At higher gain I would assume the dynamic range difference to be smaller (gen. 1 vs. II).
This is just how I am trying to understand it. Please correct me, as I really would like to understand it; that much is evident from how many times I have edited this since I started thinking about it.