@whollender,
@Paul:
I'll try to go into a bit more detail about my reasons, as there seem to be some misunderstandings.
I'm not an audio expert by trade, so parts of my understanding may be flawed or rely too much on "the engineers knew what they were doing". If you have a deeper understanding, please feel free to correct me.
This is not about going from "high" to "highest" in terms of audio quality, as Paul has suggested (and criticized, for understandable reasons). It is about getting a whole line of audio components to work at all, parts that are otherwise plainly unusable with the Teensy platform.
Let me introduce some functional concepts about the I2S microphones I've been referring to:
I2S microphones are tiny SMD parts meant to sit somewhere on the PCB of a multimedia / audio device and, when enabled, continuously sample audio with their internal ADC and push the resulting data stream to an MCU or codec. The designers went for a very streamlined interface: I2S data lines and
NOTHING else. Beyond the primitive ENABLE and LEFT/RIGHT pins, the only thing that can be controlled is the sampling frequency, via the I2S clocks.
One therefore has no control over any gain or sample depth setting of the ADC stage of an I2S microphone. This may also have led - though I'm not certain about this - to the follow-up decision by the audio designers to use a fixed 24-bit depth for the I2S data transfer. That is the crucial point behind my view that cutting bits most definitely has repercussions along the way.
I'd be happy if this assumption is wrong, because then things would get a lot easier on all fronts.
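To make the "nothing to configure" point concrete, here is a minimal sketch of what the software side looks like, assuming the standard Teensy Audio Library objects (AudioInputI2S, AudioAnalyzePeak). It is only meant to show that there is no control object for the mic at all; whether AudioInputI2S can actually make sense of the 24-in-32-bit frames is exactly the open question here.

```cpp
// Minimal sketch (assumed setup): an I2S MEMS mic has no control interface,
// so there is no codec-style control object -- you wire up BCLK/LRCLK plus
// the L/R select pin and a data stream simply appears. Gain and ADC bit
// depth of the mic cannot be touched from software.
#include <Audio.h>

AudioInputI2S    i2sMic;                    // raw I2S data from the mic
AudioAnalyzePeak peak;                      // just to see that samples arrive
AudioConnection  patch(i2sMic, 0, peak, 0); // left channel -> peak meter

void setup() {
  AudioMemory(8);
  Serial.begin(9600);
}

void loop() {
  if (peak.available()) {
    Serial.println(peak.read(), 4);         // 0.0 ... 1.0 of full scale
  }
  delay(250);
}
```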
Even after a lot of reading on this subject, I'm not exactly sure if 24 bits are really necessary, and if so, why.
The I2S mic in question* has 91 dB of "digital range" between its noise floor and +120 dB SPL, which is its full-scale digital output. That is obviously below the theoretical 96 dB that 16 bits give us (see the back-of-the-envelope numbers after the list below).
I can think of several possible reasons:
- 96 dB is only valid for a perfect ADC system; real converters have e.g. worse quantization errors, so going to 24 bits preserves quality in the end
- because of the small gap between 91 dB and 96 dB, the designers decided to move to 24 bits now rather than later, when it becomes absolutely necessary. (There are already microphones on the market with a +130 dB SPL full-scale input, which means 99 dB of digital range from the noise floor to full scale; in that application, using 16 bits would actually decrease the SNR!)
- is it perhaps useful to set the digital "no input" level below the real noise floor of the device?
- something else, financially or marketing motivated ("look, 24 bit! High quality!")
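Since several of the points above hinge on the dB arithmetic, here is a quick back-of-the-envelope check (plain C++, using only the numbers already quoted above; the ~6.02 dB per bit rule is the ideal quantization range and ignores dither and real ADC noise):

```cpp
#include <cmath>
#include <cstdio>

// Ideal quantization range of an N-bit ADC: 20*log10(2^N) ~= 6.02*N dB
// (the usual "96 dB for 16 bit" rule of thumb).
static double ideal_range_db(int bits) {
    return 20.0 * std::log10(std::pow(2.0, bits));
}

int main() {
    std::printf("16 bit: %.1f dB   24 bit: %.1f dB\n",
                ideal_range_db(16), ideal_range_db(24));   // ~96.3 dB / ~144.5 dB

    // ICS-43432 figures as quoted above: +120 dB SPL full scale, 91 dB usable
    // range -> its noise floor sits near 120 - 91 = 29 dB SPL, so 16 bits
    // would just barely cover it.
    std::printf("ICS-43432 noise floor: %.0f dB SPL\n", 120.0 - 91.0);

    // The "+130 dB SPL / 99 dB range" mic mentioned above already exceeds
    // what 16 bits can represent (99 dB > ~96 dB).
    std::printf("99 dB needed vs %.1f dB available in 16 bit\n", ideal_range_db(16));
    return 0;
}
```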
There is also the big aspect of "post-production": much like the raw format for digital photos, bit depths of 20 or 24 bits allow the audio data to be manipulated without having to clip especially high or low values. While displays and audio systems can't make use of the additional depth in the final reproduction, signal degradation in the processing steps in between can be avoided.
In my opinion, that may already be a partial answer to Paul's question of why >16 bit would be useful for the DSP system on the Teensy, but someone with more DSP / audio design experience is needed to answer that question properly.
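As a toy illustration of that headroom argument (plain C++ with made-up numbers, not Audio Library code): attenuate a sample heavily and bring it back up, once through a 16-bit intermediate and once through a wider one.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const int32_t in = 12345;            // some 16-bit-scale sample value

    // 16-bit pipeline: /64 truncates to 192, *64 gives 12288 -> detail is lost
    int16_t narrow = static_cast<int16_t>(in / 64);
    int32_t back16 = static_cast<int32_t>(narrow) * 64;

    // wider pipeline: keep 8 extra fractional bits around, nothing is lost here
    int32_t wide   = (in << 8) / 64;     // 24-bit-ish intermediate
    int32_t back32 = (wide * 64) >> 8;

    std::printf("original %d, via 16 bit %d, via wider intermediate %d\n",
                in, back16, back32);     // 12345, 12288, 12345
    return 0;
}
```

The same thing happens in reverse at the top end: a boost that would clip a 16-bit intermediate still fits into a wider one.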
A negative aspect of 24-bit I2S in practice:
Because 24-bit I2S data rides in a 32-bit subframe instead of 16 bits in a 16-bit subframe, the clocks (BCLK / MCLK) have to run noticeably faster, which makes routing and layout harder; the rough numbers are sketched below.
Again: I guess someone thought it was worth the effort, I just don't know if they were right.
(The same applies to the necessary bandwidth, if no compression / conversion is done.)
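For concreteness, the rough clock and bandwidth numbers behind that statement (plain C++, assuming a stereo frame at 44.1 kHz; BCLK = sample rate x slots per frame x bits per slot):

```cpp
#include <cstdio>

int main() {
    const double fs = 44100.0;                       // typical sample rate, Hz

    double bclk_16 = fs * 2 * 16;                    // 16-bit data in 16-bit slots
    double bclk_32 = fs * 2 * 32;                    // 24-bit data in 32-bit slots

    std::printf("BCLK 16-in-16: %.4f MHz\n", bclk_16 / 1e6);   // 1.4112 MHz
    std::printf("BCLK 24-in-32: %.4f MHz\n", bclk_32 / 1e6);   // 2.8224 MHz

    // Raw data bandwidth doubles as well if the 32-bit slots are stored as-is:
    std::printf("stereo stream: %.0f vs %.0f kB/s\n",
                fs * 2 * 2 / 1000.0, fs * 2 * 4 / 1000.0);     // ~176 vs ~353 kB/s
    return 0;
}
```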
Can you think of additional reasons why the audio designers went with 24 bit?
*(ICS-43432 by InvenSense, who bought Analog Devices' MEMS microphone branch in 2013)