I2S microphones are tiny SMD parts meant to sit somewhere on the PCB of a multimedia / audio device and, when enabled, continuously record audio samples with their internal ADC and push the resulting data stream to an MCU or CODEC. The microphone designers decided it would be wise to use a very streamlined set of data lines: I2S only, with NO other means of controlling any sample settings beyond the primitive ENABLE and LEFT/RIGHT pins. The sampling frequency is the one parameter that remains controllable, via the I2S clocks.
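To make that last point concrete, here is a quick sketch of the usual clock/sample-rate relationship (assuming standard Philips I2S framing with two channel slots per frame; the function name and the 32-bit slot width are my own illustration, not anything from a specific datasheet):

```python
# Relationship between the I2S bit clock and the resulting sample rate.
# Assumption: standard Philips I2S framing, one slot per channel, stereo frame.
def i2s_sample_rate(bclk_hz: float, bits_per_slot: int = 32, channels: int = 2) -> float:
    """Sample rate implied by a given bit clock.

    BCLK = Fs * bits_per_slot * channels,
    so    Fs = BCLK / (bits_per_slot * channels).
    """
    return bclk_hz / (bits_per_slot * channels)

# e.g. a 3.072 MHz bit clock with 32-bit slots, stereo:
print(i2s_sample_rate(3_072_000))  # 48000.0
```

So the only "knob" the host has is the clock frequency it drives; everything else about the conversion is baked into the mic.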
One therefore has no control over any gain or sample-depth setting of the ADC stage of an I2S microphone. This may also have led (though I'm not confident about this) to the follow-up decision by the audio designers to use a fixed 24-bit depth for the I2S data transfer. This is the crucial point behind my conclusion that cutting bits most definitely has repercussions somewhere along the way.
I'd be happy if this assumption is wrong, because then things get a lot easier on all fronts.
Even after a lot of reading on this subject, I'm not exactly sure if 24 bits are really necessary, and if so, why.
The I2S mic in question* has 91 dB of "digital range" between its noise floor and +120 dB SPL...
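For context, the textbook rule of thumb for an ideal N-bit quantizer is roughly 6.02 dB of dynamic range per bit (plus 1.76 dB for a full-scale sine). A quick calculation of how the mic's stated 91 dB stacks up against common bit depths (this is my own back-of-the-envelope reasoning, not anything from the datasheet):

```python
def ideal_dynamic_range_db(bits: int) -> float:
    # Theoretical SNR of an ideal N-bit quantizer for a full-scale sine wave:
    # 6.02 * N + 1.76 dB
    return 6.02 * bits + 1.76

for bits in (16, 18, 24):
    print(f"{bits} bits -> {ideal_dynamic_range_db(bits):.2f} dB")
# 16 bits -> 98.08 dB
# 18 bits -> 110.12 dB
# 24 bits -> 146.24 dB
```

If this reasoning holds, the mic's 91 dB range fits inside 16 bits only with a few dB to spare, while 24 bits leave a huge margin; that margin (plus headroom for digital gain downstream) may be exactly why the designers fixed the transfer at 24 bits.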