I reviewed the handling of 24-bit I2S microphones (which I use to capture ambient noise), and I believe the automatic decimation to 16 bits as coded will work fine. However, I ran into another mental snag while doing system analysis.
I also have an array microphone with I2S support. The issue is that it wants to be the I2S master while also appearing as a 24-bit I2S source (like the microphones above).
As a note, the only input/output devices I'm implementing are: an I2S slave in/out to the array mic; an I2S master R/L input for the two I2S microphones; a DAC audio out and an ADC audio in (i.e., a pseudo-codec). AGC and VOX are implemented as intermediate audio DSP processes. That's it.
I need to know whether I have to do anything special to let that I2S channel carry audio decimated to 16 bits AND let it act as the sole framing master for all of the rest of the audio I/O.