Low level I2S?

wareya

New member
I made an equalizer with a Teensy 4.1 and the Audio Shield, but I ran into problems with the Teensy Audio Library because I can't change the buffer size or frequency without things breaking. I want to interface with the ADC/DAC directly instead of going through the audio library, but I can't find any documentation on how to do so, or even how to do low-level I2S at all.

The output_i2s.h etc. headers all depend on the audio library, and expose hardware control as an abstraction on top of AudioStream. Looking at output_i2s.cpp, it seems to be doing a lot of hardware-specific work rather than just implementing the I2S protocol. That makes sense, given that it's a very high-level abstraction, but it means it's difficult for me to use as guidance on where to start with low-level I2S.

After configuring the codecs, I want to manually get a block of sample data from the hardware input, process it arbitrarily, and then manually write it back to the hardware output, in a loop. I don't want to patch streams together etc.

Where do I go for resources on this?
 
If your only goal is to change the sample frequency and buffer size, many people have done that successfully. It's been discussed here many times. Maybe with some search you could find those threads?

If you want to craft your own program which processes audio in blocks without using AudioStream, you've already found the main resource, which of course is the existing code. The other resource would be the reference manual, which is really only helpful if you want to dive into the hardware register details.

After configuring the codecs, I want to manually get a block of sample data from the hardware input, process it arbitrarily, and then manually write it back to the hardware output, in a loop. I don't want to patch streams together etc.

I don't quite understand what you mean by "manually", but maybe some explanation about how the I2S hardware works might help?

I2S is a continuous streaming protocol. It's not like I2C or SPI, where communication happens in a burst with the timing under your program's direct control. With I2S, the hardware receives and/or transmits continuously at the audio sample rate. So your program needs to always take the received data at the rate it comes, and always have data ready for the transmitter to use.

Normally this is done with DMA, where the incoming data is written directly into buffers by the DMA hardware, and buffers of data you've already prepared are automatically copied into the transmitter as it needs more audio data to send. The DMA controller gives an interrupt when it has completed half of the buffer. The interrupt code must take care of the half of the buffer which was just used up, while the DMA and I2S continue using the other half.

If you want to do this yourself, you would probably create your own ISR code. Look for "dma.attachInterrupt(isr)" inside the audio library. This is where you would cause the interrupt to run your function rather than the normal ISR code which interacts with AudioStream buffers. You'll probably want to start with the known-good code, since there are some thorny details like flushing the CPU cache.

What you'll really gain from all this effort is difficult to say. The existing system is quite efficient, so it's hard to imagine you'll gain much practical benefit. Maybe the experience of just learning to do it will be worthwhile?
 
If your only goal is to change the sample frequency and buffer size, many people have done that successfully. It's been discussed here many times. Maybe with some search you could find those threads?
I've poked around on here for a while and given it a shot, but whenever I change the relevant defines, the audio I'm passing through gets distorted, even when I try the other changes people have said made it work.
If you want to do this yourself, you would probably create your own ISR code. Look for "dma.attachInterrupt(isr)" inside the audio library. This is where you would cause the interrupt to run your function rather than the normal ISR code which interacts with AudioStream buffers. You'll probably want to start with the known-good code, since there are some thorny details like flushing the CPU cache.

What you'll really gain from all this effort is difficult to say. The existing system is quite efficient, so it's hard to imagine you'll gain much practical benefit. Maybe the experience of just learning to do it will be worthwhile?
I think this is what I need to look at, thank you! Yes, this is partly a learning thing for me; I could just live with the 3~6ms added latency I currently have, but I want to try to get it lower, even if it means setting up something this complicated.
 
but whenever I change the relevant defines, the audio I'm passing through gets distorted

We can help you more if you show what you've tried that didn't work. There are really two levels of help: the first where we can at least see your code, and the second where enough info is given to actually reproduce the problem. Without even being able to see it, the best I can tell you is that this sort of thing is possible and other people have made it work. Changing the block size in particular involves just editing one define. Most of the audio library will work with a block size down to only 16 samples, though some features like SD and FFT have hard-coded dependencies on the block size. Again, better help is possible when we can at least see what you're doing, and especially when anyone can reproduce the problem.
 
My code is here: https://github.com/wareya/peacemade_eq/ - it runs on an otherwise-unmodified Teensy 4.1 attached to a Teensy Audio Shield (rev D2 I think?)

The audio library classes that I use are AudioInputUSB, AudioInputI2S, AudioAmplifier, AudioOutputI2S, and AudioOutputUSB. I don't think any of these are the SD or FFT features. The two input classes are turned on or off depending on a runtime setting (so exactly one is active at a time), but everything always outputs to both I2S and USB. I also have a custom AudioCustomBiquad class, which does use AUDIO_BLOCK_SAMPLES.

When I define AUDIO_BLOCK_SAMPLES as 64, I get a square wave sound in one audio channel and distortion on the audio that gets passed through. If I record the audio being sent over USB output in Audacity, it sounds the same as when I listen to it over headphones (so, the audio that was being sent over the I2S output). I've attached that recording.

The problem doesn't appear when I direct the input immediately into the output, nor when I dummy out most of my EQ filtering code in BiquadData's `apply` function. But if the problem is in my code specifically, I can't find it. I don't use any magic numbers or anything. Maybe I have some weird UB going on? Or maybe something somewhere is accidentally thrashing the internal state of my filters between buffer updates?
 

Attachments

  • teensy recording.zip
    1.3 MB
Thanks! I've solved it, so I'll note the two things that kept me from figuring this out until now:

1) back when I originally wrote this, I was purely using USB audio; when I changed the relevant define in the header, it broke, since the USB audio connections don't support other block sizes

2) when I tried different block sizes again recently, I was using analog audio, but I forgot that the header had to be edited directly and only redefined it in my project's main .ino file, which didn't work because the audio library's .c code never sees that redefinition
 