Kuba0040
Well-known member
Hello,
This is something I’ve been wondering about recently. I am coding my own software synth from scratch on the Teensy 4.0. Why not use the Audio library? Because I need something that I can easily add my own modules to and that runs faster. During development I noticed something.
Observations:
So far, in my implementation each module (e.g. filters, mixers) processes audio one sample at a time. There’s an interrupt at 44.1 kHz, and each time it fires, every module performs all its processing on a single audio sample, which is then sent to the DAC. But this isn’t how digital synthesis is usually done. The Teensy Audio Library, Mitxela’s MIDI FM synth cable, etc. process data in chunks of, for example, 200 samples. Why is that? It didn’t seem to make any sense to me, since that approach is considerably harder to program. I thought that maybe some audio effects are just much easier to accomplish when you can look at more than one past sample, but I couldn’t think of any.
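To make the per-sample approach concrete, here is a minimal sketch of what such an interrupt-driven chain might look like. The module names (`Oscillator`, `Filter`), the chain itself, and `sampleISR` are illustrative assumptions, not code from any real library:

```cpp
#include <cstdint>

// Hypothetical one-sample-at-a-time modules, as described above.
struct Oscillator {
    float phase = 0.0f;
    float inc = 440.0f / 44100.0f;     // 440 Hz at a 44.1 kHz sample rate
    float next() {                      // produces exactly one sample per call
        phase += inc;
        if (phase >= 1.0f) phase -= 1.0f;
        return phase * 2.0f - 1.0f;     // naive sawtooth in [-1, 1)
    }
};

struct Filter {                          // one-pole lowpass
    float state = 0.0f;
    float a = 0.1f;                      // smoothing coefficient
    float next(float in) {               // processes exactly one sample per call
        state += a * (in - state);
        return state;
    }
};

Oscillator osc;
Filter filt;
volatile float dacOut;                   // stand-in for the DAC register

// What a 44.1 kHz timer ISR would do in this design: push a single
// sample through every module, then write it out. Note that every
// module boundary costs one call/return per sample.
void sampleISR() {
    dacOut = filt.next(osc.next());
}
```

The key point for the discussion below is that each module boundary costs one function call and return for every single sample.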
Experiments:
In my tests I’ve noticed that entering and leaving functions takes noticeable CPU time, around 4 cycles, which makes sense, as the CPU must push data onto and pop it off the stack. What if audio is processed in large chunks so that we minimize the time wasted entering and leaving functions by staying inside each one for longer? Is that the reason? The only software synth I know of that processes its audio one sample at a time is the Mozzi library. But it’s written for the AVR architecture, where there’s little to no memory (so chunks would be expensive) and no DMA. I thought DMA could have a role in this as well, but it mainly deals with I/O, so that’s unlikely.
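For comparison, here is a block-based sketch of the same two-module chain. Each module processes a whole buffer per call, so the call/return overhead is paid once per block instead of once per sample. The names, the 128-sample block size, and `fillBlock` are illustrative assumptions, not taken from any particular library:

```cpp
#include <cstddef>

constexpr size_t kBlock = 128;           // example block size (assumption)

struct BlockOscillator {
    float phase = 0.0f;
    float inc = 440.0f / 44100.0f;
    void fill(float *out, size_t n) {    // one call fills n samples
        for (size_t i = 0; i < n; ++i) {
            phase += inc;
            if (phase >= 1.0f) phase -= 1.0f;
            out[i] = phase * 2.0f - 1.0f;
        }
    }
};

struct BlockFilter {                      // one-pole lowpass, in place
    float state = 0.0f;
    float a = 0.1f;
    void process(float *buf, size_t n) {  // one call filters n samples
        for (size_t i = 0; i < n; ++i) {
            state += a * (buf[i] - state);
            buf[i] = state;
        }
    }
};

BlockOscillator osc;
BlockFilter filt;
float buffer[kBlock];

// Called once per block (e.g. from a DMA half-transfer interrupt):
// 2 function calls per 128 samples instead of 256, and the tight inner
// loops give the compiler room to keep state in registers and vectorize.
void fillBlock() {
    osc.fill(buffer, kBlock);
    filt.process(buffer, kBlock);
}
```

Besides amortizing call overhead, the inner loops in a block design let the compiler hoist member loads into registers and potentially auto-vectorize, which is usually a bigger win than the 4-cycle call cost alone.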
My question is: is there really a big performance benefit to processing data in chunks, and is DMA involved in moving the data around, beyond just the final output to the DAC?
Thank you for the help.