Inner workings of the audio library

Hello folks!

I'm wondering if there is a writeup about the inner workings of the audio library. I'm very interested in realtime audio programming and would therefore like to know how exactly it works.

Are there any resources or do I need to dig deep into the Audio library's source code?

Thank you ;)
 
To understand every little detail, you would have to dig into the source code. But before that, you should experiment with the audio design tool on the PJRC website. The explanations and details about every audio object in the right sidebar might give you a first rough understanding. Then there is the documentation about creating new objects for the audio library...

And still about real-time: The audio library does its processing in blocks of 128 samples which means that there is an inherent delay of at least 2.9ms.
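
For reference, the numbers work out like this (a quick sketch using the library's default constants):

Code:
  // one audio block at the default settings:
  // 128 samples / 44100 samples per second ≈ 2.9 ms of latency,
  // i.e. roughly 344 update() passes per second
  const float blockMs = 128.0f / 44100.0f * 1000.0f;  // ≈ 2.902
  const float updatesPerSec = 44100.0f / 128.0f;      // ≈ 344.5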
 
Is the library designed in a DSP manner? I guess so, because that's what the objects represent, right? Modules you can connect.

I think"real time dsp" is never real time right? You are always feeding data into a circular buffer that is then read from the output. If I would like to understand the audio guts would it need me to just look at the Audio library or do I need anything else? I guess the audio library is just triggered in a while loop. Do you know at which frequency?
 
"realtime" depends on the definition of "realtime".. with 44khz samplerate: What is realtime? If it's a defined time to react to external event... sure.. if it means "fast"...sure..but what is fast? :)
Excuse my philosophical excursion :rolleyes:

The update interval is the time it takes to play 128 samples (the default block size) at 44.1 kHz (the default sample rate).
Take a look at the sources. Begin with a very simple object.
 
Is the library designed in a DSP manner? I guess so, because that's what the objects represent, right? Modules you can connect.

I think"real time dsp" is never real time right? You are always feeding data into a circular buffer that is then read from the output. If I would like to understand the audio guts would it need me to just look at the Audio library or do I need anything else? I guess the audio library is just triggered in a while loop. Do you know at which frequency?

There is no while loop. It’s all about objects and update responsibilities. The output objects consume data at a fixed timing, the sample rate. As one buffer is consumed, an update request is triggered at the preceding object in the chain to fill the buffer up again. Thus, everything is rather event driven, following a request-and-update strategy.
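
To make that concrete, here is a minimal sketch of what a custom object looks like, following the pattern from the PJRC "creating new objects" documentation (the class name is made up; receiveReadOnly(), transmit() and release() are the real AudioStream calls):

Code:
  #include <AudioStream.h>
  
  // pass-through object: the library calls update() once per 128-sample block
  class AudioEffectPassthrough : public AudioStream
  {
  public:
    AudioEffectPassthrough() : AudioStream(1, inputQueueArray) {}
    virtual void update(void)
    {
      audio_block_t *block = receiveReadOnly(0); // grab the incoming block
      if (!block) return;                        // nothing to do this cycle
      transmit(block, 0);                        // hand it to the next object
      release(block);                            // return the block to the pool
    }
  private:
    audio_block_t *inputQueueArray[1];
  };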
 
"realtime" depends on the definition of "realtime".. with 44khz samplerate: What is realtime? If it's a defined time to react to external event... sure.. if it means "fast"...sure..but what is fast? :)
Excuse my philosophical excursion :rolleyes:

Internally: there is a global instance which calls update() for every object. The inputs and outputs have additional timers (ISRs) and DMA.

While you take a philosophical approach, I’ll take a strictly academic one: to me, real time means that any input change has an immediate effect on the very next output sample.
 
In this case: no, not real time. A change has immediate effect on the next played 128-sample block, but not on the next sample.
It's possible to reduce the block size to 16 samples (which increases the overhead slightly).
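
The block size is a compile-time constant in the core, so changing it means editing AudioStream.h (AUDIO_BLOCK_SAMPLES is the real constant; the math is just for comparison):

Code:
  // in cores/teensy3/AudioStream.h (default is 128):
  #define AUDIO_BLOCK_SAMPLES 16
  // latency per block: 16 / 44100 ≈ 0.36 ms,
  // but update() then runs ~8x as often (more call overhead)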
 
There is no while loop. It’s all about objects and update responsibilities. The output objects consume data at a fixed timing, the sample rate. As one buffer is consumed, an update request is triggered at the preceding object in the chain to fill the buffer up again. Thus, everything is rather event driven, following a request-and-update strategy.

What? The processor is executing commands, one after another, right? There has to be some "update audio" hook that is called every cycle... or not?
This is an unconventional approach, right? I've never seen... well, I have not seen much, but I always thought that there is something repeatedly going on, hierarchically very deep. And from there e.g. a sine is processed and written into a circular buffer, and something else just reads from this buffer. Any resources so I could teach myself about that subject? Low-level wise?

And which .cpp file in the audio library starts everything?
 
Arduino\hardware\teensy\avr\cores\teensy3\AudioStream.cpp in the core - not library.

But you don't need to know it in that much detail - for the beginning, it's sufficient to look at a simple audio object, play_memory for example.
 
Arduino\hardware\teensy\avr\cores\teensy3\AudioStream.cpp in the core - not library.

But you don't need to know it in that much detail - for the beginning, it's sufficient to look at a simple audio object, play_memory for example.

Yeah, I know. I already messed around with the GUI audio design tool too. It's super easy to glue some parts together. However, I always have the urge to understand how the glue and the parts themselves work ;)
 
Does anybody know how I would do wave sequencing synthesis with the audio library? Sequencing parts of waveforms in a row that are then played as a whole... The old Nintendo worked like this :D
 
Do you mean playing parts of predefined waveforms in memory?
I think the play_memory object can be modified to do this.
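
For reference, the stock object plays a complete sound from a flash array; playing only part of a waveform would need a modified copy of it. Basic usage looks like this (AudioSampleSnare is a made-up name for a wav2sketch-converted sample, playMem1 an AudioPlayMemory object):

Code:
  #include "AudioSampleSnare.h"  // header generated by the wav2sketch tool
  
  playMem1.play(AudioSampleSnare);  // start playback of the whole sample
  // playing only a slice would require modifying play_memory's update()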
 
You can easily get an effect similar to this diagram from 3:18 in the video.

[attached image: wave sequencing diagram from the video]

To accomplish the cross fading, you'd use a design similar to this:

[attached image: audio design tool layout with sample/waveform players, fade objects and a mixer]

In your code, you'd configure those sample players or waveform objects to create the short sounds you want.

Set all the mixer channels to 1.0 gain, since you'll be using the faders to attenuate the signals so they add up to at most 1.0 when mixed.
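
For example (mixer1 being an assumed name for the AudioMixer4 in the design above):

Code:
  mixer1.gain(0, 1.0);  // channel fed by fade1
  mixer1.gain(1, 1.0);  // channel fed by fade2
  mixer1.gain(2, 1.0);  // channel fed by fade3
  mixer1.gain(3, 1.0);  // channel fed by fade4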

Then when you want to cross fade between them with a 40 millisecond transition time, you'd use code like this:

Code:
  AudioNoInterrupts();  // freeze audio updates so both changes apply together
  fade1.fadeOut(40);    // fade the old sound out over 40 ms
  fade3.fadeIn(40);     // fade the new sound in over the same 40 ms
  AudioInterrupts();    // resume updates - both fades start on the same block

The important step here is AudioNoInterrupts() surrounding the setting of both fade objects, so they will be certain to begin their transitions at the same instant.

If you want the waveform to start at a precise point within its cycle, you'd also include that code in the section protected by AudioNoInterrupts(), so the fading-in waveform's synthesis begins at exactly the desired phase angle (0 to 359.999 degrees) as the fade-in begins.

Code:
  AudioNoInterrupts();
  fade1.fadeOut(40);
  waveform1.phase(180.0);  // restart the waveform at 180 degrees
  fade3.fadeIn(40);
  AudioInterrupts();       // fades and phase change take effect together

Of course, you can also configure any of the 9 waveforms and their various settings. If using a modulated waveform, you can configure it and the waveform that's modulating its frequency, phase or shape (if pulse or variable triangle), so you have control over exactly how the waveform begins at the moment the 40 ms (or whatever time you configure) cross fade begins.
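
As one possible configuration (waveform1 again being an assumed name from the design):

Code:
  waveform1.begin(0.8, 220.0, WAVEFORM_TRIANGLE_VARIABLE); // amplitude, Hz, shape
  waveform1.pulseWidth(0.25);  // shape control for pulse / variable triangle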
 
Having said all that, I believe it's important to emphasize how tedious and painstaking this sort of sound design can be. The video mentions 5 weeks. If you do manage to create something this way that sounds good, I really hope you'll share.
 
Thank you Paul, this information is extremely helpful. We'll see if I can come up with something interesting. My plan was to build a small synth that is capable of a few different sound synthesis methods. Mostly just for learning.

I still have one question though. Can I somehow specify for how many cycles a wave should be played before being faded out? Or should I just calculate the play time from the wave's frequency, e.g. the time it needs to pass 3 cycles?

Anyway thank you for your information.
 
Can I somehow specify how many cycles a wave should be played before faded out?

No, not directly.

Using this method, you would wait (by delay or elapsedMillis or other ways of measuring elapsed time) and then call the fadeOut(ms) function at the right moment.
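
A rough sketch of that timing approach (freq and the 3-cycle count are example values; elapsedMillis is the standard Teensy helper, fade1 as in the design above):

Code:
  elapsedMillis cycleTimer;
  const float freq = 440.0;
  const unsigned int playMs = (unsigned int)(3 * 1000.0 / freq); // 3 cycles ≈ 7 ms
  
  // reset cycleTimer = 0 when the wave starts, then in loop():
  if (cycleTimer >= playMs) {
    fade1.fadeOut(40);
  }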

If you want it all to happen automatically, you could configure the envelope object to have a zero (silent) sustain level, and use the attack, hold and decay times to implement your intended fade in and out.
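
A minimal sketch of that idea, assuming an AudioEffectEnvelope named envelope1 in the signal path:

Code:
  envelope1.attack(40);    // fade in over 40 ms
  envelope1.hold(100);     // hold full level for the cycles you want
  envelope1.decay(40);     // fade out over 40 ms...
  envelope1.sustain(0.0);  // ...down to silence, no noteOff() needed
  envelope1.noteOn();      // start the automatic fade in / hold / fade out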
 
Alright. I find this stuff super interesting. Are there things that are not, or hardly, achievable with this library? Not really, right? Since we can create custom objects too.
Are all audio libraries designed like that? It must be extremely tedious to come up with something complex like that, which gives you maximum functionality and freedom without having to redesign everything, so it can really fit most needs.

And how does AudioNoInterrupts() work? I guess there is no multithreading, right? So how can it perform two actions simultaneously?
The documentation says it disables the audio library's update interrupt. Whatever that means. Why would you interrupt the update step of the audio library?

Isn't there something like:

Code:
  while (teensy) {
    // handle events
    // process libraries (e.g. audio)
  }
 