Bug / limitation in the Audio library w.r.t. effect order?

Blackaddr

I have noticed a potential bug / limitation regarding the order in which audio effects (classes that inherit from AudioStream) are called.

I was seeing much greater latency than I should have, and the latency changed with the number of effects in the chain. I eventually narrowed the symptom down to the following sequence:

- Each AudioStream constructor appends its object to the end of the update list.
- The order objects appear in this list is therefore the order they are instantiated in the code, since that is the order the constructors run.
- The update_all() software ISR calls their individual update() functions in the order they appear in that list, not the order they are connected. For multichannel chains with mixers, the connections form a complicated graph that requires a precise traversal order (a simplified sketch of the mechanism follows).
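
Roughly speaking, the mechanism looks like this (a simplified paraphrase for illustration only, not the actual AudioStream.h source):

```cpp
// Simplified paraphrase of the behaviour described above, with made-up names;
// this is NOT the real AudioStream.h source. Each object links itself to the
// tail of a global singly-linked list when constructed; update_all() just
// walks that list.
struct Node {                             // stand-in for AudioStream
  Node *next_update = nullptr;
  virtual void update() = 0;
};

Node *first_update = nullptr;             // global head of the update list

void append_to_update_list(Node *obj) {   // what each constructor effectively does
  if (first_update == nullptr) {
    first_update = obj;
  } else {
    Node *p = first_update;
    while (p->next_update) p = p->next_update;
    p->next_update = obj;                 // always appended at the tail, so
  }                                       // creation order == update order
}

void update_all() {                       // called from the software ISR
  for (Node *p = first_update; p; p = p->next_update)
    p->update();                          // connection topology is never consulted
}
```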

Most people probably don't notice because they naturally declare their objects in the order they intend to connect them and don't have much multi-channel chaining going on. However, if you declare them in a different order than they will be connected, I suspect their update() functions will be called in the wrong sequence. For me, this oddly seemed to manifest itself as additional latency, but perhaps it's better to say that when connection order doesn't match declaration order, the results are undefined. I have never noticed obviously corrupted or wrong audio, but I do definitely see increased latency.
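
For concreteness, here is a minimal sketch of the kind of thing I mean, assuming a Teensy with the SGTL5000-based Audio Shield (the ~2.9 ms figure is one 128-sample block at 44.1 kHz):

```cpp
#include <Audio.h>

// Declaration order A: objects declared in signal-flow order, so each block
// reaches the output within the same update cycle.
AudioInputI2S        in;                   // declared first  -> updated first
AudioFilterBiquad    filt;                 // declared second -> updated second
AudioOutputI2S       out;                  // declared last   -> updated last
AudioConnection      c1(in, 0, filt, 0);
AudioConnection      c2(filt, 0, out, 0);
AudioControlSGTL5000 codec;

// Declaration order B (swap the first two declarations):
//   AudioFilterBiquad  filt;              // now updated BEFORE the input...
//   AudioInputI2S      in;
// ...so the block the input transmits is only picked up by the filter on the
// NEXT update cycle, adding one block (~2.9 ms) of latency per out-of-order
// hop and requiring extra blocks in AudioMemory().

void setup() {
  AudioMemory(12);
  codec.enable();
  codec.volume(0.5);
  filt.setLowpass(0, 4000, 0.707);
}

void loop() {}
```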

In my case, I have an advanced AudioStream class object that provides some dynamic signal chain changes, so I cannot simply instantiate in a fixed order that matches the audio processing order.
 
This is true, and documented at https://www.pjrc.com/teensy/td_libs_AudioConnection.html. As you noted in the code in AudioStream.h, every time a new object is created it's linked at the end of the update list, regardless of its connections. If its output is connected to an object that was created earlier, the transmitted audio block is only received on the next audio update cycle, resulting in an extra 2.9ms latency, and a requirement for an extra block in AudioMemory(). The Audio System Design Tool uses the screen layout to determine the creation order, so you can make a right muck of it if you deliberately misplace the objects!

But there is Good News... I've done some work on making the audio library dynamic, so you can freely create and destroy audio objects and connections using new and delete; as part of that, I've tried to ensure that objects get updated in connection order, not creation order. It's not always possible (if you put a loopback in your design), but I think it works OK in the majority of cases. The thread to discuss it is here, and contains links to my repos - you need to modify both the cores and the audio library. If you can adapt your "advanced AudioStream class object that provides some dynamic signal chain changes" then you may find you get better results. There are a few items to be aware of, happy to support you on this thread or preferably (if the answers may be useful to a wider audience), the other one.
 
Hey h4yn0nnym0u5e, thanks for the info and pointing to the docs so I know I'm not going crazy. I've worked with a couple other audio frameworks before (primarily JUCE) and they use graph traversal to figure out how to call the audio processor loops in the right order. I just naturally assumed Teensy Audio did something similar using info from the connection objects, so that's my bad for assuming that.

As for those threads, sounds like a bunch of us have generally similar goals. Unfortunately I cannot use a solution that involves new/delete, as memory fragmentation will be a real concern for me. My 'advanced AudioStream class' is a large x-point switch, kinda similar to what's discussed in that thread using mixers, but it looks like I will need to modify mine along with the AudioStream class to allow the x-point switch to be re-entrant.

In other words, as you follow the signal chain from first input to final output, you will pass through the x-point switch more than once, hence its update() must be re-entrant and only partially process the inputs that are ready. And of course the other effects' update() functions need to be called in the correct order according to the graph traversal.
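
For reference, here's a bare-bones skeleton of what such an AudioStream subclass looks like today, using the standard receiveReadOnly()/transmit()/release() API; the class name is made up, and the comments only mark where the re-entrant bookkeeping I'm describing would have to go (the stock library still calls update() exactly once per 128-sample cycle):

```cpp
#include <Audio.h>

// Illustrative 4-in / 4-out skeleton; plain pass-through for the sketch.
class AudioXPointSketch : public AudioStream {
public:
  AudioXPointSketch() : AudioStream(4, inputQueue) {}

  virtual void update(void) {
    for (int ch = 0; ch < 4; ch++) {
      audio_block_t *block = receiveReadOnly(ch);  // NULL if nothing has arrived
      if (block == NULL) continue;                 // a re-entrant version would note
                                                   // this input as still pending and
                                                   // come back to it on a later pass
      // ...apply the routing matrix and forward to the chosen output(s)...
      transmit(block, ch);                         // plain pass-through in this sketch
      release(block);
    }
  }

private:
  audio_block_t *inputQueue[4];
};
```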
 
Sorry, that was probably a red herring … you’re not obliged to use new and delete!

Recent Teensyduino releases have added dynamic connections, but these don’t affect update order. The dynamic library, however, does have a crude graph traversal algorithm, such that if object B is not already in the active update list, and it’s connected to object A that is, then B is linked into the update list before A if it’s a source for A, or after it if it’s the destination. Unless it’s subsequently completely disconnected, its position in the update list is then fixed.
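
In pseudo-code the rule amounts to something like this (a paraphrase on a hypothetical doubly-linked list, for illustration only; the dynamic library's real code differs):

```cpp
// Paraphrase of the insertion rule described above; names and data structure
// are made up for illustration.
struct Node {
  Node *prev = nullptr;
  Node *next = nullptr;
};

Node *head = nullptr;   // active update list

// Called when object b gains its first connection to an already-active object a.
// If b feeds a it must run earlier, so it goes immediately before a; if it
// consumes a's output it goes immediately after. Once placed, b stays put
// until it is completely disconnected.
void linkRelative(Node *b, Node *a, bool bIsSourceForA) {
  if (bIsSourceForA) {               // b -> a : insert b before a
    b->next = a;
    b->prev = a->prev;
    if (a->prev) a->prev->next = b; else head = b;
    a->prev = b;
  } else {                           // a -> b : insert b after a
    b->prev = a;
    b->next = a->next;
    if (a->next) a->next->prev = b;
    a->next = b;
  }
}
```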

Not being a mathematician I have no idea if this is guaranteed to give a reasonable execution order; as a pessimist, I suspect not. There is also no way round the fact that a cycle in the graph must result in added latency, because each active object must be updated exactly once on every update cycle.

It’s possible (hard to tell without a diagram of your proposed topology) that you could get your desired result by having N effects fed by N mixers which each have N (or N-1) inputs. There’s an N-input mixer in the dynamic audio library, which makes this easier, and mixers are very efficient for disconnected inputs or channels with 0 or 1 gain. I can’t figure out how many AudioConnection objects you’d need - I’d guess less than N^2.
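
For example, with N = 3 and the stock AudioMixer4 (the dynamic library's N-input mixer just lets the same idea scale further), the wiring might look like this; all the names are illustrative:

```cpp
#include <Audio.h>

// 3-effect crosspoint built from stock objects: every potential source is
// patched to every relevant mixer once, up front, and routing is changed at
// run time purely by setting mixer gains, so nothing is created or destroyed.
AudioInputI2S         in;
AudioMixer4           mixA, mixB, mixC;    // one input mixer per effect
AudioEffectFreeverb   fxA;
AudioFilterBiquad     fxB;
AudioEffectBitcrusher fxC;
AudioMixer4           outMix;
AudioOutputI2S        out;
AudioControlSGTL5000  codec;

// 16 patch cords in total for this particular 3-way layout.
AudioConnection p01(in,  0, mixA, 0), p02(in,  0, mixB, 0), p03(in,  0, mixC, 0);
AudioConnection p04(fxA, 0, mixB, 1), p05(fxA, 0, mixC, 1), p06(fxA, 0, outMix, 0);
AudioConnection p07(fxB, 0, mixA, 1), p08(fxB, 0, mixC, 2), p09(fxB, 0, outMix, 1);
AudioConnection p10(fxC, 0, mixA, 2), p11(fxC, 0, mixB, 2), p12(fxC, 0, outMix, 2);
AudioConnection p13(mixA, 0, fxA, 0), p14(mixB, 0, fxB, 0), p15(mixC, 0, fxC, 0);
AudioConnection p16(outMix, 0, out, 0);

void routeInToAToBToOut() {   // example "patch": in -> fxA -> fxB -> out
  mixA.gain(0, 1.0);  mixA.gain(1, 0);    mixA.gain(2, 0);     // fxA hears the input
  mixB.gain(0, 0);    mixB.gain(1, 1.0);  mixB.gain(2, 0);     // fxB hears fxA
  mixC.gain(0, 0);    mixC.gain(1, 0);    mixC.gain(2, 0);     // fxC muted
  outMix.gain(0, 0);  outMix.gain(1, 1.0); outMix.gain(2, 0);  // output hears fxB
}

void setup() {
  AudioMemory(20);
  codec.enable();
  codec.volume(0.5);
  routeInToAToBToOut();
}

void loop() {}
```

Of course, no single declaration order can match every possible routing of a crosspoint like this, so with the stock library some routes will still pick up the extra-block latency; the connection-order updating in the dynamic library is what helps there.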

You might need to be clever when re-patching, in order to get the optimal update flow…
 
I forgot to say ... another option in the dynamic library is for objects not on the update list not to be updated, so they don't consume CPU time. By default all existing objects do update, because there are some unexpected results that can occur if they don't, but it's potentially of use if you have multiple expensive objects available, e.g. reverb, ladder filter.
 