High level question about block processing in Audio library


hemmer
Hi there,

I'd like to understand, at a high level, how block processing happens when the audio "graph" contains loops, as in the case below (waveformMod1 gets FM from waveformMod2, and waveformMod2 gets PWM from waveformMod1). Given that the Audio library processes AUDIO_BLOCK_SAMPLES at a time, I see the following options:

  1. Always use data that lags by AUDIO_BLOCK_SAMPLES (each waveform uses the "cached" result of the previous block's processing).
  2. Break the cycle, arbitrarily or by some set of rules, and process (say) waveformMod1 first, then use the result for waveformMod2.
  3. Something else?

If it's option 2, are there some rules I could learn?

Code:
#include <Audio.h>
#include <Wire.h>
#include <SPI.h>
#include <SD.h>
#include <SerialFlash.h>

// GUItool: begin automatically generated code
AudioSynthWaveformModulated waveformMod2;   //xy=491.20001220703125,327
AudioSynthWaveformModulated waveformMod1;   //xy=495.20001220703125,416
AudioOutputUSB           usb1;           //xy=772.2000122070312,391
AudioOutputI2S           i2s1;           //xy=772.2000122070312,444
AudioConnection          patchCord1(waveformMod2, 0, waveformMod1, 0);
AudioConnection          patchCord2(waveformMod2, 0, usb1, 0);
AudioConnection          patchCord3(waveformMod1, 0, usb1, 1);
AudioConnection          patchCord4(waveformMod1, 0, waveformMod2, 1);
// GUItool: end automatically generated code

// the setup routine runs once when you press reset:
void setup() {

  AudioMemory(256);

  float masterVolume = 1.0;  // amplitude() expects 0.0 to 1.0
  waveformMod1.begin(WAVEFORM_SQUARE);
  waveformMod2.begin(WAVEFORM_PULSE);
  waveformMod1.offset(1);    // DC offset, range -1.0 to 1.0
  waveformMod1.amplitude(masterVolume);
  waveformMod2.amplitude(masterVolume);
}

// the loop routine runs over and over again forever:
void loop() {
  float knob_1 = 0.2;  // fixed values standing in for pot readings
  float knob_2 = 0.8;
  float pitch1 = pow(knob_1, 2);
  // float pitch2 = pow(knob_2, 2);
  waveformMod1.frequency(10 + (pitch1 * 50));
  waveformMod2.frequency(10 + (knob_2 * 200));
  waveformMod1.frequencyModulation(knob_2 * 8 + 3);  // FM depth in octaves
}


[graph.jpg: Audio System Design Tool diagram of the patch above]
 
It's extremely simple: the audio objects are run in the order they are declared, once each. No dependency analysis is currently done, so if you declare a destination before its source, that object gets the block from the last run (typically a 2.9 ms latency hit, i.e. 128 samples at 44.1 kHz). If you have a linear dataflow and declare the objects in that same order, no unnecessary latency happens. Of course, any cycle always carries at least one block of latency.

Currently the objects are chained linearly on their next_update link (see AudioStream.h in cores/teensy4/, for instance). There's a comment about replacing this with proper dataflow analysis. The Audio library design tool, however, knows to generate a sensible ordering for objects and connections AFAICT, so it's only an issue if you hand-create objects in unusual orders.
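
To make the declaration-order point concrete, here's a minimal sketch (illustrative object names, not the patch from this thread): a linear chain declared in dataflow order, where every stage consumes a block produced in the same update pass.

Code:
#include <Audio.h>

// declared in dataflow order: each object's update() runs after its
// source's, so every stage sees the block produced this same pass
AudioSynthWaveformSine   sine1;    // runs first
AudioFilterStateVariable filter1;  // runs second, gets sine1's fresh block
AudioOutputI2S           i2s1;     // runs last, gets filter1's fresh block
AudioConnection c1(sine1, 0, filter1, 0);
AudioConnection c2(filter1, 0, i2s1, 0);

// declaring i2s1 first and sine1 last would sound the same, but each
// out-of-order hop would read its source's block from the previous
// pass, adding one block (~2.9 ms) of latency per hop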
 
It's extremely simple: the audio objects are run in the order they are declared, once each.

AFAIK, it is the sequence of connections that defines the sequence of execution, not the sequence of object declarations.

Edit: it's always best to check the source. It seems MarkT is correct and the sequence of declarations drives the sequence of execution. In the AudioStream class (AudioStream.h) there is this snippet:
Code:
			// add to a simple list, for update_all
			// TODO: replace with a proper data flow analysis in update_all
			if (first_update == NULL) {
				first_update = this;
			} else {
				AudioStream *p;
				for (p=first_update; p->next_update; p = p->next_update) ;
				p->next_update = this;
			}
Consequently, MarkT is correct.
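
For completeness: the update pass then just walks that list. Paraphrasing from memory (a sketch, not the exact library code, which runs from a software interrupt), it amounts to:

Code:
// roughly what one update pass does (paraphrased): walk the list
// built above and update each active object, in declaration order
for (AudioStream *p = first_update; p; p = p->next_update) {
	if (p->active) p->update();
}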
 
Thanks both, very much appreciated. So to confirm, in the following example,

Code:
AudioConnection          patchCord1(waveformMod2, 0, waveformMod1, 0);
AudioConnection          patchCord2(waveformMod2, 0, waveformMod3, 0);

patchCord1 is declared first, and so the processing order is: waveformMod2, then waveformMod1, then waveformMod3 (as waveformMod2 has already been processed)? And waveformMod1 and waveformMod3 will both use the block from waveformMod2 that was just processed?
 
Ah, got it, thanks! This happens in the constructor: first_update is a static variable, so it gets filled in when the first AudioStream is constructed (and won't be NULL subsequently).

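From memory, the pieces involved are declared in AudioStream.h roughly like this (a paraphrase, so check the header for the exact declarations):

Code:
class AudioStream {
	// ... (paraphrased, see AudioStream.h for the real thing)
	static AudioStream *first_update;  // head of the shared update list
	AudioStream *next_update;          // next object in the update list
	bool active;                       // update() only runs when active
};
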
Still trying to build a mental model of it all, interesting stuff! :)
 
Thanks both, very much appreciated. So to confirm, in the following example,

Code:
AudioConnection          patchCord1(waveformMod2, 0, waveformMod1, 0);
AudioConnection          patchCord2(waveformMod2, 0, waveformMod3, 0);

patchCord1 is declared first, and so the processing order is: waveformMod2, then waveformMod1, then waveformMod3 (as waveformMod2 has already been processed)? And waveformMod1 and waveformMod3 will both use the block from waveformMod2 that was just processed?

For other people coming to this thread later, I just wanted to be clear: no, the code above is not what sets the order in which the calculations are performed. What sets the order (per post #2 and the revised #3) is the order in which the audio classes are instantiated (created), not the AudioConnections.

The instantiation of the audio classes is not shown in the snippet above, but they're presumably created as: waveformMod1, waveformMod2, and then waveformMod3. This will also be the order in which they are updated. Because waveformMod1 is updated before the data from waveformMod2 is ready, waveformMod1 ends up using audio data from the previous cycle. Hence, this introduces an extra block of latency into the overall processing. Usually this isn't a big deal, and when it does matter you can avoid it by declaring objects in dataflow order, as sketched below.
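
For instance, a hypothetical reordering like this (same names as the snippet above) would let both destinations see waveformMod2's freshly computed block:

Code:
// declare the source before its destinations, so one update pass
// computes waveformMod2 first and both destinations get fresh data
AudioSynthWaveformModulated waveformMod2;  // updated first (the source)
AudioSynthWaveformModulated waveformMod1;  // updated second
AudioSynthWaveformModulated waveformMod3;  // updated third
AudioConnection patchCord1(waveformMod2, 0, waveformMod1, 0);
AudioConnection patchCord2(waveformMod2, 0, waveformMod3, 0);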

Chip
 
And for further future reference: if Paul pulls in my Dynamic Audio Objects (as he's said he might), the execution order will then be governed by the "physical" connection order. Though it may turn out the library produces a sub-optimal order for complex cases…

Cheers

Jonathan
 
Yeah good to clarify. Quote was from before the other poster's edit (I can't edit that post any more). Thanks all!
 
the execution order will then be governed by the "physical" connection order.
That means some sketches will no longer behave as they do now (re: timing)? That's an incompatibility.
And will that still work on the Teensy LC without using more Flash and RAM? The LC is very limited…
 
I wouldn't rule out some sketches behaving differently, but I would guess that a dependency on exact execution order is rare. If a problem is reported, it should be possible to restore the original behaviour for static designs. For dynamic designs, assuming we take instantiation order as the equivalent of definition order, that would probably be counter-productive: you'd almost certainly have objects instantiated after the output objects, which would introduce an extra 2.9 ms of latency and extra audio block usage.

Clearly more Flash and RAM will be needed. Whether that impacts LC designs materially remains to be seen: at the moment I haven't even touched the Teensy 3 family's AudioStream code, so it's academic. It's probably also a blocker to early deployment of Dynamic Audio Objects :)

I did say further future...
 