"External" delay with modulation inputs

h4yn0nnym0u5e

Hi folks

I've been tinkering with the AudioEffectDelayExternal object some more, and have added modulation inputs to it:
[Screenshot: the modified AudioEffectDelayExternal object, with modulation inputs, in the Audio System Design Tool]
This is very much preliminary, I'm sure there are many ways to break it horribly, but I thought I'd let any brave souls out there have a crack at it and see if they like it. You can find the code at https://github.com/h4yn0nnym0u5e/Audio/tree/feature/delay-modulation.

As you can see, it adds a modulation input for each of the taps, and a control function setModDepth(tap,milliseconds). Say a tap has a base delay of 20ms: if you set the modulation depth to 5ms, then when the modulating signal reaches +1.0 the delay will be 25ms, and when it reaches -1.0 it will be 15ms. Clearly it will be bad news if the delay memory holds less than 25ms...

Here's some demo code:
Code:
#include "Arduino.h"
#include <Audio.h>

// 1000ms delay is 44100 samples or 88200 bytes: fits Teensy 4.1 heap no problem
// GUItool: begin automatically generated code
AudioSynthWaveformModulated wav;            //xy=282,236
AudioSynthWaveformModulated LFO1;            //xy=295,395
AudioSynthWaveformModulated LFO2; //xy=300,433
AudioSynthWaveformModulated LFO3; //xy=304,473
AudioEffectDelayExternal delayExt(AUDIO_MEMORY_HEAP,1000.0f);       //xy=483,429
AudioMixer4              mixerL; //xy=760,345
AudioMixer4              mixerR;         //xy=767,444
AudioOutputI2S           i2s;            //xy=959,395

AudioConnection          patchCord1(wav, 0, mixerR, 0);
AudioConnection          patchCord2(wav, 0, delayExt, 0);
AudioConnection          patchCord4(wav, 0, mixerL, 0);
AudioConnection          patchCord5(LFO1, 0, delayExt, 1);
AudioConnection          patchCord7(LFO2, 0, delayExt, 2);
AudioConnection          patchCord8(LFO3, 0, delayExt, 3);
AudioConnection          patchCord9(delayExt, 0, mixerR, 1);
AudioConnection          patchCord11(delayExt, 0, mixerL, 1);
AudioConnection          patchCord12(delayExt, 1, mixerR, 2);
AudioConnection          patchCord13(delayExt, 1, mixerL, 2);
AudioConnection          patchCord14(delayExt, 2, mixerR, 3);
AudioConnection          patchCord15(delayExt, 2, mixerL, 3);
AudioConnection          patchCord16(mixerL, 0, i2s, 0);
AudioConnection          patchCord17(mixerR, 0, i2s, 1);

AudioControlSGTL5000     sgtl5000_1;     //xy=959,442
// GUItool: end automatically generated code


uint32_t next;
void setup() 
{
  // sgtl5000_1.setAddress(HIGH);
  sgtl5000_1.enable();
  sgtl5000_1.volume(0.1);
  sgtl5000_1.lineOutLevel(14); // 2.98V pk-pk
  
  Serial.begin(115200);
  while (!Serial && millis() < 3000)
    ;
    
  Serial.println("Starting audio...");
  AudioMemory(40);

  mixerL.gain(0,0.71f);
  mixerL.gain(1,0.25f);
  mixerL.gain(2,0.1f);
  mixerL.gain(3,0.05f);
  
  mixerR.gain(0,0.71f);
  mixerR.gain(1,0.1f);
  mixerR.gain(2,0.2f);
  mixerR.gain(3,0.07f);

  Serial.println("Set up delayExt object");
  delayExt.delay(0,23.0f);
  delayExt.delay(1,57.0f);
  delayExt.delay(2,129.0f);

  wav.begin(1.0f,220.0f,WAVEFORM_TRIANGLE);

  // Set up modulation
  LFO1.begin(1.0f,1.1f,WAVEFORM_SINE);
  float md = delayExt.setModDepth(0,1.00015f); // close to this depth...
  Serial.printf("delayExt modulation depth is %.5fms\n",md); // ...but not quite!
  
  LFO2.begin(1.0f,0.7f,WAVEFORM_SINE);
  delayExt.setModDepth(1,2.0f);
  LFO3.begin(1.0f,0.2f,WAVEFORM_SINE);
  delayExt.setModDepth(2,5.0f);

  next = millis() + 5000;  
}

int count;
void loop() 
{
  if (millis() > next)
  {
    next += 5000;    
    delay(10);
    Serial.printf("Usage %.2f, max %.2f\n",AudioProcessorUsage(),AudioProcessorUsageMax());
    AudioProcessorUsageMaxReset();
  }
}
Currently it only does linear interpolation between samples, but it doesn't sound too bad to my ear. Perhaps someone who knows what they're doing can provide some rather more objective analysis...

As it's based on my existing modifications, it can be used with SPI RAM on the audio adaptor (both 23LC1024 and PSRAM), a couple of the multi-chip SPI memory boards, PSRAM on the Teensy 4.1, and heap memory. The repository has an updated Design Tool with the ability to place it on your design, and some documentation to get you going.
 
Awesome job, I'm going to test it as soon as I get the RAM chips I ordered!
Great news, please do report back! Note that you can have a quick play before your RAM arrives: it's capable of using heap memory, which is plenty (on a Teensy 4.x, for sure) for simple effects processing.
 
Hello from here. Which of the files would I need to pull across to try this on a bare Teensy 4 (no external memory) - the effect .cpp + .h and extmem .cpp + .h - any others?

cheers, Paul
 
Hi Paul. Yup ... barring typos! There's also an updated Design Tool which allows you to place the new object, wire the modulation inputs, and export the result, plus documentation - but that's entirely optional. For a bare Teensy 4.x your only memory option is heap, so you'd edit the design to something like AudioEffectDelayExternal delayExt1(AUDIO_MEMORY_HEAP,1000.0f); for a one-second maximum delay. You could then, for example, set a tap for 950ms delay and modulate by up to 50ms - the setModDepth() function tries to prevent you setting an impossible depth. As you'd expect, each tap has its own base delay and modulation depth, and of course the modulation signal is external anyway.

For your Lexicon emulator, I think I'd extend the AudioExtMem class with a "scatter load" function, which would be passed 128 indexes into the delay memory, and a pointer to enough memory for 256 samples. It would then load [index] and [index+1] for all the requests, allowing interpolation. Heap or EXTMEM scatter loads would be trivial, and SPI memory loads would need some optimisation, invisible to the caller. You might get away with extreme modulation using heap, but it'll still be very sub-optimal the way it's coded now.
 
Hello h4yn0nnym0u5e and the group - First of all, thank you for making this modulation delay! I am new to Teensy, having played with the original Arduino a few years back with some success in making primitive 8-bit audio delays and such. The Teensy is of course way more powerful and not a lot larger, which is my main attraction to using it.

Having said that, I am not a coder by a long stretch, having been an audio electronics hardware tinkerer/builder for several decades but done very little coding by comparison. I find the learning curve is steep and often frustrating as there is a lot of in-language that one needs to learn to even understand what people are talking about, or how to do even the seemingly simplest thing. But I'm working at it!

Having said all of that, I recently bought a Teensy 4.0 and audio shield and have been having some fun playing with getting the audio tutorial projects to work. My main goal for now with the Teensy is to make a basic modulated delay block that I can use as a core for some guitar and/or studio effects projects. I had a chance to try out your modulated external delay and got it working. I did have some questions about it and/or suggestions (many of which stem from my non-coder/hardware-oriented way of thinking).

I found that the settings were a little non-intuitive in terms of delay time, and I couldn't quite make out whether the modulation I was hearing made "sense" to me (in terms of what I would expect it to sound like). I tried using different modulation LFO waveforms; when I used triangle or band-limited triangle (my preferred LFO when doing hardware modded-delay effects) I got more of a constant pitch shift up and then down alternately - which I think makes sense mathematically, because the slope of the LFO determines the pitch shift (I think that's correct?). A sine sounded more natural, but still there is something not "right" about it. I am wondering if what I am hearing is the effect of linear interpolation?

I have been reading some about various interpolation techniques for modding delay lines and I am confused as to what would sound most "natural" to me. Note that my go-to sonic reference on this is the sound of a hardware delay line where the sampling rate is varied by wiggling the system clock up and down. That to me is the sound I am after and from what I understand, the interpolation technique one chooses contributes to the sound.

I would suggest that a delay module should have built-in feedback and feed-forward paths to avoid cluttering the code with added mixers.

Another suggestion would be to somehow incorporate a phase shift setting for the LFO so that you can do things like stereo chorus or tri-stereo chorus (the infamous 80's rack chorus that had three delay lines with three LFO phase taps 120 degrees apart). Perhaps this can be done with the LFO module as it sits? Do multiple LFOs in the same program run with a fixed phase relationship to one another?

Why do some of the parameters use an "f" suffix (like mod depth being specified as "1.0f")? I'm confused as to what the "f" is doing. Also, is this parameter in milliseconds? I would think mod depth should/could be specified as a percentage of delay time (which obviously could be done in the final code as a variable, but I would think it more intuitive to put into the effect itself).

Also I could not find a link to the Teensy Audio Design Tool that incorporates the modulated external delay.

Sorry for the long winded ramblings - I appreciate your efforts and perhaps if I get the hang of this coding thing, I can make a contribution to the art myself some day.

Regards,
Dave
 
Hi @polaris26 - thanks for taking a look at this, always good to have feedback.

Simple things first: appending the 'f' to a constant tells the compiler to create a float rather than a double. As the CPU can deal with floats in hardware, this is slightly more efficient - or so I'm told. Stuff works without it, so for the most part it's not worth worrying about.

Your observation about the results when using a triangle LFO is spot on. During the rising slope you'll get an increasing delay, which effectively means resampling the input at a slower rate, and thus lowering its pitch; and the opposite during the falling slope. There will be a more or less abrupt change at the peak and trough, depending on whether or not you used the band-limited version. It is what it is - if you don't like the sound you probably need a different modulation waveform! Another thread on the forum mentions adding a filter between the LFO and the modulation target; you could perhaps try that. Note the BiQuad documentation says it's not great "under about 400Hz", so the StateVariable is probably the one to go for. Another thing to try might be the AudioEffectWaveshaper.

I believe various people have looked at synchronising the phase of the various waveform sources - they're not inherently synchronised. (In fact, there's not really any such thing as an LFO in the ecosystem, just a digital waveform source you can run at a very wide range of frequencies!) A bit of tinkering should get you your 3 oscillators at 120° ... if not, someone around here will have Relevant Wisdom. The waveform sources are typically fairly economical on CPU power, so having lots of them isn't an issue.

As delay times were already specified in milliseconds, I stuck with that for consistency in the API. It's just maths: you're welcome to present it on your user interface as a percentage or any other unit of choice...

The Design Tool included in the modified library (Audio/gui/index.html) gives you the ability to place the updated AudioEffectDelayExternal object. It's vaguely annoying that the tool can't pick up the objects' capabilities from the source files, but I suspect that's difficult or impossible :(

My design philosophy is very much to put minimal functionality into any individual object, so building in feed-forward and back paths just to avoid clutter isn't something I'd do, unless it added capability that simply couldn't be done by wiring in objects externally.

I didn't put much work into the interpolation, so it's entirely likely it could be improved. How much difference it'd make sonically I don't know; I suspect the big wins for your use case are more likely to be found in getting the modulation just right. I don't fully understand "the sound of a hardware delay line where the sampling rate is varied by wiggling the system clock up and down". Maybe you could post a really simple example? Use a triangle wave as the source, and show both clean and effected (but not mixed) traces, so we can see how the delay shifts over time. A link to a specific pedal and how it works might also be instructive.
 
Hello h4yn0nnym0u5e - Thanks for the detailed response. I spent more time playing with the modulated delay effect as well as trying to get my head around some of the more basic general concepts at work here. As I said I am not a coder, so a lot of this, especially the language/terminology, is new to me.

I did eventually get multiple LFOs working and synced together. I encased the LFO phase-setting statements in AudioNoInterrupts/AudioInterrupts, thinking that would ensure all three get the right relative phase from the start, and from what I can tell (by ear) it did seem to work. Also, with some further tinkering with the waveform types and such, I got what sounded to my ear like pretty good delay modulation.

My ultimate goal would be to convert each of my sketches that represents a basic function with features I like (like delay, vibrato, whatever) into an "effect module" that could be easily joined to other sketches in some way through a GUI (instead of having to merge the code together manually, which is a bit tedious). I guess that would involve a lot more work on the GUI side of things.

I did manage to modify the index.html to include your delay with modulation (as well as Pio/hexeguitar's excellent-sounding Plate Delay that was published a while back but not added to the library). I was editing the html file manually and it was a little cumbersome, but I got it done.

I would like to understand how to make an effect ("audio object") from scratch or even start by modifying an existing one, but when I look at the code for the effects, it's still mostly gibberish to my eyes. Even something simple like the 4-channel audio mixer. I did attempt to read the PJRC page on "creating new audio objects" but I really get lost in the sauce there. I'm not even sure where else to start learning this stuff, but I'll keep at it.

Regards,
Dave
 
Glad you’re making progress.

The audio library isn't the easiest thing to get your head around, especially if you're "not a coder", and harder still to build your own objects. But no reason you shouldn't get there in the end. The documentation is a bit scattered but some does exist, and folks on this forum are pretty helpful.

If you're just using the standard libraries to begin with, note that you can't freely create and destroy the audio objects using new and delete, but you can do so with AudioConnection objects, and connect and disconnect them. Any object that has no connections takes no CPU time. So you could make a sketch with a preset selection of effects objects, and flip through different connection topologies / settings with a moderately simple GUI.
 