Amplitude modulation and generic modulation component

kenjib

Member
Hello everyone,

Apologies in advance if this has already been addressed, but I couldn't find it with search. I am designing a device that includes both a guitar effects processor and a subtractive synthesizer. My question is about the synth part. I am trying to do basic amplitude modulation, where the VCA level is modulated by an LFO (a.k.a. tremolo). So in its most basic form it would be something like this:

ampmod.png

Before I start coding this from scratch by inserting things into the main program loop and making everything messy and ugly, is there a way to have the LFO modulate the signal coming from the sawtooth wave using the Design Tool? It seems like what is needed here is a general modulation component in the Design Tool, i.e. a component that takes two inputs, carrier and modulator, and outputs the resulting signal. I think that would be a really useful item. Does that already exist somewhere and I just don't see it, or alternatively, is there some other way that I can add this feature in a clean way?
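To make that concrete, here is roughly the kind of patch I'm imagining, written as code rather than drawn in the Design Tool. I'm only guessing that the library's multiply object is the right building block, and the DC/mixer stage is just my way of shifting the bipolar LFO into the 0..1 range so the result is tremolo rather than ring modulation:

Code:
#include <Audio.h>

// Sketch of the patch: saw -> (x) <- LFO, i.e. the LFO scales the saw's amplitude.
AudioSynthWaveform       saw1;      // carrier: sawtooth oscillator
AudioSynthWaveform       lfo1;      // modulator: low-frequency sine
AudioSynthWaveformDc     dc1;       // DC offset for the LFO
AudioMixer4              lfoMix;    // lfo + dc -> 0..1 control signal
AudioEffectMultiply      vca;       // multiply = amplitude modulation
AudioOutputI2S           i2sOut;
AudioControlSGTL5000     codec;

AudioConnection c1(saw1,   0, vca,    0);
AudioConnection c2(lfo1,   0, lfoMix, 0);
AudioConnection c3(dc1,    0, lfoMix, 1);
AudioConnection c4(lfoMix, 0, vca,    1);
AudioConnection c5(vca,    0, i2sOut, 0);
AudioConnection c6(vca,    0, i2sOut, 1);

void setup() {
  AudioMemory(12);
  codec.enable();
  codec.volume(0.5);

  saw1.begin(0.8, 110.0, WAVEFORM_SAWTOOTH);  // carrier at 110 Hz
  lfo1.begin(0.5, 4.0, WAVEFORM_SINE);        // 4 Hz tremolo, +/-0.5
  dc1.amplitude(0.5);                         // shifts the LFO up to 0..1
}

void loop() {
}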

Thanks!

-Kenji
 
That worked perfectly. Thank you!!!! I am loving the Teensy with Audio shield and the audio library. It makes it so quick and easy to get something working.

If you don't mind, can I ask another question about this project? The structure is that there is a guitar input which goes through two effects inserts, with a synthesizer engine also being added to the mix when one of the effects calls for it.

effectChain.png
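For reference, the queue plumbing behind that diagram is roughly like this. This is a simplified reconstruction (left channel only, gains omitted, and the synth engine's patch point is my assumption), with the object names matching the handler below:

Code:
#include <Audio.h>

// Guitar comes in over I2S, effect 1 runs between its record and playback
// queues, effect 1's output feeds effect 2's record queue, and effect 2's
// playback queue goes to an output mixer where the synth engine is summed in.
AudioInputI2S        i2sIn;
AudioRecordQueue     effect1InLeft;    // blocks going into effect 1
AudioPlayQueue       effect1OutLeft;   // processed blocks coming back out
AudioRecordQueue     effect2InLeft;    // blocks going into effect 2
AudioPlayQueue       effect2OutLeft;
AudioMixer4          outMixLeft;       // effect chain + synth engine
AudioOutputI2S       i2sOut;
AudioControlSGTL5000 codec;

AudioConnection p1(i2sIn,          0, effect1InLeft, 0);
AudioConnection p2(effect1OutLeft, 0, effect2InLeft, 0);
AudioConnection p3(effect2OutLeft, 0, outMixLeft,    0);
// (the synth engine would be patched into outMixLeft input 1 here)
AudioConnection p4(outMixLeft,     0, i2sOut,        0);

void setup() {
  AudioMemory(20);
  codec.enable();
  codec.volume(0.5);
  effect1InLeft.begin();   // record queues have to be started explicitly
  effect2InLeft.begin();
}

void loop() {
  // processAudio(effect1, effect2);  // handler quoted below
}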

The handler function is as follows (effect1InLeft and effect2InLeft are record queues while effect1OutLeft and effect2OutLeft are playback queues):

Code:
void processAudio(Effect &effect1, Effect &effect2) {  // pass by reference so effect state persists
  // Effect 1: pull one block from its record queue, process it in place,
  // then hand the same buffer to its playback queue.
  if (effect1InLeft.available() >= 1) {
    int16_t *audioBuffer = effect1OutLeft.getBuffer();
    memcpy(audioBuffer, effect1InLeft.readBuffer(), 256);  // 128 samples x 2 bytes
    effect1InLeft.freeBuffer();
    effect1.processEffect(audioBuffer);
    effect1OutLeft.playBuffer();
  }

  // Effect 2: same pattern for the second insert.
  if (effect2InLeft.available() >= 1) {
    int16_t *audioBuffer = effect2OutLeft.getBuffer();
    memcpy(audioBuffer, effect2InLeft.readBuffer(), 256);
    effect2InLeft.freeBuffer();
    effect2.processEffect(audioBuffer);
    effect2OutLeft.playBuffer();
  }
}

So if the buffer is the default 128 samples for the record and playback queues, how does latency work with multiple queues like this? The 128-sample buffer adds about 3 ms of latency, right? How is latency affected by running two of these record/playback pairs in series?
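Just to show my arithmetic on the per-block number (44.1 kHz and 128 samples per block are the library defaults, as far as I know):

Code:
// Per-block latency at the default rate and block size:
//   128 samples / 44100 samples per second ≈ 2.9 ms
// (the library's exact rate is AUDIO_SAMPLE_RATE_EXACT, roughly 44117 Hz
//  on older Teensy boards, which still rounds to about 2.9 ms)
constexpr float sampleRateHz   = 44100.0f;
constexpr int   blockSamples   = 128;                                      // AUDIO_BLOCK_SAMPLES
constexpr float blockLatencyMs = 1000.0f * blockSamples / sampleRateHz;    // ≈ 2.9 ms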

Thanks again,

-Kenji
 