Roadmap "Dynamic Updates": any effort going on?

Hello h4yn0nnym0u5e, I'm lookin' for some help again.

I'm successfully using your library to create and delete synthesis objects. I've copied what you did in PlaySynthDynamic.ino and voice.h which is used by that example.

Now I'm trying to do the same with effects. Basically I have a duplicate effects.h like your voice.h. The problem I'm running into is that effects have inputs to the audio objects, not just outputs. In your example you connect synthesis objects as follows:
Code:
Serial.print(waves[chan]->connect(mixArray[chan&3],chan>>2)?"nope ":"");
That call has no option to connect an input, and effects need input connections. I tried to rewrite the code you have in voice.h to accept an input via my connect2 function, but I can't get it to work.

Can you point me in the right direction?

My effects.h, which does not work, is below. (I did temporarily put a wave generator within the effects.h to make sure that the class works if I'm not trying to use an input, and it does.)

Code:
#include <Audio.h>
#include "AudioStream.h"

class SynthEffect
{
  public:
    AudioConnection outputCord;
    AudioConnection inputCord;
    virtual AudioStream& getOutputStream(void) = 0;
    virtual AudioStream& getInputStream(void) = 0;
    virtual ~SynthEffect(){};    
    int connect(AudioStream& str) { return connect(str,0);}
    int connect(AudioStream& str, int inpt) {return outputCord.connect(getOutputStream(),0,str,inpt);} 
    int connect2(AudioStream& str, int oupt) {return inputCord.connect(str,0,getInputStream(),0);}
    virtual void setparam(byte param, float input1) = 0;
    virtual void setparam(byte param, float input1, float input2) = 0;
    virtual void setparam(byte param, byte input1, float input2) = 0;
    virtual void setparam(byte param, byte input1, float input2, float input3) = 0;
    virtual void setparam(byte param, byte input1, byte input2) = 0;
};

class EffAmp : public SynthEffect {
    public:
        AudioAmplifier amp;
        AudioMixer4              mixer;
        AudioStream& getInputStream(void) {AudioStream& result {amp}; return result;};
        AudioStream& getOutputStream(void) {AudioStream& result {amp}; return result;};
    private:

    public:
        EffAmp()  
        {

            amp.gain(1.0);
//            wav.begin(.3,440,0);
        };
        ~EffAmp() {};
        void setparam(byte param, float input1) {
        };
        void setparam(byte param, float input1, float input2) {
        };
        void setparam(byte param, byte input1, float input2) {
        };
        void setparam(byte param, byte input1, byte input2) {
        };
        void setparam(byte param, byte input1, float input2, float input3) {
        };
};
 
Can't see any reason your application wouldn't work, though of course you didn't attach it! So I wrote one:
Code:
#include "effect.h"

AudioSynthWaveform* pWav;
EffAmp* pEffAmp;
AudioOutputAnalogStereo* pDAC;
AudioConnection* c[10];

void setup() 
{
  while (!Serial)
    ;
  AudioMemory(10);
  
  Serial.println("Start!");
  pWav = new AudioSynthWaveform;
  pEffAmp = new EffAmp;
  pDAC = new AudioOutputAnalogStereo;

  // Use EffAmp functions to make connections
  pEffAmp->connect2(*pWav,99999); // oupt parameter is ignored!
  pEffAmp->connect(*pDAC,1);
  
  c[0] = new AudioConnection;
  c[0]->connect(*pWav,0,*pDAC,0); // effects bypass!

  pWav->begin(0.95,220.0,0);

  Serial.print("Running");
}


uint32_t next;
float level = 0.2f;
void loop() {
  if (millis()>next)
  {
    next = millis() + 500;
    Serial.print('.');
    level = 1.0f - level;
    pEffAmp->setparam(0,level);
  }
}
To make this Work As Intended, I made one change to your effects.h:
Code:
void setparam(byte param, float input1) {
    amp.gain(input1); // <-- this line added
};
You'll presumably have to change the output setup to match your system - I just happen to have a Teensy 3.5 on the bench at the moment, so used its DACs. Note also my preference for test code not to start until I've opened the serial connection - catches me out sometimes!

You should see the raw wave signal on the left output, and the version that runs through the effects on the right. To prove it, the loop() toggles the EffAmp.amp object's gain between 0.2 and 0.8 every 500ms.

I did notice one minor problem in your SynthEffect class:
Code:
int connect2(AudioStream& str, int oupt) {return inputCord.connect(str,0,getInputStream(),0);} // oupt is ignored!
Guess that could be the problem you've observed?
 
That was awfully generous of you! I compiled your code (while changing the output) and it worked perfectly. You are correct on the output being ignored, thank you for pointing that out.

Now I just need to find out where I bungled my code that it isn't working in my project. You've eliminated a rabbit hole I would have spent a lot of time going down.

Hopefully sometime I can return the favor.
 
You're welcome - pass it forward!

It's good to have a chance to look at what a Real User is doing, to make sure there aren't gremlins left in which I'd never spot because I wouldn't do it that way. In this case there weren't, but next time...
 
I've just tagged release v0.8-alpha of the cores and Audio libraries. These are my first try at Teensy 3.x support. I'm not sure how practical they'll be on LC and 3.2, but I've been thrashing them a bit on a Teensy 3.5 alongside the OSCAudio stuff and they just about fit, and seem to work OK.
 
I've just tagged release v0.9-alpha of the cores library. This prevents (I hope) a crash primarily associated with the variable-width mixers AudioMixer and AudioMixerStereo (in DynMixer.cpp and .h), but which could in principle happen with other AudioStream-derived classes. No change to the Audio library is needed for this.

Note that my test code currently has a memory leak somewhere. I don't know where yet, so it could be in the cores or Audio libraries, though I suspect not.
 
I've just released v0.10-alpha of the audio library, which is a first try at making AudioEffectDelayExternal properly compatible with the other dynamic objects: previously it would have leaked delay memory something 'orrible... I also took the opportunity to add a few more memory options, so you can now easily use an 8MB PSRAM on the audio adaptor, or better still (because it's faster), use EXTMEM.

I haven't given this much testing, so please do report if you get grief from it!
 
hi @h4yn0nnym0u5e
I'm trying to use your dynamic audio library but at the moment there's no sound at all.
I'm using VSCode with PlatformIO:
I replaced the core in .platformio/packages/framework-arduinoteensy
and replaced the audio library too.
It compiles OK.
If you have any clue about what to check, it would be much appreciated
 
Hmmm ... I don't have any experience of PlatformIO, I've been working using Windows and the Arduino IDE to ensure things work for the majority audience. I assume...

You should only need to replace AudioStream.cpp and .h in cores, and do whatever's needed to replace the static Audio library with the dynamic one (e.g. in Arduino-land, put a copy in <sketchbook-location>\libraries\Audio). Given it's compiling OK, it sounds like you have this at least partially correct: dynamic Audio with non-dynamic cores gives the error "SAFE_RELEASE_INPUTS() not declared in this scope" ... a lot. However, a quick test shows that a non-dynamic Audio library with dynamic cores compiles and runs OK, for a simple application. So it's probably worth checking that PlatformIO is picking up the correct library.

It may be worthwhile reverting to the static configuration, building and testing a simple known-good application to show it works, then trying it with the dynamic configuration. It's intended to work exactly the same if you just use it without taking advantage of the dynamic capabilities, so old software still runs. Also, if you have the Arduino IDE, give it a quick try on a simple app (both ways). The results of those 4 tests should help us find what's going on.

A good check of whether the audio updates are occurring is to use an AudioRecordQueue object connected where you know it's getting data (e.g. an AudioWaveform that's definitely producing output); don't forget to call queue.begin(), then you should find queue.available() goes non-zero every 2.9ms - just use queue.readBuffer() and queue.freeBuffer() to make sure it doesn't eat all the audio blocks!
 
Thanks @h4yn0nnym0u5e
it works now.
The problem was the file control_cs42448.cpp:
I'm using a slightly modified version; the original one does not include the "magic bit" fix (which I found on this forum) that makes it work.

Code:
bool AudioControlCS42448::enable(void)
{
	Wire.begin();
	// TODO: wait for reset signal high??
	if (!write(CS42448_Power_Control, 0xFF)) return false; // power down
	if (!write(CS42448_Functional_Mode, default_config, sizeof(default_config))) return false;
	
	// set the magic bit!
	write(CS42448_Functional_Mode, 0xF4 | 0x01);

	if (!write(CS42448_Power_Control, 0)) return false; // power up
	return true;
}

thanks a lot for your quick answer
 
Great news. I've just pushed a slightly unofficial commit up which gives you the option in your sketch: enable() behaves the old way with no setting of the magic bit, or enable(true) will set the bit.

I've also popped in a few tweaks to the envelopes (the old linear one and my new exponential one) which fixes issues with their behaviour when they don't get a block transmitted to them, and also the old one if you keep calling noteOff() when it's already in the release phase. Oh, and the fix for using an integer parameter for the envelope release() function.
 
Hi h4yn0nnym0u5e,
I'm making some changes in the SynthVoice class to connect inputs to the synth/effect as well.
The inputs for modulation are fine, but with the audio input of an effect I'm facing an issue: I have no idea how to connect 2 of those objects in series.
Could you suggest a method for that?
Code:
/*********************************************************************************/
class SynthVoice
{
  AudioConnection outputCord;
  AudioConnection inputCord; // mod
  virtual AudioStream& getOutputStream(void) = 0;
  virtual AudioStream& getInputStream(void) = 0;
  virtual AudioStream& getInputStream2(void) = 0;
  public:
    virtual ~SynthVoice(){};
    virtual void noteOn(float freq, float vel, int chan=-1) = 0;
    virtual void noteOn(int MIDInote, int MIDIvel, int chan=-1) = 0;
    virtual void noteOff(void) = 0;
    virtual void setParam(int param, float val) = 0;
    virtual bool isPlaying(void) = 0;
    int connect(AudioStream& str) { return connect(str,0);}
    int connect(AudioStream& str, int inpt) {return outputCord.connect(getOutputStream(),0,str,inpt);}  
    int connect2(AudioStream& str) { return connect2(str,0);}
    int connect2(AudioStream& str, int oupt) {return inputCord.connect(str,0,getInputStream(),oupt);}  
    int connect3(AudioStream& str) { return connect3(str,0);}
    int connect3(AudioStream& str, int oupt) {return inputCord.connect(str,0,getInputStream2(),oupt);}  
};

/*********************************************************************************/
class WaveformModulated final : public SynthVoice
{
    AudioSynthWaveformModulated wave;
    AudioMixer4 modMixer;
    AudioMixer4 shapeMixer;
  
  static short wave_type[6];
    
    AudioStream& getOutputStream(void) {AudioStream& result {wave}; return result;};
    AudioStream& getInputStream(void) {AudioStream& result {modMixer}; return result;};
    AudioStream& getInputStream2(void) {AudioStream& result {shapeMixer}; return result;};
  
  private:
    AudioConnection cord1;
    AudioConnection cord2;
  
  public:
    WaveformModulated() : cord1(modMixer,wave) , cord2(shapeMixer,0,wave,1)
    {};
    void noteOn(int MIDInote, int MIDIvel, int chan){};
    void noteOn(float freq, float vel, int chan)
    {
      wave.frequency(freq);
      wave.amplitude(vel);
    }
    void setParam(int param, float val)
    {
      enum param_
      {
        waveform,
        frequency,
        amp,
        frequencyModulation,
        phaseModulation,
        modCV,
        modENV,
        modLFO,
        shapeCV,
        shapeENV,
        shapeLFO,
      };

      switch(param)
      {
        case waveform:
          //wave.begin(1.0, 1000.0, wave_type[short(val)]);
          wave.begin(wave_type[short(val)]);
          break;
        case frequency:
          wave.frequency(val);
          break;
        case amp:
          wave.amplitude(val);
          break;
        case frequencyModulation:
          wave.frequencyModulation(val);
          break;
        case phaseModulation:
          wave.phaseModulation(val);
          break;
        case modCV:
          modMixer.gain(0, val);
          break;
        case modENV:
          modMixer.gain(1, val);
          break;
        case modLFO:
          modMixer.gain(2, val);
          break;
        case shapeCV:
          shapeMixer.gain(0, val);
          break;
        case shapeENV:
          shapeMixer.gain(1, val);
          break;
        case shapeLFO:
          shapeMixer.gain(2, val);
          break;
      }
    }

    void noteOff(void){};
    bool isPlaying(void) {return true;};

};

short WaveformModulated::wave_type[] = {
    WAVEFORM_SINE,
    WAVEFORM_TRIANGLE_VARIABLE,
    WAVEFORM_BANDLIMIT_PULSE,
    WAVEFORM_BANDLIMIT_SAWTOOTH,
    WAVEFORM_BANDLIMIT_SAWTOOTH_REVERSE,
    WAVEFORM_BANDLIMIT_SQUARE
    // WAVEFORM_SINE,
    // WAVEFORM_SQUARE,
    // WAVEFORM_SAWTOOTH,
    // WAVEFORM_TRIANGLE
    };

/*********************************************************************************/
class FilterLadder final : public SynthVoice
{
    AudioFilterLadder wave;
    AudioMixer4 modMixer;
  
  static short wave_type[6];
    
    AudioStream& getOutputStream(void) {AudioStream& result {wave}; return result;};
    AudioStream& getInputStream(void) {AudioStream& result {wave}; return result;};
    AudioStream& getInputStream2(void) {AudioStream& result {modMixer}; return result;};
  
  private:
    AudioConnection cord1;
  
  public:
    FilterLadder() : cord1(modMixer, 0, wave, 1)
    {};
    void noteOn(int MIDInote, int MIDIvel, int chan){};
    void noteOn(float freq, float vel, int chan){};
    void setParam(int param, float val)
    {
      enum param_
      {
        cutoff,
        resonance,
        octControl,
        bandPassGain,
        drive,
        modCV,
        modENV,
        modLFO,
      };

      switch(param)
      {
        case cutoff:
          wave.frequency(val);
          break;
        case resonance:
          wave.resonance(val);
          break;
        case octControl:
          wave.octaveControl(val);
          break;
        case bandPassGain:
          wave.passbandGain(val);
          break;
        case drive:
          wave.inputDrive(val);
          break;
        case modCV:
          modMixer.gain(0, val);
          break;
        case modENV:
          modMixer.gain(1, val);
          break;
        case modLFO:
          modMixer.gain(2, val);
          break;
      }
    }

    void noteOff(void){};
    bool isPlaying(void) {return true;};

};

Also, I want a mixer for several sources to modulate the pitch and the shape of the oscillator.
Do those 8 AudioConnections need to be declared in the SynthVoice class, or is there an alternative method, like specifying the number of inputs in the synth/effect itself rather than in the SynthVoice class?

thanks in advance
 
Bit difficult to advise without being sure of what you’re aiming for, but I’ll give it a go! A screenshot of the Audio Design Tool with a (simplified?) example of what you might want could possibly help.

When I’ve made “voice” type classes I’ve so far tended to make internal objects public, which isn’t very good from the pure C++ abstraction point of view but does make it easier to be flexible when writing a sketch around them! It also saves having to write things like your setParam() functions, which seem to me to add an unnecessary layer and just exchange familiar access functions for unfamiliar (except to you) enum values. But it’s quite likely I’m wrong - I started out my programming career writing in 68000 assembly language with no memory management, so I’m a dinosaur in that regard.

There’s definitely no hard and fast rule for where to define your AudioConnection objects for a voice class. However, it probably makes most sense to define them only for inputs and internal connections. Internal are obvious - the voice won’t work without them! I say inputs, because each input can only ever have one connection, whereas outputs can be connected to many inputs, so you don’t know how many output connection objects a voice might need. Against that, you may well have more connection objects than you use, which is a bit of a waste of memory; then again, connections don’t use much so it probably doesn’t matter.
 
What I'm working on is an 8-channel synth / multi-effects unit, so I don't have much built in the audio design tool.
In the previous version of the code I had all audio objects declared at the beginning (8 delays, 8 filters, 8 folders...), and the effects were arranged in a chain by changing only the connections between them (using a list),
but that has some obvious drawbacks.

I was rewriting it using your dynamic library and SynthVoice class mostly because that way much more complicated effects can be made (like a vocoder with lots of filters & VCAs) without declaring hundreds of audio objects at the start, plus the memory is recovered when an effect is discarded from a chain.
With the code I shared previously it may not make much sense, because that is just a simple oscillator and a simple filter with some extra mixers for the mod controls etc., but with others, like a mod delay (which includes LFO, delay, filter and mixer objects), I think it does.

And yes, I have to use those enums to identify the parameters, but that's not too much of a problem for me. The question is: how do I connect 2 (or more) SynthVoices in parallel?
Currently I can only connect them to the audio objects declared at the beginning:
Code:
    waves[0][0] = new WaveformModulated;
    waves[0][0]->setParam(VCO_WAVEFORM_, 0);
    waves[0][0]->setParam(VCO_FREQ_, 1000.0);
    waves[0][0]->setParam(VCO_AMP_, 1.0);
    waves[0][0]->connect(tdmOut, 0);
    waves[0][0]->connect2(lfo[1], 0);

Maybe there's another way to 'pack' some simple audio objects into a more complex one, but I don't know how; that's why I was asking.
 
I’m not saying you have to design using the tool, just use it to provide an explanatory drawing of what you want to achieve. manicksan’s GUI++ might work even better as he has a “group” object to show how you’d expect to package up a set of objects together.

Making a class is probably a good approach, and connecting an output to two or more class inputs, and wiring each of their outputs to a mixer is probably the way to “connect in parallel”, but it’s harder to tell from words than a diagram…
 
Here are a couple of images which may be the sort of thing you want to do? I expect the internal topologies of the effects are completely wrong, but they're just intended as examples!

Parallel connection:
2022-09-17 12_54_46-Audio System Design Tool++ for Teensy Audio Library.png

Series connection:
2022-09-17 12_45_08-Audio System Design Tool++ for Teensy Audio Library.png

(These were input using manicksan's Audio System Design Tool++ - see https://forum.pjrc.com/threads/65740-Audio-System-Design-Tool-update?highlight=gui++)

The green "group" boxes are the equivalent of effects classes, each of which can derive from your base synthVoice class. As discussed above, it seems to make sense for each class to provide input AudioConnection objects, so every connection going into the left-hand side of the group is actually part of that class. The audio path highlighted in orange is thus the same object for both topologies (it's shown in 3 sections for the series version, but it's actually just one entity).

In the ModDelay class I've put an amp object, just to make it more convenient to connect into it using a single connection; if you provide a class function to make / break the audio input path then it's not needed, but it costs very little in terms of memory or CPU time, assuming its gain is set to 1.0.

I've shown the finalMixer as a separate entity - that would obviously be part of your output stage, which could be a class or not.
 
thanks h4yn0nnym0u5e,
the second one is like the thing I'm working on: a device in which different instruments can be selected as the sound source and different effects can be chained to process it.
I had a look at that alternative audio design tool a few days ago, but it doesn't work well on my computer (macOS Sierra... pretty old OS).

Anyway, I solved the connection issue by making getOutputStream() public and removing the input and output cords;
now I can connect 2 or more SynthVoice objects in series, and also, if an input is a mixer, there's access to all its input channels without declaring several AudioConnections in the SynthVoice class:
Code:
class SynthVoice
{
  public:
  virtual AudioStream& getOutputStream(void) = 0;
  virtual AudioStream& getInputStream(void) = 0;
  virtual AudioStream& getControlStream(void) = 0;
  virtual AudioStream& getControlStream2(void) = 0;

    virtual ~SynthVoice(){};
    virtual void setParam(int param, float val) = 0;
    virtual bool isPlaying(void) = 0;
};

I still have to do all those 'enums' to set the parameters, because I haven't found a way to make the AudioStream objects (oscillators, filters etc) public, but that's not too bad for me:
Code:
class WaveformModulated final : public SynthVoice
{
    AudioSynthWaveformModulated wave;
    AudioMixer4 modMixer;
    AudioMixer4 shapeMixer;
  
  static short wave_type[6];
      
  private:
    AudioConnection cord1;
    AudioConnection cord2;
  
  public:
    AudioStream& getInputStream(void){};
    AudioStream& getOutputStream(void) {AudioStream& result {wave}; return result;};
    AudioStream& getControlStream(void) {AudioStream& result {modMixer}; return result;};
    AudioStream& getControlStream2(void) {AudioStream& result {shapeMixer}; return result;};
    WaveformModulated() : cord1(modMixer,wave) , cord2(shapeMixer,0,wave,1)
    {};
    void setParam(int param, float val)
    {
      enum param_
      {
        waveform,
        frequency,
        amp,
        frequencyModulation,
        phaseModulation,
        modCV,
        modENV,
        modLFO,
        shapeCV,
        shapeENV,
        shapeLFO,
      };

      switch(param)
      {
        case waveform:
          wave.begin(wave_type[short(val)]);
          break;
        case frequency:
          wave.frequency(val);
          break;
        case amp:
          wave.amplitude(val);
          break;
        case frequencyModulation:
          wave.frequencyModulation(val);
          break;
        case phaseModulation:
          wave.phaseModulation(val);
          break;
        case modCV:
          modMixer.gain(0, val);
          break;
        case modENV:
          modMixer.gain(1, val);
          break;
        case modLFO:
          modMixer.gain(2, val);
          break;
        case shapeCV:
          shapeMixer.gain(0, val);
          break;
        case shapeENV:
          shapeMixer.gain(1, val);
          break;
        case shapeLFO:
          shapeMixer.gain(2, val);
          break;
      }
    }
};
 
Great, glad you've got it working OK for you.

You could try reporting your issues with GUI++ on that thread, but I've not seen manicksan active recently, and also have no idea if he has access to a Mac to test on. I don't, so can't be much help there either...

One reason to have the AudioConnections in the class is that they automatically get deleted when a class instance is deleted, but so long as you can keep track of them it really doesn't matter how you do it.

The default access specifier is "private", so in your class everything before the public: specifier is private (which also means the private: specifier you put in is redundant...), and the members after the public: access specifier are of course public:
Code:
class WaveformModulated final : public SynthVoice
{
    AudioSynthWaveformModulated wave;
    AudioMixer4 modMixer;
    AudioMixer4 shapeMixer;
 
  static short wave_type[6];
     
  private:
    AudioConnection cord1;
    AudioConnection cord2;
 
  public:
    AudioStream& getInputStream(void){};
    AudioStream& getOutputStream(void) {AudioStream& result {wave}; return result;};
    AudioStream& getControlStream(void) {AudioStream& result {modMixer}; return result;};
    AudioStream& getControlStream2(void) {AudioStream& result {shapeMixer}; return result;};
    WaveformModulated() : cord1(modMixer,wave) , cord2(shapeMixer,0,wave,1)
    {};
    void setParam(int param, float val)
    {
      enum param_
      {
        waveform,
        frequency,
        amp,
        frequencyModulation,
        phaseModulation,
        modCV,
        modENV,
        modLFO,
        shapeCV,
        shapeENV,
        shapeLFO,
      };

      switch(param)
      {
        case waveform:
          wave.begin(wave_type[short(val)]);
          break;
        case frequency:
          wave.frequency(val);
          break;
        case amp:
          wave.amplitude(val);
          break;
        case frequencyModulation:
          wave.frequencyModulation(val);
          break;
        case phaseModulation:
          wave.phaseModulation(val);
          break;
        case modCV:
          modMixer.gain(0, val);
          break;
        case modENV:
          modMixer.gain(1, val);
          break;
        case modLFO:
          modMixer.gain(2, val);
          break;
        case shapeCV:
          shapeMixer.gain(0, val);
          break;
        case shapeENV:
          shapeMixer.gain(1, val);
          break;
        case shapeLFO:
          shapeMixer.gain(2, val);
          break;
      }
    }
};
So you could in principle just move the AudioStream objects after the public access specifier to make them visible to your code.
 
To pinpoint the problem u have with Mac I need to know which browser and version that u use, also what the problems are? You can post on the GUI++ thread. I only have a VM with the latest Mac os installed.
 

thanks @manicksan but it's not worth it; my macOS is too old. I'm getting warnings from Chrome and VSCode about the imminent lack of compatibility; time to update I guess
 
One reason to have the AudioConnections in the class is that they automatically get deleted when a class instance is deleted, but so long as you can keep track of them it really doesn't matter how you do it.
yes, I'm using <list> as described in this post

So you could in principle just move the AudioStream objects after the public access specifier to make them visible to your code.


I have access to the enum (once I moved it out of the setParam declaration):
Code:
waves->setParam(SynthWaveformModulated::frequency, floatVar);
...but I have no idea how to set the frequency by accessing the filter object directly; I moved it to the public section but didn't figure out how. Do you have any suggestions?
 
Not 100% sure what you're aiming for, as you've not included your most recent code, and your older examples don't have a filter object.

I've made what I'd guess are something like your changes, specifically making the internal audio objects public, and moving the enum. But bear in mind this isn't tested or testable, because it's not a complete program:
Code:
class WaveformModulated final : public SynthVoice
{
  public:
    AudioSynthWaveformModulated wave;
    AudioMixer4 modMixer;
    AudioMixer4 shapeMixer;
 
  static short wave_type[6];
     
  private:
    AudioConnection cord1;
    AudioConnection cord2;
 
  public:
    AudioStream& getInputStream(void){};
    AudioStream& getOutputStream(void) {AudioStream& result {wave}; return result;};
    AudioStream& getControlStream(void) {AudioStream& result {modMixer}; return result;};
    AudioStream& getControlStream2(void) {AudioStream& result {shapeMixer}; return result;};
    WaveformModulated() : cord1(modMixer,wave) , cord2(shapeMixer,0,wave,1)
    {};

    enum param_
      {
        waveform,
        frequency,
        amp,
        frequencyModulation,
        phaseModulation,
        modCV,
        modENV,
        modLFO,
        shapeCV,
        shapeENV,
        shapeLFO,
      };

    void setParam(int param, float val)
    {
      switch(param)
      {
        case waveform:
          wave.begin(wave_type[short(val)]);
          break;
        case frequency:
          wave.frequency(val);
          break;
        case amp:
          wave.amplitude(val);
          break;
        case frequencyModulation:
          wave.frequencyModulation(val);
          break;
        case phaseModulation:
          wave.phaseModulation(val);
          break;
        case modCV:
          modMixer.gain(0, val);
          break;
        case modENV:
          modMixer.gain(1, val);
          break;
        case modLFO:
          modMixer.gain(2, val);
          break;
        case shapeCV:
          shapeMixer.gain(0, val);
          break;
        case shapeENV:
          shapeMixer.gain(1, val);
          break;
        case shapeLFO:
          shapeMixer.gain(2, val);
          break;
      }
    }
};
With those changes made, this shows two ways of setting the wave frequency that I think should be equivalent:
Code:
WaveformModulated wfm;

wfm.setParam(WaveformModulated::frequency, 220.0f);
wfm.wave.frequency(220.0f);
You could then dispense with the setParam() function, and the enum. You might want to have another enum which denotes the CV, ENV and LFO mixer channels, so you don't have to remember the "magic numbers" for them:
Code:
class WaveformModulated final : public SynthVoice
{
  public:
    enum {CV,ENV,LFO}; // 0, 1, 2
   ... rest of class
}

WaveformModulated wfm;

wfm.shapeMixer.gain(WaveformModulated::CV, 0.1f);
wfm.modMixer.gain(WaveformModulated::LFO, 0.25f);
 