How to instantiate AudioConnection within a class?

Status
Not open for further replies.

apiel

Member
I am trying to create a reusable Synth class that would build the connections between my different audio objects. But when I try to instantiate the AudioConnection objects within this class, it doesn't work :-/ (it seems I've reached the limit of my C++ knowledge :p )

Code:
AudioOutputMQS audioOut;
Synth synth{&audioOut};  // note: this line only compiles if the Synth class below is defined first

class Synth {
   protected:
   public:
    AudioConnection patchCord01;
    AudioConnection patchCord02;
    AudioConnection patchCord03;
    AudioConnection patchCord05;
    AudioConnection patchCord06;

    AudioSynthWaveformDc dc;
    AudioEffectEnvelope envMod;
    AudioSynthWaveformModulated lfoMod;
    AudioSynthWaveformModulated waveform;
    AudioEffectEnvelope env;

    byte currentWaveform = 0;

    float attackMs = 0;
    float decayMs = 50;
    float sustainLevel = 0;
    float releaseMs = 0;

    float frequency = 440;
    float amplitude = 1.0;

    Synth(AudioStream* audioDest)
        : patchCord01(lfoMod, waveform),
          patchCord02(dc, envMod),
          patchCord03(envMod, 0, waveform, 1),
          patchCord05(waveform, env),
          patchCord06(env, *audioDest) {
        waveform.frequency(frequency);
        waveform.amplitude(amplitude);
        waveform.arbitraryWaveform(arbitraryWaveform, 172.0);  // wavetable array defined elsewhere
        waveform.begin(WAVEFORM_SINE);

        lfoMod.frequency(1.0);
        // lfoMod.amplitude(0.5);
        lfoMod.amplitude(0.0);
        lfoMod.begin(WAVEFORM_SINE);

        env.attack(attackMs);
        env.decay(decayMs);
        env.sustain(sustainLevel);
        env.release(releaseMs);
        env.hold(0);
        env.delay(0);

        dc.amplitude(0.5);
        envMod.delay(0);
        envMod.attack(200);
        envMod.hold(200);
        envMod.decay(200);
        envMod.sustain(0.4);
        envMod.release(1500);
    }

    void noteOn() {
        envMod.noteOn();
        lfoMod.phaseModulation(0);
        env.noteOn();
    }

    void noteOff() {
        env.noteOff();
        envMod.noteOff();
    }

    // and some more function to control everything...
};

// Uncommenting the following code would make the whole thing work,
// but I want the connections to happen within the Synth class.
//AudioConnection patchCord01(synth.lfoMod, synth.waveform);
//AudioConnection patchCord02(synth.dc, synth.envMod);
//AudioConnection patchCord03(synth.envMod, 0, synth.waveform, 1);
//AudioConnection patchCord05(synth.waveform, synth.env);
//AudioConnection patchCord06(synth.env, audioOut);

I actually believe that I could even create a new audio object as described here https://www.pjrc.com/teensy/td_libs_AudioNewObjects.html but I have even less of an idea how to do this in order to connect all those sub audio objects.

I am a bit lost and would love some help finding my way.
 
Ok, by looking at this post https://forum.pjrc.com/threads/6652...-Objects-into-One-Class?highlight=AudioStream, I have already found one way to do it:

Code:
class Synth {
   protected:
   public:
    AudioConnection* patchCord[4];

    AudioSynthWaveformDc dc;
    AudioEffectEnvelope envMod;
    AudioSynthWaveformModulated lfoMod;
    AudioSynthWaveformModulated waveform;
    AudioEffectEnvelope env;

    Synth() {
        patchCord[0] = new AudioConnection(lfoMod, waveform);
        patchCord[1] = new AudioConnection(dc, envMod);
        patchCord[2] = new AudioConnection(envMod, 0, waveform, 1);
        patchCord[3] = new AudioConnection(waveform, env);
       
        // ...
    }
};

AudioOutputMQS audioOut;
Synth synth;
AudioConnection patchCord06(synth.env, audioOut);

But I still wonder if it would not be even better to make the Synth class extend AudioStream, so the synth would be an audio object like all the other objects in the audio library. Would this be possible?
 
Ok, I found one way, by extending AudioEffectEnvelope:

Code:
class Synth: public AudioEffectEnvelope {
   protected:
   public:
    AudioConnection* patchCord[4];

    AudioSynthWaveformDc dc;
    AudioEffectEnvelope envMod;
    AudioSynthWaveformModulated lfoMod;
    AudioSynthWaveformModulated waveform;

    Synth() {
        patchCord[0] = new AudioConnection(lfoMod, waveform);
        patchCord[1] = new AudioConnection(dc, envMod);
        patchCord[2] = new AudioConnection(envMod, 0, waveform, 1);
        patchCord[3] = new AudioConnection(waveform, *this);
       
        // ...
    }

    void noteOn() {
        Serial.println("note on");
        envMod.noteOn();
        lfoMod.phaseModulation(0);
        AudioEffectEnvelope::noteOn();
    }

    void noteOff() {
        Serial.println("note off");
        AudioEffectEnvelope::noteOff();
        envMod.noteOff();
    }
};

AudioOutputMQS audioOut;
Synth synth;
AudioConnection patchCord06(synth, audioOut);

It's getting better, but I still think it would be even better to extend AudioStream directly... I hope I will find a solution.
 
So for the moment I decided to create an AudioDumb class:

Code:
class AudioDumb : public AudioStream {
   public:
    AudioDumb(void) : AudioStream(1, inputQueueArray) {}
    virtual void update(void) {
        audio_block_t *block = receiveReadOnly();
        if (!block) return;
        transmit(block);
        release(block);
    }

   private:
    audio_block_t *inputQueueArray[1];
};

class Synth: public AudioDumb {
   protected:
   public:
    AudioConnection* patchCord[5];

    AudioSynthWaveformDc dc;
    AudioEffectEnvelope envMod;
    AudioSynthWaveformModulated lfoMod;
    AudioSynthWaveformModulated waveform;
    AudioEffectEnvelope env;

    Synth() {
        patchCord[0] = new AudioConnection(lfoMod, waveform);
        patchCord[1] = new AudioConnection(dc, envMod);
        patchCord[2] = new AudioConnection(envMod, 0, waveform, 1);
        patchCord[3] = new AudioConnection(waveform, env);
        patchCord[4] = new AudioConnection(env, *this);
       
        // ...
    }
};

AudioOutputMQS audioOut;
Synth synth;
AudioConnection patchCord06(synth, audioOut);

And I guess this is more or less the solution... Of course I could extend AudioStream directly from Synth and add the update method in my Synth class, but in the end, maybe it's even better to have a reusable AudioDumb class.
 

I don't think there is any special reason you should need to do this, except when you are designing very specific audio objects.
You are only making it unnecessarily complicated.

Otherwise, using my modified tool is the easiest approach, as it makes it possible to draw and easily modify complex voices directly in the tool.

You can also make the whole design in a modular way; see the example "Demo Flow A", which is a modular version of kd5rxt-mark's design:
"NodeArraySynthMain" is the main class,
with "NoteGen" representing the Voice class.

Note that it uses my modified "C++ template" mixer
(which allows any number of inputs).

https://forum.pjrc.com/threads/60690-queued-TeensyMIDIPolySynth
kd5rxt-mark's original design is called "Original Design" in the examples menu.


Here is what the simplest example exports:
Code:
// TeensyAudioDesign: begin automatically generated code
// the following JSON string contains the whole project, 
// it's included in all generated files.
// JSON string:[{"type":"tab","id":"f1d578c.a708688","label":"Voice","inputs":0,"outputs":0,"export":true,"isMain":false,"mainNameType":"tabName","mainNameExt":".ino","settings":{},"nodes":[{"id":"Voice_waveform1","type":"AudioSynthWaveform","name":"waveform","comment":"","x":133,"y":97,"z":"f1d578c.a708688","bgColor":"#E6E0F8","wires":[["Voice_Out1:0"]]},{"id":"Voice_Out1","type":"TabOutput","name":"Out","comment":"","x":313,"y":97,"z":"f1d578c.a708688","bgColor":"#cce6ff","wires":[]}]},{"type":"tab","id":"Main","label":"Main","inputs":0,"outputs":0,"export":true,"isMain":false,"mainNameType":"tabName","mainNameExt":".ino","settings":{},"nodes":[{"id":"Main_Voice1","type":"Voice","name":"voice","x":187,"y":103,"z":"Main","bgColor":"#CCFFCC","wires":[["Main_i2s1:0","Main_i2s1:1"]]},{"id":"Main_i2s1","type":"AudioOutputI2S","name":"i2s","comment":"","x":305,"y":105,"z":"Main","bgColor":"#E6E0F8","wires":[]}]},{"id":"Voice_waveform1","type":"AudioSynthWaveform","name":"waveform","comment":"","x":133,"y":97,"z":"f1d578c.a708688","bgColor":"#E6E0F8","wires":[["Voice_Out1:0"]]},{"id":"Voice_Out1","type":"TabOutput","name":"Out","comment":"","x":313,"y":97,"z":"f1d578c.a708688","bgColor":"#cce6ff","wires":[]},{"id":"Main_Voice1","type":"Voice","name":"voice","x":187,"y":103,"z":"Main","bgColor":"#CCFFCC","wires":[["Main_i2s1:0","Main_i2s1:1"]]},{"id":"Main_i2s1","type":"AudioOutputI2S","name":"i2s","comment":"","x":305,"y":105,"z":"Main","bgColor":"#E6E0F8","wires":[]}]

class Voice
{
public:
    AudioSynthWaveform               waveform;

    Voice() { // constructor (this is called when class-object is created)

        
    }
};

class Main
{
public:
    Voice                            voice;
    AudioOutputI2S                   i2s;
    AudioConnection                  *patchCord[2]; // total patchCordCount:2 including array typed ones.

    Main() { // constructor (this is called when class-object is created)
        int pci = 0; // used only for adding new patchcords


        patchCord[pci++] = new AudioConnection(voice.waveform, 0, i2s, 0);
        patchCord[pci++] = new AudioConnection(voice.waveform, 0, i2s, 1);
        
    }
};
// TeensyAudioDesign: end automatically generated code

The only downside is that when creating new AudioConnections, the full path voice.waveform has to be used.
 
Thanks for your reply; your web interface https://manicken.github.io/ helped me a lot to understand.

The problem I see with not creating an audio object is that the main program creating the Synth element needs to know the last patchCord layer in order to link it to the audio output. So if I update the Synth object, I will need to update all the elements using it. And as I want this Synth object to be reusable in different ways, I feel it is more reasonable to make this Synth class an audio object extending AudioStream.

But I will still look again in detail at all your examples; as I am just starting to use this lib, I might be overcomplicating things :p
 
From what I can understand, you want a constant-named output node which just transfers the data from the input to the output;
that is basically what you are doing right now.
[attachment: apiel_dummyOut.png]

It's a good concept, but it adds extra latency to the signal.

And I have no idea how you would add inputs to the voice class.

I have been thinking about making it possible to name specific AudioConnections, to make dynamic connections easier,
but there is a "bug" that doesn't allow a connection to be disconnected from a node that has multiple destinations,
so that functionality cannot be used in situations like that.
[attachment: multipleDestinations.png]


If you are thinking about functions,
they can already be included in the exported class that I made with the tool.

You can also make code objects that allow coding directly in the tool, using the built-in Ace editor.
Then when the project is exported, it contains a complete project/sketch ready for compilation/upload.

I have created a thread about the modified Tool here:
https://forum.pjrc.com/threads/65740-Audio-System-Design-Tool-update
 