# Clicking envelope with Audio Library?

But the question remains: what *should* the envelope object actually do if you end a note and then immediately begin the next? Do you understand this question?

The envelope object is already passing the sine wave at the sustain gain (which you set to 1.0). So if you end the note and begin another, what waveform do you expect to appear at the output?

I expect it to restart with the new note without waiting for the previous note's envelope to finish (fully stop)... and without a click.

How can I do this without waiting?

But what does "restart the new note" actually mean when the signal is already a maximum-amplitude waveform? It's already as loud as it can go. So what does starting a note sound like when the volume is already at max?

It needs to bring down the amplitude of the previous note and then start the new note...

I'm testing with a fade object and a mixer object, but the click is always present...

If you have a sustain of 0.2 on the previous note and 0.2 on the new note (0.2 + 0.2 = 0.4), the click is still always present...

Try my code and try to get a sound without a click by changing the envelope parameters... the click is always present.

If there is a previous sound you ALWAYS get a click; for now there is only one way: more and more voices for a single instrument...

If you have a new idea to solve this PROBLEM (because it is a problem), please let me know...

And if you think it's not a problem, show me code that plays consecutive notes without a click; otherwise we are talking about nothing...

> It needs to bring down the amplitude of the previous note and then start the new note...

Can you explain how to "bring down the amplitude of the previous note"? BE SPECIFIC!!!

The previous note is at the maximum possible amplitude of 1.0 at the moment you end it and immediately begin a new note.

> Can you explain how to "bring down the amplitude of the previous note"? BE SPECIFIC!!!

Is it possible to add a check in the envelope noteOn function that checks whether the object already has a non-zero amplitude, and sets it to ZERO before starting the next envelope note?

> The previous note is at the maximum possible amplitude of 1.0 at the moment you end it and immediately begin a new note.

There is the same click even if the amplitude is 0.1... you can try it... I have already tested...

> Is it possible to add a check in the envelope noteOn function that checks whether the object already has a non-zero amplitude, and sets it to ZERO before starting the next envelope note?
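The check being asked about could look something like this sketch. `RetriggerEnvelope`, its members, and the ramp rates are all illustrative assumptions, not the actual Teensy `AudioEffectEnvelope` internals: on `noteOn()`, if the gain is still non-zero, the envelope first ramps quickly to zero and only then starts the new attack, so the waveform never jumps.

```cpp
// Hypothetical envelope sketch: a noteOn() received while the gain is
// non-zero defers the attack until a fast ramp-down reaches zero.
struct RetriggerEnvelope {
    float level = 0.0f;           // current gain, 0..1
    bool  pendingAttack = false;  // attack deferred until level hits zero
    bool  active = false;
    float attackStep = 1.0f / 64.0f;  // reach 1.0 in 64 samples (assumed)
    float killStep   = 1.0f / 32.0f;  // fast ramp-down on retrigger (assumed)

    void noteOn() {
        if (level > 0.0f) {
            pendingAttack = true;   // old note still sounding: kill it first
        } else {
            active = true;          // silent: start the attack immediately
            pendingAttack = false;
        }
    }

    // Produce one sample of gain; multiply this with the oscillator output.
    float next() {
        if (pendingAttack) {
            level -= killStep;          // fast fade of the old note
            if (level <= 0.0f) {
                level = 0.0f;
                pendingAttack = false;
                active = true;          // now begin the new attack
            }
        } else if (active && level < 1.0f) {
            level += attackStep;        // simple linear attack
            if (level > 1.0f) level = 1.0f;
        }
        return level;
    }
};
```

With a 32-sample kill ramp the new attack is delayed by under a millisecond at 44.1 kHz, which is the trade-off discussed later in the thread.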

I have the feeling this discussion needs some intervention.

@danixdj: can you draw on a sheet of paper exactly what you want to see:
- two horizontal lines, the top one indicating when you play a tone, the bottom one the expected amplitude.
- take a photo of it and upload it to the forum

I guess drawing what you expect to see is easier than explaining it.

> I have the feeling this discussion needs some intervention.
>
> @danixdj: can you draw on a sheet of paper exactly what you want to see:
> - two horizontal lines, the top one indicating when you play a tone, the bottom one the expected amplitude.
> - take a photo of it and upload it to the forum
>
> I guess drawing what you expect to see is easier than explaining it.

Ok, so this is the same condition on a commercial synthesizer... set up exactly the same as my code on the Teensy... (1 voice, 400 Hz sinusoid, decay 200, sustain 1, release 200, attack 30)

This is the transition point from the first to the second note...

And this is the audio:

https://www.dropbox.com/s/a129qyha0wc38kd/sinusoidal_commercial_synth.mp3?dl=0

There isn't a click; you can hear something like a kick, but it's an in-phase sound, acceptable and perfectly fine... not a digital click!

I hope all of this is explained correctly.

> Ok, so this is the same condition on a commercial synthesizer... set up exactly the same as my code on the Teensy... (1 voice, 400 Hz sinusoid, decay 200, sustain 1, release 200, attack 30)

Great, this is what I was looking for (and this is what I expected you meant by "without click").
But now it should be clear to the guys who can program the audio library.

Obviously, you realize that there must be a minimal delay before starting a note, to allow the amplitude of the previous note to go to zero; otherwise you will always have a phase jump (which you may also hear as a click). The commercial system you show attenuates the amplitude before sending out a new note, so the new note is delayed with respect to the programmed start of the note.

> Obviously, you realize that there must be a minimal delay before starting a note, to allow the amplitude of the previous note to go to zero; otherwise you will always have a phase jump (which you may also hear as a click). The commercial system you show attenuates the amplitude before sending out a new note, so the new note is delayed with respect to the programmed start of the note.

Sure, I know, but I think it's totally imperceptible, and the Teensy library is very fast and responsive; using 64-sample blocks it's about 6 ms... there is time to wait for a correct evolution of the phase.

@danixdj:
So, would the amplitude going to zero within one data block (128 samples, 2.9 ms @ 44.1 kHz) be OK?

E.g., on reception of a new note:
- send a block of the old note with a fast attenuation to zero
- send the new note.
Is that what you are expecting?

I think yes... if it's possible to do it within one data block, that's perfect!
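The two-step handoff proposed above (fade the old note to zero inside one audio block, then start the new note from silence) could be sketched like this; `fadeOutBlock` and the linear ramp are assumptions for illustration, not library code:

```cpp
#include <cstddef>

// One audio block: 128 samples, about 2.9 ms at 44.1 kHz.
constexpr size_t BLOCK = 128;

// Apply a linear ramp from full gain down to silence across one block.
// After this block the buffer ends exactly at zero, so the next note
// can start from silence with no amplitude discontinuity.
void fadeOutBlock(float *samples) {
    for (size_t i = 0; i < BLOCK; ++i) {
        // gain is 1.0 at the first sample and 0.0 at the last
        float gain = 1.0f - static_cast<float>(i) / (BLOCK - 1);
        samples[i] *= gain;
    }
}
```

A raised-cosine ramp instead of a linear one would spread even less high-frequency energy, at the same one-block cost.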

So Paul is going to add another envelope section for stolen notes that will give a fast decay?

Sorry if I'm off base on any of this, as I've not read the full thread... these are just sundry comments.

Isn't part of the issue the OSC? Are we talking about a single oscillator that is told to change period/frequency at note-on? If so, shouldn't it wait for a zero crossing and try to retain the phase as much as possible? If the envelope changed at the zero crossing, I don't think you'd hear a click. Sample players would be trickier.
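The zero-crossing idea might be sketched as follows, with the frequency change held pending until the oscillator's phase wraps (the start of a cycle, where a sine is at zero). `ZeroCrossOsc` is a hypothetical stand-in, not the library's waveform object:

```cpp
#include <cmath>

// Oscillator that defers frequency changes to the next cycle boundary,
// keeping the output waveform continuous through the change.
struct ZeroCrossOsc {
    float phase = 0.0f;        // normalized phase, 0..1
    float phaseInc;            // freq / sampleRate
    float pendingInc = -1.0f;  // < 0 means no change pending

    ZeroCrossOsc(float freq, float fs) : phaseInc(freq / fs) {}

    // Request a new frequency; it takes effect only at the phase wrap.
    void setFrequency(float freq, float fs) { pendingInc = freq / fs; }

    float next() {
        float out = std::sin(2.0f * 3.14159265f * phase);
        phase += phaseInc;
        if (phase >= 1.0f) {            // cycle boundary: sine is at zero
            phase -= 1.0f;
            if (pendingInc >= 0.0f) {   // safe point to switch frequency
                phaseInc = pendingInc;
                pendingInc = -1.0f;
            }
        }
        return out;
    }
};
```

The cost is up to one cycle of latency on the frequency change (about 2.3 ms at 440 Hz), which is the same latency trade-off discussed elsewhere in the thread.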

Sometimes you want note stealing without the voice breathing on you (dropping out at note-on as the attack envelope is fired). String legato, for example. You want the envelope staying at its sustain, or possibly re-firing the attack but outputting the max value between the old sustain and the new attack.

I learned what I know about synthesis from using a software package. Its voice-management system was quite complex, and there were a number of Boolean settings for things like voice stealing that determined whether and how the envelope generators would respond to MIDI messages. (The one I used was SynthMaker (now FlowStone), and its "MIDI to Voices" component is documented in its component reference, but I'm sure other tools like SynthEdit, Max and Reaktor have similar features.)

I don't think Paul's library is going to handle as many of these issues as these software products anytime soon...

Finally... clicks are high-frequency content of very short duration. Cross-mixing two signals together for a transition, or low-pass filtering/slew limiting the output from the transition point for a handful of samples, can eliminate perceived clicks. I used the latter technique in a digital looper with instantly variable delay taps: I found I was getting clicks when I changed the tap point of a delay, but if I imposed a sort of filter on the output for a few samples, it got rid of them.
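The slew-limiting technique mentioned here can be sketched as a per-sample clamp on how fast a gain value may change; `SlewLimiter` and the step size are illustrative assumptions:

```cpp
#include <algorithm>

// Limit how fast a control value (e.g. an envelope gain) may move per
// sample. A sudden jump in the target becomes a short ramp, which removes
// the high-frequency content perceived as a click.
struct SlewLimiter {
    float current = 0.0f;
    float maxStep;   // maximum allowed change per sample

    explicit SlewLimiter(float step) : maxStep(step) {}

    float process(float target) {
        float diff = target - current;
        // clamp the per-sample change to the range [-maxStep, +maxStep]
        diff = std::max(-maxStep, std::min(maxStep, diff));
        current += diff;
        return current;
    }
};
```

With `maxStep = 0.1`, a full 0-to-1 gain jump is spread over 10 samples (about 0.23 ms at 44.1 kHz), which is inaudible as latency but removes the step discontinuity.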

...hope this isn't too off topic.

> So Paul is going to add another envelope section for stolen notes that will give a fast decay?

I'm not suggesting any "other" method.
But I see that there are unpleasant artefacts (clicks) that merit attention.
In fact, I have no idea whatsoever how MIDI and related instruments are designed. I wish I knew more about it, also to avoid semantic confusion.

> So Paul is going to add another envelope section for stolen notes that will give a fast decay?

> I learned what I know about synthesis from using a software package. Its voice-management system was quite complex, and there were a number of Boolean settings for things like voice stealing that determined whether and how the envelope generators would respond to MIDI messages. (The one I used was SynthMaker (now FlowStone), and its "MIDI to Voices" component is documented in its component reference, but I'm sure other tools like SynthEdit, Max and Reaktor have similar features.)

We can talk about how to make a legato or a good sound when a single-voice OSC changes note, but the problem in this case is a digital artefact, not something like a kick (a natural sound for this event; see my previous post with the example waves).

I have some suggestions for reducing the kick sound when the note changes... but I have none for the click...

First we must solve the click, and afterwards we can work on the MIDI signals for correct control of the envelope functions.

The envelope could have a slew-limiter-type filter applied to prevent it from changing too quickly. Then, if a click remains, it's from the source signals not being in phase at the switch point.

That might be what Paul has in mind and he's asking what the tolerance for increased latency is.

Yes, that is a big question. If I change the envelope object to delay the attack phase, which is a pretty big "if" at this point, then how much note-on latency is acceptable becomes the big question.

But so far, I'm not feeling confident from the comments posted on this thread.

At the very least, before I do anything, I'll probably spend some time with a few friends who are into modular synthesis. Several ADSR Eurorack modules are sold; I'll probably play with some of those and see what they do in these unusual circumstances.

If the intent is to mimic the output of a modular device, it should have some kind of slew limit so as not to impart audible frequencies when re-firing.

Or even if you just set a phase to zero, you could get click-like sounds from multiplying the envelope with a signal.

You would not need to delay the attack phase at all, and if the slope of the attack is less than your max slew rate, it would not impact the attack phase, where latency is most critical.

Maybe it could be an optional feature, switched on in code?

Chiming in here.......This is a real tough one.

My go-to solution, if I were authoring this, would be to design my own polyphony engine and use two (or more) voices per instrument (Frank's suggestion, basically). I could then adjust the envelopes of each voice according to events: I could choose to let the envelope run on the previous note, trigger its decay early, shorten the delay cycle, or simply cut it via muting, etc. This way I could simulate most of the settings of any commercial synth. This is already possible with the audio library, but it is not simple.
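The two-voices-per-instrument approach could be sketched as a tiny allocator that alternates note-ons between two slots, so the previous note's envelope can finish its release on the other slot. `Voice` and `DuoVoiceAllocator` are hypothetical names; in the audio library each slot would be an oscillator + envelope pair feeding a mixer:

```cpp
// Minimal stand-in for one oscillator + envelope pair.
struct Voice {
    int  note   = -1;
    bool gateOn = false;
};

// Round-robin between two voices: a new note never cuts off the old one,
// because the old note keeps releasing on the other slot.
struct DuoVoiceAllocator {
    Voice voices[2];
    int   nextSlot = 0;

    // Returns the slot that plays the new note.
    int noteOn(int note) {
        int slot = nextSlot;
        nextSlot = 1 - nextSlot;   // alternate between the two slots
        voices[slot].note   = note;
        voices[slot].gateOn = true;
        return slot;
    }

    void noteOff(int note) {
        for (auto &v : voices)
            if (v.gateOn && v.note == note) v.gateOn = false;
    }
};
```

The same pattern scales to N voices; only the slot-selection policy changes.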

Building in a single-data-block stolen-note solution would be a very cool feature, but how far do you go down this rabbit hole?

Is a polyphony engine part of the audio library or an extension library? I'd suggest the latter. It's not a lightweight implementation; it's a memory-hogging, high-load solution: twice the mixers, twice the loading, fluctuating CPU load. The Teensy can do it, no problem, but it doesn't necessarily feel like it belongs in the core library.

I like the idea of supporting stolen notes within the basic envelope system; it would benefit my projects nicely and probably solve some of the artefacts I've noticed when finger drumming.

So I would support implementing a slightly improved stolen-note solution (for "monophonic" synths), but I'd also suggest that the line be drawn there, with further functionality afforded by a basic polyphony engine instead, offering total freedom.

A polyphony engine could be very simple indeed, but of course it uses fixed resources, so simply adding voices on the fly is not a simple matter programmatically. A 2-channel example ("duophonic" synths) and a 6-channel example would be a great starting point for creating more complex synths.

But when you start looking at the variance in synth designs (12 oscillators with octave dividers, for example) and at voice-allocation solutions (possible, but not at all easy for a basic coder), it's important NOT to try to build features from all of these synth types into the core.

> Chiming in here... This is a real tough one.
>
> My go-to solution, if I were authoring this, would be to design my own polyphony engine and use two (or more) voices per instrument (Frank's suggestion, basically). I could then adjust the envelopes of each voice according to events: I could choose to let the envelope run on the previous note, trigger its decay early, shorten the delay cycle, or simply cut it via muting, etc. This way I could simulate most of the settings of any commercial synth. This is already possible with the audio library, but it is not simple.

True, Pensive, but do you have an idea of how hard it is to make a real polyphonic synth, or one with FM-modulated oscillators, or anything a bit more complex than a single OSC? If we now need more voices just to make a monophonic (single-voice) synth... how many voices (and thus mixers, oscillators, modulators, and a much more complex sketch) will we need to make a polyphonic synth?

This problem is a great handicap for the correct use and evolution of synthesis on the Teensy...

> Yes, that is a big question. If I change the envelope object to delay the attack phase, which is a pretty big "if" at this point, then how much note-on latency is acceptable becomes the big question.
>
> But so far, I'm not feeling confident from the comments posted on this thread.
>
> At the very least, before I do anything, I'll probably spend some time with a few friends who are into modular synthesis. Several ADSR Eurorack modules are sold; I'll probably play with some of those and see what they do in these unusual circumstances.

My Teensy project has 8 ms of latency... my commercial monophonic synth (a Moog) has 32 ms of latency... so we have 24 ms of margin.

> True, Pensive, but do you have an idea of how hard it is to make a real polyphonic synth, or one with FM-modulated oscillators, or anything a bit more complex than a single OSC? If we now need more voices just to make a monophonic (single-voice) synth... how many voices (and thus mixers, oscillators, modulators, and a much more complex sketch) will we need to make a polyphonic synth?
>
> This problem is a great handicap for the correct use and evolution of synthesis on the Teensy...

You need two voices for a monosynth to do that.

For a polysynth design it could be done many ways, but you would probably need (number of voices + 1) and use a stack design, so you always have "number of voices" available and are always able to cleanly fade out the last voice.
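The "number of voices + 1" stack could be sketched as an allocator that prefers free slots and otherwise steals the oldest note; the names, voice count, and oldest-first policy here are all illustrative assumptions:

```cpp
#include <array>

constexpr int VOICES = 4;           // audible polyphony (assumed)
constexpr int SLOTS  = VOICES + 1;  // one spare slot for a clean fade-out

struct Slot {
    int note = -1;
    unsigned long age = 0;  // allocation counter value when the note started
    bool busy = false;
};

struct PolyAllocator {
    std::array<Slot, SLOTS> slots;
    unsigned long clock = 0;

    // Returns the slot index that should play the new note.
    int noteOn(int note) {
        ++clock;
        // prefer a free slot
        for (int i = 0; i < SLOTS; ++i)
            if (!slots[i].busy) return assign(i, note);
        // all busy: steal the oldest note (smallest start counter)
        int oldest = 0;
        for (int i = 1; i < SLOTS; ++i)
            if (slots[i].age < slots[oldest].age) oldest = i;
        return assign(oldest, note);
    }

    void noteOff(int note) {
        for (auto &s : slots)
            if (s.busy && s.note == note) s.busy = false;
    }

private:
    int assign(int i, int note) {
        slots[i].note = note;
        slots[i].age  = clock;
        slots[i].busy = true;
        return i;
    }
};
```

Because there is always a spare slot, the stolen voice can be faded out gracefully while the full "number of voices" remains available for new notes.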

It's been done many times before; just google "Teensy polyphonic synth". Here's an example polysynth solution on GitHub: https://github.com/otem/teensypolysynth

However, I do agree that ultimately, being able to drag a duosynth object or a polysynth object onto the audio library wiresheet, and have it stand alone with all the mixers encapsulated, would be a lot more convenient.

But I must warn you: this is embedded development. As easy as the excellent audio library has made it, it is still bound by the hardware.

By stacking up mixers like that, into objects, you are going to lose your 8 ms latency. It is not free, and when working on embedded hardware any given solution should be directly engineered as a whole. We do not have the luxury of power that synth-design apps on Mac/Windows benefit from. With ease of use come performance hits.

That is why those objects are not there... YET. (Among other reasons of resources and demand!)

It's getting a little bit confusing for me.
Is someone willing to give us (me) a crash course on what you are talking about?

OK, I tried to learn about it with http://electronicmusic.wikia.com/wiki/Voice etc., but the text is too short and has a lot of hyperlinks, so it is somewhat difficult to follow/learn from.

What I understood so far:
- in ideal HW: every key has an associated note and is connected to its own voice circuit, the sounds of which are then mixed together (either in HW or in the air)
- in real HW: there are only a few voice circuits, so there is a key-to-voice allocation. The strategy differs (round robin, LIFO, etc.). When no more voice circuits are available, some voice circuit must be interrupted (stolen).

The problem is then which one to choose and how to terminate the running voice circuit.

Is this understanding correct?

If yes, why does one need to steal a voice in SW in the first place?
Isn't SW ideal for allocating a voice circuit (aka voice object) for every note?
Is it then not only an issue of how to mix the voices together?

You could then simply design a table-driven mixer that combines all voices together.
Or am I completely wrong about that?

> It's getting a little bit confusing for me.
> Is someone willing to give us (me) a crash course on what you are talking about?

WMX, we aren't talking about synth theory; we are talking about a DIGITAL ARTIFACT CLICK (repeat: a digital artifact click) when you repeat a note on the same OSC and envelope...

See the example of the note-change situation on the Teensy versus the correct envelope of a commercial synth... all the other questions are off topic... we can talk about all of them, but only after there is no digital artifact click, I think.