AudioEffectsDelayStereo_F32

tigger

Active member
First, I'd like to say this library is really cool!
Here's a link to the GitHub repo (in case you haven't seen it):
https://github.com/hexeguitar/hexefx_audiolib_F32

But I have a couple of questions regarding the AudioEffectsDelayStereo_F32.
1st question: why does time() square t?

Code:
void time(float t, bool force = false)
    {
        t = constrain(t, 0.0f, 1.0f);  // clamp the control value to 0..1
        t = t * t;                     // square the control value
        // map (inverted) onto the available delay range
        t = map(t, 0.0f, 1.0f, (float32_t)(dly_length-dly_time_min), 0.0f);
        __disable_irq();               // guard against the audio interrupt
        if (force) dly_time = t;       // apply immediately when forced
        dly_time_set = t;              // new target delay time
        __enable_irq();
    }

This seems to make the scaled time incorrect. That is, if the max delay is set to 1000 ms and you set t to 0.5, you get about 250 ms instead of 500 ms (0.5² = 0.25, and 0.25 × 1000 ms = 250 ms).

2nd question: can you point me in the right direction on how I might edit the delay so that I can turn off the ping pong and instead delay each channel separately, with the other features also applied per channel? I'd like to add a function to do this, but I can't quite follow what the ping pong code does.

@Pio I guess this post is really directed towards you, but maybe someone else has experience with this.
 
Re 1) It's explained in the comment directly above that function: the delay time parameter is not a time in ms, but a value in the 0.0-1.0 range which is mapped to the maximum available delay length. It's just a design approach I use in many components. It makes life easier when building more complex user interfaces, preset systems, etc.: I don't have to remember what the min/max values are for each parameter, I just send a 0-1 value and let the component scale it properly.
t is squared as a cheap kind of logarithmic pot taper emulation. It makes the time control much finer at lower values (chorus/flanger range).
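
If you really want to dial in an absolute time in ms despite the squared taper, you can invert the mapping on your side before calling time(). A rough Arduino-side sketch (my own helper, not part of the library; it ignores dly_time_min, so it is only approximate, max_delay_ms is assumed to match the maximum delay the instance was set up with, and "echo" is just a placeholder instance name):

Code:
float msToControl(float ms, float max_delay_ms)
{
    float lin = constrain(ms / max_delay_ms, 0.0f, 1.0f); // linear fraction of the max delay
    return sqrtf(lin);                                     // undo the t*t taper inside time()
}

// usage: roughly 500 ms out of a 1000 ms maximum
// echo.time(msToControl(500.0f, 1000.0f));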

Re 2)
start here.
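
In case it helps while you work through the code: the difference between ping pong and dual mono usually comes down to the feedback routing only. This is not the hexefx implementation, just a generic per-sample sketch with a made-up DelayLine type:

Code:
// Hypothetical minimal delay line, only for illustration.
struct DelayLine {
    static const int LEN = 4800;
    float buf[LEN] = {0.0f};
    int idx = 0;
    float read() const { return buf[idx]; }
    void write(float x) { buf[idx] = x; if (++idx >= LEN) idx = 0; }
};

DelayLine dlyL, dlyR;
const float fb = 0.4f;  // feedback amount

// ping pong: each delay line is fed the *other* channel's echo,
// so repeats bounce between left and right
void processPingPong(float inL, float inR, float &outL, float &outR)
{
    outL = dlyL.read();
    outR = dlyR.read();
    dlyL.write(inL + outR * fb);
    dlyR.write(inR + outL * fb);
}

// dual mono: each channel feeds back into its own delay line,
// so left and right stay independent
void processDualMono(float inL, float inR, float &outL, float &outR)
{
    outL = dlyL.read();
    outR = dlyR.read();
    dlyL.write(inL + outL * fb);
    dlyR.write(inR + outR * fb);
}

The real code is of course more involved, but that feedback routing is the core difference to look for.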
 
I figured the t*t had to be something like that. Thanks.

I'm still decoding the second part, but I'll get there eventually :)
Thanks
 