Precise sample delay for the Karplus-Strong algorithm

Status
Not open for further replies.

MorrisMC

Hello,

this is my first forum post, so first of all I'd like to say "thank you" for your awesome Teensy products and the great audio library. I've done a lot of sound projects so far and had a lot of fun with them!

For my current project I'd like to do some physical modeling with the Teensy and the audio adaptor. My first attempt was a simple Karplus-Strong algorithm (not 100% the original) to synthesize a plucked string. It worked and sounded good so far; see my block diagram:

[Attached screenshot: Bildschirmfoto 2016-03-01 um 17.17.58.png (block diagram of the design)]

To get the correct pitch in Karplus-Strong you have to set the length of the delay line: the number of delayed samples is N = Fsample / Fpitch. For the note A4 = 440 Hz at Fs = 44100 Hz, the delay length should be about 100 samples, but the resulting pitch is much lower. After some research I figured out that even when I set the delay to 0.0 ms, it still produces delay. Am I doing something wrong? With the delay set to 0.0 ms, my Karplus-Strong produces a pitch of almost the note F4, which is about 340 Hz.
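For reference, the length calculation itself is trivial in plain C++ (nothing Teensy-specific; the function name here is just illustrative):

```cpp
#include <cmath>

// Karplus-Strong delay-line length: N = Fs / Fpitch, rounded to the
// nearest whole sample. The rounding error is what slightly detunes
// high notes when only integer sample delays are available.
int ksDelaySamples(double sampleRate, double pitchHz) {
    return (int)std::lround(sampleRate / pitchHz);
}
```

For example, `ksDelaySamples(44100.0, 440.0)` gives 100 samples for A4, and a fixed extra delay of 128 samples by itself corresponds to 44100 / 128 ≈ 344.5 Hz, which matches the observed pitch.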

Is it possible to delay samples very precisely, or even by just one sample, with the delay object?

If not, is it possible to modify or write a new audio object to do this?

Thanks so much!
 
Backwards connections have 2.9ms delay. You might have 2 backwards connections, depending on whether the export is putting the delay before or after the biquad filter.

Karplus-Strong probably wants to be implemented inside the library, but this is certainly a good way to experiment, as long as you can live with the 128 sample delay of one backwards connection.
 
Thanks for your reply! Just curious, what's the reason for the 128-sample delay on backward connections? Unfortunately I can't live with that delay, because then it wouldn't be possible to produce pitches higher than 344 Hz with the System-Designer Karplus-Strong.

So the only way would be a full implementation of Karplus-Strong in a new audio object? I'm relatively new to C++; I have programmed some delay lines in C++ with the JUCE framework as a VST plug-in. So the update() function is where the signal processing happens, right?
 
So the 128-sample delay is just on backward connections; how does the library know that it's backward? Is there any chance of modifying the AudioStream class to set this to (almost) zero?

My plan is to write very simple new audio objects. But I'd like to use existing objects, e.g. filters, to avoid implementing them again directly in the library code, which would make things a lot easier for me. My dream would be to experiment with delay lines and feedback loops to realize waveguide models.
 
how does the library know that it's backward?

Look at the generated code. The order the objects are created matters.

The connection objects have 4 parameters, first 2 are the source, the other 2 are the destination.

When the destination appears after the source in the list of objects, there's no delay. That's the way things normally work out. Any connection that loops back to an input on the same object, or any prior object in the list, will cause a 128 sample delay.
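In GUI-exported code this looks roughly like the following wiring fragment (object names are illustrative, not the original poster's design; this only shows the ordering rule and won't run off a Teensy):

```cpp
// Objects are created in list order. A connection whose destination
// appears LATER in the list than its source adds no delay.
AudioSynthNoiseWhite noise;    // created first
AudioMixer4          mixer;    // sums the pluck noise and the feedback
AudioEffectDelay     delay1;
AudioFilterBiquad    filter;
AudioOutputI2S       out;

// AudioConnection(source, sourcePort, destination, destPort)
AudioConnection c1(noise,  0, mixer,  0);  // forward: no extra delay
AudioConnection c2(mixer,  0, delay1, 0);  // forward: no extra delay
AudioConnection c3(delay1, 0, filter, 0);  // forward: no extra delay
AudioConnection c4(filter, 0, mixer,  1);  // backwards (mixer is earlier
                                           // in the list): +128 samples
AudioConnection c5(filter, 0, out,    0);  // forward: no extra delay
```

Here c4 loops back to an object created earlier, so that feedback path carries the one-block (128-sample, 2.9 ms) delay described above.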


Is there any chance of modifying the AudioStream class to set this to (almost) zero?

You could find and edit AudioStream.h and try changing the block size. Some parts of the library will break if you set it to anything other than 128, but a lot will adapt. Enough should still work to be useful.

The block size should be a multiple of 16, since some objects have loops that process 8 or 16 samples at a time. If you break those objects, they'll corrupt memory and the library is likely to crash in bad ways.
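The relevant line is a single macro in the Teensy core's AudioStream.h (the exact path and surrounding guard vary by Teensyduino version, so treat this as a sketch of what to look for):

```cpp
// In the Teensy core, e.g. .../cores/teensy3/AudioStream.h
#ifndef AUDIO_BLOCK_SAMPLES
#define AUDIO_BLOCK_SAMPLES  128   // default; try e.g. 16 for ~0.36 ms
                                   // blocks, but keep it a multiple of 16
#endif
```

Smaller blocks shorten the backwards-connection delay proportionally (16 samples at 44100 Hz is about 0.36 ms) at the cost of more per-block CPU overhead.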
 
That reply was fast, thank you! I lowered the block size and it worked out. That's very helpful for quick experiments with very short delay lines (and feedback paths). I'll have to dig deeper into developing new objects in the future anyway.
 
Great. If you make any progress, I hope you'll consider sharing.

Karplus-Strong and other physical model synthesis have been on my long-term todo list for the audio library.
 
On this topic, I am wondering how rapidly the biquad coefficients can update? Do the coefficients have linear interpolation or anything to avoid popping? How about clearing them if something goes haywire, is there a clear command or would setting the coefficients to 0 for a moment do that?

I would like to try implementing some of the resonant filters I've been testing in Audulus for making a modal based physical model synth. The cool thing about that technique is you can create an array of filters that ring out different frequencies, which can emulate many drums and struck instruments. I've been reading up and learning the details of physical modeling for the past couple months and would be totally open to sharing that info for library object creation.

Also, re: Karplus-Strong delay object creation, I think it might be easy enough to copy-paste two library objects into a new one and make them one thing, no?

You may also want to consider using the faster filter object, since computing the biquad coefficients takes five separate, fairly involved floating-point calculations.
 
I am wondering how rapidly the biquad coefficients can update?

Instantly, but only at 128 sample block boundaries.

Do the coefficients have linear interpolation or anything to avoid popping?

Nope. But the filter state is zeroed, which pretty much guarantees a loud pop. After Mykle's experience, I'm probably going to remove that.

How about clearing them if something goes haywire, is there a clear command or would setting the coefficients to 0 for a moment do that?

I recently asked about the stability concerns on Columbia's Music-DSP mail list. Many people replied with references to papers, which I haven't had time to read yet. Realistically, I probably won't even touch this for 6 months to a year. After the prop shield, the K66 Teensy is top priority, then an overhaul/update of the website & documentation.

I would like to try implementing some of the resonant filters I've been testing in Audulus for making a modal based physical model synth. The cool thing about that technique is you can create an array of filters that ring out different frequencies, which can emulate many drums and struck instruments. I've been reading up and learning the details of physical modeling for the past couple months and would be totally open to sharing that info for library object creation.

That sounds pretty awesome! I'm really curious to hear how it goes.

I can tell you I'm planning to include a Karplus-Strong object in the library within the next month or two. At least a few people have said they're working on this. If any of those efforts mature and are contributed, I'd love to use them. If not, the algorithm is fairly simple. I imagine I'll do it in a few days, once I actually have a few days to dedicate, if nobody else contributes first.
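The core algorithm really is short. A minimal plain-C++ sketch of classic Karplus-Strong (floats for clarity; this is not the library implementation, and the function name is made up here):

```cpp
#include <cstdlib>
#include <vector>

// Classic Karplus-Strong: a delay line of N = Fs/Fpitch samples is
// filled with noise (the "pluck"), then each sample is fed back
// through a 2-point average (a gentle lowpass) with slight extra
// attenuation, so the "string" rings at Fs/N and decays.
std::vector<float> karplusStrong(int delaySamples, int outputSamples) {
    std::vector<float> line(delaySamples);
    for (float &s : line)                      // pluck: fill with noise
        s = (float)rand() / RAND_MAX * 2.0f - 1.0f;

    std::vector<float> out(outputSamples);
    int idx = 0;
    for (int n = 0; n < outputSamples; n++) {
        out[n] = line[idx];
        int next = (idx + 1) % delaySamples;
        // feedback: average two neighbors, slightly attenuated
        line[idx] = 0.996f * 0.5f * (line[idx] + line[next]);
        idx = next;
    }
    return out;
}
```

With delaySamples = 100 at 44100 Hz this rings at roughly 441 Hz; the averaging makes high harmonics die away faster, which is what gives the plucked-string character.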
 
Hello again,

to understand more about waveguide theory, I first practiced with Max/MSP and the gen~ patcher and examined the Stanford STK objects. Now I'd like to retry implementing waveguide models on my Teensy. I'm relatively new to C++ and have a really hard time understanding what's going on in the update() function. Maybe my problem is dealing with the 16-bit integer samples... So far I have experience with frameworks like JUCE or the STK. E.g. in JUCE there's a process() function, similar to update(), which deals with float buffers, so DSP calculations are easy to handle.

I would be very happy if someone has helpful tutorials, hints, or a very simple audio object for me to learn from. The best would be a very simple "gain knob" object.

Another very interesting possibility (I think it was discussed in this forum before): has anyone been able to run C++ code generated by the Heavy compiler from Pure Data? I played around with Heavy for just one day and it was, again, simple with JUCE. The heavy_process function expects float values as well. I was not successful integrating this function into an audio library object. There is more information about Heavy at https://enzienaudio.com/docs/c.html; it sounds very interesting.

Thanks for any help!
 
... have a really hard time understanding what's going on in the update() function.

The audio library will call your update() function every 128 samples. For a synthesis object, you'll allocate() a new block, fill it with 128 samples, then transmit() and release().

Whatever data you write into those 128 samples will be sent to the rest of the audio lib, which presumably connects to an output object so you can hear it. The library takes care of all the connection stuff between objects, so all you have to do is create your 128 samples and transmit them. Don't forget to release the block after you've transmitted, since transmitting doesn't release your hold on the memory.

More details here:

http://www.pjrc.com/teensy/td_libs_AudioNewObjects.html
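To make the 16-bit integer math concrete, here is the inner loop of the "gain knob" idea in plain, testable C++. Only the sample math is shown; the allocate()/transmit()/release() wrapper comes from the library, and the function name and Q15 gain format here are assumptions of this sketch, not the library's API:

```cpp
#include <cstdint>

// Scale a block of 16-bit samples by a fixed-point gain.
// gain_q15 is gain * 32768, so 32768 = unity and 16384 = half volume.
void applyGain(int16_t *samples, int count, int32_t gain_q15) {
    for (int i = 0; i < count; i++) {
        int32_t y = ((int32_t)samples[i] * gain_q15) >> 15;
        // saturate to the int16 range instead of wrapping around
        if (y > 32767)  y = 32767;
        if (y < -32768) y = -32768;
        samples[i] = (int16_t)y;
    }
}
```

Inside an AudioStream subclass's update(), this loop would run over the 128 samples of the block you allocate()d before you transmit() and release() it. Doing the multiply in int32 and shifting right by 15 is the usual way to stay in integer math instead of floats.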
 
If anyone's still watching this old thread and interested in Karplus Strong, today I started an implementation in the audio library.

https://github.com/PaulStoffregen/Audio/blob/master/examples/Synthesis/Guitar/Guitar.ino

It's still very much a work-in-progress, but actually sounds pretty decent.

Paul,
I'm still watching the thread! The example sounds great. I went through your (wonderful) audio adapter tutorial with my son and we are planning to write our own Karplus-Strong object. Now we can use yours as a guide.
Thanks,
SteveC
 
Wow! I just ran this code on my new Teensy board and it sounds great!
Would it be possible to add a push button to trigger the note and tune the frequency with a pot? I'm very new to this, sorry for my n00bness :)
 
Would it be possible to add a push button to trigger the note

Yes. In fact, I made a touch sensing guitar for Maker Faire. Code is here:

https://github.com/PaulStoffregen/TouchGuitar

and tune the frequency with a pot?

Yes, you can use analogRead() and then some equations to scale the 0-1023 reading to the frequency you want.
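One possible scaling, sketched in plain C++ (the exponential mapping and the frequency range are arbitrary choices of this example, not anything the library prescribes):

```cpp
#include <cmath>

// Map a 10-bit analogRead() value (0..1023) to a frequency in Hz,
// exponentially from lowHz to highHz so each step of the pot is a
// constant pitch ratio, which feels "musical" rather than linear.
float potToFreq(int reading, float lowHz, float highHz) {
    float t = reading / 1023.0f;                // 0.0 .. 1.0
    return lowHz * std::pow(highHz / lowHz, t); // geometric interpolation
}
```

On the Teensy you would call something like `potToFreq(analogRead(A1), 82.4f, 660.0f)` each time the button is pressed and pass the result to noteOn() (the pin and range here are just examples).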

Currently the Karplus Strong object uses the same frequency you give at noteOn(), so you can't change it after the "string" is vibrating. I'm planning to add an aftertouch API to reassign the frequency without starting a new note or string "pluck", but so far no work has been done on that. The aftertouch API will allow simulating the behavior where a musician touches other frets while the strings vibrate, without plucking or strumming them. Contributions welcome.....
 