need help with audio latency and noteFreq object


emmanuel63

Hello,

I have built a synth that uses an envelope follower to trigger notes. The prototype takes the envelope of an acoustic instrument (a flute or a trumpet in my case), analyses the frequency of every note with the noteFreq object, and triggers the synth section with the right note and velocity. It works pretty well and I am quite happy with the result, but I would like to have less latency. When playing fast and staccato, the latency is noticeable.

As regards the noteFreq object, I edited the AUDIO_GUITARTUNER_BLOCKS parameter and it helped a lot. But there is still some latency. I also tried to edit AUDIO_BLOCK_SAMPLES in the AudioStream library. It helps with the majority of the audio objects, but unfortunately the noteFreq object doesn't support changing this setting. Is there a way to make it work when changing this parameter? Are there other ways I could try to improve latency?
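
For reference, the rough arithmetic behind those two parameters (assuming the stock library layout, with AUDIO_BLOCK_SAMPLES in AudioStream.h and AUDIO_GUITARTUNER_BLOCKS in analyze_notefreq.h) works out like the sketch below. It only estimates the analysis window notefreq has to fill before it can report a pitch; it is not a measurement.

Code:
// Back-of-the-envelope estimate of the notefreq analysis window.
// The constants mirror the library defaults as I understand them;
// change them to match your own edits.
const float sampleRate   = 44100.0f;  // Teensy Audio default sample rate
const int   blockSamples = 128;       // AUDIO_BLOCK_SAMPLES (AudioStream.h)
const int   tunerBlocks  = 3;         // AUDIO_GUITARTUNER_BLOCKS (library default is 24)

void setup() {
  Serial.begin(9600);
  while (!Serial) ;

  int   windowSamples = tunerBlocks * blockSamples;
  float windowMs      = 1000.0f * windowSamples / sampleRate;
  // The YIN search needs roughly two periods of the lowest note inside
  // the window, so the window length also bounds the lowest detectable pitch.
  float lowestHz      = 2.0f * sampleRate / windowSamples;

  Serial.printf("window: %d samples (%.1f ms), lowest note ~%.0f Hz\n",
                windowSamples, windowMs, lowestHz);
}

void loop() {}

With 3 blocks that is 384 samples, roughly 8.7 ms of data collection before the analysis even starts, plus the processing time on top.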

Emmanuel

PS: there is an old thread about this, but I preferred to start a new one to give more explanation.
 
What Teensy are you using?

What is the lowest frequency you are expecting to measure?

How many AUDIO_GUITARTUNER_BLOCKS do you use?
 
Hi Duff,
Thank you for answering.
I use Teensy 3.6 + audio shield.
I have set AUDIO_GUITARTUNER_BLOCKS to 3. That's the minimum for the range of the flute.
Latency is low, but when playing fast, it's quite difficult for the musician to be accurate with the rhythm.

I run my tests with the most basic code possible, to be sure nothing is interfering.
 

Not sure what you mean: so the latency is good, but when the player is double-timing it the algorithm can't keep up? Have you tried adjusting the threshold value? A smaller value will make the algorithm more precise and more selective about what it considers a valid result. Make small adjustments, like +/-0.01 from the example's value.
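
For anyone following along, the threshold in question is the value passed to notefreq's begin() call. Here is a minimal sketch, assuming the audio-shield mic input; the object names and the AudioMemory amount are placeholders, and 0.15 is the value I believe the library's NoteFrequency example ships with.

Code:
#include <Audio.h>

AudioInputI2S             i2s_in;       // audio shield mic input
AudioAnalyzeNoteFrequency notefreq1;
AudioConnection           patchCord1(i2s_in, 0, notefreq1, 0);
AudioControlSGTL5000      sgtl5000_1;

void setup() {
  Serial.begin(9600);
  AudioMemory(30);
  sgtl5000_1.enable();
  sgtl5000_1.inputSelect(AUDIO_INPUT_MIC);
  sgtl5000_1.micGain(36);               // adjust to the mic and playing level
  // Nudge the threshold in small steps such as 0.14 or 0.16 and watch how
  // probability() and latency respond.
  notefreq1.begin(0.15f);
}

void loop() {
  if (notefreq1.available()) {
    Serial.print(notefreq1.read());
    Serial.print(" Hz, prob = ");
    Serial.println(notefreq1.probability());
  }
}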
 
I was not clear, sorry. I mean that when the flute player plays slow tunes, the latency is not a problem. But when he plays high-tempo melodies with a lot of notes, the latency becomes a problem. It gets difficult for the musician to keep playing naturally because the small delay caused by the latency becomes noticeable.
Adjusting the threshold is a good suggestion. It also has an effect on latency. I will definitely use it and trade precision for latency.
Thanks for the advice. By the way, this library is really amazing. I will send you a link to a video very soon.

Defragster, I am sorry for re-posting. I am quite new to the forum. I will stick to old threads now.
 
@emmanuel63 - no apology needed. Just a note that may help get attention and understanding for issues: new threads for new issues are great, but for follow-up on the same issue, having the thread's prior context helps with continuity for those who can help. And a new post is as good as a new thread when new info is added.

Good luck with the freq finding. How many notes can a flute play at once? How many notes per second at the higher tempos?
 
That's fine. I'll stick to that.

The flute is monophonic. It can play very fast. 8 notes per second is common.
 
Do you do any filtering before feeding the signal to noteFreq? That could help if the mic is picking up a lot of outside or internal noise. Also, I found in my guitar-tuning application that I got better results if I low-passed the signal close to the frequencies I was looking for; for guitar tuning I set the cutoff just above the high open E on the guitar. Maybe a band-pass filter in the flute range could help; just give yourself some headroom on either side.
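
One possible way to wire that band-pass idea up with the stock objects, as a sketch only: a biquad ahead of notefreq, with a high-pass stage below the flute's lowest note and a low-pass stage above its highest fundamental. The corner frequencies here are guesses to be tuned by ear.

Code:
#include <Audio.h>

AudioInputI2S             i2s_in;
AudioFilterBiquad         preFilter;    // band-limits the mic before analysis
AudioAnalyzeNoteFrequency notefreq1;
AudioConnection           patchCord1(i2s_in, 0, preFilter, 0);
AudioConnection           patchCord2(preFilter, 0, notefreq1, 0);
AudioControlSGTL5000      sgtl5000_1;

void setup() {
  Serial.begin(9600);
  AudioMemory(30);
  sgtl5000_1.enable();
  sgtl5000_1.inputSelect(AUDIO_INPUT_MIC);
  // Flute fundamentals run from roughly 260 Hz to 2 kHz, so leave headroom
  // on either side as suggested above.
  preFilter.setHighpass(0, 200, 0.707);
  preFilter.setLowpass(1, 3000, 0.707);
  notefreq1.begin(0.15f);
}

void loop() {
  if (notefreq1.available()) {
    Serial.println(notefreq1.read());
  }
}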
 
Hi Duff,
Yes, I do some filtering. I wanted this project to work on stage, so one big issue was feedback, and filtering helps a lot with that. The idea is to use the built-in SGTL5000 equalizer. It is very effective and so easy to set up. Mic choice is also important: the mic must be directional and not too sensitive. A standard SM57 or SM58 with a good windscreen does the job quite well.
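
A rough sketch of the SGTL5000 EQ setup described here, to the best of my understanding of the control object's API; the band gains and the pre-processor routing are assumptions, not the actual settings used in this project.

Code:
#include <Audio.h>

AudioControlSGTL5000 sgtl5000_1;

void setup() {
  sgtl5000_1.enable();
  sgtl5000_1.inputSelect(AUDIO_INPUT_MIC);
  sgtl5000_1.micGain(36);
  // I believe audioPreProcessorEnable() routes the incoming (ADC) signal
  // through the codec's DAP block, which is where the EQ lives.
  sgtl5000_1.audioPreProcessorEnable();
  sgtl5000_1.eqSelect(3);   // 3 = 5-band graphic EQ, if I recall the constants correctly
  // bass, mid-bass, midrange, mid-treble, treble (each -1.0 .. +1.0):
  // cut the lows and highs, keep the flute band roughly flat.
  sgtl5000_1.eqBands(-1.0, -0.3, 0.0, 0.0, -0.5);
}

void loop() {}
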
I have (almost) finished this project. It is really fun to play with. Still some latency, but acceptable for me. I will post a video very soon.
I have also added a MIDI output. It's working; I just need to make some adjustments to get a good envelope-to-velocity mapping. Real-time audio to MIDI is really cool!
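
For the MIDI side, something along these lines is one way to map the detected frequency and envelope level to note-on/note-off messages. This is a sketch only, not the project's actual code; it assumes the Teensy USB type includes MIDI, and that freq, input_level and input_threshold come from the detection code posted later in the thread.

Code:
#include <math.h>

bool noteActive = false;
int  lastNote   = 0;

void sendMidiFromDetection(float freq, float input_level, float input_threshold) {
  if (freq > 0 && input_level > input_threshold) {
    // Convert frequency to the nearest MIDI note (69 = A4 = 440 Hz).
    int note = (int)roundf(69.0f + 12.0f * (logf(freq / 440.0f) / logf(2.0f)));
    note = constrain(note, 0, 127);
    // Crude envelope-to-velocity mapping; the 300 scale factor is a placeholder.
    int velocity = constrain((int)(input_level * 300.0f), 1, 127);
    if (!noteActive || note != lastNote) {
      if (noteActive) usbMIDI.sendNoteOff(lastNote, 0, 1);
      usbMIDI.sendNoteOn(note, velocity, 1);
      lastNote   = note;
      noteActive = true;
    }
  } else if (noteActive && input_level < input_threshold) {
    usbMIDI.sendNoteOff(lastNote, 0, 1);
    noteActive = false;
  }
}
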
Thanks for making this amazing object.
 
Hi Duff,

Here is a small video presenting the result. It is working. There is still some latency, but it is "playable" on stage. Strong filtering helps with both tone detection and feedback rejection on stage.
Thanks again for making noteFreq available.
Emmanuel

(YouTube video)
 
Given the results I have had so far doing this with recordings of instruments from YouTube, that's pretty amazing performance. Will you be offering the synth for sale or sharing the sketches?
 
If I have to sell it, it's going to be very expensive!! I am kidding of course ;)

The code wouldn't help anybody because there are a lot of variables that refer to hardware: multiplexers, pots, switches...
But I can certainly help by sharing some chunks of the code I used. I will do it in this thread ASAP. My code is basic; as I said, I used and adapted the Teensy examples.

Emmanuel
 
Here is the main code (a readable version):

Code:
void input_follower() {

  // peak detection: track the input envelope
  if (rms1.available()) {
    input_level = rms1.read();
  }

  // pitch detection
  if (notefreq1.available()) {
    proba = notefreq1.probability();
    freq  = notefreq1.read();

    // only retune the oscillators on a confident reading above the noise floor
    if (proba > 0.7 && input_level > input_threshold) {
      waveformMod1.frequency(freq * osc1_oct * transpose);
      waveformMod2.frequency(freq * osc2_oct * detune);
      waveformMod3.frequency(freq * osc3_oct);

      dc1.amplitude(input_level);   // envelope level drives the synth amplitude
    }
  }

  // NOTE OFF: mute when the input falls below the threshold
  if (input_level < input_threshold) {
    dc1.amplitude(0);
  }
}
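
The snippet above doesn't show the audio graph or the globals it relies on, so here is a guess at the patching it implies, based on the standard Teensy examples mentioned earlier. Object names match the snippet, but the mixer/multiply routing and all the placeholder values are assumptions.

Code:
#include <Audio.h>

// Globals read/written by input_follower() above (placeholder values)
float input_level = 0, freq = 0, proba = 0;
float input_threshold = 0.01;
float osc1_oct = 1.0, osc2_oct = 2.0, osc3_oct = 0.5;   // octave ratios
float transpose = 1.0, detune = 1.005;

AudioInputI2S               i2s_in;          // mic via the audio shield
AudioAnalyzeRMS             rms1;            // envelope follower
AudioAnalyzeNoteFrequency   notefreq1;       // pitch detection
AudioSynthWaveformDc        dc1;             // envelope level as a control signal
AudioSynthWaveformModulated waveformMod1, waveformMod2, waveformMod3;
AudioMixer4                 mixer1;
AudioEffectMultiply         multiply1;       // VCA: oscillator mix * dc1
AudioOutputI2S              i2s_out;
AudioControlSGTL5000        sgtl5000_1;

AudioConnection c1(i2s_in, 0, rms1, 0);
AudioConnection c2(i2s_in, 0, notefreq1, 0);
AudioConnection c3(waveformMod1, 0, mixer1, 0);
AudioConnection c4(waveformMod2, 0, mixer1, 1);
AudioConnection c5(waveformMod3, 0, mixer1, 2);
AudioConnection c6(mixer1, 0, multiply1, 0);
AudioConnection c7(dc1, 0, multiply1, 1);
AudioConnection c8(multiply1, 0, i2s_out, 0);
AudioConnection c9(multiply1, 0, i2s_out, 1);

void setup() {
  AudioMemory(40);
  sgtl5000_1.enable();
  sgtl5000_1.inputSelect(AUDIO_INPUT_MIC);
  notefreq1.begin(0.15f);
  waveformMod1.begin(0.3, 220, WAVEFORM_SAWTOOTH);
  waveformMod2.begin(0.3, 220, WAVEFORM_SAWTOOTH);
  waveformMod3.begin(0.3, 220, WAVEFORM_SAWTOOTH);
}

void loop() {
  // call the input_follower() function posted above; keep these
  // declarations above it in the sketch so it can see the objects
  input_follower();
}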

Emmanuel
 