Detection time for noteFrequency in teensy audio lib

Robbert

I'm making an installation in which two separate microcontrollers send data to each other using frequency values.
I've decided to use notefreq (the AudioAnalyzeNoteFrequency object) to do that detection. I want to sample the incoming 'beeps' over a period of time and then average those readings. But when I do that, it only gets 3 samples in a 200 ms window. So I figure the notefreq object already does what I'm trying to do, and toggles notefreq.available() when it's done sampling over that period.

Is this an accurate description of how the object works? And if so, is there any way I can control that exact sampling window?

.h file of the object:
https://github.com/PaulStoffregen/Audio/blob/master/analyze_notefreq.h

.cpp file of the object:
https://github.com/PaulStoffregen/Audio/blob/master/analyze_notefreq.cpp

Best,
Robbert
 
It uses a type of autocorrelation (http://recherche.ircam.fr/equipes/pcm/cheveign/pss/2002_JASA_YIN.pdf) to detect the fundamental frequency of a signal, mainly designed for music. The only tunable parameters in the audio library version are the number of audio blocks to measure and the threshold value, which sets how sensitive you want the detection part of the algorithm to be. The larger the number of audio blocks you use, the lower the fundamental frequency you can detect, but it requires at least two full cycles of the fundamental frequency for it to detect, meaning the sampling theorem is in effect.

What kind of signal are you sending, and how do you create these signals? Are they sine waves of some sort? Also, it will not save the samples; you would have to do that yourself somehow.
 
Hey Duff,
Thanks for the answer. So far pretty much all the code is in place, including the saving and such. Basically one Teensy works as a sender, beeping out sine waves at different frequencies. Those then get picked up by the other Teensy, the receiver, which decodes those frequencies back into the values they should represent (note that this project is NOT about perfect transmission, it's even more about the distortion of it). For now, I have a test setup in which the beeps each last 200 ms. So preferably I'd like the receiving Teensy to sample the sound for about 150 ms of that (give it some slack), then average out those values and give me the guessed frequency. This is how it's implemented now, but it seems that in the 200 ms of time, the audio only gets sampled 2 or 3 times. Before that, notefreq1.available() == false.

So now I've implemented my own buffer that those values get saved to, and then I average those. But it would be even nicer if I could just synchronise the audio buffer in the object, instead of implementing it in a strange way on top of it.
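For what it's worth, the "buffer on top" approach can stay quite small. Here's a minimal standalone sketch of the idea (the `FreqAverager` name and sizes are my own, not part of the Audio library):

```cpp
#include <cstddef>

// Minimal averaging buffer for notefreq readings. At roughly one detection
// per 70 ms, only 2-3 readings arrive within a 200 ms beep, so a small
// fixed-size array is plenty.
struct FreqAverager {
    static const size_t kMax = 8;
    float readings[kMax];
    size_t count = 0;

    void add(float hz) { if (count < kMax) readings[count++] = hz; }
    void reset()       { count = 0; }

    float average() const {
        if (count == 0) return 0.0f;
        float sum = 0.0f;
        for (size_t i = 0; i < count; ++i) sum += readings[i];
        return sum / count;
    }
};
```

In the loop this would be fed with something like `if (notefreq1.available()) avg.add(notefreq1.read());`, then `avg.average()` and `avg.reset()` at the end of each 200 ms window.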

Hope it makes some sense ;)
 
This is how it's implemented now, but it seems that in the 200 ms of time, the audio only gets sampled 2 or 3 times. Before that, notefreq1.available() == false
The audio library sample rate is ~44100 Hz, so 200 ms would contain around 8820 samples of data. Or do you mean you only get 2-3 noteFrequency outputs? If the latter, what is the frequency range of the sine waves you are sending?
 
I mean:

notefreq1.available()

is true only 2 to 3 times when I cycle through it, which is when I call notefreq1.read() and save that value.
I do have a 5 ms delay in the loop at the moment, but I assume the audio interrupts still run during that delay?

The frequencies being sent are 85-400 Hz.
 
The number of detections is related to the frequency of the signal and the duration of the signal. An 85 Hz signal would, at a very minimum, take around 24 audio blocks of data to detect, which is the default setting. The algorithm will process all 24 blocks before outputting anything to the user. 24 blocks is about 70 ms of time, so 200 ms of data gives about two to three noteFreq1.read() outputs, which is what you are seeing, so that makes sense.

No need to delay the loop reading notefreq1.available(); the library will just discard previous results if you miss them.
 
Okay, so I'm trying my best to thoroughly understand this ;)

The algorithm will ALWAYS process at least 24 blocks, taking about 70 ms. It will then output the frequency that it is most confident is the sampled frequency.
In my case I start the object with notefreq1.begin(0.5); in setup(), so it does continuous processing. But how would I go about sampling from a certain moment in time? Can I just call .begin(0.5); again to re-initialise the object? I do not want processing cycles to overlap, as they might accidentally catch the tone just before the one they actually have to process.

I guess I have a hard time understanding the role of the audio blocks versus what the algorithm's cycles do.

Also, just wondering:
How do you calculate such things? How would I be able to figure out how many audio blocks I need for a certain frequency (other than hearing it from you ;) )

Thanks so much!
 
Okay, so I'm trying my best to thoroughly understand this ;)
No problem. I'll use the Audio library sample rate of 44117 Hz; that means one sample takes around 1/44117 seconds, and you will have 44117 samples in 1 second.

An audio block is a collection of 128 samples, so if you take ((1/44117)*128)*1000 you get around 2.9 ms per block. The multiplication by a thousand converts seconds into milliseconds. Now that you know how long 1 block of 128 samples takes, you can figure out how many blocks a signal of a certain frequency spans.

So for example, say you have an 85 Hz sine wave being fed into the audio library. Going from frequency to time is 1/frequency = time, so 1/85 (Hz) is about 0.011765 (sec), and multiplying by a thousand to get milliseconds gives about 11.765 (ms) for an 85 Hz sine wave to complete one cycle.

Now take the signal cycle time, ~11.8 (ms), and divide it by the block time, ~2.9 (ms), to get how many blocks it takes (11.8/2.9): an 85 Hz sine wave takes about 4 blocks to complete one cycle. I know it's a lot to take in, but this is how I figure it out. Just make sure you keep the time units consistent, i.e. milliseconds.

Hope this helps?
 
The number of detections is related to the frequency of the signal and the duration of the signal. An 85 Hz signal would, at a very minimum, take around 24 audio blocks of data to detect, which is the default setting. The algorithm will process all 24 blocks before outputting anything to the user. 24 blocks is about 70 ms of time, so 200 ms of data gives about two to three noteFreq1.read() outputs, which is what you are seeing, so that makes sense.
Correction: 24 audio blocks would actually be able to detect a signal down to around 29 Hz, my bad :confused:

85 Hz would need about 10 blocks to be detected; this can be set here (https://github.com/PaulStoffregen/Audio/blob/master/analyze_notefreq.h#L40)

Setting fewer blocks means the algorithm will complete faster, at the expense of not being able to detect lower frequencies. You should really test it with real, known signals to dial it in.
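Concretely, that means editing the define in analyze_notefreq.h at the line linked above (the value 10 here is the example from this post; test with known signals before settling on it):

```cpp
// analyze_notefreq.h: fewer blocks = faster results, higher frequency floor.
// 10 blocks is a ~29 ms window, enough for two cycles of anything above ~69 Hz.
#define AUDIO_GUITARTUNER_BLOCKS  10   // library default: 24
```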
 
You might also try looking at File > Examples > Audio > Analysis > DialTone_Serial. It detects the 7 standard DTMF tones using the simpler AudioAnalyzeToneDetect object.
 
I apologize; you misunderstood my meaning.

I see that you are trying to use frequency detection. But for what purpose? Why is the frequency range 85-400 Hz? What should happen if the sender repeats a frequency? Are you trying to send/receive serial data?

A clearer explanation of what you are trying to do would be useful to someone who might help you, or might give somebody an idea for their own tinkering.
 
I apologize, you misunderstood my meaning.

I see that you are trying to use frequency detection. But for what purpose? Why is the frequency range 85-400 Hz? What should happen if the sender repeats a frequency? Are you trying to send/receive serial data?

A clearer explanation of what you are trying to do would be useful to someone who might help you, or might give somebody an idea for their own tinkering.

The two boards are sending RGB data that I convert from 8-bit RGB into a frequency range of 85-400 Hz. This (rather odd) protocol is part of an art installation I'm making. The chosen frequencies are there because they are the most common in human speech. The fact that this way of communicating has a high chance of noise caused by human speech, and thus of artefacts in the data, is an essential part of it. Hope it makes a little more sense.

As for the rest of the discussion;
I understand the way the algorithm works much better now, but I'm still looking for a way to start/stop the detection when I need to (calling notefreq1.begin() doesn't seem to fully re-initialise).
Because if I can't do that, it might be highly influenced by the tone just before the one it was actually supposed to listen to (as notefreq1 is continuously sampling).
So I basically just need a way to at least empty all buffers and restart the cycles. If anyone has a pointer on such a thing, that would be awesome.
 
I apologize, you misunderstood my meaning.
Haha, no problem.

The two boards are sending RGB data that I convert from 8-bit RGB into a frequency range of 85-400 Hz. This (rather odd) protocol is part of an art installation I'm making. The chosen frequencies are there because they are the most common in human speech. The fact that this way of communicating has a high chance of noise caused by human speech, and thus of artefacts in the data, is an essential part of it. Hope it makes a little more sense.

As for the rest of the discussion;
I understand the way the algorithm works much better now, but I'm still looking for a way to start/stop the detection when I need to (calling notefreq1.begin() doesn't seem to fully re-initialise).
Because if I can't do that, it might be highly influenced by the tone just before the one it was actually supposed to listen to (as notefreq1 is continuously sampling).
So I basically just need a way to at least empty all buffers and restart the cycles. If anyone has a pointer on such a thing, that would be awesome.

So the incoming audio is a human voice, and you are trying to detect what frequency they are singing at? I'm not sure how you would "restart the cycles", but you could look at the source code and add a kind of manual switch to just return when not in use, or to clear the audio blocks. But how are you going to "catch" the incoming signals? The algorithm is designed to continually look for a valid fundamental frequency, and restarting or re-initializing everything would probably just make it not work. Can you explain what is not working?
 
85 Hz would need about 10 blocks to be detected; this can be set here (https://github.com/PaulStoffregen/Audio/blob/master/analyze_notefreq.h#L40)

Setting fewer blocks means the algorithm will complete faster, at the expense of not being able to detect lower frequencies. You should really test it with real, known signals to dial it in.

First post, so I hope I'm asking my question in the right place. I've been reading many of the threads on the note frequency detection algorithm and have it up and working on my breadboard (Teensy 4.1, audio shield with mic input). Similar to this thread, I'm interested in faster pitch detection and want to change the number of audio blocks here: #define AUDIO_GUITARTUNER_BLOCKS 24

That said, I don't quite get how to make and/or link a custom library in Teensyduino, and also which library to edit (I'm guessing it is the Audio.h that is part of the auto-generated code from the Audio System Design Tool). At the moment the library and build process are all a bit of magic. Is there a tutorial or good thread someone can point me to, so I can educate myself on it?

Thanks for a great library and platform!
 