Audio Timestamp

Status
Not open for further replies.

janbor

I'm using a Teensy 3.1 with the audio shield to record audio. I'm trying to process the audio, but the calculations are heavy, so I end up losing audio packets. This is fine, because I don't really need those subsequent packets, and I can simply turn off the audio queue. However, I was planning to use the audio data as my timing reference, and that is no longer possible. Is there a way I can get a timestamp for the individual audio data packets coming through the DMA?
 

I don't think so, but one could augment the audio (user) data blocks (not sure about the terminology) with a variable holding either the time or, simpler, a block count.
Maybe someone who understands the Audio library better could indicate where to add such a variable, or whether a block count already exists.
 
How about adding a new audio library object, whose update method simply stores your desired timestamp? You might use micros() or something from the Time library.
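A minimal sketch of that idea. Note this is illustrative, not the real Teensy Audio API: `TimestampTap`, `fake_clock_us`, and the simplified `update()` signature are stand-ins so the sketch is self-contained; on a real Teensy you would include `Audio.h`, derive from `AudioStream`, and get the block from `receiveReadOnly()`.

```cpp
#include <cstdint>

// Stand-ins so this sketch compiles on its own. On a real Teensy you would
// #include <Audio.h>, derive from AudioStream, and use the library's
// receiveReadOnly()/transmit()/release() calls instead.
static const int AUDIO_BLOCK_SAMPLES = 128;
struct audio_block_t { int16_t data[AUDIO_BLOCK_SAMPLES]; };
static uint32_t fake_clock_us = 0;                  // stand-in clock source
static uint32_t micros() { return fake_clock_us; }  // stand-in for Arduino micros()

// A pass-through "tap" whose only job is to remember when a block arrived.
class TimestampTap {
public:
    TimestampTap() : last_stamp_us(0) {}
    // On a real Teensy this would be `virtual void update(void)` with the
    // block coming from receiveReadOnly(); here it is passed in directly.
    void update(audio_block_t* block) {
        if (block == nullptr) return;        // no data this cycle
        last_stamp_us = micros();            // timestamp for this block
        // transmit(block); release(block);  // real-library bookkeeping
    }
    uint32_t lastStamp() const { return last_stamp_us; }
private:
    volatile uint32_t last_stamp_us;
};
```

The timestamp is only as good as the latency between the DMA interrupt and the moment the library calls your object's update(), which is the point debated below.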
 

Interesting. Can I be sure there will be a fixed time delay between the audio object and my new timestamp object?
 
The delay between the ADC read and your node depends on your connection graph. My understanding, though, is that this is constant for a given graph.
 

Did not work. It does work if the load on the CPU is very low; otherwise the delay between receiving an audio_block_sample and calling the record_queue's update() is not fixed. Turns out I can tune the load to the degree that I can count the number of audio_block_samples extracted from the ring buffer to get a clock.
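For reference, counting blocks gives a clock with fixed resolution: with the Audio library's default block size of 128 samples at 44.1 kHz, each block spans about 2.9 ms. A small sketch of the arithmetic (the constants are the library defaults, not read from hardware, and the function name is illustrative):

```cpp
#include <cstdint>

// Teensy Audio library defaults: 128 samples per block at 44.1 kHz.
static const uint32_t SAMPLES_PER_BLOCK = 128;
static const double SAMPLE_RATE_HZ = 44100.0;

// Time (in microseconds) at the start of block number `block_count`,
// assuming no blocks were dropped before the counter started.
uint64_t block_timestamp_us(uint64_t block_count) {
    return (uint64_t)(block_count * SAMPLES_PER_BLOCK * 1e6 / SAMPLE_RATE_HZ);
}
```

This is exact as long as every block pulled from the ring buffer is counted, which matches the tuning described above.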
 

Do you NEED the audio library? I know it is convenient to use if the processing sequence should stay flexible. But if your processing goes to the extreme, you have to design according to CPU load and available time. If you have a fixed application, e.g. taking in data and estimating and averaging spectra, it may be worthwhile to do it without the flexibility of the Audio stream processing. Then you can easily add a timestamp to the acquisition ISR without side effects.
 
... it may be worthwhile to do it without the flexibility of the Audio stream processing.

Good idea. If I were to pursue this, I would follow your suggestion. I have, however, decided to use the Teensy simply to gather the data on an SD card and then use a laptop to do the processing after the fact. It will have to do for now. Without having tried what you suggest, I feel strongly that the Teensy is insufficient for the required processing anyway: a standard autocorrelation over a 1000-sample vector.
 
I would like 1024 samples (or possibly even more), but if 256 is possible I might try it... Have you done it?
 

I run a T3.1 at 144 MHz and can easily do four 256-point FFTs at a 200 kHz sampling rate (0.72 ms of processing time within the 1.28 ms available timeframe). If you are only interested in 44.1 kHz, you should have no problem doing FFT-based autocorrelation. A 1024-point FFT is a little less efficient, but not by much; I have not tried it yet, though, as it needs more data space.

Thinking about the application, you may simply try to build your own autocorrelation module within the audio library.
 
Hmmm. Interesting. I too would like 200 kHz in the future, but for now 44.1 kHz is OK. I know it is possible, but I have never calculated the autocorrelation using an FFT. Maybe I should look into it. I imagine that if I can use the Teensy Audio library to do the FFT, then what little remains may be possible within the limitations I have.

Thinking about the application, you may simply try to build your own autocorrelation module within the audio library.
Indeed. That would be really cool, and I am sure several people would be interested in such a feature.
 
I do not know how you calculate the autocorrelation. If you do it by means of Wiener–Khinchin (i.e. real_data -> FFT -> abs^2 -> IFFT), you might save some processing power by using a complex FFT twice, since both the input values and the intermediate data are real valued (see e.g. PT10.HTM and spra291.pdf). This gives you two FFTs for roughly the computing cost of one.

This method is also very efficient for stereo FFT processing, where you likewise start with two real-valued streams.
 

I assume you know this, but for others:

Autocorrelation using FFT

Code:
R = IFFT(abs(FFT(x))^2)

so it is really simple.

There are some hidden details, but those can be worked out (single channel, dual channel, etc.).
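To make the formula concrete, here is a small self-contained check, not Teensy code: a naive O(N^2) DFT is enough to verify that R = IFFT(|FFT(x)|^2) matches the direct circular autocorrelation. Function names are illustrative.

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cplx = std::complex<double>;

// Naive DFT: sign = -1 forward, +1 inverse (inverse divides by N).
std::vector<cplx> dft(const std::vector<cplx>& in, int sign) {
    const size_t n = in.size();
    std::vector<cplx> out(n);
    for (size_t k = 0; k < n; ++k) {
        for (size_t t = 0; t < n; ++t)
            out[k] += in[t] * std::polar(1.0, sign * 2.0 * M_PI * k * t / n);
        if (sign > 0) out[k] /= (double)n;
    }
    return out;
}

// Circular autocorrelation via Wiener-Khinchin: R = IFFT(|FFT(x)|^2).
std::vector<double> autocorr_fft(const std::vector<double>& x) {
    std::vector<cplx> cx(x.begin(), x.end());
    std::vector<cplx> X = dft(cx, -1);
    for (auto& v : X) v = std::norm(v);          // |X[k]|^2, imaginary part drops out
    std::vector<cplx> r = dft(X, +1);
    std::vector<double> out(x.size());
    for (size_t i = 0; i < x.size(); ++i) out[i] = r[i].real();
    return out;
}
```

Note this is the *circular* autocorrelation; for the usual linear autocorrelation you would zero-pad x to twice its length first. On the Teensy the two transforms would of course be CMSIS/library FFTs, not this naive loop.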
 
Janbor, sorry to confuse you: I meant that you can do both the FFT and IFFT using a single FFT block, instead of doing two steps separately, at twice the cost.

The stereo remark was not meant for you, but just as another example where people might benefit from having two FFT for the price of one.
 
Hmm. Really? Could you shed some more light on this? Could you write it using the Teensy Audio analyze_fft function?
 

I guess you mean that you can do two FFTs with one calculation (as explained in PT10.HTM): two spectra from two real time series.
Similarly, you can back-transform two spectra into two time series with a single IFFT.
BUT you CANNOT simultaneously do a forward AND a backward FFT.
 
In this case you can, since the input data is real valued. A Fourier transform means correlating with exp(-i w t). The inverse Fourier transform means correlating with exp(+i w t). Since the input data is real valued, IFFT(x) = conj(FFT(x)) (up to the usual 1/N normalization).
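This relation is easy to check numerically. Note it holds up to the normalization convention: a library IFFT typically also divides by N, so with *unnormalized* kernels in both directions the two results are exact conjugates. A minimal check (naive transform, names illustrative):

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cplx = std::complex<double>;

// Naive transform with NO 1/N factor in either direction:
// sign = -1 is the forward kernel exp(-iwt), sign = +1 the inverse kernel.
std::vector<cplx> transform(const std::vector<cplx>& in, int sign) {
    const size_t n = in.size();
    std::vector<cplx> out(n);
    for (size_t k = 0; k < n; ++k)
        for (size_t t = 0; t < n; ++t)
            out[k] += in[t] * std::polar(1.0, sign * 2.0 * M_PI * k * t / n);
    return out;
}
```

For real input, `transform(x, +1)[k]` equals `conj(transform(x, -1)[k])` for every k, which is exactly why one FFT routine can serve both directions on real data.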
 

If you mean using the same function, you are nearly correct; but as you say, IFFT(x) <> FFT(x), and you cannot do the FFT and the IFFT at the same time with one operation. You have to do one after the other (first the FFT, then the IFFT).
 
OK, not at the same time. Sorry, I am always working with streaming data and forgot to mention it. You actually calculate the FFT of the present block of data and the IFFT of the previous block; the idea is like pipelining.
Edit: I read the previous posts as saying you cannot do it mathematically, rather than as pointing at the scheduling issue. The scheduling seemed obvious to me.
 
@Janbor, regarding your question about the code: I might prepare an example, but not before the weekend.
 