Low-latency FFT to OSC for AV shows

Hi everyone!

I'm new to this forum but have been using Teensy for some years now. I do a lot of experimenting with audiovisual live shows and real-time audio analysis of synths to generate lighting effects. I have always used Lightjams for this, but I want to build an embedded system, on the assumption that I can reduce latency and make my setup a bit more user-friendly.

The general idea is that the code applies an FFT (max 20 bins) to an audio input, does some simple math to make the signal more dynamic, and sends the data via OSC to my lighting software. By sending OSC to the Teensy I also want to control a couple of parameters, like the dynamics multiplier and the OSC destination.
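To make the idea concrete, here's roughly what I have in mind - an untested sketch, assuming a Teensy 4.1 with the audio shield, the Audio library's AudioAnalyzeFFT256, NativeEthernet for UDP and the CNMAT OSC library. The MAC, IP, port and the way I group the bins are just placeholders:

Code:
#include <Wire.h>
#include <SPI.h>
#include <Audio.h>
#include <NativeEthernet.h>
#include <NativeEthernetUdp.h>
#include <OSCMessage.h>

// Audio path: line-in on the audio shield -> 256-point FFT
AudioInputI2S        audioIn;
AudioAnalyzeFFT256   fft;
AudioConnection      patchCord1(audioIn, 0, fft, 0);
AudioControlSGTL5000 sgtl5000;

// Network settings (placeholders - change to your own values)
byte mac[] = { 0x04, 0xE9, 0xE5, 0x00, 0x00, 0x01 };
IPAddress destIp(192, 168, 1, 10);      // lighting software
const uint16_t destPort = 9000;         // OSC port of the lighting software
EthernetUDP Udp;

float gain = 2.0f;                      // "dynamics multiplier"

void setup() {
  AudioMemory(12);
  sgtl5000.enable();
  sgtl5000.inputSelect(AUDIO_INPUT_LINEIN);
  Ethernet.begin(mac);                  // DHCP
  Udp.begin(8000);                      // local port
}

void loop() {
  if (fft.available()) {                // new result every 128 samples (~2.9 ms at 44.1 kHz)
    OSCMessage msg("/fft");
    for (int i = 0; i < 20; i++) {
      // group two FFT bins per "band" and apply the dynamics multiplier
      float level = fft.read(i * 2, i * 2 + 1) * gain;
      msg.add(level);
    }
    Udp.beginPacket(destIp, destPort);
    msg.send(Udp);
    Udp.endPacket();
    msg.empty();
  }
}

For the control messages back to the Teensy I'd read incoming packets with Udp.parsePacket(), fill an OSCMessage from the bytes and use its routing functions, but I left that out to keep the sketch short.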

Now audio quality isn't really important in this scenario. It just needs to be fast and have a good dynamic range. So I was wondering: is it better to use the audio adapter, or to connect a jack directly to pins A2 and A3 (with the recommended circuitry)?
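In case it helps to see what I mean, the ADC route would be something like this (as far as I know AudioInputAnalog only exists for the Teensy 3.x boards, not the 4.x ones, so that alone might decide it):

Code:
#include <Audio.h>

AudioInputAnalog   audioIn(A2);   // ADC input on pin A2 (with the recommended bias circuit)
AudioAnalyzeFFT256 fft;
AudioConnection    patchCord1(audioIn, 0, fft, 0);

void setup() { AudioMemory(12); }
void loop()  { /* same FFT + OSC handling as in the sketch above */ }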

And if anybody has tips on how I can reduce latency, they are most welcome :) Every ms counts!

Thanks!
 
Well, I am pretty sure that using the Teensy is going to be an improvement on my current situation. I am primarily wondering if the audio adapter gets audio input to the Teensy quicker than using the A2/A3 pins.
 
Both are 44.1 kHz.
With the audio library.
Nobody will be able to see a difference if your light is a few samples faster or not.
Not even Chuck Norris.

It makes more sense to optimize your application.
 
44.1 kHz is the sample rate; that doesn't say a lot about latency, right? I'm asking because the audio shield sends the audio input to the Teensy over I2S, so I'm assuming that this extra step takes time, but I might be wrong.

The point is that the moment a synthesizer (for example) makes a sound, I need a usable OSC signal at the right destination as quickly as possible. To give you an idea: in my current system the signal is received by an audio interface, processed by MAX/MSP, received by Lightjams, run through an FFT inside Lightjams, and then it's used to create a lighting effect. Sometimes I even have to run MAX/MSP on a different computer and share audio between the two, which also creates a lot of latency. I have to do this because my laptop is already transmitting 120+ universes of sACN.

Latency isn't the only reason I want to replace all of this equipment with a Teensy sending OSC.
 
44.1 kHz is the sample rate; that doesn't say a lot about latency, right? I'm asking because the audio shield sends the audio input to the Teensy over I2S, so I'm assuming that this extra step takes time, but I might be wrong.
The samples arrive at a fixed, evenly spaced rate either way, so it plays no role.
 
If you are worried about such short periods of time, you should also take the distance from the listeners to the speakers into account.
At these timescales the speed of sound is not negligible and becomes a factor.
Best to put the audience in cages ;) - useful in these times anyway (corona)
Running an FFT on a single audio sample would be a bit of a challenge, too... but I guess you'll succeed.

No, seriously:

Before you tweak these things, you should check all the other places.
First of all, your code.
 
Well, that's not always something within my control. Fortunately light travels faster than sound, so it actually makes up for the latency :)

The reason I ask is that I want to reduce the number of times the audio signal goes from one device to another. Currently there are just too many steps to get the desired signal to the desired place. The latency is noticeable. And I want to reduce the number of parts that I use, as that makes things less complicated and cheaper :)
 
You can set a shorter block size for the audio library. This will reduce the lag. You can set it to 32 samples.
(Writing this... I'm not sure if the Audio library's FFTs support this - you should verify it - if not, use your own FFT code.)
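For reference, the block size only changes how often samples are handed from one audio object to the next; a 256-point FFT still needs to collect 256 samples before it can produce anything. Quick arithmetic at 44.1 kHz (plain C++, just to show the numbers):

Code:
// Per-block transfer latency at 44.1 kHz
constexpr float kSampleRateHz = 44100.0f;

constexpr float blockMs(int samples) {
  return samples * 1000.0f / kSampleRateHz;
}

// blockMs(128) -> ~2.9 ms  (library default block size)
// blockMs(32)  -> ~0.73 ms (reduced block size)
// A 256-sample FFT window itself spans ~5.8 ms of audio either way.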
 