Audio Processing

Status
Not open for further replies.

markvr

Active member
I'm intending to use a Teensy and some WS2811 ribbon to build a fancy-dress costume that reacts to music. I'd like the music reactivity to be quite good though, not just a simple VU meter. Much to my joy(!), the Teensy (well, ARM really) libraries have FFT functions, which saves some work, but I'm really looking for decent real-time beat detection.

Has anyone done anything similar, or can point me at some libraries that could do this on the Teensy? Googling around, I can't find ANY libraries that do this in real time, so I thought I'd ask on here. My DSP knowledge isn't huge - it kinda stops at being able to use FFTs for simple spectrum transforms - so I'm not really able to write my own algorithms for this :(

thanks!
 
Many people want good beat detection. I just recently got FFTs working with the new (not yet released) library, with proper 50% overlapping in real time. It's using about 25% CPU for a 256-point FFT.

I'm planning to work on beat detection for the 2nd release. I've read about several algorithms that involve tracking changes across many frequency bands for better performance. It's going to take some work and a lot of testing with different music, but I believe I'll get this done sometime early next year.
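For anyone wondering what 50% overlapping means in practice, here's a rough sketch of the buffering side (my own illustration, not the library's actual code): each 256-sample window shares its first 128 samples with the previous one, so a new spectrum comes out every 128 samples instead of every 256.

```cpp
#include <cstddef>
#include <vector>

// 50% overlap: a 256-point window is ready every time 128 new samples
// arrive, so adjacent windows share half their samples.
constexpr std::size_t FFT_SIZE = 256;
constexpr std::size_t HOP = FFT_SIZE / 2;  // 128 new samples per window

class OverlapBuffer {
public:
    // Feed one sample; returns true when a full 256-sample window is ready.
    bool push(float sample) {
        history_.push_back(sample);
        if (history_.size() < FFT_SIZE) return false;
        window_.assign(history_.end() - FFT_SIZE, history_.end());
        // Drop the oldest HOP samples; the remaining 128 start the next window.
        history_.erase(history_.begin(), history_.begin() + HOP);
        return true;
    }

    const std::vector<float>& window() const { return window_; }

private:
    std::vector<float> history_;
    std::vector<float> window_;
};
```

Feeding 1024 samples through this produces 7 windows: the first after 256 samples, then one every 128.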
 
beat detection

I think just about all music falls between 20 BPM (extremely slow) and 200 BPM (really fast), see for example Wikipedia:Tempo and Stackexchange:bpm-levels

and in frequency terms that ranges from 0.33 Hz up to 3.3 Hz. So it seems that if you do an FFT where Fmin <= 0.33 Hz, the highest peak between 0.33 Hz and 3.3 Hz will be at the beat of the music. An FFT already includes all frequencies from the Nyquist limit (Fmax = sample rate / 2) down to the minimum (Fmin = Fmax / # samples), so it's not clear to me that any more complex algorithm beyond a straightforward FFT would do better (?). But I've never tried to implement beat detection, so maybe there's more to it.
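The Fmin = Fmax / # samples relation can be checked numerically; a quick sketch (my arithmetic, not from the thread) shows how big the FFT would have to be before a 20 BPM beat is even resolvable at 44.1 kHz:

```cpp
// Smallest power-of-two FFT length whose bin spacing (sampleRate / N)
// reaches down to fMin, per the Fmin = Fmax / #samples relation.
unsigned minFftSize(float sampleRate, float fMin) {
    unsigned n = 2;
    while (sampleRate / n > fMin) n *= 2;
    return n;
}

// Bin spacing in Hz for an N-point FFT.
float binResolution(float sampleRate, unsigned n) { return sampleRate / n; }

// Seconds of audio one N-point FFT consumes.
float windowSeconds(float sampleRate, unsigned n) { return n / sampleRate; }
```

At 44.1 kHz this comes out to a 262144-point FFT, i.e. nearly six seconds of audio per transform, which is why a pure-FFT tempo estimate is slow to react.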

Hmmm... I guess you want the beat detection in real time, and the FFT would be after-the-fact - or at least would have a lag before it responds to a beat change.
 
There's actually quite a bit more to it. Music is complex, with many overlapping sounds, which makes reliable beat detection that works across many types of music quite a challenge.

Algorithms fall into two categories, depending on whether they run in real time or do static analysis on a recorded file. I believe looking for the beat's actual frequency and phase by FFT would not be a real-time algorithm, because such long sample periods would be required for the FFT to have any precision at such low frequencies.
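For reference, the classic real-time category works on short-term energy rather than trying to resolve the beat frequency directly. A minimal sketch (a generic textbook-style approach, not the algorithm planned for the library): flag a beat when the newest block's energy spikes above the average of roughly the last second.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Generic energy-spike detector: a block counts as a beat onset when its
// energy exceeds `threshold` times the average over the recent history
// (344 blocks of 128 samples is about 1 second at 44.1 kHz).
class EnergyBeatDetector {
public:
    explicit EnergyBeatDetector(std::size_t historyBlocks = 344,
                                float threshold = 1.5f)
        : maxHistory_(historyBlocks), threshold_(threshold) {}

    // Feed one block of samples; returns true on a suspected beat onset.
    bool feed(const std::vector<float>& block) {
        float e = 0.0f;
        for (float s : block) e += s * s;  // instant energy of this block
        bool beat = false;
        if (history_.size() == maxHistory_) {
            float avg = 0.0f;
            for (float h : history_) avg += h;
            avg /= static_cast<float>(history_.size());
            beat = (e > threshold_ * avg);
        }
        history_.push_back(e);
        if (history_.size() > maxHistory_) history_.pop_front();
        return beat;
    }

private:
    std::deque<float> history_;
    std::size_t maxHistory_;
    float threshold_;
};
```

This fires on loud transients rather than the musical beat as such, which is exactly why the better algorithms track changes per frequency band instead of overall energy.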
 

Cool, it's great that you're working on this already - hopefully it'll save me some work! Which library are you referring to? I know you're working on an audio board as well - is this related to that? I've got these functions - arm_cfft_radix4_f32 - working on my Teensy. They are officially deprecated, but the newer ones - arm_cfft_radix2_f32 - are only in CMSIS v1.4.1, which (for everyone else reading!) the Teensy doesn't use yet.

I've read a number of algorithms in various papers, but being inexperienced with microcontrollers and DSP, I was hoping someone might have already implemented them.

@JBeale - picking the FFT bin with the most energy is an improvement on simply measuring the total energy (which is what I'm currently doing), but it's still quite inexact. Music is surprisingly complex, and humans are very, very good at pattern recognition, so what seems obvious to us isn't easy to code! Also, my understanding is that because of spectral leakage, the bin with the highest value might not even be the strongest frequency.
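On the leakage point: the standard mitigation is to apply a window function before the FFT. A sketch of building a Hann window (generic, not tied to any particular library):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Build a Hann window; multiplying each frame by this tapers the frame
// edges toward zero, which greatly reduces spectral leakage between bins.
std::vector<float> hannWindow(std::size_t n) {
    const float pi = 3.14159265358979f;
    std::vector<float> w(n);
    for (std::size_t i = 0; i < n; ++i)
        w[i] = 0.5f * (1.0f - std::cos(2.0f * pi * i / (n - 1)));
    return w;
}
```

Windowing doesn't eliminate leakage entirely, so interpolating across the peak bin and its neighbours is still common when an exact frequency is needed.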

I don't know how much delay the DSP processing would cause, but if it's enough for humans to perceive, my ideal software would measure the BPM and then update the LEDs with some form of prediction so they're on time.
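That prediction idea boils down to simple arithmetic: once a BPM estimate is trusted, schedule the LED update at the predicted next beat minus the known latency. A hypothetical sketch (the function name and latency model are mine, just to show the calculation):

```cpp
// All times in milliseconds; bpm is the estimated tempo.
float nextLedUpdateMs(float lastBeatMs, float bpm, float latencyMs) {
    float beatPeriodMs = 60000.0f / bpm;  // e.g. 120 BPM -> 500 ms per beat
    return lastBeatMs + beatPeriodMs - latencyMs;
}
```

So with a beat at t = 1000 ms, 120 BPM, and 30 ms of processing latency, the LEDs would be triggered at 1470 ms to land on the next beat.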

Finally, a hopefully easy question: how can I tell how much CPU is being used? I can't look at 'top' on a Teensy :) I'm not sure how I would do it, but it would be useful to be able to.
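One generic way to estimate this by hand (not any library's built-in mechanism) is to time how long processing one audio block takes and compare it to the real time the block represents:

```cpp
// Estimated CPU load: time spent processing one audio block divided by the
// real time that block represents (128 samples at 44.1 kHz is ~2.9 ms).
float cpuPercent(float processingMicros, unsigned blockSamples, float sampleRate) {
    float blockMicros = 1.0e6f * blockSamples / sampleRate;
    return 100.0f * processingMicros / blockMicros;
}
```

On a Teensy the processing time itself could come from micros() calls around the work; if processing a 128-sample block takes about 726 microseconds, that's roughly 25% CPU.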
 
The new (not yet released) library is for the upcoming audio shield. It has an FFT object that uses the ARM math library internally, and it handles windowing, overlapping, and averaging itself, with a very easy-to-use Arduino-like front end.

The library also provides CPU usage tracking for every audio processing object. ;)

I've put a LOT of work into this new library. I think you'll like it.
 
Yes, though I haven't written any of those. Any object in the audio library is technically an audio processing object, even inputs and outputs. But things like mixers, waveform generators, and the FFT are audio processing objects in the usual sense.
 