Teensy 4 maximum number of oscillators

murdog

Active member
I searched to see if there was any info on the maximum number of oscillators that Teensy 4 could support. I couldn't find anything, so I ran a quick test and am sharing the result here in case anyone searches for it in the future.

With a Teensy 4.1 I ran 512 AudioSynthWaveform objects generating sine waves across 128 different notes (4 per note), and the AudioProcessorUsageMax reported was 40%.
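For anyone who wants to repeat it, the test was roughly this shape. This is trimmed to 8 oscillators and isn't my exact sketch; the real one just scales the oscillator array up and adds more mixer stages and connections:

#include <Audio.h>

const int NUM_OSC = 8;               // the real test used 512
AudioSynthWaveform   osc[NUM_OSC];
AudioMixer4          mixA, mixB, mixOut;
AudioOutputI2S       i2s1;
AudioControlSGTL5000 sgtl5000;

// four oscillators into each first-stage mixer, both mixers into mixOut
AudioConnection c0(osc[0], 0, mixA, 0);
AudioConnection c1(osc[1], 0, mixA, 1);
AudioConnection c2(osc[2], 0, mixA, 2);
AudioConnection c3(osc[3], 0, mixA, 3);
AudioConnection c4(osc[4], 0, mixB, 0);
AudioConnection c5(osc[5], 0, mixB, 1);
AudioConnection c6(osc[6], 0, mixB, 2);
AudioConnection c7(osc[7], 0, mixB, 3);
AudioConnection c8(mixA, 0, mixOut, 0);
AudioConnection c9(mixB, 0, mixOut, 1);
AudioConnection c10(mixOut, 0, i2s1, 0);
AudioConnection c11(mixOut, 0, i2s1, 1);

void setup() {
  Serial.begin(115200);
  AudioMemory(20);
  sgtl5000.enable();
  sgtl5000.volume(0.5);
  for (int i = 0; i < NUM_OSC; i++) {
    // spread the oscillators across different frequencies, low level each
    osc[i].begin(1.0 / NUM_OSC, 110.0 * (i + 1), WAVEFORM_SINE);
  }
}

void loop() {
  Serial.print("AudioProcessorUsageMax: ");
  Serial.println(AudioProcessorUsageMax());   // worst-case CPU % since reset
  delay(1000);
}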
 
I just asked 'Notes and Volts' exactly this question for a Teensy-powered laser controller. I need ~24 discrete waveform generators (6 * x/y + 2 * LFO + 3 * RGB for 3 laser projectors), producing 12-bit stepped sine, cosine, triangle, sawtooth, or square waveforms at 0-1 kHz. IOW, potentially 24.5 million points per second.
Furthermore, each waveform also needs the usual software (MIDI) controllable gain, frequency, and pitch.
Still searching for some details regarding how the Teensy audio library generates its waveforms, which brought me to your post.
Apparently, judging by your results, Teensy waveforms aren't being produced by outputting stepped waveform sequences, one step at a time.
Research much more, I must, Master Yoda.
 
512 waveforms at 44.1kSPS is 22.6 million points per second.
24 waveforms at 44.1kSPS is only 1.06 million points per second.

You appear to have calculated 1024 x 1000 x 24, which makes no sense. 12 bits is 4096 levels, not 1024.

The bit depth of a signal is independent of the sample rate, and the sample rate only needs to be more than twice the maximum frequency in the signal to represent it accurately (Nyquist/Shannon sampling theorem).

For a 1 kHz bandwidth signal you could run at an 8 kSPS rate if you wanted, so 24 waveforms would only need 192,000 points per second.

However, you are talking about 24 _separate_ output signals, which is more than the T4 supports via I2S.
 
Thank you for your speedy reply, Mark.
Clearly, I have much to learn. As a wannabe maker, I've worked with ESPxx MCUs to control motors and sensors, but nothing using I2S until now. However, I was a laserist from 1978-88 and understand my objectives from that POV. That's why I only calculated with 1024 levels: I don't need higher resolution than that for galvos. The 2 extra MSBs could be used for flags, such as beam blanking between points, if required.
To be technically correct, the 6 x/y waveforms and LFOs will be summed together, as needed for 3 images, and then routed into only 3 x/y analog output waveforms for 3 laser projectors. Each projector will also have 3 discrete RGB analog intensity inputs (summed waveforms and 0-1 VDC offsets). IOW, there will 'only' need to be 15 output DACs.
I would like to begin with a Teensy powered single projector prototype module, with 6 waveform generators routable to XY + RGB analog outputs. Then simply upscale to 2 more projectors by duplicating that module. I'm thinking that each module would be on a different MIDI channel and controllable via DAW timelines.
Since yesterday, I've been studying the Audio.h library and see that it only provides 256 samples of mono, which is a starting point for me to modify. I've seen 44.1 kSPS audio specs before but need to dig deeper to understand how that would apply to a variable clock rate for detuning, because the timing of the motion of my complex Lissajous images (aka cycloids) is achieved via controlled phasing/phase shifting.
I understand that harmonics can be achieved by Boolean shift division. TBH, you lost me at the Nyquist/Shannon sampling theorem. But rather than wasting your time by having you spoon-feed me, I'll google it.
Thank you for the information and have a Happy New Year!

Update: I2S samples digital audio at a rate of 44.1 kSPS and uses the values of each sample to sum with other sources and for sound processing, right? IOW, everything has to be synchronized together at that 'baud'?
But that's not the same as my variable clock, which controls the stepping speed of my stepped waveform outputs. Sounds like I'll need output registers to hold the calculated step values until their sequences are pulled into the DACs, at the rate defined by the variable timers' triggers. IOW, I2S and outputs to the DACs, right?
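Something like this is what I'm picturing (just a sketch of the idea, using the Teensy core's IntervalTimer; dacWrite() here is only a placeholder for whatever ends up actually driving the external DAC):

#include <Arduino.h>   // IntervalTimer is part of the Teensy core

IntervalTimer stepTimer;
volatile uint16_t stepIndex = 0;
uint16_t waveTable[256];              // precomputed 12-bit stepped waveform

// placeholder stub: the real version would talk to the external DAC
void dacWrite(uint8_t channel, uint16_t value) {
  // TODO: SPI / parallel GPIO write to the DAC goes here
}

void pushStep() {
  // runs from the timer interrupt: push the next step out to the DAC
  dacWrite(0, waveTable[stepIndex]);
  stepIndex = (stepIndex + 1) & 0xFF;
}

void setStepRate(float stepsPerSecond) {
  // restart the timer with a new period; this is the 'variable clock'
  stepTimer.end();
  stepTimer.begin(pushStep, 1000000.0f / stepsPerSecond);  // period in microseconds
}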
Sorry to be typing while figuring it all out. Feel free to ignore me for ~a week, until I do my homework. Then I'll have another set of dumb questions. lol
BR
 
Yes, digital signal processing is done at a fixed sample rate - you have a waveform generation problem, I think, which is not that different from audio synthesis. I'd suggest learning about direct digital synthesis (DDS), sometimes called an NCO (numerically controlled oscillator).
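The core of a DDS/NCO is just a phase accumulator: add a fixed increment every sample tick and use the top bits of the accumulator to index a wavetable. A bare-bones illustration (not the audio library's actual code):

#include <stdint.h>

int16_t table[256];        // your precomputed wavetable (sine, triangle, ...)

uint32_t phase    = 0;     // 32-bit phase accumulator, wraps automatically
uint32_t phaseInc = 0;     // amount added once per sample

// choose the output frequency for a given sample rate
void setFrequency(float freq, float sampleRate) {
  phaseInc = (uint32_t)((freq / sampleRate) * 4294967296.0);   // * 2^32
}

// call once per sample tick
int16_t nextSample() {
  phase += phaseInc;             // overflow gives a free modulo-2^32 wrap
  return table[phase >> 24];     // top 8 bits pick the table entry
}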

The AudioSynthWaveform and AudioSynthWaveformModulated classes can do a lot of the stuff I think you are needing, including modulating frequency.
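For example (typed from memory rather than compiled, but these are the standard library calls, with an LFO patched into the frequency-modulation input):

#include <Audio.h>

AudioSynthWaveformSine       lfo;     // slow modulation source
AudioSynthWaveformModulated  osc;     // oscillator whose frequency gets modulated
AudioOutputI2S               i2s1;
AudioControlSGTL5000         sgtl5000;

AudioConnection c1(lfo, 0, osc, 0);   // input 0 of the modulated waveform = FM input
AudioConnection c2(osc, 0, i2s1, 0);
AudioConnection c3(osc, 0, i2s1, 1);

void setup() {
  AudioMemory(12);
  sgtl5000.enable();
  sgtl5000.volume(0.5);
  lfo.frequency(2.0);                 // 2 Hz wobble
  lfo.amplitude(1.0);
  osc.begin(WAVEFORM_SINE);
  osc.frequency(220.0);
  osc.amplitude(0.8);
  osc.frequencyModulation(1.0);       // modulation depth: +/- 1 octave
}

void loop() {}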
 
Thank you for the info, Mark, and Happy New Year!
I think that I'm already familiar with generating waveforms numerically with LSX. For example, given a reference image of 200 center points, the following formula would produce a sine wave with MIDI-controlled gain: "sin(idx*pi*2)*.5+midi(nn)". But LSX is probably interpreting that syntax beneath the GUI.
FM isn't all that useful for imagery, because sweeping through a frequency range usually produces spaghetti. Timing is achieved by precisely shifting phases. On the other hand, a well-tuned FM square wave from an LFO can switch between two different images. AM is vital for pulsing portions of images, or complete images, on the beat. Filters and envelopes can be useful to 'milk' more interest out of an image, too.
I've been researching the audio library and found the 256-element array of sample values that generates the sine wave. I assumed that this approach was less demanding for the T4 than computing complex formulas on the fly. Is that a false assumption?
Presumably, there's no problem with creating more custom arrays with more elements and modifying the sketch to point to those, as desired.
As a starting point, I could simply offset the existing array's index by 64 to generate a matching cosine, giving me a simple quadrature pair. Modifying the offset would allow me to phase shift the Lissajous.
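Roughly what I have in mind, assuming the built-in table really is the 256-entry (plus wraparound point) AudioWaveformSine array of int16_t, and that arbitraryWaveform() will accept it; I still need to verify both:

#include <Audio.h>

// the library's built-in sine table (defined in data_waveforms.c, if I've read it right)
extern "C" const int16_t AudioWaveformSine[257];

int16_t cosTable[257];                // same table shifted by a quarter cycle
AudioSynthWaveform oscX, oscY;

void buildQuadrature(float freq) {
  for (int i = 0; i < 257; i++) {
    cosTable[i] = AudioWaveformSine[(i + 64) & 0xFF];   // 64/256 of a cycle = 90 degrees
  }
  oscX.arbitraryWaveform(AudioWaveformSine, 1000.0);    // "sine" channel -> X
  oscY.arbitraryWaveform(cosTable, 1000.0);             // "cosine" channel -> Y
  oscX.begin(1.0, freq, WAVEFORM_ARBITRARY);
  oscY.begin(1.0, freq, WAVEFORM_ARBITRARY);
}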
After lots of videos and reading the articles on pjrc, I'm rethinking my design concept to fit within the Teensy's GUI. Looks like multiple MIDI controlled waveform generators can feed into multiple MIDI controlled L&R mixers, before being output to X/Y analog line levels. Now, my challenge is applying some clever panel design to minimize the number of control knobs and buttons on the desk so that I can reach from one end to the other. lol
I see that the T4 is capable of stacking 2 audio boards for 4 channel output. But, each projector needs 5 for XYRGB. Would it be possible to add another DAC, connected directly to the T4's GPIOs? That would allow me to build a nice neat multi-waveform module for each projector with a single T4 in each module. Looks like I could even pass waveforms between modules over I2C to control multiple projectors with a common waveform.
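For those extra channels I'm eyeing something like an MCP4922 (a dual 12-bit SPI DAC) hung off the SPI pins. A sketch of the write routine, based on my reading of the datasheet, so please correct me if the command bits or pin choice are wrong:

#include <SPI.h>

const int DAC_CS = 10;   // chip-select pin I'd wire to the DAC (arbitrary choice)

void setup() {
  pinMode(DAC_CS, OUTPUT);
  digitalWrite(DAC_CS, HIGH);
  SPI.begin();
}

// write a 12-bit value to channel 0 (A) or 1 (B) of the MCP4922
void dacWrite(uint8_t channel, uint16_t value) {
  // command word: bit15 = channel, bit13 = gain (1 = 1x), bit12 = active (not shutdown)
  uint16_t cmd = (channel ? 0x8000 : 0x0000) | 0x3000 | (value & 0x0FFF);
  SPI.beginTransaction(SPISettings(20000000, MSBFIRST, SPI_MODE0));
  digitalWrite(DAC_CS, LOW);
  SPI.transfer16(cmd);
  digitalWrite(DAC_CS, HIGH);
  SPI.endTransaction();
}

void loop() {
  // quick test: mid-scale on both channels
  dacWrite(0, 2048);
  dacWrite(1, 2048);
}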
A couple of years ago I purchased a T3.6 with an audio board, but it's only been sitting in my parts cabinet. Yeah, this has been brewing in my brain for that long, but I didn't have the projectors nor knowledge of their hardware. Now I've rebuilt and upgraded 2 of my projectors, so I've pulled out the T3.6 and am starting to have a go. Yes, I saw that the pinouts of the T3.6 audio module aren't compatible, but I'm assuming those can be redefined in the sketch before moving to a T4.
Thank you again for your time and please let me know whatever I'm overlooking or not understanding.
BR:cool:
 