Hello,
this question is regarding the usage of the Audio library on a Teensy 4.0 or 4.1.
I know there are already some threads about band-limited waveform generation here in the forum, but I have a specific question, which hopefully justifies a new thread.
Is it feasible to use an 8-pole, 48 dB/octave biquad lowpass filter object (4 cascaded stages, with the corner frequency set somewhere between 15 and 20 kHz) after each waveform oscillator, instead of (or in addition to) using band-limited waveforms?
The reason behind this is that I would like to implement a sync function in the existing waveform oscillator code, which of course introduces a step discontinuity in the waveform and can lead to aliasing. I don't know that much about DSP coding, so I wouldn't be able to do this with band-limited step responses (BLEPs etc.).
My idea was that, instead, a steep lowpass filter after each oscillator might mitigate or reduce the aliasing effects. From a processing-power perspective it should be possible on a Teensy 4, I think, because I only want to implement one voice (with multiple oscillators etc.) per Teensy, which would then be fed via an I2S DAC into an analog signal-processor chip (4-pole filter and VCAs, AS3372). So the Teensy would only generate 3 or 4 raw waveform oscillators (with sync capability), noise, and all the control signals (envelopes and LFOs) for one voice.
Another question is whether it's necessary to put a biquad filter after each oscillator, or whether a single biquad filter at the end of the digital signal chain, just before the I2S output, would suffice. Where exactly does the aliasing happen? Is it right at the output (or processing of the output) of an oscillator (or any object that produces a non-band-limited step in the audio stream), or at the point of digital-to-analog conversion? Since the filters in the I2S DAC do not remove the aliasing, I would assume that aliasing happens at the point where the digital audio stream is generated, or at the next processing step (even if that's only mixing).
I'm thankful for any clarification of my misconceptions and any advice on this topic.
Many thanks and best regards
Neni