AudioSynth Waveforms


Teenfor3

Well-known member
There are AudioSynthWaveform and AudioSynthWaveformModulated in the Audio library. The main difference is that the Mod one has 2 inputs for modulation. If I just want to play a waveform at various frequencies, does it matter which I use? If I set the AudioSynthWaveformDc amplitude to 0 and patch it to input 0, both waveform objects give me the same output...??? Changes to frequency for Waveform are done by DDS (skipping samples, interpolation, etc.), so higher frequencies have fewer samples per cycle. Does WaveformModulated work this way as well, or is the playback sample rate changed so that all the samples per cycle are played at every frequency?
 
Thanks for the reply.
My question was more about the quality of the sound when using waveform as opposed to waveformMod for simply playing a waveform, maybe with some changes to frequency etc., but no fast modulation. Below is a simple sketch using both methods. I don't see or hear much difference with either, but from what you are saying both use DDS, changing frequency by playing fewer or more samples from the original wave, so in theory the signal degrades at higher frequencies. I thought maybe waveformMod continued to play the full set of samples of the waveform at different rates, and so would preserve the waveform detail better at high frequencies.
So for a simple sketch like the one below, is it any more efficient to just use waveform rather than waveformMod?

You say, "The 1st input modulates frequency or phase depending on the current setting." What do you mean by "current setting"? What changes it from frequency modulation to phase modulation, and what is the phase referenced to?
PS: Edited. I see what changes it to phase modulation now: you need to add waveformMod1.phaseModulation(180); // degrees to the sketch.


Code:
 #include <Audio.h>

// sawtooth table for a "rev up" sound
const int16_t tooth_saw[256] = {
0,
-13440,
-26250,
-26040,
-25830,
-25620,
-25410,
-25200,
-24990,
-24780,
-24570,
-24360,
-24150,
-23940,
-23730,
-23520,
-23310,
-23100,
-22890,
-22680,
-22470,
-22260,
-22050,
-21840,
-21630,
-21420,
-21210,
-21000,
-20790,
-20580,
-20370,
-20160,
-19950,
-19740,
-19530,
-19320,
-19110,
-18900,
-18690,
-18480,
-18270,
-18060,
-17850,
-17640,
-17430,
-17220,
-17010,
-16800,
-16590,
-16380,
-16170,
-15960,
-15750,
-15540,
-15330,
-15120,
-14910,
-14700,
-14490,
-14280,
-14070,
-13860,
-13650,
-13440,
-13230,
-13020,
-12810,
-12600,
-12390,
-12180,
-11970,
-11760,
-11550,
-11340,
-11130,
-10920,
-10710,
-10500,
-10290,
-10080,
-9870,
-9660,
-9450,
-9240,
-9030,
-8820,
-8610,
-8400,
-8190,
-7980,
-7770,
-7560,
-7350,
-7140,
-6930,
-6720,
-6510,
-6300,
-6090,
-5880,
-5670,
-5460,
-5250,
-5040,
-4830,
-4620,
-4410,
-4200,
-3990,
-3780,
-3570,
-3360,
-3150,
-2940,
-2730,
-2520,
-2310,
-2100,
-1890,
-1680,
-1470,
-1260,
-1050,
-840,
-630,
-420,
-210,
0,
210,
420,
630,
840,
1050,
1260,
1470,
1680,
1890,
2100,
2310,
2520,
2730,
2940,
3150,
3360,
3570,
3780,
3990,
4200,
4410,
4620,
4830,
5040,
5250,
5460,
5670,
5880,
6090,
6300,
6510,
6720,
6930,
7140,
7350,
7560,
7770,
7980,
8190,
8400,
8610,
8820,
9030,
9240,
9450,
9660,
9870,
10080,
10290,
10500,
10710,
10920,
11130,
11340,
11550,
11760,
11970,
12180,
12390,
12600,
12810,
13020,
13230,
13440,
13650,
13860,
14070,
14280,
14490,
14700,
14910,
15120,
15330,
15540,
15750,
15960,
16170,
16380,
16590,
16800,
17010,
17220,
17430,
17640,
17850,
18060,
18270,
18480,
18690,
18900,
19110,
19320,
19530,
19740,
19950,
20160,
20370,
20580,
20790,
21000,
21210,
21420,
21630,
21840,
22050,
22260,
22470,
22680,
22890,
23100,
23310,
23520,
23730,
23940,
24150,
24360,
24570,
24780,
24990,
25200,
25410,
25620,
25830,
26040,
26250,
26460,
13440,
0
};

// GUItool: begin automatically generated code

AudioSynthWaveformDc           dc1;            //xy=131,185
AudioSynthWaveformModulated    waveformMod1;   //xy=385,239
AudioSynthWaveform             waveform1;   //xy=385,239
AudioOutputAnalog              dac1;           //xy=582,240
AudioConnection                patchCord2(dc1, 0, waveformMod1, 0);
AudioConnection                patchCord3(waveformMod1, 0, dac1, 0);
// AudioConnection                patchCord3(waveform1, 0, dac1, 0);

// GUItool: end automatically generated code

void setup() {
  AudioMemory(30);
  
  // this section: waveformMod object, with both frequency steps and DC control steps

  waveformMod1.begin(0.0, 0, WAVEFORM_ARBITRARY);
  waveformMod1.arbitraryWaveform(tooth_saw, 1200);
  waveformMod1.amplitude(0.8);
  waveformMod1.frequency(500);
  waveformMod1.frequencyModulation(2); // octaves


  // this section: waveform object, frequency steps only

  // waveform1.begin(0.0, 0, WAVEFORM_ARBITRARY);
  // waveform1.arbitraryWaveform(tooth_saw, 1200);
  // waveform1.amplitude(0.8);
  // waveform1.frequency(500);
  
   dc1.amplitude(0.0);
   
}  // end setup

void loop() 
{
  
 // dc1.amplitude(0);
  
// for (int i=100; i <= 1000; i++)   //  inc through range of freqs
// {
//    waveformMod1.frequency(i); // for using with waveformMod freq range
//    waveform1.frequency(i);    // for using with waveform freq range
//    delay(100);   // needs time to do it at each step
//  }

  for (float i = -0.5; i <= 0.5; i += 0.01)  // step the DC source value to sweep the frequency modulation
  {
    dc1.amplitude(i);
    delay(100);  // allow time at each step
  }

}  // end of loop
 
Waveform synthesis is done at the audio sample rate, which defaults to 44.1 kHz. Sampling a signal at a fixed rate means the data can only represent spectrum up to half the sample rate; this is the Nyquist sampling theorem. Lots of textbooks and academic websites prove it mathematically, but unless you're a math genius, all that theoretical focus on math usually ends up making this simple but counterintuitive concept harder to understand. If you're skeptical, though, rigorous mathematical proof exists for the advice I'm about to write...

If you use DDS with the arbitrary waveform feature and your lookup table contains a waveform with high frequency content, you will not "preserve the waveform detail better at high freqs". Instead you will encounter a problem called aliasing, which is pretty much the exact opposite of preserving better sound quality! As DDS shifts that arbitrary waveform up to a higher frequency, the result is not what you might expect from experience with analog speedup, like playing a vinyl record or cassette tape at higher speed. With analog speedup, the upper part of the original spectrum becomes very high frequencies which you can't hear (or the playback equipment can't reproduce). But with digital audio, all that spectrum above 22 kHz doesn't just go away. Oh, if only it did. Instead it aliases back into the 0-22 kHz range, as if the 22 kHz barrier were a perfectly reflective mirror. So if your waveform had 10 kHz spectrum when played at a 1:1 ratio and you speed it up by 3 times, where a perfect analog system would have given you 30 kHz, with digital sampling that 30 kHz target is 8 kHz beyond the 22 kHz Nyquist barrier. So it aliases 8 kHz back down from 22 kHz, to become a sound at 14 kHz.

Because the behavior is a mirror-like reflection from the 22 kHz Nyquist frequency, it's easy to ignore or neglect a moderate amount of aliasing. It starts corrupting your sound at 22 kHz and works its way down. While it's easy to obsess about sound fidelity, the honest reality is that almost nobody really hears fine details around 10-15 kHz when they are mixed together with strong sound in the normal ~100-4000 Hz hearing range. Sometimes people will falsely believe aliasing isn't a problem, or that "Nyquist" only applies to filters you use with an ADC or DAC, based on their own personal experience of not being able to hear quite a lot of the trouble that results from deleting samples, because the problems first occur at higher frequencies. But if your concern is to "preserve the waveform detail better at high freqs", you absolutely should avoid aliasing.

Proper use of DDS requires low-pass filtering your arbitrary waveform tables. You need to remove, *before* DDS, any high frequency content which would have become more than 22 kHz in a pure and perfect analog speedup. To preserve the best sound quality, you would probably use several waveform tables, each low-pass filtered differently. When your target is a lower pitch note, you would use a table which has more high frequency content, so you end up with close to 22 kHz bandwidth after the speedup. But then you must not use that table for higher pitches. For those you would use another waveform table with more of its high frequency content removed, so again the final DDS result ends up close to 22 kHz bandwidth.

I know the concept of low-pass filtering your arbitrary waveform tables probably sounds like the opposite of what you would intuitively expect to be needed if your goal is preserving high frequency content and sound fidelity. But that is indeed the nature of digital audio and Nyquist sampling theory. If you don't believe it, plenty of academic sites and textbooks offer mathematical proof. And if you verify by actual usage, don't trust your ears for the top couple of octaves; use a spectrum analyzer so you can really see the issue.


And just a couple quick technical details...

The DDS code keeps a lot of extra resolution on the waveform phase and uses linear interpolation between the nearest 2 samples in the waveform table, so you're not just playing back a fixed table of samples.

Regarding efficiency, the ultimate answer is to actually measure the average and worst-case CPU and memory usage. The audio library gives you functions to do that easily. But generally speaking, the modulated waveforms use more CPU time than the normal waveforms.
 
Thanks Paul for all the info. I don't doubt Teensy's capabilities; I think it is great, including all the libraries and the support you provide.
It was just to start a discussion on the use of the two methods, waveform and waveformMod. Thanks.
 
Fun fact: the unique sound of famous synthesizers like the PPG Wave series, the Waldorf MicroWave series (and the iconic Waldorf Wave), the Sequential Prophet VS, as well as older Korg synths like the DW-8000 or MS-2000, depends heavily on these aliasing artifacts. The same synthesis principle "done right" with bandwidth-limited wavetables doesn't sound "right" in the sense of "sounds like the original".
So if you are into vintage digital sound, just happily ignore all that Nyquist theorem stuff and naively do it the same way Wolfgang Palm did, who didn't know about all this.
 
So if you are into vintage digital sound, just happily ignore all that Nyquist theorem stuff and naively do it the same way Wolfgang Palm did, who didn't know about all this.

When I was first reading Paul's amazingly in-depth explanation (thank you Paul), I was thinking, "Musicians have a history of making use of the 'limitations' of the technology of the day; I wonder if you could use these limitations to create unique sounds intentionally."
Sounds like it has already been done. (Germanium transistors and "less capable" op-amps are just two of many examples of electronics that were prized for their limitations by musicians.)
 
Yes, indeed if you intentionally want distortion for a certain "classic sound", you can get it. The waveform synth doesn't enforce limits on the waveform table. That's your responsibility. You can give the audio library any 256 samples you like for it to use with DDS, with linear interpolation between the samples.

Regarding the original question about "playing all the samples per cycle at all frequencies" where the stated concern was about what "will degrade the signal at higher frequencies", aliasing is absolutely the enemy of high fidelity. For the stated goal to "preserve the waveform detail better at high freqs", the approach needed for best high frequency fidelity is to intentionally low pass filter the waveform table data.

When I earlier said "proper use", the context was this question about preserving the high frequency sound detail. If you intentionally want that sort of distortion, then of course what is "proper" depends upon your goals. If a classic sound you want needs certain distortion, I would agree the thing that is utterly wrong for high fidelity becomes proper if your explicit goal is that type of distortion.
 
Germanium transistors and "less capable" op-amps are just two of many examples of electronics that were prized for their limitations by musicians.

@senorblasto: These weren't used because of their limitations; it was cutting-edge technology back then and the best available. It's just that people got used to these audible artifacts. You may want to read here http://wolfgangpalm.com/story/ about what Wolfgang Palm did "wrong" developing wavetable synthesis (used first by PPG and Waldorf), and here about all the even more "wrong" things done in the vector synthesis used in the Sequential Prophet VS by its developer Chris Meyer.



@PaulStoffregen: any chance of switching off the linear interpolation for even more aliasing/artifacts/distortion?
 