Audio Library

Status: not open for further replies.
I want to implement this high pass filter, but this is bugging the hell out of me:

AudioMemory(12);

I get that it allocates 12 copies of this structure:
Code:
typedef struct audio_block_struct {
	unsigned char ref_count;
	unsigned char memory_pool_index;
	int16_t data[AUDIO_BLOCK_SAMPLES];
} audio_block_t;

But I can't for the life of me see any rhyme or reason to why the wav player example allocates 5, the midi example allocates 15 and the filter example allocates 12.

I would assume in the wav player example, since it's playing stereo, you have two buffers for the input, left and right, and two for the output. That's four. So why allocate 5?

And with the filter example, only the left input is used, but I guess maybe it has to allocate both? So that's two. Then you've got the filter. Again done on only one channel, but we'll say two for that. And then the output needs two. So worst case it seems like there should be six, not 12.

Are you allocating extra memory because you didn't feel like calculating exactly how much was needed for each example, or is there something else going on behind the scenes that requires the additional buffers?
 
But I can't for the life of me see any rhyme or reason to why the wav player example allocates 5, the midi example allocates 15 and the filter example allocates 12.

Ah yes, this is of course part of the joy of being an early adopter with beta test code before documentation is written.

Really, truly, I'm going to stop working on the code next week and write documentation....

In the meantime, maybe this will help. These 6 functions allow you to see the library's resource usage:

AudioProcessorUsage()
AudioProcessorUsageMax()
AudioProcessorUsageMaxReset()
AudioMemoryUsage()
AudioMemoryUsageMax()
AudioMemoryUsageMaxReset()

The CPU usage is an integer from 0 to 100, and the memory is from 0 to however many blocks you provided with AudioMemory().

The normal versions tell you the amount used at that moment. The "Max" ones tell you the maximum that has ever been used, which is really much more useful. The "MaxReset" ones reset the maximum value.


Are you allocating extra memory because you didn't feel like calculating exactly how much was needed for each example,

Yes, exactly.

And honestly, that's the general approach. Allocate a bunch of memory and see (hear) if things work. If you want to know how much is really needed, put something like this in loop()

Code:
    Serial.println(AudioMemoryUsageMax());

But consider that memory usage can vary if you build a complex audio path. Some objects, like the waveform generator, only allocate memory while they're generating output. If you connect an object's output to multiple inputs, the library uses shared copy-on-write memory management, so the amount of memory actually allocated depends on whether the receiving objects request write access. Some objects, like the new fader, use different access at different times (e.g., read-only when not changing the level, but writable while fading in or out). Memory usage can also change depending on the order you create the objects, which determines the order they are updated and push data to each other.

The library has AudioMemoryUsageMax() so you can observe how much memory has actually been used in the worst case.
 
I'm having a new issue.

I'm trying to get a high pass filter working, but something's not right.

Here's my setup code:

Code:
  int HPF[8];

  // Create the Audio components.  These should be created in the order data flows, inputs/sources -> processing -> outputs
  AudioPlaySDcardWAV wav;
  AudioFilterBiquad  filter(HPF);
  AudioOutputI2S     dac;
  
  // Create Audio connections between the components

  //AudioConnection c1(wav, 0, dac, 0);
  //AudioConnection c2(wav, 1, dac, 1);

  AudioConnection c1(wav, 0, filter, 0); // Left 
  AudioConnection c2(wav, 1, filter, 1); // Right
  AudioConnection c3(filter, 0, dac, 0); 
  AudioConnection c4(filter, 1, dac, 1); 

  // Create an object to control the audio shield.
  AudioControlSGTL5000 audioShield;

Code:
void calcFilter(int * f, float a0, float a1, float a2, float b1, float b2) {

  const int n = 1073741824; // 2^30
  
  f[0] = a0 * n;
  f[1] = a1 * n;
  f[2] = a2 * n;
  f[3] = b1 * -n;
  f[4] = b2 * -n;
  f[5] = 0;
  f[6] = 0;
  f[7] = 0;
  
}

Code:
void setup() {
 
  //Serial.begin(9600);
  //Serial.println("Debug");

  // Initialize audio module:
  
    AudioMemory(12); // Audio connections require memory to work.  For more detailed information, see the MemoryAndCpuUsage example
  
    audioShield.enable();
    audioShield.volume(100); // 0..100
    audioShield.unmuteLineout(); 
    
    calcFilter(HPF, 0.9899745214054891, -1.9799490428109783, 0.9899745214054891, -1.9798485601163545, 0.9800495255056021); // 100hz HPF
    
    SPI.setMOSI(7);
    SPI.setSCK(14);
    
    SD.begin(10); // Init SD card for audio lib.

}
I tried it first without my filterCalc function, using the values you used in the example for the low pass filter, but the audio on my line out was super quiet, and the headphones were fairly quiet as well. Thinking this might simply be due to the audio at those frequencies being quiet, I then tried changing the parameters to create a high pass filter, and the audio on my headphones is now loud, but the left ear seems a lot louder than the right, and I'm getting no sound on the line out now.

Have I done something obviously wrong that you can see above, or should I put together another test program so you can find the bug?

My wav file is mono by the way, in case that's important. Same files as I sent you yesterday.
 
The library has AudioMemoryUsageMax() so you can observe how much memory has actually been used in the worst case.

With a complex filter setup, like if you were making a synth or an S3M player, is it even possible to know for certain that you've seen the worst case though?

And why do we allocate the memory in advance instead of the library automatically allocating it as needed?
 
Okay, so here's that simple test code again that used the numbered wav files, but this time with the filtering stuff I can't get to work in there. The comments explain the strange behavior I get with different setups:

Code:
#include <SPI.h>
#include <Audio.h>
#include <Wire.h>
#include <SD.h>

// Audio:

  int HPF[8] = {1062977024, -2125954048, 1062977024, 2125846144, -1052320192, 0, 0, 0};
  
  // Create the Audio components.  These should be created in the order data flows, inputs/sources -> processing -> outputs
  AudioPlaySDcardWAV wav;
  AudioFilterBiquad  filter(HPF);
  AudioOutputI2S     dac;
  
  // Create Audio connections between the components
  
  // Use this setup, sounds normal.
  
    //AudioConnection c1(wav, 0, dac, 0);
    //AudioConnection c2(wav, 1, dac, 1);

  // Use this setup, sound from left ear at full volume, right at half.
    
    // Comment these out, get no audio.
    AudioConnection c1(wav, 0, filter, 0); // Left     
    AudioConnection c2(filter, 0, dac, 0);
    
    // Comment these out, still get full volume left ear, half volume right even though there should be no input to right channel.
    AudioConnection c3(wav, 1, filter, 1); // Right
    AudioConnection c4(filter, 1, dac, 1); 

  // Create an object to control the audio shield.
  AudioControlSGTL5000 audioShield;


// ----------
// Timing:

  unsigned long time;       // Current global time in milliseconds.
  unsigned long lastTime;   // Previous global time.
  unsigned long timeDelta;  // Time since last loop.  
  float timeDeltaSec;       // Time since last loop in seconds.    
  

void setup() {
 
  // Initialize audio module:
  
    AudioMemory(12); // Audio connections require memory to work.  For more detailed information, see the MemoryAndCpuUsage example
  
    audioShield.enable();
    audioShield.volume(100); // 0..100
    audioShield.unmuteLineout(); 
    
    SPI.setMOSI(7);
    SPI.setSCK(14);
     
    SD.begin(10); // Init SD card for audio lib.

  // Record start time.
    time = millis();
    
}
    
void loop() 
{
 
  // Timing:
 
    lastTime = time;                           // Store start time of last update.
    time = millis();                           // Get current system time.
    timeDelta = time - lastTime;               // Calculate how long last update took, in milliseconds.
    timeDeltaSec = float(timeDelta) / 1000.0;  // Convert last update period into seconds.
  
  updateState();

    
}

Code:
void updateState() {

  // Statics: (These are initialized once and their value is maintained on each subsequent loop.)
 
    static int state = 0;
    
    static unsigned long songStart; // Song start time
    static byte song = 0; 
              
    const unsigned long songTime[] = {63656, 63708, 67522, 63708, 63708, 63708, 63447, 180100};
    //const unsigned long songTime[] = {10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000};
    
    const char *songFile[] = {"001.wav", "002.wav", "003.wav", "004.wav", "005.wav", "006.wav", "007.wav", "008.wav"};
     
    switch (state) {
    
      case 0: // Reset
          
        song = 0;
        state = 1;
        
        break;
        
        
      case 1:
  
        songStart = time; 
        wav.play(songFile[song]);
        state = 2;
       
        break;
        
         
      case 2: // Wait for song to end.
 
        if (time > (songStart + songTime[song])) {
        
          song++; // Next song.
          if (song > 7) { song = 0; } 
        
          state = 1;
        
        }  
    
        break;
      
    }
    
}

Only thing changed here is the setup for the audio in the first bit of code.
 
Oh I forgot to mention, with the above filter code in place I no longer get any output on my line out. Only the headphones work.
 
I have a transducer driven by an amplifier that is connected to a Teensy 3.1's analog output; I do not have the Teensy 3 audio board. I copied most of the "PlayFromSketch" example, and for my use I only tested with one sound playing on a button push. When I trigger on the falling edge of the button I play the "AudioSampleKick"; I feel the transducer kick once, then it stops.

I was under the impression that after you do a "play" on an audio source it would play to the end; in my case it appears to only play once, briefly. The only difference in my case from the example is that I don't do anything with setting up the AudioShield object at all (as I don't have one). Any ideas on what I might be missing?
 
When I trigger on falling edge on the button I play the "AudioSampleKick", I feel the transducer kick once, then it stops.

Yes, exactly. The "Kick" sample is a single hit on a kick drum. You can follow that link to the freesound website to hear what it's supposed to do. In fact, here's the link:

http://www.freesound.org/people/DWSD/sounds/171104/

For a longer sound, try the gong (triggered by pin 4) which plays for 10 seconds, or the cash register (triggered by pin 5).
 
Greetings!

I've just begun familiarizing myself with this lib, and I'm stuck a little bit. I can't understand how the "Miditones.ino" example sketch works (or what its function is, exactly). Could anybody give me a brief explanation? I'm kind of a beginner in coding, but one thing that instantly catches my eye is that there is an "AudioSynthWaveform" object named "sine2" declared, but it has no connections, and I don't see it being used anywhere.

Thanks a lot in advance.
 
Start with this minimal version of that sketch. I've removed all the extraneous code and comments.
Code:
#include <Audio.h>
#include <Wire.h>
#include <SD.h>
#include <SPI.h>

AudioSynthWaveform mysine(AudioWaveformSine);

AudioOutputI2S dac;

AudioControlSGTL5000 codec;

// Connect the tone to the left and right channels
// The original code only output to the left channel
AudioConnection c1(mysine, 0, dac, 0);
AudioConnection c2(mysine, 0, dac, 1);

void setup() {
  Serial.begin(115200);
  while (!Serial) ;
  delay(2000);
  Serial.println("***************");

  // Audio connections require memory to work.  For more
  // detailed information, see the MemoryAndCpuUsage example
  AudioMemory(15);
  
  codec.enable();
  codec.volume(50);
  
  Serial.println("Begin AudioTest");

  mysine.frequency(440);
  mysine.amplitude(.8);
  delay(1000);
  mysine.amplitude(0);  
 
}

void loop()
{
}

This generates a 440Hz tone for one second.

Pete
 
Start with this minimal version of that sketch. I've removed all the extraneous code and comments.
Code:
...

This generates a 440Hz tone for one second.

Pete

Yes, I understand that. Also, you've modified the code, gotten rid of the unused "sine2" object, and attached the output of "mysine" to the other input of the dac object. As far as I understand what's going on, shouldn't these things be that way in the example sketch?

Because this is the behaviour I was expecting after looking at the code. From the name (and the "william_tell_overture.c" attachment, which I believe is some MIDI note data converted to raw PROGMEM code?), I assumed that the "mysine" object would generate sine waves according to the note information in the attached .c file.
 
The example sketch was probably an early version of something that hasn't been finished yet. That particular example doesn't show up in the Arduino IDE list unless you change the name of the directory and/or the .ino so that they are the same.
I might try to write code to play that tune, but don't hold your breath.

You might find the DialTone_DTMF example useful. It generates two tones and sends them to the mixer.

Pete
 
Teensy 3.1 + Audio Adapter have arrived (thanks again Paul). I've attached them and tested a round trip (line-in, apply filter (ok, no noticeable artifacts on changes) and volume, headphone-out), which seemed to work pretty well.

I am still writing bits and pieces and testing stuff; it may be as much as a week before I make my first pull request, but I might try to get one in by tomorrow night my time (it is 8:50 am Thursday where I am now) with some of the filter stuff.
 
@Kondi
I've attached a modified PlayMidiTones sketch which plays the midi file. It uses the library's AudioSynthWaveform sine wave generators so it clicks a lot when tones are turned on or off. It would be better if much of the tone generation was done entirely within the library so that the onset and offset of a tone could be ramped to prevent the click.
Someday, maybe :)

View attachment PlayMidiTones_a.zip
Pete
 

Thanks for the reply, I thought I was overlooking something.

Another question: I believe the nature of the sampling should introduce 2 x 2.9 ms = 5.8 ms of RTL (round-trip latency). I don't have the proper equipment; could anybody measure this?

I think a really useful feature would be the ability to create zero-latency pass-through connections, to implement DI-box-like functionality. For example, an AudioInputAnalog/AudioInputI2S subclass that would forward each individual sample right after the ADC cycle (in the next DAC cycle, to be exact) to one of the output candidates, and besides that, after all 128 samples are gathered for the audio block, it could pass the block on for further mangling in a regular update call.

I don't know if the implementation of audio streams would enable this, but I'm sure it would be a very useful feature.

@Kondi
I've attached a modified PlayMidiTones sketch which plays the midi file. It uses the library's AudioSynthWaveform sine wave generators so it clicks a lot when tones are turned on or off. It would be better if much of the tone generation was done entirely within the library so that the onset and offset of a tone could be ramped to prevent the click.
Someday, maybe :)

View attachment 1382
Pete

That was fast, but unfortunately I didn't receive my audio board yet, so I can't try it. I was looking at that example because I'm starting to create some new audio effect objects, which would involve MIDI. But unfortunately, interpreting MIDI is not so easy, so I think I will go the music-XML way.
 
I think a really useful feature would be the ability to create zero-latency pass-through connections, to implement DI-box-like functionality. For example, an AudioInputAnalog/AudioInputI2S subclass that would forward each individual sample right after the ADC cycle (in the next DAC cycle, to be exact) to one of the output candidates, and besides that, after all 128 samples are gathered for the audio block, it could pass the block on for further mangling in a regular update call.

That would be a whole new library. Maybe some of the existing code might help you get a start. I do not know if the DMA channels will help. You'll probably need to use the I2S interrupts. Processing individual samples will not be nearly as efficient, but the ARM chip is pretty fast, so you still might manage to get it to do something useful.

But do not fool yourself into thinking this might be possible in the context of this audio library, which is fundamentally based on block processing.
 
I've discovered another bug. Volume control does not seem to work on the line out:

Code:
    audioShield.enable();
    audioShield.unmuteLineout();
    audioShield.volume(5);

I tried unmuting the line out before and after changing the volume and in neither case did it have any effect.
 
I've discovered another bug. Volume control does not seem to work on the line out:

Code:
    audioShield.enable();
    audioShield.unmuteLineout();
    audioShield.volume(5);

I tried unmuting the line out before and after changing the volume and in neither case did it have any effect.

As far as I can tell that is intentional, and it is logical - a proper line-out is more or less 'set an appropriate level and forget' kind of stuff. I am going to add both line-out level control and DAC level control (soon!), but not to be used as volume controls.
 
As far as I can tell that is intentional, and it is logical - a proper line-out is more or less 'set an appropriate level and forget' kind of stuff. I am going to add both line-out level control and DAC level control (soon!), but not to be used as volume controls.

What constitutes a "proper" line out? These days, a line out and a headphone jack are virtually interchangeable. According to the codec's data sheet, the headphone jack on the audio shield could have functioned as one had Paul added a couple of capacitors.

Also, I'm pretty sure on any PC out there the line out's volume level changes as you change the mixer settings on the PC.

What's so terrible about allowing the line out volume level to be adjustable anyway? "Standards" be damned. Standards change. The standard now is to have a jack that can function both as a headphone and line out.

Anyway, we need volume control on the line out because otherwise, if we want to connect an amplifier that is powered from the same power source as the Teensy, we would have to change every channel's gain individually to hack in volume control.

If you don't want to change the volume level of the line out, just don't change it. But I need that capability on both the line out and the DAC output. (I have not tested the DAC output yet, but now I'm wondering whether that has volume control either.) It's silly not to have the feature. It's useful, and necessary when one is using an amp. Not all amplifier boards have volume controls, and even when they do, they're not always conveniently located, so being able to connect a volume pot to the Teensy to adjust the volume instead is nice.
 
If the functions for DAC volume control and line-out level control that I am going to submit to the library are accepted, then they will give you this functionality, basically.

The DAC volume control will influence both the HP and LO outputs the same, and the routines (left, right, both) that I have written to control them will give you functionality more or less just like the PC if you use those routines instead; they use 0-100%. This will not override the HP volume level setting, which is applied after the DAC; similarly, the LO level adjustment is applied after the DAC as well.

The LO level control routines I have written don't use the 0-100% scale, because the 'right' value for LO_VOL_CONTROL_LEFT/RIGHT calculates to 0x0D according to the datasheet (and my, albeit potentially limited, ability to understand it), and it is meant to attenuate the output to 'full scale' for a line-out. The datasheet goes on to say that once the 'correct' value has been determined, the value can be varied to attenuate the signal(s) in +/- 0.5 dB steps, but my logic says that if 0x0D corresponds to 0 dB, then you can only adjust between -(13 * 0.5) dB and +((31 - 13) * 0.5) dB using this control.

My statement '... but not to be used as volume controls' was wrong, obviously the DAC volume control is just that, a volume control :eek:
 