WaveplayerEx

Perhaps the T4.1 can do this too, with lots of buffering. It's not so much the CPU speed - more the RAM.
There is no way to fix the SD-addressing issue - only workarounds.


BTW - does it change anything if you add a line
Code:
track1.addMemoryForRead(6); // <- or try other, perhaps lower numbers
?
 
Agree, it's getting a good buffering scheme in place that would make a big difference. In theory the Teensy 4.x with its 1MB RAM could buffer 8 mono 44100Hz 16-bit tracks for 1s each, so it ought to be feasible. But tricky...

I just tried addMemoryForRead() and it only makes a difference to the size and interval of the reads: where the previous plot shows a "staircase" on the red and green traces as the SD card is progressively read out, with the extra memory there's just one read shown in the time I allow it to run. If I extend the run time I can see the bigger changes in filePos, which happen at longer intervals, so that part seems to be fine. However, as before, the first block output from track2 appears to emit audio which belongs to track1, then the wave data get read from the SD card and start getting played. Interestingly, the phase of the track2 data once it's from the correct file is correct, as far as I can see, so having finally read it in, the first block doesn't get used!
 
Can you please post the files you use (or store them somewhere...)?

I'd say it's pretty unlikely that one player outputs the data of another one. But I'll look. Who knows :)
 
Sine wave files zipped and attached - let me know if you have any problems with them. Agreed, I couldn't see any way that could happen - extremely strange.

Cheers

Jonathan
 

Attachments

  • sineNN0.zip
    34.6 KB
Good news: I thought about it... the issue is due to the (too complicated) handling of 8 and 16 bit samples and a wrong calculation. I can do some more simplifications, I think.
I'll upload a fixed version this evening, or tomorrow.

it's always good to review code after a few weeks..
edit: and: keep it simple.. :)
 
Last edited:
Updated the GitHub repo with the (hopefully fixed?) version, and your sine wave files (I'll delete them if you want).
 
No problem, do keep the sine waves, also any variant of my test code you think might be a useful example! Will pull the update and try it out soon, maybe even tonight.

Many thanks

Jonathan
 
OK, I've had a go with the test code and also a more fun 6-track saxophone recording, and all seems well now. Thanks for your efforts fixing that, much appreciated.
 
Hi Frank

Been giving this a bit of thought ... and noticed also your post on another thread ... and was wondering if the SD card reads could be moved outside the audio update interrupt entirely, by putting them into an EventResponder? I can't quite figure out how EventResponder works, though, or whether you can trigger it from the audio update system. It's not 100% necessary and introduces another dependency, but would avoid having to make the user poll an AudioPlayWav::ReFillBuffers() regularly from their foreground code.

Does that sound like a concept worth pursuing?

Cheers

Jonathan
 


Somewhere I have a version of the codecs (mp3 etc.) that does that: you fill a buffer, and the codecs just read from the buffer.
I have thought about something like this several times. It should be possible to read something like a "stream" from outside the library. I think this should be a general extension to the audio library, providing the mechanics for all kinds of "players" to read safely in update(). That would be the most flexible.
So far I haven't had the right idea - since we're on Arduino, it should be easy to use, and as simple as possible. In the "mp3" case above, it isn't.

And the next question is what the chances are that it will be merged.
 
The EventResponder uses yield() - unfortunately that is not called outside the Teensy ecosystem (e.g. by 3rd-party libraries).

If I remember correctly, its aim was that the user does not have to fiddle with interrupts.
 
This will all be very exciting anyway when the next-gen Teensy comes with two cores.
I can't imagine how all that will work. It seems that no one but me* is thinking about it. It doesn't only concern files; it affects every device, even if it's only SPI.
(How the compilation and handling of source files by GCC will work is another interesting topic - the CM4 core has a different FPU, so it can't use the same generated code, and more GCC runs - with different options - are needed.)

The path of least resistance would be simply not to use the 2nd core. Hm. Or to use it only for special code needed by the Teensy software core. An interface for all devices could run there, perhaps including files.
That would mean it would sleep most of the time... not efficient if you want speed. There would then be a lot of communication between both cores, which again means more waiting - on the high-speed CM7 core, too.

An RTOS would be THE solution (the ESP uses one), but so far I've only heard that this is not desired. So... (Edit: perhaps I would just grab all the ESP code (except WiFi, Bluetooth etc.), port the hardware interfaces, and add USB. As a first step.)
So I just wait and do nothing on this topic - working on it only makes sense when you know "what and how".




*I'm sure Paul had thought about it.
 
Last edited:
This is from the Arduino Audio Lib documentation:
Code:
/*

 Demonstrates the use of the Audio library for the Arduino Due

 Hardware required :
 *Arduino shield with an SD card on CS 4 (the Ethernet shield will work)
 *Audio amplifier circuit with speaker attached to DAC0

 Original by Massimo Banzi September 20, 2012
 Modified by Scott Fitzgerald October 19, 2012

*/

#include <SD.h>
#include <SPI.h>
#include <Audio.h>

void setup()
{
  // debug output at 9600 baud
  Serial.begin(9600);

  // setup SD-card
  Serial.print("Initializing SD card...");
  if (!SD.begin(4)) {
    Serial.println(" failed!");
    return;
  }
  Serial.println(" done.");
  // hi-speed SPI transfers
  SPI.setClockDivider(4);

  // 44100 Hz stereo => 88200 sample rate
  // 100 mSec of prebuffering.
  Audio.begin(88200, 100);
}

void loop()
{
  int count=0;

  // open wave file from sdcard
  File myFile = SD.open("test.wav");
  if (!myFile) {
    // if the file didn't open, print an error and stop
    Serial.println("error opening test.wav");
    while (true);
  }

  const int S=1024; // Number of samples to read in block
  short buffer[S];

  Serial.print("Playing");
  // while the file still has data
  while (myFile.available()) {
    // read from the file into buffer
    myFile.read(buffer, sizeof(buffer));

    // Prepare samples
    int volume = 1024;
    Audio.prepare(buffer, S, volume);
    // Feed samples to audio
    Audio.write(buffer, S);

    // Every 100 blocks print a '.'
    count++;
    if (count == 100) {
      Serial.print(".");
      count = 0;
    }
  }
  myFile.close();

  Serial.println("End of file. Thank you for listening!");
  while (true) ;
}

Is that easy enough? We could add a write(buffer, S) like that. However, write() or prepare() seems to be blocking. No good.
 
It looked to me as if both Audio (as we understand it) and EventResponder are Teensy-only at the moment anyway. If some future Arduino-based EventResponder needed an explicit yield() call added (often enough) in loop(), that's not our problem - anyway, surely no Arduino will be capable of running the Teensy Audio library.

Agree it'd be better to start with a streaming class which just keeps filling buffers from <a place> until stopped, then build audio players and whatever else on top. Oh, and make it write-capable as well, for audio recording. With this structure you're then agnostic about whether the source is blocking or not, though of course you'd prefer not blocking (SD card reading using DMA, for example). What happens in the SD library if you kick off 6 simultaneous non-blocking reads? Either way you're still at the mercy of any SD card that suddenly decides to freeze for longer than your buffer time.

Apart from the fact that it's horrible code, I'm still not sure the example is simple enough - for more complex systems you still need to be sure you (a) remember to put .prepare() and .write() calls in at all, and (b) do that often enough in your loop(). But maybe it's just that I'm naive about what EventResponder can do - if it still needs either loop() to re-run frequently enough, or a lot of calls to yield(), then it's not much use for our case. We might just as well say "to use AudioPlayWAVasync objects you must call AudioPlayWAVasync::fillBuffers() at least once for every Audio engine update" (recognising that 2.9ms is only true for the 44kHz, 128-sample default engine).

Two cores could be interesting, indeed, though I suspect it'll cause mayhem as those who aren't used to multiprocessor systems find a lot of new ways to crash one or both cores. I'm working on a 2-core system at work myself, but one only runs a Bluetooth low-energy stack and radio, so I just have to use the API right on the "main core". Which I've got running FreeRTOS...

But I digress...

Yes, getting it merged would be good - I got the impression Paul would be keen to get this area cleaned up, but maybe it'd be better to have something more positive than that!

Cheers

Jonathan
 
I'm thinking of adding an x-channel 8 and 16 bit wave-file recorder. Recording would be easy, then.
I'm not that sure it fits, timing-wise, but it might be an interesting experiment.
Unfortunately, *.wav has all the info at the beginning of the file. The file size, too.
Some guys smarter than me may someday add noise shaping to the 8-bit recordings...
 
That sounds like a really useful addition. I've not soldered up my audio connectors yet, but could probably find a way to test it. Guess you'd only write the header once recordSdWav::stop() was called, or maybe every second? Not often, anyway...

In other news, my Dynamic Audio Objects managed to break your AudioPlayWav, and I think I found a simplification you can make as a result. Consider this fragment in ::update():
Code:
    if (++_AudioPlayWavInstance >= _AudioPlayWavInstances)
        _AudioPlayWavInstance = 0;

    if ( state != STATE_PLAY ) return;

    if (_AudioPlayWavInstance == my_instance && buffer_rd == 0 )
This is fine if ::update() is executed for each instance of an AudioPlayWav in exactly the same order that they were created, which is true for the static Audio library. However, my DAO attempts to optimise the update order to (a) minimise the number of audio_block_t items in use, and (b) ensure objects created later but connected early in the signal flow don't cause weird latency effects. This means that objects don't execute in creation order, and thus the check _AudioPlayWavInstance == my_instance fails.

It seems to me (and I've tested this) that actually it's an unnecessary check: because you have already pre-filled the buffer[] by an amount dependent on the instance number, you've already guaranteed* that every instance has to re-fill at a different time. Also, because _AudioPlayWavInstance is incremented N times for an Audio engine cycle, where N is the number of instances of an AudioPlayWav, each instance will always see the same number; if it could be incremented once per Audio engine cycle** then each instance would see a match once every N cycles.

*actually, I don't think you have, because you could pre-load X/Nth of the buffer while paused (X=instance, N=total instances), then un-pause them in turn, with suitably-chosen delays, such that they all need to re-fill on the same cycle! But it's unlikely... And waiting for "your turn" would just result in a drop-out.

**Thinking about incrementing _AudioPlayWavInstance once per Audio engine update, that's hard to do: I think we'd have to add an update_count value to the AudioStream object, which gets incremented before the update chain is run; record a static copy in the class; and then increment _AudioPlayWavInstance and update the copy if the static copy didn't match.

Lots to think about...

Cheers

Jonathan
 
Excellent - pulled and tested, still works :D

It could indeed replace the other players, or the existing ones could simply become a skin on yours - much easier to maintain.

Happy to create a PR, just a bit reluctant if the apparently simple change broke another feature, so seemed easiest to discuss before. Do you prefer contributors to create their own branches, or are you happy to pull straight into "main"?

Cheers

Jonathan
 
I've done some work on pushing the SD card reads out to the foreground code by embedding an EventResponder in each AudioPlayWav object. Not fully tested but it looks massively more efficient (I don't understand why, so that could be an incorrect conclusion!). A single call allows switching back to reading inside update(), if needed; the EventResponder option is vulnerable to yield() not being called.

Looks like you've done a lot of work in the last couple of days, so merging could be ... interesting. Probably better for me to fork and branch your repo, so you can pull it in to look at, prior to considering a merge.

Cheers

Jonathan
 
Could you show me a link where I can see your code?

I'm currently working on µlaw and raw support.
 
Last edited:

I made a last commit for today - it changes update(). It now uses a function pointer to call the right "decoder". Currently there are the normal 8 and 16 bit decoders - more to follow (22 and 11 kHz, µlaw).
With the function pointer, there's no need to select the right decoder in update() - and it's easier to add additional "decoders".
Furthermore, I removed sz_frame (it was just AUDIO_BLOCK_SAMPLES * channels)
 
Last edited:
Hi Frank

You can find it at https://github.com/h4yn0nnym0u5e/Teensy-WavePlayer/tree/feature/UseEventResponder. As of 0d7e008 it isn't backward-compatible with existing sketches, because it needs to have yield() called in order for EventResponder to work. Hence:
Code:
  playWav.play(filename);
  while (playWav.isPlaying()) {}
needs to become:
Code:
  playWav.play(filename);
  while (playWav.isPlaying()) {yield();}
It could easily default to the old scheme, as I've put a function in to switch methods (enableEventReading).

Not totally happy with it as-is, because it really needs some sort of awareness of other SD-using audio objects - I think an AudioSD base class may be indicated. But I believe it demonstrates the viability of the concept: I want to try it with more simultaneous files, with different channel counts in each file, just to check its robustness further.

Cheers

Jonathan
 