A beginner's question about block reads

Status
Not open for further replies.

rforbes

New member
Hello all,

I have just ordered a Teensy with Audio Board and am excited about programming with the Audio Library. This is my first foray into real-time programming, but I have done quite a bit of other programming. I have a couple of questions:

In looking at the explanation of AudioEffectTemplate, it seems that the audio data is read in blocks of 128 samples (by default). Naively, I had expected that real-time programs would read one sample at a time to minimize latency. I'm familiar with block reads/writes being useful in non-real-time systems because of latency issues requiring buffering to preserve an uninterrupted output stream--or with peripheral controllers that are block-oriented. But since we're in real-time here with A/D and D/A, what is the primary motivation for block I/O? Is there a lot of overhead per read?

Also, looking at some of the implementations, it seems that if an audio block is required just for the duration of a method, it is dynamically allocated and freed at the end of the method rather than declared inside the method as a local (i.e., allocated on the stack). Is that just for tracking purposes, or is there some reason you can't or shouldn't use the stack for moderately-sized data? Is the stack defined in a limited memory space?

Thanks,

Rob
 
The very short answer is yes: there would be too much overhead to process audio one sample at a time, and performance would be very poor. There is a direct tradeoff between latency and performance, and nearly all CPU-based real-time audio systems work this way. ASICs and FPGAs do not execute code; they are digital circuits, so they often process one sample at a time for basic operations like filtering. Certain operations, like STFTs, must be done in blocks; it's the nature of the operator.

Paul has written a very good overview of how memory works on the Teensy 4 (including stack and heap) here.

Audio blocks that will only be used within a given AudioEffect (i.e., to hold temporary data) can use the stack if you wish. A couple hundred bytes of buffer isn't likely to blow your stack. But obviously audio blocks shared with other audio objects (i.e., your input and output audio) must come from the heap. Those buffers are part of a static pool created by the Teensy Audio Library at initialization.
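For scale, here's a hypothetical helper that uses a stack temporary; the names are illustrative and not part of the library, though AUDIO_BLOCK_SAMPLES matches the library's default of 128:

```cpp
#include <cstdint>
#include <cstddef>

const size_t AUDIO_BLOCK_SAMPLES = 128; // library default block size

// Hypothetical example: scale a block of samples through a float
// temporary that lives entirely on the stack.
void applyGain(const int16_t *in, int16_t *out, float gain) {
    float temp[AUDIO_BLOCK_SAMPLES]; // 512 bytes, reclaimed automatically on return
    for (size_t i = 0; i < AUDIO_BLOCK_SAMPLES; i++)
        temp[i] = in[i] * gain;
    for (size_t i = 0; i < AUDIO_BLOCK_SAMPLES; i++)
        out[i] = (int16_t)temp[i];
}
```

Since the temporary vanishes when applyGain() returns, this only works for data that nothing else will reference afterward.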
 
Indeed blocks are used for efficiency.

Also, looking at some of the implementations, it seems that if an audio block is required just for the duration of a method, it is dynamically allocated and freed at the end of the method rather than declared inside the method as a local (i.e., allocated on the stack). Is that just for tracking purposes, or is there some reason you can't or shouldn't use the stack for moderately-sized data? Is the stack defined in a limited memory space?

The "patch cord" connections between audio processing objects are implemented using shared, reference-counted, copy-on-write blocks. Usually the last two calls in an update() are transmit() and release(). Typically transmit() will at least increment the reference count, so calling release() doesn't actually free the block, because it's also referenced by whatever other object will receive it. A call to release() only frees the block back to the buffer pool when there are no more references.

Obviously a stack-allocated block wouldn't work for either input from or output to the rest of the library. When implementing an audio object, if you need a temporary buffer, you could use the stack or an allocated block, though that's not commonly done. My guess is you're seeing code which does allocate and release, and assuming it must be a temporary buffer, without realizing that transmit() also keeps a reference to the block and release() doesn't really free it. At least, not until no more references exist, which commonly happens in N-to-1 objects where only one of the received blocks is transmitted, and in the hardware output objects which receive blocks from the rest of the library and send them to the outside world.

The system is designed so you would not usually worry about such details. You get a pointer to a block by allocating or receiving. You must call release() for every block you get from those functions. Before release(), you may transmit the block. The underlying AudioStream class code handles the actual reference counting and management of the blocks.
 