so i assume the idea is to come up with classes analogous to "objects" such as [osc~], [phasor~] and the like, which i trust will work just fine. but what about, say, objects like [tabwrite~], [delwrite~] or [susloop~], or anything else depending on more or less sizeable buffers? with 16KB of RAM, if i see it right, at 16-bit 44.1kHz stereo everything will be limited to buffer sizes well under 0.1 sec, which isn't very much.
Yes, that's correct: the small RAM pretty much limits this to real-time processing without big buffers. A limited reverb might be possible, but any sort of substantial delay just isn't possible without much more memory.
The reality is that a good portion of the 16K isn't really available. About 2-3K is needed for USB buffers; I might make a special "serial monitor only" USB type that uses minimal buffering, to free up memory. Another 1-4K is needed for audio block buffers, and the I2S output object needs a dedicated 512 byte DMA buffer.
well, my question then: is writing/reading to SPI/SD card actually fast enough to compensate? or will all that just be very tricky without some fast external memory, which, considering the pin count of such chips, doesn't seem to be an option? also, since i've seen people using SPI rather than parallel SRAM with arduino (microchip 23LC1024), i'm guessing that without an FSMC the usage and performance of such chips can't be much better than microSD?
So far, I haven't tried writing to the SD card. The performance depends heavily on the card, because not all SD cards are created equal. But I would be surprised if any SD card could sustain good read performance while simultaneously writing.
That little 23LC1024 chip looks very interesting. Of course, it only holds about 1.4 seconds of audio, but unlike flash or SD cards its writes complete with no delay, so it should be quite feasible to build objects like [delwrite~] with it.
It would be really nice to be able to do 96/24, and if not, then 48/24 (if only because the brickwall anti-alias filter has a bit more room between 20k and 24k than it does between 20k and 22.05k, so it can have a better behaved phase response).
48 kHz is looking unlikely, at least with I2S kept synchronous to the processor's clock, because Freescale's clock circuitry just can't generate the MCLK needed for 48 kHz without massive jitter. Perhaps someone will develop an async I2S object?
Hyple's library has the necessary I2S code. The difficult part is "just" how to integrate it with this not-yet-published audio API stuff. Later this year or early next year, after at least a few alpha or beta releases, would be the time to consider that.
On anti-alias filtering, most I2S DACs add interpolated samples, so all the aliasing is at much higher frequencies. One I have my eye on for an "audiophile interface" board is the Wolfson WM8740, which does 8X interpolation.
The API will be designed for 16 bit samples. I'm sure the desire for 24 or more bits is going to become a frequently asked question. Maybe next year (or perhaps sooner, if anyone really wants to take this on) it might be possible to build a 24 bit version and implement at least some of the objects.

But there are 2 pretty big advantages to working with "only" 16 bits. First, RAM is limited, and anything bigger than 16 bits usually gets stored as 32 bits, which burns up the limited RAM twice as fast. Second, the Cortex-M4 has DSP/SIMD instructions designed for pairs of 16 bit integers packed into 32 bit words.

At least initially I'm focusing on the system-level design; in fact, I'm just now starting to look into an efficient way to report CPU usage. Later I intend to work on optimizing some of the objects using those SIMD instructions.
I'm sure we're going to see people regularly asking for 24 bit audio, thinking it would somehow sound better. Maybe it would? If enough people really, really want this, I'll probably work on it eventually. In the meantime, I'm probably going to adapt the I2S code to use 32 bit words (using only the first 16 bits), partly so the capability will at least be there for anyone who wants to take on 24 bits, but mostly because nearly all DACs show 32 bit words in their timing diagrams.
A direct digital synthesis oscillator (using the audio ADC for control voltage, including audio-rate control like FM; and the DAC for audio output) is a fairly obvious application here.
I already have a DDS sine wave object. I'll soon work on a version where each phase accumulator increment is taken from the samples of another audio stream, instead of being a constant.
The only thing I might like to see is a pad directly on the line inputs to act as a DC coupled input.
The ability to use the 16 bit analog input for pots or control voltages sounds pretty nice, but I'm not quite sure how it could work in the context of these Codec chips. They're really designed for only AC coupled signals.
The chip runs from a single 3.3 volt supply. Internally, it generates some DC bias voltage. That DC level probably varies with temperature and maybe other factors. You can be pretty sure the level is approximately 1.6 volts, but of course it's not really well known. For AC coupling, the specific DC bias doesn't matter. The chip just weakly couples that DC level to the input, assuming the source has infinite impedance at DC (a series capacitor), and of course the analog circuitry in the chip's sigma-delta modulator uses the same bias level.
There is a pin for filtering the chip's bias voltage. Normally you're supposed to connect a capacitor between that pin and analog ground, but nothing else. That pin may or may not be the actual DC reference for zero. It might be buffered inside the chip before weakly driving those inputs, where the buffer might add an offset voltage (which isn't an error for AC signals if the signals and the modulator inside the chip see the same voltage). Even if that pin is an accurate reference for DC zero, it's probably not a low impedance. Using it without buffering might wreck the ADC's performance.
So while I think it'd be pretty awesome to be able to use the ADC for non-audio DC coupled signals, I just don't see a realistic way for that to work well, in the context of these Codec chips designed only for audio signals. If anyone has any ideas or could point to some projects that have done this successfully, I'm certainly willing to consider it.