Using 4 channel dac chip with audio library

Status
Not open for further replies.

MacroMachines

I am working on a wavetable generator with the Teensy and was wondering: is there a way to use the audio library without the codec, using a 4-channel DAC chip I have instead? My code works fine at the moment without the audio library, but I think switching to it might let the wavetable playback be prioritized so it can run at audio rate. It might also help with a sort of multithreading alongside the writes to the OLED screen I am using for visual feedback.

So my question is how would I go about using my dac chip along with the teensy audio library?
 
is a way to use the audio library without the codec, instead using a 4 channel dac chip that I have?

Well, there isn't any way to do this easily, because the library doesn't have code for any 4 channel chips. You'd have to write new code specific to your chip.

So my question is how would I go about using my dac chip along with the teensy audio library?

I can only give you a very generic answer to a question so lacking in details, like a part number or even the type of data interface of the chip!

You'll need to start with one of the output objects already in the library, and somehow modify it to send data to your chip.

Here's the background info about how to add objects to the library:

http://www.pjrc.com/teensy/td_libs_AudioNewObjects.html

However, input & output objects are special. The entire library requires at least one input or output object to have "update responsibility". Look at the output objects for those words.

Alternately, you could use one of the other input or output objects to do the update responsibility, even if you don't actually use its input or send it any output data. As long as 1 object takes responsibility for updates, the entire library will run.
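To make that concrete, here is a very rough skeleton of what an output object with update responsibility looks like, following the pattern in the library's existing output objects. The class name, buffer handling, and timer setup are placeholders, not an existing library class; only `AudioStream`, `receiveReadOnly()`, `release()`, `update_setup()`, and `update_all()` are real library API:

```cpp
// Sketch only: a mono output object for a hypothetical external DAC.
// Real output objects (e.g. AudioOutputAnalog) follow this same shape.
class AudioOutputMyDAC : public AudioStream {
public:
  AudioOutputMyDAC() : AudioStream(1, inputQueueArray) {}
  void begin(void) {
    // Claim update responsibility; exactly one object in the whole
    // system should end up with it (update_setup() handles that).
    update_responsibility = update_setup();
    // ...configure your timer/DMA here to call isr() at the sample rate...
  }
  virtual void update(void) {
    audio_block_t *block = receiveReadOnly(0);  // one mono input channel
    if (block) {
      // copy block->data[0..127] into your DAC transmit buffer here
      release(block);
    }
  }
private:
  static void isr(void) {
    // Called by your timer/DMA once per 128-sample block; this call
    // is what makes the entire library run:
    if (update_responsibility) AudioStream::update_all();
  }
  audio_block_t *inputQueueArray[1];
  static bool update_responsibility;
};
```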
 
Well, there isn't any way to do this easily, because the library doesn't have code for any 4 channel chips. You'd have to write new code specific to your chip.

I can only give you a very generic answer to a question so lacking in details, like a part number or even the type of data interface of the chip!

incidentally, i was curious about DAC support, too (DAC8564/5, in my case -- i guess most candidates would be SPI, though.)

i looked at output_dac and the SPI DMA libraries, and i'm slightly out of my depth here, which is why progress has been fairly nil, but i was wondering what/how this might/should look in principle; ie doing it as generically as possible so that the library might support a fairly large number of devices. not that it would cover everything, but would it make sense to pass at least some protocol info to the dac object? or what would make sense?

eg.

AudioOutputAnalogSPI dac1(0x10, 2); // channel 1 (cmd, data (bytes))
AudioOutputAnalogSPI dac2(0x12, 2); // channel 2
AudioOutputAnalogSPI dac3(0x14, 2); // channel 3
AudioOutputAnalogSPI dac4(0x16, 2); // channel 4
 
doing it as generic as possible so that the library might support a fairly large number of devices.

I personally find that approach very hard.

I usually focus on getting just one specific thing working well first. I keep an eye towards generalized design, but I focus first and foremost on getting things working. Usually much later, it's easiest to extend and redesign for a 2nd chip, and often a more generalized approach follows.
 
below is my current dac write function: dac() does the actual I2C communication, and DACout() calls dac() for each of the 4 channels. I was looking at maybe using IntervalTimer objects instead of trying to go full-on audio library... really what I need is just the ability to prioritize my DAC writes above my OLED screen writes. The reason I thought of the audio library was basically that it is capable of pretty darn good timing updates and may inherently schedule the DAC writes above my OLED screen writes.

would going the IntervalTimer route possibly be a better option?

I'll find the dac part number in a moment; gotta pull apart the prototype to check.

Code:
void dac(byte channel, int value) {
  Wire.beginTransmission(B1100000);              // 7-bit DAC address (0x60)
  Wire.write(B01000001 | ((channel % 4) << 1));  // command byte: select channel
  Wire.write(value >> 8);                        // upper data bits
  Wire.write(value & 255);                       // lower 8 data bits
  Wire.endTransmission();
}

void DACout() {
  for (int numdacs = 0; numdacs < totalWaves; numdacs++) {
    dac(numdacs, wave[numdacs][wavePhase]);
  }
}
 
also, could it potentially help for me to rework the dac function to combine the 4 channels into one transmission instead of calling it multiple times with the modulo?
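It probably would: each `Wire.beginTransmission()`/`endTransmission()` pair costs a start condition, address byte, and stop condition on the bus. Assuming the chip is an MCP4728-style quad DAC (the 0x60 address and command-byte format in the snippet above match it), its Multi-Write command lets all four channels go out in a single transaction. A hedged sketch, with the same zeroed config bits as the original code:

```cpp
// Sketch, assuming an MCP4728-style quad 12-bit I2C DAC: write all
// four channels in ONE transmission, saving three start/address/stop
// sequences per update compared to four separate calls.
void DACoutAll(const int value[4]) {
  Wire.beginTransmission(B1100000);        // 7-bit address 0x60
  for (byte ch = 0; ch < 4; ch++) {
    Wire.write(B01000000 | (ch << 1));     // Multi-Write command, channel ch
    Wire.write((value[ch] >> 8) & 0x0F);   // upper 4 data bits, config bits 0
    Wire.write(value[ch] & 0xFF);          // lower 8 data bits
  }
  Wire.endTransmission();                  // single stop for all four channels
}
```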
 
I was looking at maybe using the IntervalTimer objects instead


instead of what? fwiw, if using IntervalTimer, you can of course set the interrupt priority level. in the absence of an SPI DAC object for the audio library, that's what i've been doing. ie do everything DDS in an ISR with fairly high priority, somewhat like so:

Code:
void FASTRUN update_OSC() {
  
  uint16_t DAC_out1 = _next_sample(&osc1);
  uint16_t DAC_out2 = _next_sample(&osc2);
  uint16_t DAC_out3 = _next_sample(&osc3);
  uint16_t DAC_out4 = _next_sample(&osc4);

  set8564_CHA(DAC_out1);
  set8564_CHB(DAC_out2);
  set8564_CHC(DAC_out3);
  set8564_CHD(DAC_out4);
}

as you can see, that's using FASTRUN and i've split the write functions into 4, one for each channel (the actual update/LDAC is in set8564_CHD). i don't know whether that makes any real difference. that works fairly well with SPIFIFO (not so much with SPI.h), but (as far as i can tell) the timing/jitter is not quite as nice as when using the audio library (that is, DMA/internal DAC).
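For completeness, here is roughly how an ISR like `update_OSC()` above can be driven from an IntervalTimer with a raised priority. The numbers are illustrative (lower value = higher priority on Teensy 3.x; peripherals default to 128):

```cpp
// Sketch: pace update_OSC() at ~44.1 kHz with elevated priority so
// slower work (e.g. OLED writes) can't delay the DAC updates much.
IntervalTimer oscTimer;

void setup() {
  oscTimer.priority(32);              // higher than the default 128
  oscTimer.begin(update_OSC, 22.68);  // period in microseconds ≈ 44.1 kHz
}
```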

also take a look here perhaps
 
The reason I thought of the audio library was basically because it is capable of pretty darn good timing updates and may inherently schedule the dac writes above my OLED screen writes.

The only reason the audio library has good timing is because the 5 input & output objects that currently exist implement good timing using the PDB timer, the FTM timer, or the I2S port, for a sample rate of exactly 48 MHz divided by 1088 (about 44117.65 Hz).

Those precise sample rates feed DMA channels, which interrupt every 64 samples, and every other interrupt is used to trigger the library update. When more than 1 input or output is used, only 1 of them has "update responsibility". See the code in any of those 5 objects for details.

IntervalTimer will suffer some jitter. You can raise the timer's priority to solve most of it. Here's more detailed info about ways to deal with IntervalTimer jitter:

https://forum.pjrc.com/threads/27690-IntervalTimer-is-not-precise?p=64142&viewfull=1#post64142
 
Jitter isn't too much of an issue on this project; mainly I just want to get the highest possible rate, ideally up into audio rate. I'm basically trying anything I can to make the DAC writes as fast as possible, and I'm wondering if switching to an SPI DAC would help. Currently my OLED is on hardware SPI and it goes blazing fast, but the I2C DAC is not quite as fast as I want it to be, and I notice that full-screen updates to the OLED take a decent few milliseconds, causing audible sample-and-hold style artifacts. Is there a way I can post a video here to show the project? It's a Eurorack module for drawing breakpoint envelopes.
 
doing it the other way round (SPI DAC, i2c OLED) i'd guess might have been the better choice. SPI at any rate is much faster than i2c.

bracketing anything oled, with SPI, i can easily do 4 channels at 60+ kHz (the project being fairly similar, i think).
 
oh, sorry, non-native speaker here.

i meant "not factoring in". the oleds i use are SPI, too, so 60kHz is unlikely to work in combination with the oled. (i haven't tried; the oled thing i made is for pitch type signals, so update rates don't really matter.)
 
Ok, I received a 12 bit quad SPI dac, and am going to swap it into the prototype shortly.

My only other main optimization is finding a way to pre-calculate a lookup table for curvature. Currently I'm using the fscale function provided on the Arduino site, and it is very heavy and slow. I am wondering what might be a good route to make an adaptive lookup table that allows easy scaling and offset (the length of the curve in words can be anywhere from 1 to 1024, and it can be offset and scaled horizontally as well).

I may post another thread about this, as it isn't as relevant to the current thread title. I will also submit some video clip examples and code snippets that show a bit more what I am doing.
 
So after a year of heavy work and learning this whole process from scratch, I am about to go into production. (my background is as an animator www.axiom-crux.net and I decided to start a synth company because I had many ideas I think will be useful and fun for people)

I will post some more info directly on a new thread as I do my rounds promoting its release. There are some work in progress clips on my instagram:
https://instagram.com/p/9fJPC6Bysg/?taken-by=axiomcrux

I am finally at the level where I think I understand enough to add a few new audio library objects. I plan to begin looking into this more as soon as possible. Thank you guys for your responses, they are indeed quite helpful :)

keep an eye out for my updates if you are interested; I should be posting a lot to help promote the release and raise funds for manufacturing. www.macromachines.net
 