Granular Synthesis with Teensy and Audio Adaptor

Status
Not open for further replies.
i suppose/hope the recent SPIRAM / "memoryboard" add-ons should be useful for this kind of stuff.

i started playing around a bit with effect_delay_ext, trying to turn it into a granulator object; will have to see how that goes. it'll require lots of access per update, obviously, even more when interpolating the samples.

HAHA, like-minded indeed.. so yeah, I just spent an hour looking at the delay object code with the intent of doing just this. I would love to collaborate with you on that granular delay line object, if you are willing to post or send me what you have done thus far? I have made so many Max and Pure Data granular patches from scratch, and I can confidently say I have a full understanding of all granular techniques (and I have a couple of favorites that are low-CPU and sound amazing). I am just learning the ropes of library design and C++ object creation.
 

I would love to cooperate with you; I can upload all I have to GitHub. Do you have any of your patches available online?
 
I'm curious to see how this develops. I don't have much (well, really any) experience with granular synthesis. But the request keeps coming up to support this in the audio library. I might be able to help a bit on the Teensy coding side, but for this to come together well anytime soon, it really depends on you guys for the knowledge and experience and testing of the granular synthesis side....
 

I am trying to think of the best way to structure a flexible granular object for this:
at the core, a grain is a small section of an audio file played through a window
(I believe you have a Hanning array in the FFT already? those work great)
from there you have the option to add randomness to the buffer read position offset of each grain.
parameters for grain length and number of grains; either trigger the grains to play at random times for a very "grainy" granular,
or use a phase accumulator to drive through the windows and evenly spaced grains,
which would be an easy modification/merger of the wavetable oscillator object and the file playback objects. I had started to attempt this with what is already available, using the queue into a larger buffer that a few wavetables play back through, but 256 samples is not quite enough. This approach is smoother and allows extremely clean time scrubbing when there are 4-6 grains evenly spaced along the phase ramp.
if you have Pure Data, or are willing to download it, I can upload one of my patches with comments showing my favorite grain styles.
for now, here is a link to a nice Pure Data tutorial on how to build a basic phase-ramp-driven grain method, to get some ideas:
http://pd-tutorial.com/english/ch03s07.html

if you have any questions, or want me to post some additional media, I will gladly do so. this would be a RAD Teensy object that I am currently trying my best to grow the skill to create, but I am sure someone with more experience would be able to do it cleaner/faster.
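the phase-accumulator method above can be sketched offline like this (Python, a hedged illustration only: the function, buffer, and parameter names are made up for the example, not library code). One master phase ramp drives all grains; each grain taps the ramp at an equal phase offset, so the windowed grains overlap evenly:

```python
import math

def hann(n, length):
    # Hann window sample, zero at both ends of the grain
    return 0.5 - 0.5 * math.cos(2.0 * math.pi * n / length)

def render_grains(buffer, read_pos, grain_len, num_grains, out_len, rate=1.0):
    """Sum evenly spaced windowed grains driven by one phase accumulator."""
    out = [0.0] * out_len
    phase = 0.0
    inc = rate / grain_len            # master ramp increment per output sample
    for i in range(out_len):
        for g in range(num_grains):
            # each grain reads the ramp at an equal fraction of a cycle
            p = (phase + g / num_grains) % 1.0
            idx = read_pos + p * grain_len
            j = int(idx) % len(buffer)
            frac = idx - int(idx)
            nxt = (j + 1) % len(buffer)
            s = buffer[j] + frac * (buffer[nxt] - buffer[j])  # linear interp
            out[i] += s * hann(p * grain_len, grain_len)
        phase = (phase + inc) % 1.0
    return out
```

scrubbing read_pos slowly while the ramp runs gives the clean time-stretching described above; with 4-6 grains the window overlaps sum smoothly.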
 
Here is the Max (gen~) "chopper" code I was trying to port:

Code:
/*
	Waveset chopper / repeater

	This program divides the input into segments, and plays these segments back.
	It could be seen as a time-domain, granular form of analysis/resynthesis.
	
	The program contains a recording section, 
		which stores grains into a Data object (segment_data)
	and a playback section,
		which selects and plays these grains one-by-one
	
	The grains are not enveloped; 
		instead the input is segmented at points where the signal is rising and crosses zero
		
	A positive zero-crossing means that: 
		a: previous sample is less than zero
		b: next sample is greater than zero
	
	For pure sounds a segment corresponds to one or more wavecycles, 
		but for complex sounds it can be somewhat stochastic.
	
	RECORDING:
		
	Since waveforms rarely cross zero at an exact sample location, 
		the actual crossing is somewhere between a and b.
		The program estimates this sub-sample crossing phase (and stores it in offset_data)
		It also stores the sub-sample accurate segment length (in length_data)
		
	The segment_data recorded includes the sample just before the first crossing,
		and the sample just after the last, in order to contain both actual crossings.
		I.e. each captured segment looks like [a1, b1, ... b2 a2 ... a3, b3]
	
	When a segment finishes recording, a new segment is chosen to write into (write_segment)
	
	PLAYBACK:
	
	The playback section is continuously playing a segment (play_segment)
		
	Playback includes additional calculations,
 		to ensure the sub-sample phase offset is used and retained between segments

	When the segment playback is done (possibly after several repeats),
		a new segment is selected according to the current strategy (play_mode)
		
		
	Graham Wakefield 2012
*/


// the segment storage (each segment on its own channel):
Data segment_data(10004, 64);
// the length of each segment (in samples):
Data length_data(64, 1);
// each segment is also offset slightly (sub-sample phase delay):
Data offset_data(64, 1);
// each segment also stores its average energy (root-mean square):
Data rms_data(64, 1);

// set to zero to disable new segment capture:
Param capture(1, min=0, max=1);
// how many zero crossings per segment:
Param crossings(1, min=1);
// the minimum & maximum length of a segment:
Param max_length(10000, min=16, max=10000);
Param min_length(100, min=16, max=10000);
// how many times a segment is played back:
Param repeats(1, min=1);
// hold the current playback segment:
Param hold(0, min=0, max=1);
// choose the strategy to play back grains:
Param playmode(0, min=0, max=4);
// choose how to select playback rates/pitches:
Param pitchedmode(0, min=0, max=4);
// playback frequency for pitchedmode enabled:
Param freq(220, min=0);
// playback rate for pitchedmode not enabled:
Param rate(1, min=0);

// the segment currently being written to:
History write_segment(1);
// the number of samples since the last capture:
History write_index(0);
// the number of rising zero-crossings since the last capture:
History crossing_count(0);

// the segment currently being played:
History play_segment(0);
// the sample index of playback:
History play_index(0);
// the length of the playing segment:
History play_len(0);
// the offset of the playing segment:
History play_offset(0);
// the loudness of the playing segment:
History play_rms(0.1);
// used to create smooth overlaps
History prev_input;
// used to accumulate the segment energy total:
History energy_sum;

// the total length of all segments
History total_length;

// the number of segments:
num_segments = channels(segment_data);



// RECORDING SECTION:

// DC blocking filter used to remove bias in the input:
unbiased_input = dcblock(in1); 
// accumulate energy:
energy_sum = energy_sum + unbiased_input*unbiased_input;

// update write index:
write_index = write_index + 1;
// always write input into current segment:
poke(segment_data, unbiased_input, write_index, write_segment);

// detect rising zero-crossing: 
is_crossing = change(unbiased_input > 0) > 0;
// capture behavior is triggered on the rising zero-crossing:
if (is_crossing) {
	
	// if the segment is too long, 
	if (write_index > max_length) {
		// reset the counters		
		crossing_count = 0;
		write_index = 0;	
		
	} else {
		// count rising zero-crossings in this segment:
		crossing_count = crossing_count + 1;
			
		// decide whether the segment is complete:
		// only when capture is enabled,
		// enough zero-crossings have occurred,
		// and enough samples have elapsed
		// (the over-length case was already handled above)
		is_complete = (capture 
			&& crossing_count >= crossings
			&& write_index >= min_length);
		if (is_complete) {	
			
			// at what theoretical sample index did it cross?
			// estimate as linear intersection:
			offset = prev_input / (prev_input - unbiased_input);
			
			// fetch the offset stored when this segment started:
			prev_offset = peek(offset_data, write_segment, 0);
			
			// store segment length:
			// adjusted for the fractional component
			// minus one for the extra wrapping sample (a,b,...b,a,...,a,b)
			len = write_index + offset - prev_offset - 1;
			// update total length:
			prev_length = peek(length_data, write_segment, 0);
			total_length = total_length - prev_length + len;
			// store new length:
			poke(length_data, len, write_segment, 0);
			
			// store segment energy:
			// (root mean square, over number of samples measured)
			rms = sqrt(energy_sum / floor(len));
			poke(rms_data, rms, write_segment, 0);
			
			// reset counters:
			crossing_count = 0;
			energy_sum = 0;
			
			// switch to a new segment:
			write_segment = wrap(write_segment + 1, 0, num_segments);
			// don't write into what is currently playing:
			if (write_segment == play_segment) {
				write_segment = wrap(write_segment + 1, 0, num_segments);
			}
			
			// store the new offset:
			poke(offset_data, offset, write_segment, 0);
			
			// write the previous & current (a,b) into the new segment:
			poke(segment_data, prev_input, 0, write_segment);
			poke(segment_data, unbiased_input, 1, write_segment);
			write_index = 1;
		} 
	}
}

// remember previous input:
prev_input = unbiased_input;



// PLAYBACK SECTION:

r = rate;
// update playback index:
if (pitchedmode < 1) {
	// no change
	
} else if (pitchedmode < 2) {	
	// ascending:
	d = play_index / play_len;
	r = rate * max(1, d);
	
} else if (pitchedmode < 3) {
	// descending:
	d = ceil(play_index / play_len);
	r = rate / max(1, d*d);
	
} else {
	// try to play back at a chosen frequency
	// (compensating for estimated original sample frequency)
	r = freq * play_len / (samplerate * crossings);
}
// update playback index:
play_index = play_index + r;
// actual play index needs to stay within len:
// (can be fun to use wrap, fold or clip here)
actual_play_index = wrap(play_index, 0, play_len);

// play the current segment waveform:
// (offset by the waveform zero-crossing position)
out1 = peek(segment_data, play_offset + actual_play_index, play_segment, interp="linear");

// switch to a new segment?
if (play_index >= play_len * floor(repeats)) {
	
	// reset to the current actual play position
	play_index = actual_play_index;
	
	if (!hold) {
		// move to a new segment
		// some alternatives... 
		if (playmode < 1) {
			
			// play in forward sequence
			play_segment = wrap(play_segment + 1, 0, num_segments);
			// caveat: don't play what is currently being written:
			if (write_segment == play_segment) {
				play_segment = wrap(write_segment + 1, 0, num_segments);
			}
			
		} else if (playmode < 2) {
			
			// play in reverse sequence
			play_segment = wrap(play_segment - 1, 0, num_segments);
			// caveat: don't play what is currently being written:
			if (write_segment == play_segment) {
				play_segment = wrap(write_segment - 1, 0, num_segments);
			}
			
		} else if (playmode < 3) {
			
			// choose direction by random walk:
			direction = sign(noise());
			play_segment = wrap(play_segment + direction, 0, num_segments);
			// caveat: don't play what is currently being written:
			if (write_segment == play_segment) {
				play_segment = wrap(write_segment + direction, 0, num_segments);
			}
			
		} else if (playmode < 4) {
			
			// choose randomly:
			direction = 1 + ceil(num_segments * (noise() + 1)/2);
			play_segment = wrap(play_segment + direction, 0, num_segments);
			// caveat: don't play what is currently being written:
			if (write_segment == play_segment) {
				play_segment = wrap(write_segment - 1, 0, num_segments);
			}
			
		} else {
			
			// play most recently recorded:
			play_segment = wrap(write_segment - 1, 0, num_segments);
			
		} 
		
		// get the new playback length
		play_len = peek(length_data, play_segment, 0);
		// get the new playback offset
		play_offset = peek(offset_data, play_segment, 0);
		// and the new playback loudness
		play_rms = peek(rms_data, play_segment, 0);
	}	
} 

// show what's actually happening:
out2 = write_segment;
out3 = play_segment;
out4 = play_len;
out5 = play_index / play_len;
out6 = play_rms;
out7 = total_length;
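The sub-sample crossing estimate used in the recording section (offset = prev_input / (prev_input - unbiased_input)) is just linear interpolation between the two samples straddling zero; a quick standalone check of that formula (Python, outside the gen~ code):

```python
def crossing_offset(prev, cur):
    """Fraction of a sample after `prev` at which the signal crosses zero,
    assuming a rising crossing (prev < 0 <= cur) and linear interpolation."""
    return prev / (prev - cur)

# zero crossing exactly halfway between the two samples:
print(crossing_offset(-0.5, 0.5))  # prints 0.5
```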
 
There's an important distinction between analyzing live input into grains versus simply playing back already-prepared grains.

Being able to play them first would be the sane path....
 
what i propose is a simpler version of the above with:

recording into a RAM buffer, like a delay line, but one that can be frozen to keep it from updating
creating a map of the zero-crossing points of valid "wavelets"; the signal must cross zero twice, so each wavelet is a full wave cycle
playback of a selected wavelet, looping like a wavetable oscillator, with a pitch ratio using linear interpolation
selection of the current wavelet waits until the active wavelet finishes playing before updating, so there is never a click, because everything happens at a zero crossing
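a minimal sketch of that zero-crossing map (Python, offline; the function name and the start/end pair format are just illustrative assumptions). Each wavelet runs from one rising zero crossing to the next, so it always contains a full cycle and loops without a click:

```python
def wavelet_map(buf):
    """Return (start, end) index pairs, one per full wave cycle in buf."""
    # rising zero crossings: previous sample negative, current non-negative
    crossings = [i for i in range(1, len(buf))
                 if buf[i - 1] < 0 <= buf[i]]
    # consecutive crossings delimit the wavelets
    return list(zip(crossings, crossings[1:]))
```

a looping player would then read buf[start:end] like a single-cycle wavetable, with linear interpolation for the pitch ratio, and only swap to a newly selected wavelet when the current loop wraps.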
 
I was under the impression that the limitations of playback from SPI flash, or especially SD card, make it unrealistic to play back grains from them reliably at the speed needed for smooth audio.
 
SD cards are too slow.

SPI flash or SPI RAM is limited to a bandwidth of a couple megabytes/sec. That's plenty fast enough for several overlapping samples/grains, but not dozens of them.
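That "several, but not dozens" figure follows from simple arithmetic; a back-of-the-envelope check (assuming 16-bit mono at 44.1 kHz, and taking "a couple megabytes/sec" as roughly 2 MB/s):

```python
SAMPLE_RATE = 44100        # Hz
BYTES_PER_SAMPLE = 2       # 16-bit mono
SPI_BANDWIDTH = 2_000_000  # bytes/sec, rough figure from the post above

stream_rate = SAMPLE_RATE * BYTES_PER_SAMPLE  # 88200 bytes/sec per grain
max_streams = SPI_BANDWIDTH // stream_rate    # theoretical ceiling
print(max_streams)  # prints 22, before command/addressing overhead
```

real-world numbers will be lower once SPI command bytes, addressing, and the audio library's block scheduling are accounted for, which is why "several" is the safer claim.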
 
if you prefer, here is an outline for simple granular playback from a file:

grain.playGrain(offset, length, pitch)
skips to an offset in the file to be played back
plays until length; the amplitude is multiplied by a window
pitch ratio: 1 is normal, 2 is twice the pitch, 0.5 is half, etc.

grain.startStream(rate, rateVariation)
begins playing grains in a continuous stream; rate defines the ms interval at which each grain is triggered
rateVariation sets the amount of randomness in each trigger interval

grain.Variation(offsetVariation, lengthVariation, pitchVariation)
these settings add random variation to the grain stream, set as ratio values from 0.0 to 1.0
0 is no variation and 1.0 is maximum variation

grain.Settings(polyphony, window)
sets the polyphony: how many grains can play at once in the stream
sets the window choice (Hanning, Blackman, Hamming, etc.)
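to make the playGrain part concrete, here is a hedged offline sketch of what one grain might compute (Python; the linear interpolation and the exact window formulas are my own assumptions, not a finished spec):

```python
import math

# common window functions, evaluated over x in 0..1
WINDOWS = {
    "hanning":  lambda x: 0.5 - 0.5 * math.cos(2 * math.pi * x),
    "hamming":  lambda x: 0.54 - 0.46 * math.cos(2 * math.pi * x),
    "blackman": lambda x: (0.42 - 0.5 * math.cos(2 * math.pi * x)
                           + 0.08 * math.cos(4 * math.pi * x)),
}

def play_grain(source, offset, length, pitch, window="hanning"):
    """Render one windowed grain of `length` output samples, reading from
    `source` starting at `offset`, resampled by `pitch`
    (1 = normal, 2 = twice the pitch, 0.5 = half)."""
    win = WINDOWS[window]
    out = []
    for n in range(length):
        pos = offset + n * pitch
        i = int(pos)
        if i + 1 >= len(source):
            break                      # ran off the end of the file
        frac = pos - i
        s = source[i] + frac * (source[i + 1] - source[i])  # linear interp
        out.append(s * win(n / (length - 1)))
    return out
```

startStream and Variation would then just schedule repeated playGrain calls with randomized offset, length, and pitch, mixing up to `polyphony` overlapping grains.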
 
For my prototype, 2 files playing simultaneously are just fine to start with. I made a little example where I can loop from the SD card and set the length and position of the loop with two encoders. It also contains the PlayLoopSdRaw I wrote about earlier. The loop length and position are both set in blocks; this was the easiest way for me to implement it. The same is true for why it only reads 256 bytes from the SD card per update.

I did not check if it still works when two files are played simultaneously.

I am checking out your looper; it sounds pretty good so far. I am going to see if I can make some kind of progress with it, so keep me updated if you do as well! I tried playing the 4 objects with different length settings to see what it would be like as a sort of granular, and it's not too bad actually; it just needs a window for pop removal. Maybe I can find a way to just do a quick fade in and fade out as a window.

I am thinking that maybe the main play object should just be updated in the library with a "loop enable" option of some sort, as well as start and end position settings. It only makes the object more flexible; there is no need for a separate loop object just to get these useful additions. I will see if I can do this sort of merger of the files and post it for consideration as a library update.
 
I would love to cooperate with you; I can upload all I have to GitHub. Do you have any of your patches available online?

I don't actually have my patches up online, but I should change that soon. I have too many really good, useful ones collecting dust on my drive. I always had the mentality of "I'll post it when it's done," but then I realized it's never done. It only took 15 years to realize that. GitHub would be a good place, so I will do a GitHub post soon.
 
is there any way to get data into PROGMEM other than by using a utility to create a C file and copying it into the code? like, is it at all possible to use it in real time to copy audio data into it?
 
grain.playGrain(offset, length, pitch)
....
grain.startStream(rate, rateVariation)
...
grain.Variation(offsetVariation, lengthVariation, pitchVariation)
...
grain.Settings(polyphony, window);

Any chance you could build code in C or Python (which runs on a PC) that implements these using data read from binary files and writes the 16 bit binary data to another file? It doesn't need to be efficient. The idea is to solidify the details of the algorithm, and to generate known-good test data which can be used to verify an optimized Teensy version.
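for the PC-side harness, raw 16-bit little-endian sample files need nothing beyond the Python standard library; a minimal read/write skeleton (the -1..1 float scaling convention is an assumption):

```python
import struct

def read_s16le(path):
    """Read a raw 16-bit little-endian mono file into floats in -1..1."""
    with open(path, "rb") as f:
        data = f.read()
    n = len(data) // 2
    return [s / 32768.0 for s in struct.unpack("<%dh" % n, data[:n * 2])]

def write_s16le(path, samples):
    """Clip floats to -1..1 and write them as raw 16-bit little-endian."""
    ints = [max(-32768, min(32767, int(round(s * 32767)))) for s in samples]
    with open(path, "wb") as f:
        f.write(struct.pack("<%dh" % len(ints), *ints))
```

a reference granulator would then be read_s16le, grain processing, write_s16le; the optimized Teensy version can be checked sample-by-sample against the output file.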
 

granular algorithm design is more of an art than a science. I can certainly make you a Pure Data patch, a Max patch, or even simple gen~ code that does this, and I can even use the gen~ C++ compiler to give you C++ that would work somewhere. In fact, if you want good C++ code and a library that you should really consider porting, check out the STK (Synthesis ToolKit) library. It is open source and has everything from nice physical models to band-limited waveforms to granular, and it is all designed to be portable; it uses a generic "tick"-based timing that is flexible, where you just call a function to get the next sample.
https://ccrma.stanford.edu/software/stk/classstk_1_1Granulate.html
 
so this is interesting: if I set the clock to 96 MHz, I get CPU timing like this from the 4 loopers playing (each line is CPU all=current,max then Memory: current,max):
all=2.09,73.67 Memory: 4,7
all=2.09,73.67 Memory: 4,7
all=2.09,73.67 Memory: 4,7
all=2.09,73.67 Memory: 4,7
all=15.39,73.67 Memory: 4,7
all=15.39,73.67 Memory: 4,7
all=2.09,73.67 Memory: 4,7


72 MHz:
all=64.00,102.34 Memory: 4,7
all=64.00,102.34 Memory: 4,7
all=64.00,102.34 Memory: 4,7
all=64.00,102.34 Memory: 4,7
all=64.00,102.34 Memory: 4,7
all=64.00,102.34 Memory: 4,7
all=64.00,102.34 Memory: 4,7
all=35.07,102.34 Memory: 4,7
all=35.07,102.34 Memory: 4,7
all=35.07,102.34 Memory: 4,7
all=36.04,102.34 Memory: 4,7



looks like more than double the CPU usage at some points; it also seems to top out for a second after starting to play, and then settle. I am playing back from SD, and I am astonished that 4 looping grains reading directly can perform this well. I wonder what it would be like if it were SPI flash or RAM??!
 
The idea is to solidify the details of the algorithm, and to generate known-good test data which can be used to verify an optimized Teensy version.

also, if you are trying to compare the granular test data, it would be very difficult, since a big part of granular is the random variation, and I am not sure how you would compare two things that are random for accuracy.
 
can you fry an SD card by writing and reading too fast? I tried having it re-record the audio file being granulated, and it didn't like something about the way I did it, and now the card doesn't seem to work. :/
 
Disk Utility repaired it; an interesting glitch worth noting.
here is the repair info:
Code:
Verifying volume “TEENSYAUDIO”
Verifying file system.
** /dev/rdisk4s1
** Phase 1 - Preparing FAT
FAT[0] is incorrect (is 0x8501B4; should be 0xFFFFFF8)
Correct? no
FAT[1] is incorrect
Correct? no
** Phase 2 - Checking Directories
/: Cluster chain starting at 2 continues with cluster out of range (260702003)
Truncate? no
/POOL.RAW: Cluster chain starting at 5 continues with cluster out of range (70910920)
Truncate? no
size of /POOL.RAW is 256000, should at most be 4096
Truncate? no
/ has entries after end of directory
Truncate? no
Extend? no
/ has entries after end of directory
Truncate? no
Extend? no
/ has entries after end of directory
Truncate? no
Extend? no
/ has entries after end of directory
Truncate? no
Extend? no
/ has entries after end of directory
Truncate? no
Extend? no
/ has entries after end of directory
Truncate? no
Extend? no
/ has entries after end of directory
Truncate? no
Extend? no
/ has entries after end of directory
Truncate? no
Extend? no
** Phase 3 - Checking for Orphan Clusters
Found orphan cluster(s)
Fix? no
Found 341 orphaned clusters
Free space in FSInfo block (478480) not correct (478640)
Fix? no
5 files, 1914560 KiB free (478640 clusters)
File system check exit code is 8.
Error: This disk needs to be repaired. Click Repair Disk.
Verify and Repair volume “TEENSYAUDIO”
Repairing file system.
** /dev/rdisk4s1
** Phase 1 - Preparing FAT
** Phase 2 - Checking Directories
** Phase 3 - Checking for Orphan Clusters
82 files, 1915216 KiB free (478804 clusters)
File system check exit code is 0.
Updating boot support partitions for the volume as required.
 
I wonder what it would be like if it were SPI flash or RAM??!

Much better.

The SPI peripheral runs at 36 MHz when the Teensy runs at 72 MHz, but at 48 MHz when the Teensy runs at 96 MHz. Since SPI bandwidth is the limiting factor with those chips, you'll see the performance change. Likewise if you edit boards.txt to enable the 120 MHz overclock!

can you fry an SD card by writing and reading too fast?

Until several months ago, I would have thought not. In theory, it should be infinitely readable, and wear leveling should spread the relatively slow write speeds you can achieve over SPI across a very large space.

But while testing the optimized SD code, I had a test sketch which played 4 short sound clips. I had it running in a loop, with all 4 overlapping, starting about 80 ms after each other. I left it running for a day, as a sort of stress test. One of the cheap Chinese cards (labeled SanDisk, but obviously a counterfeit) stopped working after many hours. That was with my optimized version, which can't ever write anything to the card.
 
HAHA, like-minded indeed.. so yeah, I just spent an hour looking at the delay object code with the intent of doing just this. I would love to collaborate with you on that granular delay line object, if you are willing to post or send me what you have done thus far? I have made so many Max and Pure Data granular patches from scratch, and I can confidently say I have a full understanding of all granular techniques (and I have a couple of favorites that are low-CPU and sound amazing). I am just learning the ropes of library design and C++ object creation.

i wasn't very persistent with the external delay code, i'm afraid. (or well, i got distracted with some other piece of hardware, with (lots) more RAM). that said, i still have the teensy codec/SRAM/SPI flash hardware, so wouldn't mind helping doing something with it.

granular player with the flash, or granular delay line with SRAM, both seems most feasible/worth exploring to me; i suspect things won't sound all that granular, but some interesting effects might result ..
 


I am using recter's loop playback to play 4 different looping "grains" from randomized positions and loop sizes in the SD card raw file that is recorded, and it sounds VERY granular. I was shocked, actually. I am reworking things a bit and making my own grain playback engine, which I will be showing at NAMM in a couple of days :D

may I ask what your other piece of hardware with more RAM is? I want to look into that for some future projects, as RAM is my main limitation right now. Even so, I am making things I never thought possible with the Teensy and an SD card. I am perpetually impressed with this device and the audio library.
 

sure, i suppose i was thinking granular as in "thousands of very short sonic grains".. anyways, this should be easy enough to port to the SPI flash stuff, no? that should give you 10+ grains. the only issue i can foresee with that sort of set-up is moving the files onto it in the first place (in terms of user experience), it taking forever. then again, even the 16MB ones will fit 5 minutes of mono, which is plenty to play with.

may I ask what your other piece of hardware with more RAM is? I want to look into that for some future projects, as RAM is my main limitation right now. Even so I am making things I never thought possible with the teensy and an SD card. perpetually quite impressed with this device and the audio library.

sure. nothing mysterious. i've put together an owl-like thing, STM32F4 with 1MB extra SRAM; still figuring that one out. all this RAM stuff seems to be 54-pin TSSOP (at best), so not suitable for teensies, unfortunately. the other, much easier thing to do is running, say, pd on something like this. i've recently swapped the pcm5102a for a wm8731; works pretty ok. current draw isn't too bad either (with zero, or A+), though there's fairly palpable limits, too.
 

I've read Curtis Roads' "Microsound"; I highly recommend his "Computer Music Tutorial" book. It is a huge (literally, physically) resource, with sections detailing every kind of synthesis and effect process and how to go about coding them. The "thousands of very short sonic grains" is actually what is taking place in my test using the 4 looper players playing very short loops with randomized start and length; it equates to 4 times 20-100 ms or so bits of audio per second, so potentially up to a hundred or so. Rarely do I ever use granular that ends up making thousands of grains per second; that is just a noise wash or chaos. It is more useful to do something like the SOGS (Smooth Overlap Granular Synthesis) method from IRCAM's Max/MSP library. That is the most amazing granular algorithm I have found in my many years of searching and testing. It is simple, clean, and allows smooth scrubbing of the position, resulting in a quality of time-independent playback that keeps the character of the original sound intact. That is quite rare to find. I've dissected and remade SOGS~ to a decent level in Pure Data and Max/MSP.

i've put together an owl-like thing, STM32F4 with 1MB extra SRAM; still figuring that one out. all this RAM stuff seems to be 54-pin TSSOP (at best), so not suitable for teensies, unfortunately. the other, much easier thing to do is running, say, pd on something like this. i've recently swapped the pcm5102a for a wm8731; works pretty ok. current draw isn't too bad either (with zero, or A+), though there's fairly palpable limits, too.

RE: the OWL: I would love to learn more about this and learn from/with you on that. I have an STM32F407 Discovery board that is just sitting around, as I had taken to the Teensy for my first digital audio projects. Would that work? I looked at the OWL and hadn't realized it was open source; pretty cool! I just don't know where to start with moving on from Arduino, which I code mostly in Sublime Text. What IDE should I get, etc.?

RE: Pi-based audio: I tried making something like that a year and a half ago, I think around when you were doing yours. I wasn't happy with the quality/power and couldn't find any good way to get quality audio I/O short of using a USB sound card. Were you able to connect a DAC/codec to the Pi for audio I/O? I am considering getting back into the embedded Linux world and evaluating the Pi Zero and the 4D displays with a built-in SoC. I also thought maybe using libpd to run the patches, rather than Pure Data, might somehow work better. Something about it, though, doesn't get me as excited as I would hope; I want something better. My dream is to create a new open-source Nord Modular G2-style platform. There are a couple of cool things coming out in that realm, but nothing with the depth and playful, quick, creative patching I want.
 