Audio Library DSP object development: Faust / Max/MSP Gen~ / Pure Data libPD / STK


MacroMachines

I am putting a fair amount of R&D time into finding a workflow for creating DSP objects to use with the audio library. Right now FAUST seems like a decent option:
http://faust.grame.fr/about/

They have compilation support for many platforms, and I don't think it would be much to ask them to add a Teensy audio option. The objective would be to allow rapid development of new objects to add into the Teensy audio library, and to let individuals who wish to do so explore algorithm design on a slightly deeper level than the audio library allows.

Alternately, libPD could be interesting to port, if that is possible, to allow slightly more complex visual audio network programming. I am checking out "heavy", which the OWL pedal/module project linked for converting Pd; this could be useful:
https://enzienaudio.com/

Max 7 has the Gen~ object, which can generate C++ code; I am also looking into what it would take to adapt this into the Teensy audio framework.

Also, the STK library seems like it could be a somewhat easy port, and it has loads of interesting "stuff":
https://ccrma.stanford.edu/software/stk/
 
As a software developer, my experience in recent years has been that up-and-coming developers are no longer taught to code manually, or how to create well-structured, clean software on their own. At least since "the glorious three" overran the developer world with UML, graphical modelling, automatic code generation and so on. This could have been a great benefit if a strong standard had been defined and continuously maintained by the community and its needs. But because of the conflicting interests of big companies and high license fees, only the big development shops embraced the new methods; in the mainstream, new solutions popped up on every corner, mostly open source, mostly community driven, but none of them became a new backbone of software development. And the paradigms changed almost weekly; one came, another dried out. How many flavors of "agile development" are there in the wild? So the end result is more confusion, more diversity and chaos than before.

And if you look at the descriptions of these frameworks, they promise better, easier and simpler development, but in most cases these are generic promises without any usable information: marketing speech at its best. When I try to use a framework according to those promises, in most cases it pulls in a different direction than my project is going; the "solution" doesn't fit the problem at all. So I have two possibilities: if I am forced to use the framework, I have to adapt all my work to it, which is no great relief in terms of software development. Or, if the proposed solution will only cover a small percentage of my system, the best option is to move on, especially when there are systems already in productive use. It is also my experience that a lot of new frameworks and would-be "standards" have started with a loud "Tadaaaah!", and after a while they dry out and die silently... who can even name and count the frameworks that promised to reach the developers' "El Dorado"? Instead we continue to walk in our old boots, perhaps with new shoelaces, but that's it.

Excuse my lamentation, but whenever I read "oh, we should integrate xyz into this framework" or "it's very easy to use that framework", my alarm bells start ringing and I need to clarify my position, even if I'm only a contributor to the project.

So what interests me most: can you explain, with concrete examples, how the Teensy libraries will fit into these frameworks and where the relief and benefit for developers will be? And please, no generic speech; be specific, that would help a lot.
 

i wouldn't be quite so negative. it's not like any of these things is framework "xyz", but fairly venerable audio software/APIs. i don't think the question is how do the teensy libraries fit into these frameworks (teensy audio "objects" being somewhat modelled on pd, after all), but the other way round; ie, how would the teensy audio API benefit? A: from many man-years of work on dsp/synthesis.

here's a concrete example of how that might look in practice, making use of the above-mentioned Pd compiler, taking an _entire_ pd patch (not object) so it'll run on STM32F4 : http://hoxtonowl.com/wiki/Use_the_PD_online_Compiler (it works fairly well.)


that said, it's a myth that PD or max or anything makes dsp any easier. and AFAICS, all this won't generate fixed-point code, let alone optimized for MK20; so it won't be of any use until a teensy 3++ with hardware FPU comes around.
 
One thing I do see: if, alongside the existing tools, we could use a graphical library that lets us configure the projects we've defined via the Audio GUI (I'm cutting a corner here, the full name is too long) through a runtime GUI, resembling the miniDSP plugins. E.g. if we want to change some biquad coefficients, that is just a function call; but if we could first see which coefficients are active and which filter curve they define (and what the expected curve will be after the change), then ok, that could be a benefit.
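
To make the "just a function call" part concrete, a minimal sketch (assuming the stock AudioFilterBiquad object from the audio library; the name biquad1 and the values are only examples):

Code:
#include <Audio.h>

AudioFilterBiquad biquad1;   // as exported by the Audio System Design Tool (wiring omitted)

void retuneFilter(float cutoffHz, float q) {
  // Changing the filter really is one call; what a runtime GUI would add is a
  // view of the resulting coefficients and the expected frequency response.
  biquad1.setLowpass(0, cutoffHz, q);   // stage 0, cutoff in Hz, resonance
}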
 
As a software developer, my experience in recent years has been that up-and-coming developers are no longer taught to code manually, or how to create well-structured, clean software on their own.

This is not to somehow take the cheap route; it is to take the optimal route. These platforms allow for realtime programming and refinement. Instead of having to spend minutes editing a ".h" and ".c" file, modifying the Arduino sketch, uploading, and so on just to test a change in an algorithm, these platforms allow instant recompiling and intuitive visual editing. I am attempting to find a way to extend the functionality of the Teensy audio library with optimized DSP development techniques.

Tell me you wouldn't love it if we instantly had 25 flavors of analog-style filters, tube emulation, resonant physical models, etc., optimized, refined, clean, and sounding good on this platform in a day's work? That is possible with Faust or Max/MSP Gen~. In doing so, we open the door to a community centered on sound algorithm design, with real wisdom behind it. I'm trying to build a bridge from the pro DSP synth world to the Teensy/Arduino world, because damn, I have stuff I want to make!

In light of this, and with reference to what mxxx said, I think this may be another means to push growth of the hardware as well. I would love a Teensy HD, with an FPU and whatever other things might be useful. In fact, it would be pretty awesome to have a TeensyDSP board with plenty of memory, a high-res codec, and an amp all on one tiny little board, made for the synth and pedal DIY community.
 
One thing I do see: if, alongside the existing tools, we could use a graphical library that lets us configure the projects we've defined via the Audio GUI (I'm cutting a corner here, the full name is too long) through a runtime GUI, resembling the miniDSP plugins. E.g. if we want to change some biquad coefficients, that is just a function call; but if we could first see which coefficients are active and which filter curve they define (and what the expected curve will be after the change), then ok, that could be a benefit.

i think the suggestion was simply: "couldn't there be an easy, automagic way to re-use / port some of the things (code) out there to the teensy audio API"; which i don't think there is, presently (among other things, for the reasons mentioned above).

developing a more complex graphical UI / patching environment is a different issue and seems like an end in itself; and it would risk duplicating the stuff already out there *: for example http://www.axoloti.com/ (running on STM32f4), http://patchblocks.com/ (running on not sure what, some M3 i think) ; http://www.rebeltech.org/products/owl-modular/ (STM32f4)

the potential appeal of something like "heavy" (the enzienaudio stuff: https://enzienaudio.com/docs/c.html) is that it takes existing, mature, well-established software (pd in that case); it's simply concerned with making the code more portable (but not that portable).


*
that said, i'd say it would be pretty cool if, say, pjrc teamed up with a company such as macromachines to get out an audio-adapter-like teensy 3++ synth module ...
 
I've never seen those first 2, but I can tell you I've had the Stanford Synthesis Toolkit (STK) on my list of resources to investigate for the audio library. It's unlikely the code will be directly usable, but there's almost certainly a lot of very good knowledge of algorithms buried in there.
 
Actually, I think STK might be portable enough to work. STK functions just deal with individual samples, so it could potentially be pretty simple.
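
For instance, a rough, untested sketch of how an STK-style per-sample generator could sit inside an audio library object; it assumes the STK sources (SineWave here) build for the target, that floating point is acceptable, and the wrapper class name is made up:

Code:
#include <Audio.h>
#include "SineWave.h"   // from the STK sources, assuming they build for the target

// Hypothetical wrapper: one STK generator per audio library object, float math inside.
class AudioSynthStkSine : public AudioStream {
public:
  AudioSynthStkSine() : AudioStream(0, NULL) {
    stk::Stk::setSampleRate(AUDIO_SAMPLE_RATE_EXACT);
    osc.setFrequency(440.0);
  }
  virtual void update(void) {
    audio_block_t *block = allocate();
    if (!block) return;
    for (int i = 0; i < AUDIO_BLOCK_SAMPLES; i++) {
      // STK produces one sample per tick(); scale its float output to int16.
      block->data[i] = (int16_t)(osc.tick() * 32767.0f);
    }
    transmit(block);
    release(block);
  }
private:
  stk::SineWave osc;
};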

I am dedicating more of my time to learning how to make and edit libraries and work with deeper C++. I am currently studying Pure Data's libPD for developing some sound apps for iOS, and I plan to see if it's realistic to port it to Teensy. This would mean someone could create a Pure Data patch and load it to run on the Teensy; however, I do not think it would be as optimized for the platform as your current audio library. I will also be looking deeper at the link I posted above, https://enzienaudio.com/, and how they utilize this with the OWL / STM32 to see if that is useful.

The FAUST language I posted is pretty amazing, and I am going to ask them to consider adding Teensy/Arduino to their list of output options. I watched their workshop on YouTube that shows how to work with FAUST and learned the basics. It is in some ways a DSP shorthand, where you construct your signal flow with operators like "<:" to split the current signal in two, then write the operations for the first branch, followed by a "," and the operations for the second. It compiles/exports to so many platforms it's silly: native applications for Linux, OS X and Windows, VST/AU plugins, iOS/Android apps, Max/MSP and Pure Data externals; it's crazy. The language is so streamlined that you can write simple algorithms in as little as one line of code and then export to optimized C++ code or compile directly into apps/plugins with a GUI.
http://faust.grame.fr/about/
 
I've also been learning more about FAUST. It appears to me that the folks at CCRMA (the keepers of the Stanford Synthesis Toolkit) are pretty smitten with it; they've already ported most of the STK to Faust.

Because Faust is a purely descriptive language (akin to SQL in a sense), it's really very much up to the compiler to make things turn out well. But since Faust compiles down to C++, adapting it to the Teensy appears like a straightforward task. And it does appear to generate pretty lean code. The DSP objects it creates could be adapted to Teensy Audio pretty easily, I think. Mostly you'd have to cross-convert between int and float samples. (Or maybe Faust can be convinced to generate code that uses int samples. I'm not sure, but it seems like a thing that would come up.)
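
To sketch what "adapted pretty easily" might mean in practice (untested, and all names here are just examples): assume a mono Faust-generated class called mydsp with the usual init()/compute() interface, and do the int16/float conversion by hand in the wrapper:

Code:
#include <Audio.h>
#include "mydsp.h"   // the C++ class emitted by the Faust compiler (name is just an example)

// Hypothetical wrapper: one input, one output, 16-bit blocks in, 16-bit blocks out.
class AudioEffectFaust : public AudioStream {
public:
  AudioEffectFaust() : AudioStream(1, inputQueueArray) {
    fdsp.init((int)AUDIO_SAMPLE_RATE_EXACT);   // Faust objects take the sample rate in init()
  }
  virtual void update(void) {
    audio_block_t *in = receiveReadOnly(0);
    if (!in) return;
    audio_block_t *out = allocate();
    if (!out) { release(in); return; }

    // Convert the int16 block to floats in -1..+1, run the Faust DSP, convert back.
    static float inbuf[AUDIO_BLOCK_SAMPLES], outbuf[AUDIO_BLOCK_SAMPLES];
    for (int i = 0; i < AUDIO_BLOCK_SAMPLES; i++) inbuf[i] = in->data[i] * (1.0f / 32768.0f);
    float *ins[1] = { inbuf };
    float *outs[1] = { outbuf };
    fdsp.compute(AUDIO_BLOCK_SAMPLES, ins, outs);
    for (int i = 0; i < AUDIO_BLOCK_SAMPLES; i++) out->data[i] = (int16_t)(outbuf[i] * 32767.0f);

    transmit(out);
    release(out);
    release(in);
  }
private:
  audio_block_t *inputQueueArray[1];
  mydsp fdsp;   // the Faust-generated DSP object
};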

There's a ton of really appealing open-source audio processing available in Faust format from CCRMA. In particular, some of the items on Paul's to-do list for teensy: a compressor/limiter, the Karplus-Strong algorithm, a ton of filter options, et cetera. The real question, I think, is how well the code compiles for Teensy.
 
Or now that the prop shield has been released, hopefully Paul can turn his attention to releasing the so-called Teensy 3.x++ which has single precision floating point support in hardware. I imagine that this will help people doing innovative things with the audio shield, wanting to use the prop shield motion sensors in real time, doing GPS calculations, or other things that involve floating point.
 
Mostly you'd have to cross-convert between int and float samples. (Or maybe Faust can be convinced to generate code that uses int samples. I'm not sure, but it seems like a thing that would come up.)

I am also very interested in this topic! I know a little bit about FAUST. I also tried to implement C++ code generated by the heavy compiler in a Teensy audio object (https://enzienaudio.com/). Unfortunately, I wasn't successful at converting the int sample arrays, because of my limited C++ skills. I would be very happy if someone is able to do it and explain it :)
 
I'm really curious to hear how this works out.

However, I'm a bit skeptical of any automated conversion from float to fixed point. Doing fixed point well usually requires a lot of very careful design, often with internal results kept with 32 or even 64 bits.
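
A trivial illustration of the kind of care I mean (generic fixed-point code, not specific to Faust): even a tiny FIR on 16-bit samples needs its products and running sum kept wider than the data, or a loud passage will quietly wrap around.

Code:
#include <stdint.h>

// One output sample of a 3-tap FIR on Q15 (int16) samples with Q14 coefficients.
// Each 16x16 product needs 32 bits, and the sum of products can need a few more,
// so it is accumulated in 64 bits and only saturated back to 16 bits at the end.
int16_t fir3_q15(const int16_t x[3], const int16_t coef_q14[3]) {
  int64_t acc = 0;
  for (int i = 0; i < 3; i++) {
    acc += (int32_t)x[i] * (int32_t)coef_q14[i];   // 32-bit products
  }
  acc >>= 14;                                      // scale back to Q15
  if (acc >  32767) acc =  32767;                  // saturate instead of wrapping
  if (acc < -32768) acc = -32768;
  return (int16_t)acc;
}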
 
I think I get what you mean about float->fixed in general ... but in the case of audio samples represented as values between 1 and -1, I suspect that a fixed-point implementation could be more accurate than floating point for the same number of bits.

For instance, I'm looking at this interesting fixed-point library: http://www.codeproject.com/Articles/37636/Fixed-Point-Class . It's polymorphic, and can create a drop-in replacement for floating point types. With it, I could define a 32-bit fixed-point type with 1 sign bit, 4 integer bits and 27 bits to represent the fractional component, where most of the math happens.

(I choose 4 integer bits at first glance to provide a huge margin of safety, assuming +-16 is the largest value that would end up as a result of the math to calculate a sample. I think that's actually overkill in Faust-land. To overflow it, I'd have to sum more than sixteen channels of fixed-point audio into a single channel, with no attenuation beforehand. The way Faust is set up, I think fixed-point overflow could be avoidable even with just 1 integer bit.)

The point is, this new type would get 27 bits to represent the actual interesting part of the sample, whereas an IEEE floating-point number uses only 23 bits for the significand. Plus, the math would all compile to fixed-point in the background, in theory much faster than emulated floating point.

What's neat is that the C++ code generated by Faust operates on the numeric type FAUSTFLOAT, which you are allowed to redefine. So, in theory, I could redefine FAUSTFLOAT to my new fixed-point type, compile and enjoy.
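
Roughly, that would look like this (completely untested; the fixp struct below is only a stand-in for the fixed-point class from that article, with just enough pieces to show the idea, and "mydsp.h" is a made-up file name for the generated code):

Code:
#include <stdint.h>

// Stand-in for a 1-sign / 4-integer / 27-fraction fixed-point type. A real class
// would overload every operator the generated code uses (+, -, *, /, comparisons, ...).
struct fixp {
  int32_t raw;
  fixp(float f = 0.0f) : raw((int32_t)(f * 134217728.0f)) {}   // 2^27
  fixp operator+(fixp o) const { fixp r; r.raw = raw + o.raw; return r; }
  fixp operator*(fixp o) const {
    fixp r;
    r.raw = (int32_t)(((int64_t)raw * o.raw) >> 27);   // 64-bit intermediate product
    return r;
  }
  operator float() const { return raw * (1.0f / 134217728.0f); }
};

#define FAUSTFLOAT fixp     // must be defined before the generated code is pulled in
#include "mydsp.h"          // Faust output (example file name); its math now runs on fixp values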

There are clearly some caveats to such an approach, but it seems a simple thing to test. I guess I'll give it a shot at some point unless someone beats me to it ... until then, this is all just talk. =)
 
H3ll yes mykle, I'm gonna look into that. I keep wondering if the Faust route is the best for portable DSP, and I think it might be. I just wish I could program it more like Pd/Max in a graphical flow method. Maybe if I make a bunch of modules for the audio lib. But the main concern I am beginning to think about is that it seems the audio lib adds buffers for each object connection, which could add up to some serious latency if I'm not mistaken.
 
But the main concern I am beginning to think about is that it seems the audio lib adds buffers for each object connection, which could add up to some serious latency if I'm not mistaken.

Like Pd and other environments, connections in the Teensy audio lib can be either "forward" with no delay, or "backwards" with a 1 block (128 sample) delay.

In the common case, like a synthesis object feeds into a mixer which feeds into an effect and so on, each object gets the buffer of data from the prior object and does all its work on that same 128 sample update period. There isn't an extra 128 sample delay added by the connections in this common case.

But "backwards" connections, which technically means to the same or any earlier object (according to the order they appear in the generated code) do cause the 128 samples output by the object to be retained until the next 128 sample window. The objects are updated in the order they are created in your code, so when an object creates data that's fed to an object earlier in the list, that data has to sit in a buffer until the next 128 sample window.

Pd, Max/MSP, Javascript web audio and pretty much every other system has this same constraint. Many of them do a graph analysis to discover an optimal order to execute all your objects. Teensy just executes them in the order you create them (it is just a small microcontroller, after all). Either way, any digital signal processing system has to deal with this scenario somehow, so feedback loops in your audio design don't result in an infinite loop when actually executing the code!

For example, in Pd's documentation, look at 2.4.5:

https://puredata.info/docs/manuals/pd/x2.htm

When you send a signal to a point that is earlier in the sorted list of tilde objects, the signal doesn't get there until the next cycle of DSP computation, one block later; so your signal will be delayed by one block (1.45 msec by default.) Delread~ and delwrite~ have this same restriction, but here the 1.45 msec figure gives the minimum attainable delay.

Except for the lack of sorting the update order, Pure Data and Teensy Audio implement DSP data flow in pretty much the same way.
 