Open Sound Control (OSC) Teensy Audio Library Implementation

@adrianfreed, are there any examples of receiving and actioning timed bundles on an Xduino-type platform? Could only see examples of sending, and it’s not immediately obvious what the canonical way of doing this is (assuming there is a canonical way!).
 
Also, I think we should use
https://github.com/CNMAT/OSC
as it takes care of the address-to-object "matching" part.
Agree. It's actually already in the Teensy library, though at an older version.
The only downside (or maybe not) with it is that for every message it receives, a new OSCMessage has to be created, and the actual matching is then done using that message object:
Code:
OSCMessage msg("/a/1");
msg.dispatch("/a/1", dispatchAddress);
This looks very wasteful to me,

as it could instead be defined in an (OSCaddr -> dispatchAddress) array;
when a new message arrives, it goes through that array to find the matching OSCaddr, and the corresponding "dispatchAddress" function is called.
It doesn't look efficient, and may not be, but I think when you delve into timestamped bundles, pattern matching and so on it's probably best to stick with the OSC standard and library, at least to start with.
 
It doesn't look efficient, and may not be, but I think when you delve into timestamped bundles, pattern matching and so on it's probably best to stick with the OSC standard and library, at least to start with.

An easy way of solving it would be a separate class (proposed names "OSCDecoder"/"OSCMessages") that inherits from the OSCMessage class and makes use of the array principle; there could then be additional functions to add/remove entries from that array when used together with dynamic audio objects.
There could then be different instances of that new class:
1. AudioObjects
2. DynamicControl (place, rename, remove, connect, disconnect and control audio objects live)

Tomorrow (+10h) I will do some testing with the OSC library.
But first I'll try to create a simple C++ function extractor in C# (which is my "native" programming language) to get all the functions + parameter data types.

Also, did you know about the RoboRemo app?
It's a great app for creating simple GUIs
to use on a mobile device.
I have even created an unofficial editor for it in C# (Windows),
which makes it much easier to create complex GUIs.
 
OK. I'm making good progress with the OSC library and a function extractor, though... it would be really good if you were to look at sending OSC messages from your GUI, preferably using the Web Serial API, as everyone with a Teensy will be able to access that.

My approach is to derive from the Audio classes - this is the test code with only a couple of derived classes implemented, but it's working:
Code:
#if !defined(_AUDIOOSCBASE_H_)
#define _AUDIOOSCBASE_H_

#include <OSCMessage.h>
#include <Audio.h>

class AudioOSCbase
{
  public:
    AudioOSCbase(const char* _name)
    {
      if (NULL != _name)
      {
        nameLen = strlen(_name);
        
        Serial.printf("Created %s\n\n",_name);
        
          name = (char*) malloc(nameLen+3); // room for leading '/', optional trailing '/', and null terminator
        if (NULL != name)
        {
          name[0] = '/'; // for routing
          strcpy(name+1,_name);
          //name[nameLen+1] = '/';
          //name[nameLen+2] = 0;
        }
      }
      linkIn(); 
    }
    virtual ~AudioOSCbase() {if (NULL != name) free(name); linkOut(); } // virtual: instances may be deleted via a base pointer
    virtual void route(OSCMessage& msg, int addressOffset)=0;
    char* name;
    size_t nameLen;
    bool isMine(OSCMessage& msg, int addressOffset) {return msg.match(name,addressOffset) == (int) nameLen+1;}
    bool validParams(OSCMessage& msg,const char* types)
    {
      size_t sl = strlen(types);
      bool result = (size_t) msg.size() == sl;

      for (size_t i=0;i<sl && result;i++)
      {
        char type = msg.getType(i);
        
        result = types[i] == type;
        if (!result && ';' == types[i]) // boolean: encoded directly in type
          result = type == 'T' || type == 'F';
      }
      
      return result;
    }

    bool isTarget(OSCMessage& msg,int addressOffset,const char* pattern,const char* types)
    {
      bool result = msg.fullMatch(pattern,addressOffset+nameLen+1) && validParams(msg,types);

      if (result) Serial.println(name+1);
      
      return result;
    }
    
    void debugPrint(OSCMessage& msg, int addressOffset)
    {
      char prt[50];
      msg.getAddress(prt,addressOffset);

      if (NULL != name)
        Serial.println(name);
      Serial.println(addressOffset);
      Serial.println(prt);
      Serial.println(isMine(msg,addressOffset));
      Serial.println(msg.size());
      Serial.println();      
    }

    static void routeAll(OSCMessage& msg, int addressOffset)
    {
      AudioOSCbase** ppLink = &first_route; 
      while (NULL != *ppLink)
      {
        (*ppLink)->route(msg,addressOffset);
        ppLink = &((*ppLink)->next_route);
      }
    }
    
  private:
    static AudioOSCbase* first_route; //!< linked list to route OSC messages to all derived instances
    AudioOSCbase* next_route;
    void linkIn() {next_route = first_route; first_route = this;}
    void linkOut() 
    {
      AudioOSCbase** ppLink = &first_route; 
      while (NULL != *ppLink && this != *ppLink)
        ppLink = &((*ppLink)->next_route);
      if (this == *ppLink) // only unlink if we actually found ourselves
      {
        *ppLink = next_route;
        next_route = NULL;
      }
    }
};


class AudioOSCSynthWaveform : public AudioSynthWaveform, AudioOSCbase
{
  public:
    AudioOSCSynthWaveform(const char* _name) : AudioOSCbase(_name) {}

    void route(OSCMessage& msg, int addressOffset)
    {
      if (isMine(msg,addressOffset))
      {
        //debugPrint(msg,addressOffset+nameLen+1);
        // Can't use msg.route() here because the callback has to be static, and we'd then
        // lose knowledge of the instance.
        //
        // To permit shorter message addresses, we allow shortening of the member function
        // to any point that is still unique
        if (isTarget(msg,addressOffset,"/am*","f")) {amplitude(msg.getFloat(0));} 
        if (isTarget(msg,addressOffset,"/ar*","bf")) {OSCarbitraryWaveform(msg,addressOffset+nameLen+1);} 
        if (isTarget(msg,addressOffset,"/b*","ffi")) {begin(msg.getFloat(0),msg.getFloat(1),msg.getInt(2));}         
        if (isTarget(msg,addressOffset,"/b*","i")) {begin(msg.getInt(0));}         
        if (isTarget(msg,addressOffset,"/f*","f")) {frequency(msg.getFloat(0));} 
        if (isTarget(msg,addressOffset,"/o*","f")) {offset(msg.getFloat(0));} 
        if (isTarget(msg,addressOffset,"/ph*","f")) {phase(msg.getFloat(0));} 
        if (isTarget(msg,addressOffset,"/pu*","f")) {pulseWidth(msg.getFloat(0));} 
      }
    }
  private:
    void OSCarbitraryWaveform(OSCMessage& msg, int addressOffset) {debugPrint(msg,addressOffset);}
};


class AudioOSCMixer4 : public AudioMixer4, AudioOSCbase
{
  public:
    AudioOSCMixer4(const char* _name) : AudioOSCbase(_name) {}

    void route(OSCMessage& msg, int addressOffset)
    {
      if (isMine(msg,addressOffset))
      {
        if (isTarget(msg,addressOffset,"/g*","if")) {gain(msg.getInt(0),msg.getFloat(1));} 
      }
    }
};
#endif // !defined(_AUDIOOSCBASE_H_)
An OSC-capable class derived from class Audio<something> is always AudioOSC<something>. You'll note each instance needs to be given a name for routing purposes: for static instances it'd probably be the same as the variable name, but for dynamic instances there may be no variable name. If an OSC message arrives for the audio engine (I match "/teensy*/audio" in my code), you just pass it in with a call to AudioOSCbase::routeAll(msg,addressOffset) which runs down the linked list checking to see if it's for any valid instance and function. I believe you have to do that, because of the pattern capability.

I haven't yet touched returning values; I believe they should be an OSC message, but what address to use is slightly unclear to me right now. I've also not tested the destructor, or dealt properly with any functions that need strings or arrays passed in, or use of bundles, or timing. The comments are nearly non-existent, and there's debug code everywhere. Lots left to do...
 
Just been thinking - going to change it round so the derived classes all start OSCAudio<something>, so if other libraries spring up using a similar scheme (e.g. OSCMIDI, OSCdisplay...) they'll be found more easily by a human being!
 
they'll be found more easily by a human being!

Yes, that is true, but when do they need to be found? The Arduino IDE doesn't officially support autocomplete,
but when using VS Code or another IDE it's available.

Good implementation; I like the linked list.

But isn't the "execution" order backwards (relative to the order in which new objects are added)?
That also makes me believe that the execution order in the Audio Library is backwards
(yes, it is),
which means that the export order (from the Tool) should be reversed.

I can see the logic that "newer" objects should be executed first and "older" ones last.

If you follow the signal flow, the data from a "generator" should be produced before it reaches a mixer,
so the very first time round the loop no source data is available; every subsequent time round, all the data is available.


Back to what you have done:
I did some thinking:
it could be nice if the OSC implementation were available in the official Audio Library objects,
enabled by a compiler flag, so that when OSC isn't used the implementation doesn't take up extra memory.
But that approach makes the OSC implementation harder to maintain, as every file needs to be updated.

It would also mean there are no extra class names to remember,
though when using the Tool that isn't an issue anyway, as the Tool can easily export objects using the OSCAudio<something> naming,
just by adding OSC in front of every Audio<something>.
 
Yes, that is true, but when do they need to be found? The Arduino IDE doesn't officially support autocomplete,
but when using VS Code or another IDE it's available.

Good implementation; I like the linked list.

But isn't the "execution" order backwards (relative to the order in which new objects are added)?
That also makes me believe that the execution order in the Audio Library is backwards
(yes, it is),
which means that the export order (from the Tool) should be reversed.

I can see the logic that "newer" objects should be executed first and "older" ones last.

If you follow the signal flow, the data from a "generator" should be produced before it reaches a mixer,
so the very first time round the loop no source data is available; every subsequent time round, all the data is available.
Thank you :)

In the static library the AudioStream objects actually link themselves in in definition order (AudioStream.h, about line 136), so the execution order is probably reasonable. In my dynamic library I create the execution order links in patchcord order, as far as possible, since definition order is not necessarily useful.

For OSCAudio message routing I don't think it matters much: it's done in foreground code as we have to poll every object anyway.

Back to what you have done:
I did some thinking:
it could be nice if the OSC implementation were available in the official Audio Library objects,
enabled by a compiler flag, so that when OSC isn't used the implementation doesn't take up extra memory.
But that approach makes the OSC implementation harder to maintain, as every file needs to be updated.

It would also mean there are no extra class names to remember,
though when using the Tool that isn't an issue anyway, as the Tool can easily export objects using the OSCAudio<something> naming,
just by adding OSC in front of every Audio<something>.
That would be great, to do it in the GUI. Maybe an option button to switch exports between non-OSC and OSC-capable; or an option to place an object of either type, and/or the ability to switch an already-placed object's type? Maybe show them in different colours? Having a mix will, as you say, improve memory use, and also the message-routing efficiency.

Not quite sure if using a compiler flag would be robust. I've done many quick hacks using something like #define AudioSynthWaveform OSCAudioSynthWaveform, and it usually bites me at some point! For now I'd prefer not to touch the Audio or AudioStream libraries, though if Paul decided to adopt OSCAudio then it would be a different matter. Much too early for that, though...
 
@JayShoe, looking back at the original User Requirement Specification in #1, are we wandering a bit off-piste here? Can TouchMIDI use a serial port?
 
By the way,
I tried to use your lib but cannot get it to work.
I'm using Br@y's Terminal and sending the data in RAW format (a $ means hex format):
Code:
$C0/teensy1/audio/waveform1/f$00$00,f$00$00$43$dc$00$00$C0
$C0/teensy1/audio/waveform1/b$00$00,i$00$00$00$00$00$00$C0
and
this is just the same data as I receive when doing:
Code:
OSCMessage msg2("/teensy1/audio/waveform1/f");
msg2.add(440.0);
HWSERIAL.beginPacket();
msg2.send(HWSERIAL);
HWSERIAL.endPacket();
msg2.empty();
HWSERIAL.println();

OSCMessage msg3("/teensy1/audio/waveform1/b");
msg3.add(0);
HWSERIAL.beginPacket();
msg3.send(HWSERIAL);
HWSERIAL.endPacket();
msg3.empty();

and I have added debug code at the end of the route function,
so that I can see when a target is not matched:
Code:
void route(OSCMessage& msg, int addressOffset)
{
  if (isMine(msg,addressOffset))
  {
	if (isTarget(msg,addressOffset,"/am*","f")) {amplitude(msg.getFloat(0));} 
	else if (isTarget(msg,addressOffset,"/ar*","bf")) {OSCarbitraryWaveform(msg,addressOffset+nameLen+1);} 
	else if (isTarget(msg,addressOffset,"/b*","ffi")) {begin(msg.getFloat(0),msg.getFloat(1),msg.getInt(2));}         
	else if (isTarget(msg,addressOffset,"/b*","i")) {begin(msg.getInt(0));}         
	else if (isTarget(msg,addressOffset,"/f*","f")) {frequency(msg.getFloat(0));} 
	else if (isTarget(msg,addressOffset,"/o*","f")) {offset(msg.getFloat(0));} 
	else if (isTarget(msg,addressOffset,"/ph*","f")) {phase(msg.getFloat(0));} 
	else if (isTarget(msg,addressOffset,"/pu*","f")) {pulseWidth(msg.getFloat(0));} 
	else {
		Serial.println("Cannot find target");
	}
  }
  else {
	  Serial.print("is not mine @");
	  Serial.print(name);
  }
}

I'm just trying to understand how the protocol works,
to make it easier to debug when implementing it in the Tool.

The problem is that it always goes to the "Cannot find target" branch.
 
I've just pushed an update, which has a lot of the audio objects mapped, though still send-only.

I think there's a bug in the OSC pattern matcher: although you'd expect /f* to match /f, it actually doesn't! However, it does match /fr, so you could try that. I've raised an issue on GitHub, though I have no idea if it's closely monitored.

If you don't mind installing Python 3, then /dev/OSCAudioSend.py should prove it's all working, though you then need to have the v4 audio shield and a USB/serial converter wired to Serial7 - so, not so simple... but I think you're close; I assume you're not always seeing the "is not mine @" message.
 
Yes, it works now by using any character after /f, /b and so on,
and it doesn't really matter anyway, as there are plenty of padding zeroes to fill out the message.

Now I can finally begin work on the GUI
and use the osc.js lib:
https://github.com/colinbdclark/osc.js/

And thanks for the additional audio objects.
 
@JayShoe, looking back at the original User Requirement Specification in #1, are we wandering a bit off-piste here? Can TouchMIDI use a serial port?

I'm happy to have struck a chord and appreciate what the two of you are doing. It's an interesting implementation - so users who want to add the OSC function to an object would prepend OSC to the item. So OSCAudioMixer, OSCAudioSine, etc. The library will then create the OSC controls, and also create the Audio Library object. Then calling the OSC objects will control the Audio object. Cool! Makes sense! It also allows users to enable OSC control per object. If they don't want the OSC control for that object then they don't have to use it.

https://github.com/h4yn0nnym0u5e/OSCAudio/blob/trunk/examples/OSCAudioTesting/OSCAudioTesting.ino
Code:
// GUItool: begin automatically generated code
//AudioSynthWaveform       waveform1;      //xy=654,472
OSCAudioSynthWaveform   waveform1("waveform1");
OSCAudioMixer4          mixer1("mixer1");
AudioOutputI2S           i2s1;           //xy=977,476
AudioConnection          patchCord1(waveform1, 0, i2s1, 0);
AudioConnection          patchCord2(waveform1, 0, mixer1, 1);
AudioConnection          patchCord3(mixer1, 0, i2s1, 1);
OSCAudioControlSGTL5000     sgtl5000_1("sgtl5000");     //xy=977,519
// GUItool: end automatically generated code

it could be nice if the OSC implementation could be available in the official Audio library objects
and enabled by a compiler flag when used, therefore when not using OSC the implementation don't take extra memory.

While it is interesting to consider adding this to the Audio Library project directly, the reason a separate library is helpful may be to avoid the backlog of pull requests: from what I understand there are quite a few, and not enough time. A separate library also means its maintainers stay in control of it. As someone who sometimes struggles to bypass core libraries, having it stand on its own should make things easier for someone like me: just add the library to Arduino's libraries folder or PlatformIO's "lib" folder and it becomes available, as opposed to forking the entire Audio Library. But it's not my project, so carry on!

One "requirement" (suggestion!?) that I would like to see is support for keeping multiple clients in sync. I'm not sure how the specification allows for this, but I can envision multiple clients such as this. It would be nice if, when a setting is changed on the laptop, the phones showed the update in real time (hopefully).

Teensy OSC Control.jpg

One nice-to-have would be a broadcast function to make finding the device easier. Again, the spec didn't mention anything like this, so it may be off-piste. A call to list all available OSC addresses would also be nice, so that after the device is created and found, one can call it and receive a list of all the available functions. /teensy*/audio/list-all?

Finally, I wonder how/if this could also work with the 32 bit library.
https://github.com/chipaudette/OpenAudio_ArduinoLibrary
 
(You ain't seen me, right? At work...)

This all looks doable, and as if we're on the right lines, which is nice. Multiple clients is mostly down to the application writer, but worth bearing in mind. In particular, knowing where to send responses is an interesting conundrum: I have a few vague ideas but nothing concrete as yet. If I do anything soon it's likely to be a bit experimental and liable to change.

Good ideas about finding and listing. I'm currently inclined towards keeping the /<device>/audio path purely aimed at audio objects now in existence, with another path for object creation and listing. This is actually at the discretion of the application writer, with the caveat that it'll have to match any scheme implemented by the GUI (assuming that's in use). Then again, maybe the GUI could have options in that regard...

I couldn't see anything in OSC about "pinging" devices: that could be useful for a networked array of Teensys (would that be a TeenSys?), to discover what was available.
 
I have now implemented OSC control + the Web Serial API, using SLIP encoding,
in my Design Tool:

https://manicken.github.io/

Here is an example (don't forget to select the last }):
Code:
{"version":1,"settings":{"arduino":{"WriteJSONtoExportedFile":false,"ProjectName":"OSC demo","Board":{"Platform":"","Board":"","Options":""}},"BiDirDataWebSocketBridge":{"MidiDeviceOut":2},"workspaces":{},"sidebar":{},"palette":{"categoryHeaderTextSize":16,"categoryHeaderHeight":20,"onlyShowOne":false},"editor":{"aceEditorTheme":"chrome"},"devTest":{},"IndexedDBfiles":{"testFileNames":"testFile.txt"},"NodeDefGenerator":{},"NodeDefManager":{},"NodeHelpManager":{}},"workspaces":[{"type":"tab","id":"3629fcd9.ccc604","label":"Main","inputs":0,"outputs":0,"export":false,"isMain":false,"mainNameType":"tabName","mainNameExt":".ino","generateCppDestructor":false,"extraClassDeclarations":"","settings":{"showNodeToolTip":false,"workspaceBgColor":"#EDFFDF","scaleFactor":0.8,"showGridHminor":false,"showGridHmajor":false,"showGridVminor":false,"showGridVmajor":false,"gridHminorSize":20,"gridHmajorSize":200,"gridVminorSize":20,"gridVmajorSize":130,"gridMinorColor":"#DDDDDD","gridMajorColor":"#DDDDDD","snapToGridHsize":10,"snapToGridVsize":10,"nodeDefaultTextSize":15,"useCenterBasedPositions":false},"nodes":[{"id":"Sheet_1_Slider2","type":"UI_Slider","name":"amplitude","comment":"","w":30,"h":280,"textSize":16,"midiCh":"","midiId":"29","orientation":"v","label":"d.val/d.maxVal","minVal":0,"maxVal":100,"val":0,"outputFloat":false,"minValF":-1,"maxValF":1,"floatVal":0,"decimalCount":-1,"steps":201,"sendSpace":true,"repeat":false,"repeatPeriod":0,"sendMode":"m","autoReturn":false,"returnValue":"mid","barFGcolor":"#F87A00","sendFormat":"\"midisend(0xB0,\"+d.midiId+\",\" + d.val + \");\"","sendCommand":"var fVal = d.val/d.maxVal;\nvar addr = \"/teensy1/audio/waveform1/am*\";\nvar data = OSC.GetSimpleOSCdata(addr,\"f\", 
fVal);\nOSC.SendAsSlipToSerial(data);","x":290,"y":50,"z":"3629fcd9.ccc604","bgColor":"#808080","wires":[]},{"id":"Sheet_1_ListBox1","type":"UI_ListBox","name":"waveform","comment":"","w":119,"h":301,"textSize":20,"midiCh":"","midiId":"20","itemTextSize":"","items":"Sine\nSawtooth\nSquare\nTriangle\nPulse\nSaw. Rev.\nSample H\nVar. Tri.","selectedIndex":0,"selectedIndexOffset":"","headerHeight":40,"itemBGcolor":"#FFFFFF","sendCommand":"var addr = \"/teensy1/audio/waveform1/b*\";\nvar data = OSC.GetSimpleOSCdata(addr,\"i\", d.selectedIndex);\nOSC.SendAsSlipToSerial(data);\n","x":130,"y":50,"z":"3629fcd9.ccc604","bgColor":"#F87A00","wires":[]},{"id":"Sheet_1_Slider3","type":"UI_Slider","name":"frequency","comment":"","w":30,"h":691,"textSize":16,"midiCh":"","midiId":"29","orientation":"v","label":"d.val + \"Hz\"","minVal":1,"maxVal":4186,"val":150,"outputFloat":false,"minValF":-1,"maxValF":1,"floatVal":0,"decimalCount":-1,"steps":201,"sendSpace":true,"repeat":false,"repeatPeriod":0,"sendMode":"m","autoReturn":false,"returnValue":"mid","barFGcolor":"#F87A00","sendFormat":"\"midisend(0xB0,\"+d.midiId+\",\" + d.val + \");\"","sendCommand":"var addr = \"/teensy1/audio/waveform1/f*\";\nvar data = OSC.GetSimpleOSCdata(addr,\"f\", d.val);\nOSC.SendAsSlipToSerial(data);","x":380,"y":50,"z":"3629fcd9.ccc604","bgColor":"#808080","wires":[]},{"id":"Sheet_1_Button1","type":"UI_Button","name":"init waveform","comment":"","w":120,"h":34,"textSize":16,"midiCh":"0","midiId":"0","pressAction":"","repeatPressAction":false,"releaseAction":"","repeatReleaseAction":false,"local":"true","sendCommand":"// example of a multi parameter message\r\n// this is for the begin(level, frequency, waveform) function\r\nvar addr = \"/teensy1/audio/waveform1/b*\";\r\nvar data = osc.writePacket( {\r\n        address:addr,\r\n        args:[\r\n            {\r\n                type:\"f\",\r\n                value:1.0\r\n            },\r\n            {\r\n                type:\"f\",\r\n                
value:220\r\n            },\r\n            {\r\n                type:\"i\",\r\n                value:1\r\n            }\r\n        ]});\r\nOSC.SendAsSlipToSerial(data);","x":450,"y":70,"z":"3629fcd9.ccc604","bgColor":"#F6F8BC","wires":[]}]}],"nodeAddons":{}
}

just import it
ImportJSON.png

To make the UI very flexible (for different data protocols/interfaces), I have made every "UI object" event scriptable.

Here are some examples (taken from the example code above).
frequency slider:
Code:
var addr = "/teensy1/audio/waveform1/f*";
var data = OSC.GetSimpleOSCdata(addr,"f", d.val);
OSC.SendAsSlipToSerial(data);

init waveform button:
Code:
// example of a multi parameter message
// this is for the begin(level, frequency, waveform) function
var addr = "/teensy1/audio/waveform1/b*";
var data = osc.writePacket( {
        address:addr,
        args:[
            {
                type:"f",
                value:1.0
            },
            {
                type:"f",
                value:220
            },
            {
                type:"i",
                value:1
            }
        ]});
OSC.SendAsSlipToSerial(data);

To connect/disconnect the serial port there are
two new buttons on the title bar.

Note:
the disconnect function doesn't work as expected,
which means a reconnect can only be done after a browser refresh.

I have been using an FTDI USB-to-serial chip to send data to the Teensy's Serial7,
as in the example code from h4yn0nnym0u5e.
 
I have now updated the function
GetSimpleOSCdata
to use rest parameters:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/rest_parameters

so it can be used like this:

Code:
var data = OSC.GetSimpleOSCdata(addr, "ffi", 1.0, 220.0, 1);
OSC.SendAsSlipToSerial(data);

Note:
make sure the number of trailing values matches the length of the type string ("ffi" here), which defines their types.


For more complex messages we still need to use the JSON format.

Here is one example using a time tag and multiple packets:
Code:
var data = osc.writePacket({
        // Tags this bundle with a timestamp that is 60 seconds from now.
        // Note that the message will be sent immediately;
        // the receiver should use the time tag to determine
        // when to act upon the received message.
        timeTag: osc.timeTag(60),

        packets: [
            {
                address: "/carrier/frequency",
                args: [
                    {
                        type: "f",
                        value: 440
                    }
                ]
            },
            {
                address: "/carrier/amplitude",
                args: [
                    {
                        type: "f",
                        value: 0.5
                    }
                ]
            }
        ]
    });

more examples of the packet format @
https://github.com/colinbdclark/osc.js/
 
I'm still looking at the examples and will report back when I can get a test environment up. I'm getting errors at the moment when I run OSCAudioTesting.

Code:
Arduino: 1.8.16 (Windows 10), TD: 1.55, Board: "Teensy 4.1, Serial, 600 MHz, Faster, US English"

In file included from E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioBase.h:101:0,
                 from E:\OneDrive\Documents\Arduino\libraries\OSCAudio\examples\OSCAudioTesting\OSCAudioTesting.ino:17:
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h: In member function 'virtual void OSCAudioAmplifier::route(OSCMessage&, int)':
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:33:79: error: 'slew' was not declared in this scope
             if (isTarget(msg,addressOffset,"/s*",";")) {slew(msg.getBoolean(0));} // void slew(bool doSlew)
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h: At global scope:
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:39:54: error: expected class-name before ',' token
 class OSCAudioAnalyzeEvent : public AudioAnalyzeEvent, OSCAudioBase
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h: In member function 'virtual void OSCAudioAnalyzeEvent::route(OSCMessage&, int)':
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:48:70: error: 'getCount' was not declared in this scope
             if (isTarget(msg,addressOffset,"/getC*",NULL)) {getCount();} // uint32_t getCount(void) {return count;}
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:49:71: error: 'getMicros' was not declared in this scope
             if (isTarget(msg,addressOffset,"/getM*",NULL)) {getMicros();} // uint32_t getMicros(void) {return tstamp;}
In file included from E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioBase.h:101:0,
                 from E:\OneDrive\Documents\Arduino\libraries\OSCAudio\examples\OSCAudioTesting\OSCAudioTesting.ino:17:
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h: At global scope:
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:475:64: error: expected class-name before ',' token
 class OSCAudioEffectExpEnvelope : public AudioEffectExpEnvelope, OSCAudioBase
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h: In member function 'virtual void OSCAudioEffectExpEnvelope::route(OSCMessage&, int)':
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:484:96: error: 'attack' was not declared in this scope
             if (isTarget(msg,addressOffset,"/a*","ff")) {attack(msg.getFloat(0),msg.getFloat(1));} // void attack(float milliseconds, float target_factor = TF)
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:485:64: error: 'close' was not declared in this scope
             if (isTarget(msg,addressOffset,"/c*",NULL)) {close();} // void close(){
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:486:97: error: 'decay' was not declared in this scope
             if (isTarget(msg,addressOffset,"/dec*","ff")) {decay(msg.getFloat(0),msg.getFloat(1));} // void decay(float milliseconds, float target_factor = TF)
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:486:97: note: suggested alternative:
In file included from C:\Program Files (x86)\Arduino\hardware\teensy\avr\cores\teensy4/wiring.h:45:0,
                 from C:\Program Files (x86)\Arduino\hardware\teensy\avr\cores\teensy4/WProgram.h:45,
                 from C:\Users\jaysh\AppData\Local\Temp\arduino_build_378661\pch\Arduino.h:6:
c:\program files (x86)\arduino\hardware\tools\arm\arm-none-eabi\include\c++\5.4.1\type_traits:2064:11: note:   'std::decay'
     class decay 
In file included from E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioBase.h:101:0,
                 from E:\OneDrive\Documents\Arduino\libraries\OSCAudio\examples\OSCAudioTesting\OSCAudioTesting.ino:17:
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:488:69: error: 'getGain' was not declared in this scope
             if (isTarget(msg,addressOffset,"/getG*",NULL)) {getGain();} // float getGain() {return HIRES_TO_FLOAT(mult_hires);}
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:489:70: error: 'getState' was not declared in this scope
             if (isTarget(msg,addressOffset,"/getS*",NULL)) {getState();} // uint8_t getState();
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:490:77: error: 'hold' was not declared in this scope
             if (isTarget(msg,addressOffset,"/h*","f")) {hold(msg.getFloat(0));} // void hold(float milliseconds)
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:491:69: error: 'isActive' was not declared in this scope
             if (isTarget(msg,addressOffset,"/isA*",NULL)) {isActive();} // bool isActive();
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:492:70: error: 'isSustain' was not declared in this scope
             if (isTarget(msg,addressOffset,"/isS*",NULL)) {isSustain();} // bool isSustain();
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:493:71: error: 'noteOff' was not declared in this scope
             if (isTarget(msg,addressOffset,"/noteOf*",NULL)) {noteOff();} // void noteOff();
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:494:69: error: 'noteOn' was not declared in this scope
             if (isTarget(msg,addressOffset,"/noteOn",NULL)) {noteOn();} // void noteOn();
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:495:97: error: 'release' was not declared in this scope
             if (isTarget(msg,addressOffset,"/r*","ff")) {release(msg.getFloat(0),msg.getFloat(1));} // void release(float milliseconds, float target_factor = TF)
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:496:80: error: 'sustain' was not declared in this scope
             if (isTarget(msg,addressOffset,"/s*","f")) {sustain(msg.getFloat(0));} // void sustain(float level)
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h: In member function 'virtual void OSCAudioPlayQueue::route(OSCMessage&, int)':
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen.h:1199:85: error: 'setMaxBuffers' was not declared in this scope

             if (isTarget(msg,addressOffset,"/se*","i")) {setMaxBuffers(msg.getInt(0));} // void setMaxBuffers(uint8_t);

                                                                                     ^

Multiple libraries were found for "SD.h"

 Used: C:\Program Files (x86)\Arduino\hardware\teensy\avr\libraries\SD

 Not used: C:\Program Files (x86)\Arduino\libraries\SD

Error compiling for board Teensy 4.1.



This report would have more information with
"Show verbose output during compilation"
option enabled in File -> Preferences.


I checked out the GUI tool and successfully imported the GUI. I'm not entirely sure what the Arduino code would be to run the example. Or if there is an export to IDE step that I'm missing?
 
I have some information about ways others have resolved the synchronization question I was asking about earlier. This document describes an OSC implementation for the Behringer X32 mixer. It goes into the details of their custom implementation, which goes above and beyond what is found in the OSC Spec V1 from 2002. There is a lot of information on the system setup, sample calls, example code, and ways to manage things like meters, addresses, request formatting, etc. It's an interesting read on OSC and its capabilities, and it is very relevant to this project.
https://wiki.munichmakerlab.de/images/1/17/UNOFFICIAL_X32_OSC_REMOTE_PROTOCOL_(1).pdf

The X32 runs as both a server and a client. The document describes the following, which allows all devices to sync over UDP:

  • Client initiated messages (eg. usb serial to teensy)
  • Multiple client management (eg. TCP/IP to teensy)
  • Server replies or server initiated messages (eg. teensy to subscribers)

Patrick Maillot is also a great resource. He's an avid X32 fan who writes apps to control the mixer via OSC. These examples might also be interesting to understand some ways OSC is being used.
https://sites.google.com/site/patrickmaillot/x32
https://github.com/pmaillot/X32-Behringer
 
I have been using an FTDI USB-to-serial chip to send data to the Teensy's Serial7,
as in the example code from h4yn0nnym0u5e.

Couldn't it work with Teensy as a USB Serial + MIDI + Audio device?
 
Couldn't it work with Teensy as a USB Serial + MIDI + Audio device?
Absolutely - that's the way I'd expect to use it in the "normal" case. But it's easier to use Serial for debug, and Serial7 (or similar) for the test messages.

I'm still looking at the examples and will report back when I can get a test environment up. I'm getting errors at the moment when I run OSCAudioTesting.

I checked out the GUI tool and successfully imported the GUI. I'm not entirely sure what the Arduino code would be to run the example. Or if there is an export to IDE step that I'm missing?
Whoops, apologies for that. I still had the dynamic audio libraries switched in, which have a couple of extra objects and one extra member function. I've pushed an update which fixes that, I believe. It also shows one way to deal with other "variant" audio libraries, like the 32-bit one.
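One portable way to keep autogenerated route() code compiling against library variants that lack a member (like the 'setMaxBuffers' error above) is expression SFINAE: the call is attempted only if the member actually exists. This is a host-compilable sketch of the idea, not the OSCAudio library's actual mechanism; StockPlayQueue and DynamicPlayQueue are hypothetical stand-ins, and the technique is C++11 so it works with the Teensy toolchain's GCC 5.4.

```cpp
#include <cstdint>

// Hypothetical stand-ins: a stock play-queue without setMaxBuffers(), and a
// dynamic-library variant that has it.
struct StockPlayQueue { };
struct DynamicPlayQueue {
  uint8_t maxBuffers = 0;
  void setMaxBuffers(uint8_t n) { maxBuffers = n; }
};

// Overload chosen when q.setMaxBuffers(n) is a valid expression.
template <typename T>
auto trySetMaxBuffers(T& q, uint8_t n, int) -> decltype(q.setMaxBuffers(n), bool()) {
  q.setMaxBuffers(n);
  return true;   // member exists and was called
}

// Fallback when the member is absent: routing silently skips the call.
template <typename T>
bool trySetMaxBuffers(T&, uint8_t, long) {
  return false;  // stock library variant: nothing to do
}
```

Calling `trySetMaxBuffers(queue, n, 0)` prefers the `int` overload whenever it is valid, so no preprocessor switches are needed per library variant.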

I'll take a look at the GUI sometime today - would quite like to get a skeleton for the object creation / destruction working.
 
Couldn't it work with Teensy as a USB Serial + MIDI + Audio device?

Another downside of using USB Serial is that it blocks the "automatic programming" feature,
so for every reprogram you need to press the program button manually,
which I don't like for at least two reasons:
* ESD risk
* it feels awkward/clumsy

We could also use MIDI SysEx (max length 260 bytes) to send/receive OSC messages,
but then the MIDI port is blocked from use by other applications.
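If OSC over SysEx were pursued, there is an extra wrinkle: OSC packets are 8-bit binary, but SysEx data bytes must keep bit 7 clear. One common scheme (shown below as a sketch; it is not something the OSC or Teensy libraries provide) packs each group of 7 payload bytes behind one byte that carries their collected high bits, so a 260-byte SysEx message carries roughly 220 OSC bytes after framing.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Pack 8-bit data for SysEx transport: each group of up to 7 payload bytes is
// preceded by one byte holding their collected high bits, so every byte sent
// has bit 7 clear.
std::vector<uint8_t> packTo7bit(const std::vector<uint8_t>& in) {
  std::vector<uint8_t> out;
  for (std::size_t i = 0; i < in.size(); i += 7) {
    std::size_t n = in.size() - i < 7 ? in.size() - i : 7;
    uint8_t msbs = 0;
    for (std::size_t j = 0; j < n; ++j)
      if (in[i + j] & 0x80) msbs |= uint8_t(1u << j);
    out.push_back(msbs);                 // high bits of the next n bytes
    for (std::size_t j = 0; j < n; ++j)
      out.push_back(in[i + j] & 0x7F);   // low 7 bits of each byte
  }
  return out;
}

// Reverse the packing on the receiving side.
std::vector<uint8_t> unpackFrom7bit(const std::vector<uint8_t>& in) {
  std::vector<uint8_t> out;
  std::size_t i = 0;
  while (i < in.size()) {
    uint8_t msbs = in[i++];
    for (std::size_t j = 0; j < 7 && i < in.size(); ++j, ++i)
      out.push_back(uint8_t(in[i] | (((msbs >> j) & 1u) ? 0x80 : 0)));
  }
  return out;
}
```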

There is the "USB dual serial" option, but then there is no MIDI/Audio.

We really need some extra options:
USB "dual serial" + MIDI + Audio
or alternatively
USB "dual serial" + MIDI


Maybe in real applications we are going to use a hardware serial port anyway,
to connect through an ESP8266/ESP32/"external control interface" Teensy.

The best thing about an ESP module is that the Teensy synth can be completely wireless,
but it can also be updated directly from the "Design Tool" using WebSockets,
as the ESP can easily act as a WebSocket server.
 
Whoops, apologies for that. I still had the dynamic audio libraries switched in, which have a couple of extra objects and one extra member function. I've pushed an update which fixes that, I believe. It also shows one way to deal with other "variant" audio libraries, like the 32-bit one.

That got rid of most of the errors, but I still get one: "error: 'setMaxBuffers' was not declared in this scope"

Code:
Arduino: 1.8.16 (Windows 10), TD: 1.55, Board: "Teensy 4.1, Serial + MIDI + Audio, 600 MHz, Faster, US English"

In file included from E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioBase.h:105:0,
                 from E:\OneDrive\Documents\Arduino\libraries\OSCAudio\examples\OSCAudioTesting\OSCAudioTesting.ino:17:
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen-static.h: In member function 'virtual void OSCAudioPlayQueue::route(OSCMessage&, int)':
E:\OneDrive\Documents\Arduino\libraries\OSCAudio/OSCAudioAutogen-static.h:1199:90: error: 'setMaxBuffers' was not declared in this scope
             else if (isTarget(msg,addressOffset,"/se*","i")) {setMaxBuffers(msg.getInt(0));} // void setMaxBuffers(uint8_t);
                                                                                          ^
Multiple libraries were found for "SD.h"
 Used: C:\Program Files (x86)\Arduino\hardware\teensy\avr\libraries\SD
 Not used: C:\Program Files (x86)\Arduino\libraries\SD
Error compiling for board Teensy 4.1.

This report would have more information with
"Show verbose output during compilation"
option enabled in File -> Preferences.
 
I have some information about ways others have resolved the synchronization question I was asking about earlier. This document describes an OSC implementation for the Behringer X32 mixer. It goes into the details of their custom implementation, which goes above and beyond what is found in the OSC Spec V1 from 2002. There is a lot of information on the system setup, sample calls, example code, and ways to manage things like meters, addresses, request formatting, etc. It's an interesting read on OSC and its capabilities, and it is very relevant to this project.
https://wiki.munichmakerlab.de/images/1/17/UNOFFICIAL_X32_OSC_REMOTE_PROTOCOL_(1).pdf

The X32 runs as both a server and a client. The document describes the following, which allows all devices to sync over UDP:

  • Client initiated messages (eg. usb serial to teensy)
  • Multiple client management (eg. TCP/IP to teensy)
  • Server replies or server initiated messages (eg. teensy to subscribers)

Patrick Maillot is also a great resource. He's an avid X32 fan who writes apps to control the mixer via OSC. These examples might also be interesting to understand some ways OSC is being used.
https://sites.google.com/site/patrickmaillot/x32
https://github.com/pmaillot/X32-Behringer

TLDR;

the current value is echoed back by the server

The X32 responds to the client after all calls. The client calls the X32 to change a volume (for example) and then the X32 replies back with a confirmation of the parameter.

In addition, clients can "subscribe" to /xremote to receive all commands received by the X32 (an echo). This would be helpful to synchronize all settings across separate clients. Alternatively, clients can subscribe to a subset of data (a specific volume control, meter, etc.) to minimize network traffic. A subscription lasts for 10 seconds, so a client monitoring a meter must re-subscribe every 10 seconds to renew.
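The echo pattern above can be modelled as a parameter store where every set request returns the confirmed (possibly clamped) value that the server would send back to the sender. Everything here (ParamStore and its methods) is purely illustrative, not X32 or OSCAudio code.

```cpp
#include <map>
#include <string>

// Illustrative model of the X32 echo pattern: the server applies a set request
// and replies with the resulting value, so clients converge on the server's
// actual state rather than on their own request.
class ParamStore {
public:
  // Apply a set and return the value to echo back (clamped to the valid range).
  float set(const std::string& addr, float requested) {
    float v = requested;
    if (v < 0.0f) v = 0.0f;   // server-side clamping
    if (v > 1.0f) v = 1.0f;
    params[addr] = v;
    return v;                 // echoed confirmation, not the raw request
  }
  float get(const std::string& addr) const {
    auto it = params.find(addr);
    return it == params.end() ? 0.0f : it->second;
  }
private:
  std::map<std::string, float> params;
};
```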

The /meters OSC command is used for obtaining Meter data, or to get a specific set of meter values. Update cycle frequency for meter data is 50 ms, and may be variable according to console’s ability to fulfill requests.
Timeout is 10 seconds.
Meter values are returned as floats in the range 0.0 – 1.0, representing the linear audio level (digital 0 – full scale; internal headroom allows for values up to 8.0 (+18 dBFS)). The data returned by the X32/M32 server for /meters is an OSC blob, an arbitrary set of binary data. As a result, the format differs from what is typically returned by the X32/M32. This is essentially for efficiency/performance reasons.
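Given those linear meter floats, converting to dBFS is a one-liner; values above 1.0 land in the headroom region, and 8.0 comes out at about +18 dBFS, matching the document's figures. A minimal sketch (meterToDb is a hypothetical helper name):

```cpp
#include <cmath>

// Convert an X32-style linear meter value (0.0 = silence, 1.0 = digital full
// scale, up to 8.0 thanks to internal headroom) to dBFS.
float meterToDb(float linear) {
  if (linear <= 0.0f) return -INFINITY;   // treat zero as silence
  return 20.0f * std::log10(linear);
}
```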

As for receiving real-time meter information, the X32 sends meter data every 50 ms to clients who have subscribed to one or more meter requests. Meters are sent for 10 seconds unless renewed or unsubscribed. This concept may be helpful for any of the analyze functions in the audio library (i.e. peak, rms, fft256, fft1024, tone, notefreq, print).

Most of the other information in the document is X32 specific, but some of the information on the data formatting may still be useful. There is a lot of information about how the OSC commands are formatted.

================

I would love to see this project include "echo" and "subscribe" functionality. Perhaps, in light of what has already been done, all functions could offer an echo and subscription option. Users could subscribe to:

  • /teensy1/mixer1/subscribe (subscribe to all controls on mixer1)
  • /teensy1/mixer1/ch1/subscribe (subscribe to channel 1)
  • /teensy1/*/subscribe (subscribe to everything on teensy1)
  • /teensy1/renew (renews all existing subscriptions)

I guess the question is HOW to reply to the client. On the X32 it's UDP, so IP addresses on the local network are used; apparently it replies to the sender's address. In the current Teensy implementation I guess we would need to specify a serial port to reply on?
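A subscription list with the X32's 10-second lifetime could be kept as a small table keyed by address pattern. Everything below (the names, and the millis()-style timestamps passed in as parameters so the logic is testable off-target) is a hypothetical sketch, not existing OSCAudio code.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical subscription table with X32-style 10 s expiry.
class SubscriptionTable {
public:
  static const uint32_t LIFETIME_MS = 10000;  // renew within 10 s, as on the X32

  // Add a pattern such as "/teensy1/mixer1/*", or renew it if already present.
  void subscribe(const std::string& pattern, uint32_t now) {
    for (auto& s : subs)
      if (s.pattern == pattern) { s.expiresAt = now + LIFETIME_MS; return; }
    subs.push_back(Entry{pattern, now + LIFETIME_MS});
  }

  // Handle something like "/teensy1/renew": push every deadline forward.
  void renewAll(uint32_t now) {
    for (auto& s : subs) s.expiresAt = now + LIFETIME_MS;
  }

  // Drop expired entries; call this periodically, e.g. from loop().
  void prune(uint32_t now) {
    for (std::size_t i = subs.size(); i-- > 0; )
      if (int32_t(subs[i].expiresAt - now) <= 0)   // wrap-safe comparison
        subs.erase(subs.begin() + i);
  }

  std::size_t count() const { return subs.size(); }

private:
  struct Entry { std::string pattern; uint32_t expiresAt; };
  std::vector<Entry> subs;
};
```

Echoing outgoing messages would then iterate this table and, for each live entry whose pattern matches the changed address, send the confirmation to that subscriber's transport (UDP endpoint or serial port).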


really need [...] USB "dual serial"/MIDI/Audio

For the simple use case, users will likely be creating a system in paul/manicksan's GUI. The simplest way to allow this is definitely USB Serial + Audio. If this prevents reprogramming the Teensy until the program button is pressed, then the dual serial + MIDI + audio format would be better.

Maybe in real applications we are gonna use a hardware serial port anyway
to connect through a ESP8266/ESP32/"external control interface"-teensy

It's still best to have the dual serial + Midi + audio board type working because this will likely be the default way to access the system for the average user. However, I'm keen on creating an open source ESP32 Teensy board. It's another project that I've been working on.
https://forum.pjrc.com/threads/68465-Teensy-4-1-ESP32-C3-MINI-Module-I2C-I2S-SPI-UART

I originally designed it with the C3-mini, but that doesn't have A2DP and I want to stream Bluetooth audio to the system (as well as control data), so I went back to the original ESP32. Currently we have another thread regarding getting A2DP working, but I'm also learning how to program the ESP32 via the Teensy so that I can confirm some wiring before going back to finish the custom board. I plan to make this board public if it ever gets completed. I think the combination of the ESP32, Teensy, and Audio Shield would make a really nice little audio device for synth and music people.

The ESP32-Teensy module should stack with the 4.1 and the audio shield. It should allow programming of the ESP32 from the Teensy, it should allow I2S streaming (two ways, one at a time?), and it should allow a few different ways to communicate with the Teensy (I2C, SPI, Serial?). That schematic linked above shows my initial thoughts - I have an updated one with the ESP32. I have some board layouts I've worked on too.

That being said, I wouldn't rely solely on the ESP32. The Ethernet port on the T4.1 may be a more reliable way to control it on stage. USB is also reliable. Bluetooth/WiFi can get a little unreliable when there are hundreds of people in a room with 5G connections emitting from their pockets - and "water bags" (humans) absorbing the signals.
 