
Thread: Open Sound Control (OSC) Teensy Audio Library Implementation

  1. #26
    Senior Member
    Join Date
    Apr 2021
    Location
    Cambridgeshire, UK
    Posts
    154
    @adrianfreed, are there any examples of receiving and actioning timed bundles on an Xduino-type platform? Could only see examples of sending, and it’s not immediately obvious what the canonical way of doing this is (assuming there is a canonical way!).

  2. #27
    Senior Member
    Join Date
    Apr 2021
    Location
    Cambridgeshire, UK
    Posts
    154
    Quote Originally Posted by manicksan View Post
    Also, I think we should use
    https://github.com/CNMAT/OSC
    as it takes care of the "matching an address to an object" part
    Agree. It's actually already in the Teensy library, though at an older version.

    The only downside (or maybe not) is that for every message received, a new OSCMessage has to be created, and the actual matching is then done using that message object
    Code:
    OSCMessage msg("/a/1");
    msg.dispatch("/a/1", dispatchAddress);
    this looks very wasteful to me

    as it could instead be defined in an (OSCaddr -> dispatchAddress) array,
    and when a new message arrives it goes through that array to find the correct OSCaddr, so that the corresponding "dispatchAddress" function can be called.
    It doesn't look efficient, and may not be, but I think when you delve into timestamped bundles, pattern matching and so on it's probably best to stick with the OSC standard and library, at least to start with.
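    The array idea quoted above can be sketched in plain C++ (no Arduino or OSC library involved; the addresses and handler names here are invented for illustration): a static table mapping literal OSC addresses to handler functions, scanned once per incoming message instead of constructing an OSCMessage per dispatch.
    Code:
    ```cpp
    #include <cstring>

    // Hypothetical handlers -- names invented for illustration
    static float lastFreq = 0.0f;
    static void setFrequency(float v) { lastFreq = v; }
    static void setAmplitude(float v) { (void) v; /* ... */ }

    // One table entry: literal OSC address -> handler function
    struct DispatchEntry {
      const char* address;
      void (*handler)(float);
    };

    // Built once at compile time -- no per-message allocation
    static const DispatchEntry table[] = {
      { "/waveform1/frequency", setFrequency },
      { "/waveform1/amplitude", setAmplitude },
    };

    // Scan the table for an exact address match and call the handler
    static bool dispatch(const char* address, float arg) {
      for (const auto& e : table) {
        if (0 == strcmp(e.address, address)) {
          e.handler(arg);
          return true;
        }
      }
      return false; // no handler registered for this address
    }
    ```
    Note this exact-match scan sidesteps OSC pattern matching ("*", "?", "[...]" in incoming addresses), which is one reason to stay with the CNMAT library's match()/dispatch(), at least to start with.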

  3. #28
    Senior Member manicksan's Avatar
    Join Date
    Jun 2020
    Location
    Sweden
    Posts
    327
    Quote Originally Posted by h4yn0nnym0u5e View Post
    It doesn't look efficient, and may not be, but I think when you delve into timestamped bundles, pattern matching and so on it's probably best to stick with the OSC standard and library, at least to start with.
    But an easy way of solving it would be to create a separate class (proposed names "OSCDecoder"/"OSCMessages") that inherits from the OSCMessage class and makes use of the array principle; there could then be additional functions to add/remove entries from that array when used together with dynamic Audio Objects.
    Then there could be different instances of that new class:
    1. AudioObjects
    2. DynamicControl (place, rename, remove, connect, disconnect and control audio objects live)

    Tomorrow (+10h) I will do some testing with the OSC library,
    but first I'll try to create a simple C++ function extractor in C# (which is my "native" programming language) to get all functions + parameter datatypes.

    Also, did you know about the Roboremo app?
    It's a great app for creating simple GUIs
    to use on a mobile.
    I have even created an unofficial editor in C# (Windows),
    so it's much easier to create complex GUIs.

  4. #29
    Senior Member
    Join Date
    Apr 2021
    Location
    Cambridgeshire, UK
    Posts
    154
    OK. I'm making good progress with the OSC library and a function extractor, though... it would be really good if you were to look at sending OSC messages from your GUI, preferably using the Web Serial API, as everyone with a Teensy will be able to access that.

    My approach is to derive from the Audio classes - this is the test code with only a couple of derived classes implemented, but it's working:
    Code:
    #if !defined(_AUDIOOSCBASE_H_)
    #define _AUDIOOSCBASE_H_
    
    #include <OSCMessage.h>
    #include <Audio.h>
    
    class AudioOSCbase
    {
      public:
        AudioOSCbase(const char* _name)
        {
          if (NULL != _name)
          {
            nameLen = strlen(_name);
            
            Serial.printf("Created %s\n\n",_name);
            
      name = (char*) malloc(nameLen+3); // room for leading '/', optional trailing '/', and null terminator
            if (NULL != name)
            {
              name[0] = '/'; // for routing
              strcpy(name+1,_name);
              //name[nameLen+1] = '/';
              //name[nameLen+2] = 0;
            }
          }
          linkIn(); 
        }
        ~AudioOSCbase() {if (NULL != name) free(name); linkOut(); }
        virtual void route(OSCMessage& msg, int addressOffset)=0;
        char* name;
        size_t nameLen;
        bool isMine(OSCMessage& msg, int addressOffset) {return msg.match(name,addressOffset) == (int) nameLen+1;}
        bool validParams(OSCMessage& msg,const char* types)
        {
          size_t sl = strlen(types);
          bool result = (size_t) msg.size() == sl;
    
          for (size_t i=0;i<sl && result;i++)
          {
            char type = msg.getType(i);
            
            result = types[i] == type;
            if (!result && ';' == types[i]) // boolean: encoded directly in type
              result = type == 'T' || type == 'F';
          }
          
          return result;
        }
    
        bool isTarget(OSCMessage& msg,int addressOffset,const char* pattern,const char* types)
        {
          bool result = msg.fullMatch(pattern,addressOffset+nameLen+1) && validParams(msg,types);
    
          if (result) Serial.println(name+1);
          
          return result;
        }
        
        void debugPrint(OSCMessage& msg, int addressOffset)
        {
          char prt[50];
          msg.getAddress(prt,addressOffset);
    
          if (NULL != name)
            Serial.println(name);
          Serial.println(addressOffset);
          Serial.println(prt);
          Serial.println(isMine(msg,addressOffset));
          Serial.println(msg.size());
          Serial.println();      
        }
    
        static void routeAll(OSCMessage& msg, int addressOffset)
        {
          AudioOSCbase** ppLink = &first_route; 
          while (NULL != *ppLink)
          {
            (*ppLink)->route(msg,addressOffset);
            ppLink = &((*ppLink)->next_route);
          }
        }
        
      private:
        static AudioOSCbase* first_route; //!< linked list to route OSC messages to all derived instances
        AudioOSCbase* next_route;
        void linkIn() {next_route = first_route; first_route = this;}
    void linkOut() 
    {
      AudioOSCbase** ppLink = &first_route; 
      while (NULL != *ppLink && this != *ppLink)
        ppLink = &((*ppLink)->next_route);
      if (NULL != *ppLink) // only unlink if this instance was actually found in the list
      {
        *ppLink = next_route;
        next_route = NULL;
      }
    }
    };
    
    
    class AudioOSCSynthWaveform : public AudioSynthWaveform, AudioOSCbase
    {
      public:
        AudioOSCSynthWaveform(const char* _name) : AudioOSCbase(_name) {}
    
        void route(OSCMessage& msg, int addressOffset)
        {
          if (isMine(msg,addressOffset))
          {
            //debugPrint(msg,addressOffset+nameLen+1);
            // Can't use msg.route() here because the callback has to be static, and we'd then
            // lose knowledge of the instance.
            //
            // To permit shorter message addresses, we allow shortening of the member function
            // to any point that is still unique
            if (isTarget(msg,addressOffset,"/am*","f")) {amplitude(msg.getFloat(0));} 
            if (isTarget(msg,addressOffset,"/ar*","bf")) {OSCarbitraryWaveform(msg,addressOffset+nameLen+1);} 
            if (isTarget(msg,addressOffset,"/b*","ffi")) {begin(msg.getFloat(0),msg.getFloat(1),msg.getInt(2));}         
            if (isTarget(msg,addressOffset,"/b*","i")) {begin(msg.getInt(0));}         
            if (isTarget(msg,addressOffset,"/f*","f")) {frequency(msg.getFloat(0));} 
            if (isTarget(msg,addressOffset,"/o*","f")) {offset(msg.getFloat(0));} 
            if (isTarget(msg,addressOffset,"/ph*","f")) {phase(msg.getFloat(0));} 
            if (isTarget(msg,addressOffset,"/pu*","f")) {pulseWidth(msg.getFloat(0));} 
          }
        }
      private:
        void OSCarbitraryWaveform(OSCMessage& msg, int addressOffset) {debugPrint(msg,addressOffset);}
    };
    
    
    class AudioOSCMixer4 : public AudioMixer4, AudioOSCbase
    {
      public:
        AudioOSCMixer4(const char* _name) : AudioOSCbase(_name) {}
    
        void route(OSCMessage& msg, int addressOffset)
        {
          if (isMine(msg,addressOffset))
          {
            if (isTarget(msg,addressOffset,"/g*","if")) {gain(msg.getInt(0),msg.getFloat(1));} 
          }
        }
    };
    #endif // !defined(_AUDIOOSCBASE_H_)
    An OSC-capable class derived from class Audio<something> is always AudioOSC<something>. You'll note each instance needs to be given a name for routing purposes: for static instances it'd probably be the same as the variable name, but for dynamic instances there may be no variable name. If an OSC message arrives for the audio engine (I match "/teensy*/audio" in my code), you just pass it in with a call to AudioOSCbase::routeAll(msg,addressOffset) which runs down the linked list checking to see if it's for any valid instance and function. I believe you have to do that, because of the pattern capability.

    I haven't yet touched returning values; I believe they should be an OSC message, but what address to use is slightly unclear to me right now. I've also not tested the destructor, or dealt properly with any functions that need strings or arrays passed in, or use of bundles, or timing. The comments are nearly non-existent, and there's debug code everywhere. Lots left to do...
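    The self-registration trick AudioOSCbase uses can be isolated and shown in plain C++ (a minimal sketch: OSCMessage is replaced by a plain address string so it compiles standalone, and the class and function names are illustrative, not the library's). Every constructed instance links itself onto a static list head, so one static call can offer a message to every live instance:
    Code:
    ```cpp
    #include <cstring>

    // Intrusive linked-list registration, as in AudioOSCbase: constructors
    // link in, the destructor unlinks, routeAll() polls every instance.
    class Routable {
      public:
        Routable() { next = first; first = this; } // link in at list head
        virtual ~Routable() {                      // unlink on destruction
          Routable** pp = &first;
          while (*pp != nullptr && *pp != this)
            pp = &((*pp)->next);
          if (*pp != nullptr) { *pp = next; next = nullptr; }
        }
        virtual void route(const char* addr) = 0;
        static void routeAll(const char* addr) {   // offer message to all
          for (Routable* p = first; p != nullptr; p = p->next)
            p->route(addr);
        }
      private:
        static Routable* first;
        Routable* next;
    };
    Routable* Routable::first = nullptr;

    // A concrete instance that counts messages matching its name
    class Counter : public Routable {
      public:
        Counter(const char* n) : name(n), hits(0) {}
        void route(const char* addr) override {
          if (0 == strcmp(addr, name)) ++hits;
        }
        const char* name;
        int hits;
    };
    ```
    Offering every message to every instance is what makes OSC's pattern addresses workable: an address like "/waveform*" may legitimately match several instances, so no early exit is possible.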

  5. #30
    Senior Member
    Join Date
    Apr 2021
    Location
    Cambridgeshire, UK
    Posts
    154
    Just been thinking - going to change it round so the derived classes all start OSCAudio<something>, so if other libraries spring up using a similar scheme (e.g. OSCMIDI, OSCdisplay...) they'll be found more easily by a human being!

  6. #31
    Senior Member
    Join Date
    Apr 2021
    Location
    Cambridgeshire, UK
    Posts
    154
    Created a github repo and pushed the rough and ready work in progress to it, in case anyone wants a look. See https://github.com/h4yn0nnym0u5e/OSCAudio

  7. #32
    Senior Member manicksan's Avatar
    Join Date
    Jun 2020
    Location
    Sweden
    Posts
    327
    Quote Originally Posted by h4yn0nnym0u5e View Post
    they'll be found more easily by a human being!
    Yes, that is true, but when do they need to be found? The Arduino IDE doesn't officially support autocomplete,
    but when using VSCode or another IDE it's available.

    Good implementation you have done there;
    I like the linked list.

    But isn't the "execution" order backwards (relative to the order in which new objects are added)?
    That also makes me believe the execution order in the Audio Lib is backwards
    (yes, it is),
    which means the export order (from the Tool) should be reversed.

    I can see the logic that "newer" objects should be executed first and the "older" last.

    If you follow the signal flow, the data from a "generator" should be produced before it goes to a mixer,
    so when the loop runs for the very first time no source data is available; from the second time around, all data is available.


    Back to what you have done:
    I did some thinking.
    It could be nice if the OSC implementation were available in the official Audio library objects
    and enabled by a compiler flag, so that when OSC is not used the implementation doesn't take extra memory;
    but that way makes it harder to maintain the audio-object OSC implementation, as every file needs to be updated.

    It would also mean we don't need extra new class names to remember;
    but when using the Tool that is not needed anyway, as the Tool can easily export objects using the OSCAudio<something> naming,
    just by adding OSC in front of every Audio<something>.

  8. #33
    Senior Member
    Join Date
    Apr 2021
    Location
    Cambridgeshire, UK
    Posts
    154
    Quote Originally Posted by manicksan View Post
    Yes, that is true, but when do they need to be found? The Arduino IDE doesn't officially support autocomplete,
    but when using VSCode or another IDE it's available.

    Good implementation you have done there;
    I like the linked list.

    But isn't the "execution" order backwards (relative to the order in which new objects are added)?
    That also makes me believe the execution order in the Audio Lib is backwards
    (yes, it is),
    which means the export order (from the Tool) should be reversed.

    I can see the logic that "newer" objects should be executed first and the "older" last.

    If you follow the signal flow, the data from a "generator" should be produced before it goes to a mixer,
    so when the loop runs for the very first time no source data is available; from the second time around, all data is available.
    Thank you

    In the static library the AudioStream objects actually link themselves in, in definition order (AudioStream.h, about line 136), so the execution order is probably reasonable. In my dynamic library I create the execution-order links in patchcord order, as far as possible, since definition order is not necessarily useful.

    For OSCAudio message routing I don't think it matters much: it's done in foreground code as we have to poll every object anyway.

    Back to what you have done:
    I did some thinking.
    It could be nice if the OSC implementation were available in the official Audio library objects
    and enabled by a compiler flag, so that when OSC is not used the implementation doesn't take extra memory;
    but that way makes it harder to maintain the audio-object OSC implementation, as every file needs to be updated.

    It would also mean we don't need extra new class names to remember;
    but when using the Tool that is not needed anyway, as the Tool can easily export objects using the OSCAudio<something> naming,
    just by adding OSC in front of every Audio<something>.
    That would be great, to do it in the GUI. Maybe an option button to switch export from non-OSC and OSC-capable; or an option to place an object of either type, and/or the ability to switch an already-placed object's type? Maybe show them in different colours? Having a mix will, as you say, improve memory use, and also the message routing efficiency.

    Not quite sure if using a compiler flag would be robust. I've done many quick hacks using something like #define AudioSynthWaveform OSCAudioSynthWaveform, and it usually bites me at some point! For now I'd prefer not to touch the Audio or AudioStream libraries, though if Paul decided to adopt OSCAudio then it would be a different matter. Much too early for that, though...

  9. #34
    Senior Member
    Join Date
    Apr 2021
    Location
    Cambridgeshire, UK
    Posts
    154
    @JayShoe, looking back at the original User Requirement Specification in #1, are we wandering a bit off-piste here? Can TouchMIDI use a serial port?

  10. #35
    Senior Member manicksan's Avatar
    Join Date
    Jun 2020
    Location
    Sweden
    Posts
    327
    By the looks of it, https://github.com/benc-uk/touchmidi
    uses the "Web MIDI API",
    so it could probably implement the "Web Serial API" as well;
    but the "Web Serial API" is so new (still in beta)
    that they have not thought of it yet.

  11. #36
    Senior Member manicksan's Avatar
    Join Date
    Jun 2020
    Location
    Sweden
    Posts
    327
    By the way,
    I'm trying to use your lib but cannot get it to work.
    I use Br@y's terminal and send the data in RAW format (a $ means hex format)
    Code:
    $C0/teensy1/audio/waveform1/f$00$00,f$00$00$43$dc$00$00$C0
    $C0/teensy1/audio/waveform1/b$00$00,i$00$00$00$00$00$00$C0
    and this is exactly the same data as I receive when doing:
    Code:
    OSCMessage msg2("/teensy1/audio/waveform1/f");
      msg2.add(440.0);
      HWSERIAL.beginPacket();
      msg2.send(HWSERIAL);
      HWSERIAL.endPacket();
      msg2.empty();
      HWSERIAL.println();
      OSCMessage msg3("/teensy1/audio/waveform1/b");
      msg3.add(0);
      HWSERIAL.beginPacket();
      msg3.send(HWSERIAL);
      HWSERIAL.endPacket();
      msg3.empty();
    and I have added debug code to the route function, after the isTarget calls,
    so that I can see when a target is not matched
    Code:
    void route(OSCMessage& msg, int addressOffset)
    {
      if (isMine(msg,addressOffset))
      {
    	if (isTarget(msg,addressOffset,"/am*","f")) {amplitude(msg.getFloat(0));} 
    	else if (isTarget(msg,addressOffset,"/ar*","bf")) {OSCarbitraryWaveform(msg,addressOffset+nameLen+1);} 
    	else if (isTarget(msg,addressOffset,"/b*","ffi")) {begin(msg.getFloat(0),msg.getFloat(1),msg.getInt(2));}         
    	else if (isTarget(msg,addressOffset,"/b*","i")) {begin(msg.getInt(0));}         
    	else if (isTarget(msg,addressOffset,"/f*","f")) {frequency(msg.getFloat(0));} 
    	else if (isTarget(msg,addressOffset,"/o*","f")) {offset(msg.getFloat(0));} 
    	else if (isTarget(msg,addressOffset,"/ph*","f")) {phase(msg.getFloat(0));} 
    	else if (isTarget(msg,addressOffset,"/pu*","f")) {pulseWidth(msg.getFloat(0));} 
    	else {
    		Serial.println("Cannot find target");
    	}
      }
      else {
    	  Serial.print("is not mine @");
    	  Serial.print(name);
      }
    }
    I'm just trying to understand how the protocol works,
    to make it easier to debug when implementing it in the Tool.

    The problem is that it always goes to the "Cannot find target" branch.
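    For reference, the raw bytes above follow OSC's 4-byte alignment rules: the address string is null-padded to a multiple of four bytes, then comes the type tag string (",f" plus two pad nulls), then each argument as four big-endian bytes; the $C0 bytes at either end are SLIP frame delimiters, not part of the OSC packet. A quick sketch (plain C++, hand-rolling the wire format purely to illustrate the layout -- real code would use OSCMessage::send()):
    Code:
    ```cpp
    #include <cstring>
    #include <cstdint>
    #include <cstddef>

    // Pad a string length up to the next multiple of 4: OSC strings always
    // occupy a multiple of four bytes, including at least one null terminator.
    static size_t osc_padded(size_t len) { return (len + 4) & ~size_t(3); }

    // Build an OSC message with one float argument into buf; returns byte count.
    static size_t buildFloatMessage(uint8_t* buf, const char* address, float value) {
      size_t n = osc_padded(strlen(address));
      memset(buf, 0, n);                   // pad nulls
      memcpy(buf, address, strlen(address));
      memcpy(buf + n, ",f\0\0", 4);        // type tag string, padded to 4 bytes
      n += 4;
      uint32_t bits;
      memcpy(&bits, &value, 4);            // IEEE-754 bit pattern of the float
      buf[n++] = (bits >> 24) & 0xFF;      // argument is sent big-endian
      buf[n++] = (bits >> 16) & 0xFF;
      buf[n++] = (bits >>  8) & 0xFF;
      buf[n++] = bits & 0xFF;
      return n;
    }
    ```
    For "/teensy1/audio/waveform1/f" this gives 26 address characters plus two null pads, ",f" plus two nulls, and 0x43 0xDC 0x00 0x00 for 440.0 -- matching the hex dump above byte for byte.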
