
Thread: Project Advice (4 channel "wireless haptic feedback" via Teensy + Audio Boards)

  1. #1
    kriista (Junior Member; Manchester, England; joined Aug 2013; 17 posts)

    Project Advice (4 channel "wireless haptic feedback" via Teensy + Audio Boards)

    Not sure if this is the best forum (or category for that matter) for this question, but I do know there’s lots of you that mess with this kind of thing a lot, so I wanted to ask for some advice.

    The short version is that I want to make a 4-voice polyphonic, multichannel synth using a Teensy 3.2 + 2 audio boards that responds to USB-MIDI.

    The long(er) version is that this will be part of a revamping of a dynamic score system I worked on a few years back (dfscore) that I want to rebuild using haptic feedback instead of (primarily) screen-based interfacing.

    I spent ages looking for a good wireless haptic feedback system that could multicast and kept coming up short. While I was in Berlin over the summer I managed to try out a Basslet and really dug it. Quite powerful, with a really fast response time (à la 'taptic engine' response speed), since it's not an eccentric rotating motor that has to spin up and down. It 'only' goes up to 250 Hz, but that's plenty of room to create distinct gestures with.



    So my plan is to use the Teensy as a hub/node for the system, where each Basslet transmitter (each has its own little dongle) would be fed audio from one "voice" of the synth on the Teensy, while responding to MIDI notes on different channels. I'll house all of that, with a tiny hub, inside a small (likely 3D-printed) enclosure, so I can run the whole system off a single USB cable.
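To make the routing concrete, the channel-to-voice assignment can live in a small lookup table. This is just an illustrative C++ sketch; the `Route` struct, the function names, and the 1:1 channel/voice/output assignment are my assumptions, not anything from the actual build:

```cpp
// Hypothetical routing table: MIDI channels 1-4 each drive one synth
// voice, and each voice feeds one output channel across the two audio
// boards (so one channel per Basslet transmitter).
struct Route { int midiChannel; int voice; int outputChannel; };

const Route routes[4] = {
    {1, 0, 0},
    {2, 1, 1},
    {3, 2, 2},
    {4, 3, 3},
};

// Look up which voice a given MIDI channel drives (-1 if unmapped).
int voiceForChannel(int midiChannel) {
    for (const Route& r : routes)
        if (r.midiChannel == midiChannel) return r.voice;
    return -1;
}
```

Keeping it table-driven means reassigning a performer to a different transmitter is a one-line change rather than a rewire of the MIDI handlers.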



    Hardware-wise I've got it built and working, following the great instructions here and help from a friend.

    Where I'm at with it is trying to figure out the best way to speak "MIDI" to the system.

    Initially I was thinking of using a vanilla square wave oscillator going into an on/off "gate" envelope, and controlling the system with normal MIDI note on/off messages. This would definitely work, be simple to set up, and easily controllable from any system (computer, controller, etc...).
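A minimal sketch of that plain note-on/off approach, written as standalone C++ so the mapping is easy to test (`Voice` and `midiNoteToFreq` are illustrative names; on the Teensy the gate would presumably open/close an envelope object while the frequency sets the waveform, and the 250 Hz clamp reflects the Basslet's stated upper limit):

```cpp
#include <cmath>

// Standard MIDI note-to-frequency conversion (A4 = note 69 = 440 Hz),
// clamped to the ~250 Hz ceiling the Basslet can usefully reproduce.
float midiNoteToFreq(int note) {
    float f = 440.0f * std::pow(2.0f, (note - 69) / 12.0f);
    return f > 250.0f ? 250.0f : f;
}

// A per-voice gate: note-on sets the frequency and opens the gate,
// note-off closes it.
struct Voice {
    float freq = 0.0f;
    bool gate = false;
    void noteOn(int note) { freq = midiNoteToFreq(note); gate = true; }
    void noteOff()        { gate = false; }
};
```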

    I then thought it would be handy to have discrete control over the volume, to be able to create more complex amplitude "gestures" where parts of the haptic feedback fade in in intensity and then end with a strong single attack. So I thought I'd maybe use "aftertouch" or an arbitrary CC to control the 'gain' of each voice.

    That led me to thinking the same may be useful for frequency, which got me considering "pitch bend", or another arbitrary CC, to control the frequency (or frequency offset) of each voice.
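If pitch bend is the route, the usual conversion takes the 14-bit bend value to a frequency multiplier. A hedged sketch (the ±2 semitone default is the common MIDI convention, but the range, and the function name, are my choices):

```cpp
#include <cmath>

// Convert a 14-bit pitch-bend value (0..16383, centre 8192) into a
// frequency multiplier over a +/- semitoneRange range.
float bendToRatio(int bend14, float semitoneRange = 2.0f) {
    float norm = (bend14 - 8192) / 8192.0f;           // -1.0 .. ~+1.0
    return std::pow(2.0f, norm * semitoneRange / 12.0f);
}
```

Multiplying the base note frequency by this ratio gives the bent frequency, which would then still want clamping to the transducer's usable range.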

    (all of this is speculation at the moment, since I'll need to create and test the actual haptic gestures I make, but I want to have the options to explore here)

    So now I'm wondering if I'm better off just exposing "raw" control of frequency and amplitude via CC control, and manually creating the haptic gestures I want that way, rather than making a "normal" MIDI setup which I then extend with aftertouch/pitchbend.

    The raw-CC approach would be the easiest for me to control (using Max/MSP), and the easiest to conceive as well, but I wonder about MIDI throughput if I'm sending loads of MIDI data to 'draw' the pitch and amplitude envelopes for each gesture. The same goes for issues of timing, etc.

    So yeah, any thoughts would be welcome here.

  2. #2
    kriista
    One sneaky little bump here before this descends into the forum void.
