Project Advice (4 channel "wireless haptic feedback" via Teensy + Audio Boards)


kriista

Member

Not sure if this is the best forum (or category, for that matter) for this question, but I do know there are plenty of you who mess with this kind of thing a lot, so I wanted to ask for some advice.

The short version is that I want to make a 4-voice polyphonic, multichannel synth using a Teensy 3.2 + 2 audio boards that responds to USB-MIDI.

The long(er) version is that this will be part of a revamping of a dynamic score system I worked on a few years back (dfscore) that I want to rebuild using haptic feedback instead of (primarily) screen-based interfacing.

I spent ages looking for a good wireless haptic feedback system that could multicast and kept coming up short. While I was in Berlin over the summer I managed to try out a Basslet and really dug it. It's quite powerful, with a really fast response time (à la 'taptic engine' response speed), since it's not an eccentric motor that has to spin up and down. It 'only' goes up to 250 Hz, but that's plenty of room to create distinct gestures with.

So my plan is to use the Teensy as a hub/node for the system, where each Basslet transmitter (each has its own little dongle) would be fed audio from one "voice" of the synth on the Teensy, while responding to MIDI notes on different channels (or whatever). I'll house all of that, along with a tiny USB hub, inside a small (likely 3D-printed) enclosure, so I can run the whole system off a single USB cable.

Hardware-wise I've got it built and working, following the great instructions here and with help from a friend.
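
In case it helps frame the question, the audio side I'm picturing is roughly this (just a sketch, not my exact code — it assumes two SGTL5000 audio shields running in quad mode, with the second shield's I2C address pad changed, and the USB Type set to one of the MIDI options in the Tools menu):

Code:
// Rough 4-voice graph: one square oscillator + gate envelope per
// Basslet, spread over the two audio shields via quad I2S output.
#include <Audio.h>
#include <Wire.h>

AudioSynthWaveform   osc[4];   // one oscillator per Basslet "voice"
AudioEffectEnvelope  env[4];   // simple on/off gate per voice
AudioOutputI2SQuad   quadOut;  // channels 0-3 across the two boards

AudioConnection c0(osc[0], 0, env[0], 0);
AudioConnection c1(osc[1], 0, env[1], 0);
AudioConnection c2(osc[2], 0, env[2], 0);
AudioConnection c3(osc[3], 0, env[3], 0);
AudioConnection c4(env[0], 0, quadOut, 0);
AudioConnection c5(env[1], 0, quadOut, 1);
AudioConnection c6(env[2], 0, quadOut, 2);
AudioConnection c7(env[3], 0, quadOut, 3);

AudioControlSGTL5000 sgtl1, sgtl2;

void setup() {
  AudioMemory(20);
  sgtl1.setAddress(LOW);   sgtl1.enable();  sgtl1.volume(0.8);
  sgtl2.setAddress(HIGH);  sgtl2.enable();  sgtl2.volume(0.8);
  for (int i = 0; i < 4; i++) {
    osc[i].begin(0.8, 200.0, WAVEFORM_SQUARE);
    env[i].attack(2.0);
    env[i].sustain(1.0);
    env[i].release(10.0);
  }
}

void loop() {
  usbMIDI.read();  // dispatch incoming MIDI (handlers sketched below)
}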

Where I'm at with it is trying to figure out the best way to speak "MIDI" to the system.

Initially I was thinking of using a vanilla square wave oscillator going into an on/off "gate" envelope, and controlling the system with normal MIDI note on/off messages. This would definitely work, be simple to set up, and easily controllable from any system (computer, controller, etc...).
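
For that version I'm picturing something like this (hypothetical handler names, assuming one MIDI channel per voice on channels 1-4, and the osc[]/env[] arrays from the sketch above):

Code:
// Hypothetical note handlers: one MIDI channel per voice (1-4).
void onNoteOn(byte channel, byte note, byte velocity) {
  if (channel < 1 || channel > 4) return;
  int v = channel - 1;
  osc[v].frequency(440.0f * powf(2.0f, (note - 69) / 12.0f));
  osc[v].amplitude(velocity / 127.0f);
  env[v].noteOn();
}

void onNoteOff(byte channel, byte note, byte velocity) {
  if (channel >= 1 && channel <= 4) env[channel - 1].noteOff();
}

// registered once in setup():
//   usbMIDI.setHandleNoteOn(onNoteOn);
//   usbMIDI.setHandleNoteOff(onNoteOff);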

I then thought it would be handy to have discrete control over the volume, to be able to create more complex amplitude "gestures" where bits of the haptic feedback fade in intensity and then end with a single strong attack. So I thought I might use "aftertouch" or an arbitrary CC to control the 'gain' of each voice.
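
e.g. something like this (CC 7 here is arbitrary; channel pressure would work the same way with the aftertouch handler instead):

Code:
// One possible gain mapping: an arbitrary CC (here CC 7) on each
// voice's channel scales that oscillator's amplitude directly.
void onControlChange(byte channel, byte control, byte value) {
  if (channel < 1 || channel > 4) return;
  if (control == 7) osc[channel - 1].amplitude(value / 127.0f);
}
// in setup(): usbMIDI.setHandleControlChange(onControlChange);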

That led me to thinking the same might be useful for frequency, so I considered incorporating "pitch bend", or another arbitrary CC, to control the frequency (or frequency offset) of each voice.
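
Roughly like this (assuming a ±2 semitone range and a baseFreq[] array the note-on handler keeps up to date; I believe recent Teensyduino cores pass the bend value centered on 0, while older ones used 0..16383):

Code:
// Pitch bend sketch: +/-2 semitones around each voice's base frequency.
float baseFreq[4] = {200.0, 200.0, 200.0, 200.0};

void onPitchChange(byte channel, int pitch) {  // pitch: -8192..8191
  if (channel < 1 || channel > 4) return;
  float semis = (pitch / 8192.0f) * 2.0f;
  osc[channel - 1].frequency(baseFreq[channel - 1] * powf(2.0f, semis / 12.0f));
}
// in setup(): usbMIDI.setHandlePitchChange(onPitchChange);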

(All of this is speculation at the moment, since I'll still need to create and test the actual haptic gestures, but I want to have these options to explore here.)

So now I'm wondering if I'm better off just exposing "raw" control of frequency and amplitude via CCs, and manually creating the haptic gestures I want that way, rather than making a "normal" MIDI setup which I then extend with aftertouch/pitchbend.
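
In which case the handler collapses to something like this (the CC numbers and the 40-250 Hz range are placeholders):

Code:
// "Raw" alternative: two arbitrary CCs per channel drive frequency and
// amplitude directly, no note messages at all. In this mode the gate
// envelope would just be held open and amplitude does all the shaping.
void onRawCC(byte channel, byte control, byte value) {
  if (channel < 1 || channel > 4) return;
  int v = channel - 1;
  if (control == 20) {                                    // frequency
    osc[v].frequency(40.0f + (value / 127.0f) * 210.0f);  // 40-250 Hz
  } else if (control == 21) {                             // amplitude
    osc[v].amplitude(value / 127.0f);
  }
}

One thing I'm aware of with the raw approach: plain CCs are only 7-bit, so fast amplitude ramps can get steppy ("zipper" artifacts); pairing MSB/LSB CCs for 14-bit control, or smoothing the incoming values on the Teensy side, seem like the usual workarounds.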

The latter would be the easiest for me to control (using Max/MSP), and easiest to conceive as well, but I wonder about MIDI throughput if I'm sending loads of MIDI data to 'draw' the pitch and amplitude envelopes for each gesture. Same goes for issues of timing etc...

So yeah, any thoughts would be welcome here.
 